Applying and Adapting the Reformer as a Computationally Efficient Approach to the SQuAD 2.0 Question-Answering Task
![Reformer: The Efficient Transformer", Anonymous et al 2019 {G} [handling sequences up to L=64k on 1 GPU] : r/MachineLearning Reformer: The Efficient Transformer", Anonymous et al 2019 {G} [handling sequences up to L=64k on 1 GPU] : r/MachineLearning](https://preview.redd.it/all3awzieh421.gif?width=640&crop=smart&format=png8&s=abc8c335fb1012e777baa9abbfa4029164af5c83)