ABR has recently developed a new kind of recurrent neural network, the Legendre Memory Unit (LMU), that is provably optimal at compressing information over time. We have used the LMU to set new state-of-the-art records on a variety of time-series problems with minimal compute. In this talk we introduce the LMU and discuss how it achieves state-of-the-art performance on industry-standard keyword spotting datasets. We also show that the LMU outperforms and scales better than transformers on large NLP problems. We demonstrate how our easy-to-use cloud offering, NengoEdge, lets anyone apply the LMU to their time-series problems, optimizing directly for a variety of edge hardware targets. Looking ahead, we discuss our LMU AI chip, currently in development, which will deliver low-power, low-cost full-speech and NLP inference at the edge.
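As a minimal sketch of what "compressing information over time" means here: the core of the LMU is a fixed linear system whose state holds Legendre-polynomial coefficients of a sliding window of length theta over its input. The (A, B) construction below follows the standard LMU formulation; the zero-order-hold discretization, constant-input check, and parameter values (d=6, theta=1.0, dt=0.01) are illustrative assumptions, not ABR's implementation.

```python
import numpy as np
from scipy.linalg import expm

def lmu_matrices(d, theta):
    """Continuous-time (A, B) for an LMU memory of order d and window theta."""
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            # Standard LMU state matrix: entries derived from Legendre polynomials.
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    q = np.arange(d)
    B = ((2 * q + 1) * (-1.0) ** q).reshape(d, 1)
    return A / theta, B / theta

def discretize_zoh(A, B, dt):
    """Zero-order-hold discretization: Ad = expm(A*dt), Bd = A^-1 (Ad - I) B."""
    Ad = expm(A * dt)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd

# Illustrative run: a constant input u = 1 fills the window with ones, so the
# zeroth Legendre coefficient converges to 1 and the higher coefficients to 0.
d, theta, dt = 6, 1.0, 0.01
A, B = lmu_matrices(d, theta)
Ad, Bd = discretize_zoh(A, B, dt)
m = np.zeros((d, 1))
for _ in range(int(10 * theta / dt)):  # run well past one window length
    m = Ad @ m + Bd * 1.0
```

Because (A, B) are fixed rather than learned, the memory adds very few trainable parameters; the network only learns how to read from and write to this optimal compression of the recent past.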