
Liquid AI, a Boston-based foundation model startup spun out of the Massachusetts Institute of Technology (MIT), is aiming to move the tech industry beyond its reliance on the Transformer architecture underlying today’s most popular large language models (LLMs), such as OpenAI’s GPT series and Google’s Gemini family.
Yesterday, the company announced “Hyena Edge,” a new convolution-based, multi-hybrid model designed for smartphones and other edge devices, ahead of the International Conference on Learning Representations (ICLR) 2025.
The conference, one of the premier events in machine learning research, takes place this year in Vienna, Austria.
New convolution-based model promises faster, more memory-efficient edge AI
Hyena Edge is engineered to outperform strong Transformer baselines on both computational efficiency and language model quality.
In real-world tests on a Samsung Galaxy S24 Ultra smartphone, the model delivered lower latency, a smaller memory footprint, and better benchmark results than a parameter-matched Transformer model.
A new architecture for a new era of edge AI
Unlike most small models designed for mobile deployment, such as SmolLM2, the Phi models, and Llama 3.2 1B, Hyena Edge departs from conventional attention-heavy designs. Instead, it strategically replaces two-thirds of grouped-query attention (GQA) operators with gated convolutions from the Hyena-Y family.
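Liquid AI has not published Hyena Edge’s layer-level code, but the core idea of a gated convolution operator can be sketched in a few lines. The snippet below is purely illustrative, not the model’s actual design: the function name, the sigmoid gate, and the filter shapes are all assumptions. It applies a causal convolution to a sequence and modulates the result with an input-dependent gate, which costs linear rather than quadratic time in sequence length.

```python
import numpy as np

def gated_conv(x: np.ndarray, conv_filter: np.ndarray,
               gate_weights: np.ndarray) -> np.ndarray:
    """Toy Hyena-style gated convolution over a 1-D sequence (illustrative).

    x            : (seq_len,) input sequence
    conv_filter  : (filter_len,) causal convolution filter
    gate_weights : (seq_len,) elementwise gate parameters (hypothetical)
    """
    seq_len = x.shape[0]
    # Causal convolution: each output mixes only current and past inputs.
    conv = np.convolve(x, conv_filter)[:seq_len]
    # Data-dependent elementwise gate (sigmoid), the "gated" part.
    gate = 1.0 / (1.0 + np.exp(-(gate_weights * x)))
    return gate * conv

# Example: length-8 sequence filtered by a length-3 causal kernel.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = gated_conv(x, rng.standard_normal(3), rng.standard_normal(8))
print(y.shape)  # (8,)
```

The key property this sketch demonstrates is that the operator’s cost scales with sequence length times filter length, whereas self-attention scales with sequence length squared.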
The new architecture is the result of Liquid AI’s Synthesis of Tailored Architectures (STAR) framework, which uses evolutionary algorithms to automatically design model backbones and was first announced in December 2024.
STAR explores a wide range of operator compositions, rooted in the mathematical theory of linear input-varying systems, to optimize for multiple hardware-specific objectives such as latency, memory usage, and quality.
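STAR’s internals are beyond the scope of this article, but the general shape of an evolutionary architecture search can be illustrated with a toy loop. Everything below is hypothetical: the operator names, the cost tables, and the fitness function are invented for illustration and bear no relation to STAR’s real profiling or scoring. A population of candidate backbones is ranked on a latency/quality trade-off, the better half survives, and mutated copies refill the population.

```python
import random

OPERATORS = ["SA", "Hyena", "SwiGLU"]          # candidate block types
# Illustrative per-operator costs; a real framework would profile hardware.
LATENCY = {"SA": 3.0, "Hyena": 1.0, "SwiGLU": 0.5}
QUALITY = {"SA": 1.0, "Hyena": 0.9, "SwiGLU": 0.3}

def fitness(backbone):
    """Lower is better: modeled latency penalized, modeled quality rewarded."""
    lat = sum(LATENCY[op] for op in backbone)
    qual = sum(QUALITY[op] for op in backbone)
    return lat - 2.0 * qual

def mutate(backbone, rng):
    """Swap one randomly chosen block for another operator type."""
    child = list(backbone)
    child[rng.randrange(len(child))] = rng.choice(OPERATORS)
    return child

def evolve(depth=12, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(OPERATORS) for _ in range(depth)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # rank candidates
        survivors = pop[: pop_size // 2]       # keep the better half
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
print(best)
```

Under this toy cost table, selection pressure favors backbones dominated by the cheap, high-quality operator, which mirrors (in spirit only) how a search might arrive at a convolution-heavy hybrid.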
Benchmarked directly on consumer hardware
To validate Hyena Edge’s real-world readiness, Liquid AI ran its tests directly on a Samsung Galaxy S24 Ultra smartphone.
The results show that Hyena Edge achieved up to 30% faster prefill and decode latencies than its Transformer++ counterpart, with the speed advantage growing at longer sequence lengths.
Prefill latencies at short sequence lengths also beat the Transformer baseline, a critical performance metric for responsive on-device applications.
On memory, Hyena Edge consistently used less RAM during inference across all tested sequence lengths, positioning it as a strong candidate for environments with tight resource constraints.
Outperforming Transformers on language benchmarks
Hyena Edge was trained on 100 billion tokens and evaluated on standard benchmarks for small language models, including Wikitext, Lambada, PiQA, HellaSwag, Winogrande, ARC-easy, and ARC-challenge.

On every benchmark, Hyena Edge either matched or exceeded the performance of the GQA-Transformer++ model, with noticeable improvements in perplexity on Wikitext and Lambada and higher accuracy on PiQA, HellaSwag, and Winogrande.
These results suggest that the model’s efficiency gains do not come at the expense of predictive quality, a trade-off that hampers many edge-optimized architectures.
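Perplexity, the metric behind the Wikitext and Lambada comparisons, is the exponential of the average negative log-likelihood the model assigns to the correct tokens; lower is better. A minimal computation:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) over predicted tokens.

    token_probs: probabilities the model assigned to each correct token.
    """
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model assigning probability 0.25 to every correct token has
# perplexity 4: it is as uncertain as a uniform 4-way choice.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

Intuitively, a perplexity of N means the model is, on average, as uncertain as if it were choosing uniformly among N tokens at each step, which is why even small reductions are meaningful.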
How Hyena Edge evolved: performance trends and operator composition
For those looking for a deeper dive into Hyena Edge’s development process, a recently released video walkthrough provides a compelling visual summary of the model’s evolution.
The video highlights how key performance metrics, including prefill latency, decode latency, and memory consumption, improved over successive generations of architecture refinement.
It also offers a rare behind-the-scenes look at how Hyena Edge’s internal composition shifted during development. Viewers can watch the mix of operators change dynamically, including self-attention (SA) mechanisms, various Hyena variants, and SwiGLU layers.
These shifts offer insight into the architectural design principles that helped the model reach its current level of efficiency and accuracy.
By visualizing the trade-offs and operator dynamics over time, the video provides helpful context for understanding the architectural breakthroughs underlying Hyena Edge’s performance.
Open-source plans and a broader vision
Liquid AI said it plans to open-source a series of Liquid foundation models, including Hyena Edge, in the coming months. The company’s goal is to build capable, efficient general-purpose AI systems that scale from cloud data centers down to personal devices.
Hyena Edge’s debut also underscores the growing potential of alternative architectures to challenge Transformers in practical settings. As mobile devices are increasingly expected to run sophisticated AI workloads natively, models like Hyena Edge could set a new baseline for what edge-optimized AI can achieve.
Hyena Edge’s success, both in raw performance metrics and in showcasing automated architecture design, positions Liquid AI as one of the emerging players to watch in the evolving AI model landscape.