AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much energy as a small city. These efforts stem from a deep belief in “scaling”: the idea that adding more computing power to existing AI training methods will eventually produce superintelligent systems capable of performing all kinds of tasks.
But a growing group of AI researchers argue that scaling large language models may be approaching its limits, and that other breakthroughs may be needed to improve AI performance.
That’s the bet Sara Hooker, Cohere’s former VP of AI research and a Google Brain alumna, is making with her latest startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy. Its business is built on the premise that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, announced the startup quietly and says it will begin hiring more broadly this month.
In an interview with TechCrunch, Hooker said Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the methods behind this approach, or whether the company relies on LLMs or another architecture.
“There was a turning point where it became clear that the formula of just scaling these models, scaling-based approaches that are attractive but incredibly boring, wasn’t producing intelligence that could navigate or interact with the world,” Hooker said.
According to Hooker, adaptation is “the essence of learning.” Stub your toe while walking past the dining room table, and next time you’ll learn to step more carefully around it. AI labs have tried to capture this idea through reinforcement learning (RL), which lets AI models learn from their mistakes in controlled settings. Today’s RL methods, however, don’t help AI models in production (systems already in use by customers) learn from their mistakes in real time. They just keep stubbing their toes.
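To make that distinction concrete, here is a minimal toy sketch in Python (a generic illustration, not Adaption Labs’ method or any lab’s actual training stack): a frozen policy repeats the same bad action on every request, while an adapting one updates its value estimates from real feedback.

```python
import random

# Toy two-action world: action 0 "stubs a toe" (reward 0), action 1 avoids the table (reward 1).
REWARDS = {0: 0.0, 1: 1.0}

def act(values, epsilon=0.0):
    """Pick the highest-valued action, exploring randomly with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

def frozen_policy(values, requests=3):
    """Production-style serving: weights are frozen, so a bad habit never corrects itself."""
    for i in range(requests):
        action = act(values)
        print(f"request {i}: action={action}, reward={REWARDS[action]}")

def adaptive_policy(values, requests=10, lr=0.5, epsilon=0.2):
    """Learning at serving time: each real outcome nudges the value estimates."""
    for i in range(requests):
        action = act(values, epsilon)
        reward = REWARDS[action]
        values[action] += lr * (reward - values[action])  # update from live feedback
        print(f"request {i}: action={action}, reward={reward}")

random.seed(0)
bad_habit = {0: 0.9, 1: 0.1}  # training left the model preferring the toe-stubbing action
print("frozen:")
frozen_policy(dict(bad_habit))    # keeps stubbing its toe on every request
print("adaptive:")
adaptive_policy(dict(bad_habit))  # drifts toward the better action within a few requests
```

The frozen version mirrors today’s deployed models: whatever habits training baked in persist unchanged at serving time, which is the gap Hooker says Adaption Labs wants to close.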
Some AI labs offer consulting services to help enterprises adapt AI models to their custom needs, but that comes at a price. OpenAI reportedly requires customers to spend upward of $10 million with the company before it offers its fine-tuning consulting services.
“We have some pioneering labs that define a set of AI models that are served to everyone in the same way and are very expensive to customize,” Hooker said. “And I actually think that no longer has to be the case, and AI systems can learn very efficiently from their environment. Proving that will completely change the dynamics of who gets to control and shape AI, and really, whom these models ultimately serve.”
Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is waning. A recent paper by MIT researchers found that the world’s largest AI models may soon show diminishing returns. The mood in San Francisco seems to be shifting, too. The AI world’s favorite podcaster, Dwarkesh Patel, has recently hosted some notably skeptical conversations with prominent AI researchers.
Richard Sutton, the Turing Award winner widely considered the “father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, OpenAI founding member Andrej Karpathy told Patel he has his own reservations about the long-term potential of RL to improve AI models.
These kinds of fears aren’t new. In late 2024, some AI researchers raised concerns that scaling AI models through pre-training, in which models learn patterns from massive datasets, was hitting diminishing returns. Until then, pre-training had been the secret sauce behind OpenAI’s and Google’s model improvements.
Those pre-training scaling concerns now show up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.
AI labs now seem convinced that scaling RL and AI reasoning models is the next frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they believed it would scale well. More recently, researchers from Meta and Periodic Labs published a paper exploring how RL could scale performance further, a study that reportedly cost more than $4 million, underscoring how expensive current approaches remain.
Adaption Labs’ goal is to find the next breakthrough and prove that learning from experience can be far cheaper. According to three investors who reviewed its pitch deck, the startup was in talks earlier this fall to raise between $20 million and $40 million. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.
“We’re going to be very ambitious,” Hooker said when asked about her investors.
Hooker previously led Cohere Labs, where she trained small AI models for enterprise applications. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks, a trend Hooker wants to keep building on.
She has also built a reputation for broadening access to AI research around the world, hiring research talent from underrepresented regions such as Africa. Although Adaption Labs will soon open an office in San Francisco, Hooker says it plans to hire worldwide.
If Hooker and Adaption Labs are right about the limits of scaling, the implications could be enormous. Billions of dollars have already been invested in scaling LLMs, on the assumption that bigger models will lead to general intelligence. But it’s possible that true adaptive learning will prove not only more effective, but also far more efficient.
