Reflection AI raises $2 billion to become an open US AI lab challenging DeepSeek

Reflection AI, a startup founded just a year ago by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, a massive 15-fold jump from its $545 million valuation just seven months ago. The company, which originally focused on autonomous coding agents, now positions itself as both an open-source alternative to closed frontier labs like OpenAI and Anthropic and a Western counterpart to Chinese AI firms like DeepSeek.

The startup was founded in March 2024 by Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, co-creator of AlphaGo, the AI system that became famous for defeating the world champion at the board game Go in 2016. Their experience building these cutting-edge AI systems is central to their pitch: that the right AI talent can build frontier models outside the established tech giants.


With the new round, Reflection AI announced that it has recruited a team of top researchers from DeepMind and OpenAI and built an advanced AI training stack that it promises will be open to everyone. Perhaps most significantly, Reflection AI says it has “identified a scalable commercial model that aligns with our open intelligence strategy.”

According to Laskin, the company’s CEO, Reflection AI’s team currently numbers about 60 people, mostly AI researchers and engineers focused on infrastructure, data training, and algorithm development. Reflection AI has secured a compute cluster and hopes to release a frontier language model next year trained on “tens of trillions of tokens,” the company told TechCrunch.

“We have built something once thought possible only in the world’s top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale,” Reflection AI wrote in a post on X. “We saw first-hand the effectiveness of our approach when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we now bring these methods to general agentic reasoning.”

MoE refers to a particular architecture underpinning frontier LLMs – systems that, until recently, only large, closed AI labs could train at scale. DeepSeek had a breakthrough moment when it figured out how to train these models at scale in the open, followed by Qwen, Kimi, and other Chinese labs.
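The core idea behind a Mixture-of-Experts layer is that a lightweight router sends each token to only a few small "expert" networks instead of one giant feed-forward block, so most parameters sit idle on any given token. The sketch below is a minimal, illustrative top-k MoE layer in NumPy; all dimensions and the ReLU expert design are hypothetical choices for demonstration, not details of Reflection AI's (or DeepSeek's) actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, tiny for clarity).
d_model, d_hidden, n_experts, top_k = 8, 16, 4, 2

# Each "expert" is a small two-layer feed-forward network.
experts = [
    (rng.standard_normal((d_model, d_hidden)) * 0.1,
     rng.standard_normal((d_hidden, d_model)) * 0.1)
    for _ in range(n_experts)
]
# The router scores every token against every expert.
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        weights = np.exp(logits[t, chosen])
        weights /= weights.sum()                   # softmax over chosen experts only
        for w, e in zip(weights, chosen):
            w1, w2 = experts[e]
            out[t] += w * (np.maximum(x[t] @ w1, 0.0) @ w2)  # ReLU expert
    return out

tokens = rng.standard_normal((5, d_model))
y = moe_layer(tokens)
print(y.shape)  # (5, 8)
```

Only `top_k` of the `n_experts` networks run per token, which is why MoE models can hold very large total parameter counts while keeping per-token compute modest – the property that makes frontier-scale training feasible.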

“DeepSeek, Qwen and all these models are a wake-up call for us, because if we don’t do something about it, the reality is that the global intelligence standard will be built by someone else,” Laskin said. “It won’t be built by America.”

Laskin added that this puts the U.S. and its allies at a disadvantage, because companies and sovereign states are often reluctant to use Chinese models due to potential legal consequences.

“So you can either live at a competitive disadvantage or rise to the occasion,” Laskin said.

American technologists largely celebrated Reflection AI’s new mission. David Sacks, the White House AI and Crypto Czar, posted in October: “It’s great to see more open-source U.S. AI models. A significant segment of the global market will prefer the cost, customization, and control that open source software offers. We want the U.S. to win in this category, too.”

Clem Delangue, co-founder and CEO of Hugging Face, an open and collaborative platform for AI developers, told TechCrunch of the round: “This is indeed great news for US open-source AI.” Delangue added: “The challenge now will be to demonstrate the high speed of release of open AI models and datasets (similar to what we see from the leading open-source AI labs).”

Reflection AI’s definition of “open” appears to center on access rather than development, similar to Meta’s strategy with Llama or Mistral’s. Laskin said Reflection AI will make model weights – the fundamental parameters that determine how an AI system behaves – available for public use, while keeping its datasets and full training pipelines largely proprietary.

“Actually, the most impactful thing is the model weights, because anyone can take them and start tinkering with them,” Laskin said. “The infrastructure stack is something only a select few companies could really make use of.”

This balance is also at the heart of Reflection AI’s business model. Laskin said researchers will be free to use the models, while revenue will come from large enterprises building products on Reflection AI’s models and from governments developing “sovereign AI” systems – AI models developed and controlled by individual countries.

“Once you get into a large enterprise, by default you want an open model,” Laskin said. “You want something you can own. You can run it on your infrastructure. You can control its costs. You can adapt it to different workloads. Because you’re paying some ungodly amount of money for AI, you want to be able to optimize it as much as possible, and that’s really the market we serve.”

According to Laskin, Reflection AI has not yet released its first model, which will likely be largely text-based at first, with multimodal capabilities to follow. Funding from the latest round will go toward the computing resources needed to train new models, the first of which the company intends to bring to market early next year.

Investors in Reflection AI’s latest round include Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV and others.
