French startup FlexAI is coming out of stealth with a $30 million investment to make AI computing easier to access

The French startup has raised a sizeable seed investment to “redesign the compute infrastructure” for developers looking to build and train AI applications more efficiently.

FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is officially launching on Wednesday with €28.5 million ($30 million) in funding and announcing its first product: an on-demand cloud service for AI training.

That’s quite a sizeable sum for a seed round, which normally means substantial founder pedigree, and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and current AI darling Nvidia before landing in various senior engineering and architecting roles at Apple; Tesla (working directly under Elon Musk); Zoox (before Amazon acquired the autonomous vehicle startup); and most recently, Tripathi was VP of AXG, Intel’s AI and supercomputing platform group.

FlexAI co-founder and CTO Dali Kilani also has an impressive résumé, having held various technical roles at companies including Nvidia and Zynga, and most recently serving as CTO of French startup Lifen, which develops digital infrastructure for the healthcare industry.

The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.

FlexAI team in Paris

A computational puzzle

To understand what Tripathi and Kilani are attempting with FlexAI, it’s worth first understanding what AI developers and practitioners face when it comes to accessing “compute”; this refers to the processing power, infrastructure, and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.

“Using any infrastructure in the AI space is complex; it is not for the faint of heart or for the inexperienced,” Tripathi told TechCrunch. “Too much knowledge is required about building infrastructure before it can be used.”

And the public cloud ecosystem that has evolved over the past couple of decades is a great example of how an industry emerged from developers’ need to build applications without worrying too much about the back end.

“If you’re a small developer and you want to write an application, you don’t need to know where it’s running or what the backend is — you just spin up an EC2 (Amazon Elastic Compute Cloud) instance and you’re good to go,” Tripathi said. “You can’t do that with AI computing today.”
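For a sense of the simplicity Tripathi is referring to, here is a minimal sketch of “spinning up an EC2 instance” with AWS’s boto3 SDK. It assumes credentials are already configured, and the AMI ID and region are placeholders:

```python
# Minimal sketch: launching a general-purpose cloud instance with boto3.
# Assumes AWS credentials are configured; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",          # small general-purpose instance
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])  # instance is now provisioning
```

A dozen lines, with no knowledge of the underlying data center required; the point of the contrast is that no equivalently simple call exists today for multi-GPU AI training.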

In the realm of artificial intelligence, developers have to figure out how many GPUs (graphics processing units) they need, connected over what type of network, and managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or the network fails, or if anything in that chain goes wrong, the onus is on the developer to fix it.

“We want to bring AI computing infrastructure to the same level of simplicity that general-purpose cloud has achieved — 20 years later, yes, but there’s no reason why AI computing couldn’t deliver the same benefits,” Tripathi said. “We want to get to the point where running AI workloads doesn’t require becoming a data center expert.”

With the current version of its product being tested by a handful of beta customers, FlexAI will launch its first commercial product later this year. Essentially, it’s a cloud service that allows developers to access “virtual heterogeneous compute,” meaning they can run their workloads and deploy AI models across multiple architectures, paying based on usage rather than renting GPUs by the hour.

Graphics processors (GPUs) are vital components in AI development, used, for example, to train and run large language models (LLMs). Nvidia is one of the leading players in the GPU market and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the year since OpenAI launched an API for ChatGPT in March 2023, allowing developers to embed ChatGPT functionality into their own applications, Nvidia’s market capitalization has risen from roughly $500 billion to over $2 trillion.

LLMs are pouring out of the tech industry, and demand for GPUs is skyrocketing with them. But GPUs are expensive to run, and renting them from a cloud provider for smaller jobs or ad hoc use cases doesn’t always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But a rental is still a rental, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.

“Multicloud for artificial intelligence”

FlexAI’s starting point is that most developers don’t really care whose GPUs or chips they use, whether from Nvidia, AMD, Intel, Graphcore, or Cerebras. Their main concern is being able to develop AI and build applications within their budget constraints.

This is where FlexAI’s concept of “universal AI compute” comes into play: FlexAI takes a user’s requirements and allocates them to whatever architecture makes sense for that particular job, taking care of all the necessary cross-platform conversions, whether that’s Intel’s Gaudi infrastructure, AMD’s ROCm, or Nvidia’s CUDA.

“This means that the developer focuses solely on building, training and using models,” Tripathi said. “We deal with everything underneath. We manage failures, recovery and reliability, and you pay for what you use.”
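FlexAI hasn’t published an SDK, so the following is purely a hypothetical sketch of the division of labor Tripathi describes; every name here (submit_training, its parameters, the returned job handle) is invented for illustration:

```python
# Hypothetical sketch only: FlexAI has not published an API, and the names
# below are invented to illustrate the abstraction Tripathi describes,
# in which the developer never specifies the hardware.
def submit_training(
    container_image: str,   # the user's packaged training code
    dataset_uri: str,       # where the training data lives
    max_budget_usd: float,  # spend ceiling; the platform picks the hardware
) -> dict:
    """Stand-in for a platform call: accepts a workload description and
    returns a job handle. Scheduling, hardware selection (Gaudi, ROCm,
    or CUDA), failure recovery, and billing all happen behind this boundary."""
    return {"job_id": "job-123", "status": "queued"}

job = submit_training(
    container_image="registry.example.com/llm-finetune:latest",
    dataset_uri="s3://example-bucket/corpus/",
    max_budget_usd=500.0,
)
print(job["status"])  # the developer sees job state, never the hardware
```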

In many ways, FlexAI intends to fast-track for AI what has already been happening in the cloud, and this means more than replicating the pay-per-use model: it means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.

For example, FlexAI will route a customer’s specific workload depending on what their priorities are. If a company has a limited budget for training and fine-tuning its AI models, it can set that within the FlexAI platform to get the maximum compute bang for its buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small production run that requires the fastest possible output, it can be routed through Nvidia instead.
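As a toy illustration of that trade-off, a budget-aware router might pick the fastest backend whose total job cost still fits the budget. All the backends, prices, and speeds below are invented numbers, not real FlexAI data or routing logic:

```python
# Toy illustration of budget-aware routing across heterogeneous backends.
# Prices and relative speeds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    usd_per_hour: float
    relative_speed: float  # throughput relative to the fastest option

BACKENDS = [
    Backend("intel-gaudi", usd_per_hour=2.0, relative_speed=0.6),
    Backend("amd-rocm", usd_per_hour=3.0, relative_speed=0.8),
    Backend("nvidia-cuda", usd_per_hour=4.0, relative_speed=1.0),
]

def cost(backend: Backend, gpu_hours_at_full_speed: float) -> float:
    # A slower backend needs proportionally more hours for the same job.
    return (gpu_hours_at_full_speed / backend.relative_speed) * backend.usd_per_hour

def route(gpu_hours_at_full_speed: float, budget_usd: float) -> Backend:
    """Pick the fastest backend whose total job cost fits the budget."""
    affordable = [b for b in BACKENDS
                  if cost(b, gpu_hours_at_full_speed) <= budget_usd]
    if not affordable:
        raise ValueError("no backend fits the budget")
    return max(affordable, key=lambda b: b.relative_speed)

# A tight budget lands on the cheaper, slower option; a generous one on the fastest.
print(route(100, budget_usd=350).name)  # -> intel-gaudi (~$333 total)
print(route(100, budget_usd=500).name)  # -> nvidia-cuda ($400 total)
```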

Under the hood, FlexAI is essentially a “demand aggregator”: it rents the underlying hardware itself in the traditional way and uses its “strong connections” with the people at Intel and AMD to secure preferential pricing, which it spreads across its own customer base. This doesn’t necessarily mean side-stepping the dominant Nvidia, but it arguably means that, with Intel and AMD fighting for the GPU scraps left in Nvidia’s wake, there is a huge incentive for them to play ball with aggregators such as FlexAI.

“If I can make it work for customers and bring dozens or even hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.

This contrasts with similar GPU cloud players such as the well-funded CoreWeave and Lambda Labs, which focus exclusively on Nvidia hardware.

“I want AI compute to reach the level where general-purpose cloud computing is today,” Tripathi noted. “You can’t do multicloud in AI. You have to choose the specific hardware, the number of GPUs, the infrastructure, and the connectivity, and then maintain it all yourself. Today, that is the only way to get AI compute done.”

Asked who its exact launch partners are, Tripathi said he couldn’t name all of them due to a lack of “formal commitments” from some of them.

“Intel is a strong partner providing infrastructure, and AMD is an infrastructure partner too,” he said. “But there is a second tier of partnerships with Nvidia and a few other silicon companies that we are not ready to share yet, but they are all in the pipeline and MOUs [memorandums of understanding] are currently being signed.”

The Elon effect

Tripathi is more than equipped to face the challenges ahead, having worked at some of the largest technology companies in the world.

“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, which ended in 2007 when he joined Apple just as it was launching the first iPhone. “At Apple, I focused on solving real customer problems. I was there when Apple started building its first SoCs [systems on chip] for its phones.”

Tripathi also spent two years at Tesla, from 2016 to 2018, as a hardware engineering manager, spending his last six months there working directly under Elon Musk after two people above him abruptly left the company.

“What I learned at Tesla and what I incorporate into my startup is that there are no limits beyond science and physics,” he said. “The way it is done today is not what it should be or how it should be done. You should be guided by what is right, guided by first principles, and for that you should remove all the black boxes.”

Tripathi was involved in Tesla’s transition to making its own chips, a move that has since been emulated by GM and Hyundai, among other automakers.

“One of the first things I did at Tesla was figure out how many microcontrollers there were in a car, and to do that we literally had to sort through a bunch of those big black boxes with metal shielding and casing around them to find these really tiny microcontrollers inside,” Tripathi said. “We ended up laying them out on a table and said, ‘Elon, there are 50 microcontrollers in a car, and we sometimes pay a 1,000-times markup on them because they are shielded and protected in a big metal casing.’ And he said, ‘Let’s go make our own.’ And we did.”

GPUs as collateral

Looking further into the future, FlexAI also has aspirations to build out its own infrastructure, including data centers. Tripathi said this will be financed through debt financing, building on a recent trend that has seen rivals in the space, including CoreWeave and Lambda Labs, use Nvidia chips as collateral for loans rather than giving away more equity.

“Bankers now know how to use GPUs as collateral,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value will not be enough to get us the hundreds of millions of dollars needed to invest in building data centers. If we operated only on equity, we would disappear when the money ran out. But if we actually pledge the GPUs as collateral, they can take them and put them in another data center.”
