Alchemist Accelerator has a new batch of AI companies demoing their wares today, if you'd like to watch, and the program itself is expanding internationally to Tokyo and Doha. Read on for our picks from the cohort.
Speaking with Alchemist CEO and founder Ravi Belani ahead of demo day (today at 10:30 a.m. PT), one thing was clear about this cohort: AI startup ambitions have become more modest, and that's not a bad thing.
No early-stage startup today is likely to become the next OpenAI or Anthropic; those companies' lead in core large language models is simply too great.
“The cost of building a foundational LLM is too high; you're putting in hundreds of millions of dollars to get one out. The question is: how do you compete as a startup?” Belani said. “VCs don't want a wrapper around an LLM. We look for companies where there's a vertical play, where they own the end user, and where there's also a network effect and time dependency.”
And that's how I read this batch: the companies chosen all have very specific applications. They use AI, but each one solves a particular problem in a particular domain.
Healthcare is one example: AI models that help with diagnosis, care planning, and so on are being tested more and more, if still fastidiously. The specter of liability and bias hangs over this highly regulated industry, but it is also full of outdated processes that could be replaced to real, tangible benefit.
Equality AI isn't trying to revolutionize cancer care or anything like that; the goal is to make sure the models a provider deploys don't violate the anti-discrimination protections required by AI regulation. This is a serious risk: if your care or diagnosis model is found to show bias against a protected class (for example, attributing greater risk to a Muslim or a queer person), it could sink your product and expose you to lawsuits.
Do you want to trust the model's maker or vendor on that? Or would you rather have a disinterested (in the original sense of the word: without conflicting interests) specialist who knows the policy inside and out and knows how to properly evaluate a model?
“We all deserve the right to trust that the artificial intelligence behind the medical curtain is safe and effective,” CEO and founder Maia Hightower told TechCrunch. “Healthcare leaders are struggling to keep pace with a complex regulatory environment and rapidly changing artificial intelligence technology. Over the next few years, AI compliance and litigation risks will continue to increase, leading to widespread adoption of responsible AI practices in healthcare. The risk of non-compliance and penalties as severe as loss of certification make our solution very timely.”
It's a similar story at Cerevox, which works to eliminate hallucinations and other errors from today's LLMs. Not just in a general sense, though: it works with companies to structure their pipelines and data so that the models' bad habits can be minimized and monitored. The point isn't to stop ChatGPT from inventing a physicist when you ask it about a nonexistent 19th-century discovery, but to stop a risk-assessment engine from extrapolating from data in a column that ought to exist but doesn't.
The company is partnering first with fintech and insurance firms, which Belani admitted is “an unsexy use case, but it's a way to build a product” with paying customers, which is, you know, how a business gets started.
Quickr Bio operates in the fast-moving world of biotech built on CRISPR-Cas9 gene editing, which brings new risks as well as new opportunities. How do you verify that the edits you made are the right ones? A 99% certainty isn't enough (again: regulation and liability), but the testing needed to raise that certainty can be slow and expensive. Quickr claims its method of quantifying and characterizing the actual modifications made (as opposed to the theoretical ones; ideally the two are identical) is up to 100 times faster than existing methods.
In other words, the company isn't creating a new paradigm, just trying to be the best tool for shoring up the existing one. If it delivers even a meaningful fraction of its claimed performance, it could become a fixture in many labs.
You can check out the rest of the cohort here; you'll see that the companies above are representative of the overall trend. Presentations begin at 10:30 a.m. Pacific time.
As for the program itself, the new Tokyo and Doha outposts are launching with considerable backing.
“We think this is a turning point in Japan. It's going to be an exciting place that companies will want to come to,” Belani said. A recent change in tax policy should unlock capital for early-stage startups, and investment fleeing China is heading to Japan, especially Tokyo, where he believes a new (or somewhat refurbished) tech hub can be built. The fact that OpenAI is building a satellite office there tells you pretty much all you need to know, he suggested.
Mitsubishi is investing through one arm or another, and the Japan External Trade Organization is also involved. I'd certainly love to see what a reawakened Japanese startup economy produces.
Alchemist Doha has a $13 million commitment from the government there, and it comes with an interesting twist.
“The effort there is focused on founders in emerging markets, which is the 90% of the world that has been orphaned by a lot of technological innovation,” Belani said. “We found that some of the best companies in the U.S. are not from the U.S. There's something about having an outside perspective that creates amazing companies. There is also a lot of instability, and this talent needs a home.”
He noted that the program may make larger investments, ranging from $200,000 up to $1 million, which could change the type of companies that participate.