Choosing an AI model is as much a strategic decision as a technical one, and the choice between open, closed and hybrid models comes with trade-offs.
At this year's VB Transform, model architecture experts from General Motors, Zoom and IBM discussed how their companies and customers think about AI model selection.
Barak Turovsky, who became GM's first chief AI officer in March, said that each new model release generates a lot of noise and reshuffles the leaderboard. Long before LLMs were a mainstream debate, Turovsky helped launch the first large language model (LLM), and he recalled how open-sourcing AI weights and training data led to major breakthroughs.
"It was probably one of the biggest breakthroughs that helped OpenAI and others start launching," said Turovsky. "So it's actually a funny anecdote: open source helped create something that became closed, and now it may be going back to being open."
The factors behind these decisions vary and include cost, performance, trust and safety. Turovsky said enterprises sometimes prefer a mixed strategy, using an open model for internal use and a closed model in production and for customers, or vice versa.
IBM's AI strategy
Armand Ruiz, VP of IBM's AI platform, said IBM initially launched its platform with its own LLMs, but then realized that wouldn't be enough, especially as more powerful models arrived on the market. The company then expanded to offer integrations with platforms such as Hugging Face so customers could choose any open-source model. (The company recently debuted a model gateway that gives enterprises an API to switch between LLMs.)
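The article doesn't show IBM's actual gateway interface, but the pattern it describes is straightforward: one client-facing call that routes requests to interchangeable LLM backends. A minimal sketch, with all names and interfaces invented for illustration:

```python
# Hypothetical sketch of a "model gateway": register LLM backends under
# string IDs, then dispatch to them through a single call. Names and
# interfaces are illustrative, not IBM's actual API.

from typing import Callable, Dict


class ModelGateway:
    """Routes completion requests to whichever backend is named."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, model_id: str, backend: Callable[[str], str]) -> None:
        self._backends[model_id] = backend

    def complete(self, model_id: str, prompt: str) -> str:
        # Switching models becomes a change of identifier at the call site,
        # not a code change in the application.
        if model_id not in self._backends:
            raise KeyError(f"unknown model: {model_id}")
        return self._backends[model_id](prompt)


# Stand-ins for real API clients (e.g. an open-source model vs. a closed one).
gateway = ModelGateway()
gateway.register("open-model", lambda p: f"[open] {p}")
gateway.register("closed-model", lambda p: f"[closed] {p}")

print(gateway.complete("open-model", "Summarize Q3 results"))
```

The point of the indirection is the one Ruiz describes: teams can run a proof of concept against one model and move production traffic to another without rewriting application code.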
More enterprises are choosing multiple models from multiple vendors. When Andreessen Horowitz surveyed 100 CIOs, 37% of respondents said they use five or more models, up from 29% a year earlier.
Choice is crucial, but too much choice can cause confusion, said Ruiz. To help customers with their approach, IBM doesn't worry about which LLM they use during the proof-of-concept or pilot phase; the main goal is feasibility. Only later does it look at whether to distill a model or customize one based on the client's needs.
"First, we try to simplify all that analysis paralysis with all these options and focus on the use case," said Ruiz. "Then we figure out the best path for production."
How Zoom is approaching AI
Zoom customers can choose between two configurations for its AI Companion, said Zoom CTO Xuedong Huang. One federates the company's own LLM with other, larger foundation models. The other lets customers concerned about using too many models rely on Zoom's model alone. (The company also recently partnered with Google Cloud to adopt an agent-to-agent protocol for AI Companion in enterprise workflows.)
Huang said the company built its own small language model (SLM) without using customer data. At 2 billion parameters, the model is very small, yet it can still outperform other industry models. The SLM works best on complex tasks when paired with a larger model.
"That's really the power of a hybrid approach," said Huang. "Our philosophy is very simple. Our company is very much like Mickey Mouse and an elephant dancing together. The small model will perform a very specific task. We're not saying the small model will be good enough on its own ... Mickey Mouse and the elephant will work as one team."
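Zoom hasn't published its federation logic, but the hybrid pattern Huang describes can be sketched as a router: the small model handles the narrow task it was built for, and harder inputs escalate to a larger model. The routing rule (word count) and both model stubs below are invented purely for illustration:

```python
# Toy sketch of a hybrid SLM/LLM setup: a small model handles simple,
# specific requests; a larger model backs it up on harder ones. The
# complexity heuristic and model stubs are assumptions, not Zoom's design.

def small_model(prompt: str) -> str:
    """Stand-in for a ~2B-parameter specialist model."""
    return f"SLM answer to: {prompt}"


def large_model(prompt: str) -> str:
    """Stand-in for a larger foundation model."""
    return f"LLM answer to: {prompt}"


def companion(prompt: str, max_small_words: int = 8) -> str:
    # Crude complexity proxy: word count. A real system would route on
    # task type, confidence scores, or a learned classifier instead.
    if len(prompt.split()) <= max_small_words:
        return small_model(prompt)
    return large_model(prompt)


print(companion("Mute my microphone"))        # short request, stays on the SLM
print(companion(" ".join(["word"] * 20)))     # long request, escalates to the LLM
```

The design choice this illustrates is the one in Huang's metaphor: the two models act as one team, with the cheap specialist taking the common cases and the expensive generalist taking the rest.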
