
Choosing the right AI model is not just a technical challenge but also a strategic one, as highlighted by experts from General Motors, Zoom, and IBM during this year's VB Transform conference. The discussion focused on the trade-offs associated with open, closed, and hybrid AI models, which have become crucial in enterprise applications.

Barak Turovsky, GM's inaugural chief AI officer, emphasized the overwhelming noise surrounding new model releases and shifting leaderboards. Turovsky, who played a pivotal role in launching the first large language model (LLM), reflected on how open-sourcing AI model weights and training data has spurred significant advancements. He stated, "Open-source has been a catalyst for breakthroughs, paving the way for companies like OpenAI." He also noted that enterprises often adopt a mixed strategy, utilizing open models for internal operations while reserving closed models for production or customer-facing applications.

Armand Ruiz, IBM's VP of AI platform, shared how IBM initially focused on its proprietary LLMs but soon recognized the need for diversification as more powerful models emerged. The company has since integrated with platforms like Hugging Face, allowing clients to select from a variety of open-source models. IBM's new model gateway provides enterprises with an API to seamlessly switch between different LLMs.

The trend of enterprises leveraging multiple models is on the rise. A survey by Andreessen Horowitz revealed that 37% of 100 CIOs reported using five or more models, a noticeable increase from 29% the previous year. While having options is important, Ruiz cautioned that excessive choice can lead to confusion. To streamline the process, IBM focuses on feasibility during the proof-of-concept phase, only later determining whether to customize a model for a client's specific requirements.
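The gateway pattern Ruiz describes can be illustrated with a minimal sketch: a single call signature in front of interchangeable model backends, so that switching providers is a configuration change rather than a code rewrite. All names here are hypothetical for illustration; this is not IBM's actual gateway API.

```python
from typing import Callable, Dict

class ModelGateway:
    """Registry of LLM backends exposed behind one call signature.

    Illustrative sketch only; a production gateway would also handle
    auth, retries, rate limits, and streaming.
    """

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self._backends[name] = backend

    def complete(self, model: str, prompt: str) -> str:
        if model not in self._backends:
            raise KeyError(f"unknown model: {model}")
        return self._backends[model](prompt)

# Stand-in backends; in practice these would wrap real provider SDKs.
gateway = ModelGateway()
gateway.register("open-model", lambda p: f"[open] {p}")
gateway.register("closed-model", lambda p: f"[closed] {p}")

# Swapping models is a one-argument change at the call site.
print(gateway.complete("open-model", "summarize Q3 results"))
print(gateway.complete("closed-model", "summarize Q3 results"))
```

Because every backend satisfies the same interface, an enterprise can prototype on an open model and move a workload to a closed model (or back) without touching application code.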
Zoom's CTO, Xuedong Huang, discussed the company's AI Companion, which offers two configurations: one that federates Zoom's own LLM with larger foundation models, and another that lets clients use Zoom's model exclusively. Huang noted that the company developed a small language model (SLM) without relying on customer data. Though it has only 2 billion parameters, it can outperform many industry-specific models, particularly when paired with a larger model. Huang remarked, "This hybrid approach exemplifies our philosophy, where the small model and larger model work together effectively."
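The federated setup Huang describes can be sketched as a simple router: a small local model handles routine prompts, escalating harder ones to a larger foundation model. The routing heuristic and model stubs below are assumptions for illustration, not Zoom's implementation.

```python
def small_model(prompt: str) -> str:
    # Stand-in for a ~2B-parameter SLM served locally (assumed).
    return f"slm:{prompt}"

def large_model(prompt: str) -> str:
    # Stand-in for a larger foundation model behind an API (assumed).
    return f"llm:{prompt}"

def answer(prompt: str, word_threshold: int = 12) -> str:
    """Route short, routine prompts to the SLM; escalate the rest.

    A toy length heuristic stands in for what would realistically be a
    learned router or a confidence score from the small model.
    """
    if len(prompt.split()) <= word_threshold:
        return small_model(prompt)
    return large_model(prompt)

print(answer("draft a short meeting recap"))
```

The appeal of this design is cost and privacy: most traffic never leaves the small model, and the large model is paid for only when the task demands it.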