Silicon Valley was once where the world’s leading artificial intelligence experts went to pursue innovative research, but experts warn that things have changed, and AI is now all about the product. Researchers and industry insiders say that since OpenAI released ChatGPT in late 2022, the tech industry has prioritized commercialization over research, racing to build consumer-ready AI products.
Analysts at Morgan Stanley predict the AI industry could reach $1 trillion in annual revenue by 2028. Industry experts said they are concerned about safety as leading players pursue artificial general intelligence (AGI).
AI research takes a backseat to profits as Silicon Valley prioritizes products over safety
OpenAI, founded as a nonprofit research lab in 2015, is now in the midst of an effort to transform into a for-profit entity, according to its CEO, Sam Altman. Jan Leike, OpenAI’s former head of alignment, whose team focused on AI safety, resigned last year and said that over the past years, “safety culture and processes have taken a backseat to shiny products.”
James White, chief technology officer at cybersecurity startup Calypso, argued that tech companies are taking more and more shortcuts when it comes to rigorously testing their AI models before releasing them to the public. He noted that newer AI models trade security for quality, meaning better responses from the AI chatbots.
“The models are getting better, but they’re also more likely to be good at bad stuff. It’s easier to trick them to do bad stuff.”
– James White, Chief Technology Officer at Calypso.
In March, Google seemed to prioritize product over safety when it released its latest AI model, Gemini 2.5, calling it the firm’s most intelligent model yet. The company failed to publish a model card for Gemini 2.5 for weeks, meaning it didn’t share information about how the model worked, its limitations, or its potential dangers at release.
The firm acknowledged in an April 2 blog post that it evaluates its most advanced models for potentially dangerous capabilities “prior to their release.” Google later updated the post and removed those words.
The company published an incomplete model card on April 16 and updated it on April 28, more than a month after the AI model’s release, to include information about Gemini 2.5’s dangerous-capability evaluations.
AI tech companies shift from research toward revenue-generating products
Former Meta employees said they weren’t surprised when Joelle Pineau, a Meta vice president and head of the company’s FAIR (Fundamental AI Research) division, announced last month that she would leave her post on May 30. They said they viewed the departure as confirming the company’s move away from AI research and toward building practical products.
Pineau has led FAIR since 2023. According to Meta employees, the division allowed small research teams to work on various bleeding-edge technologies that might or might not pan out. The shift started when the tech firm laid off roughly 21,000 employees, nearly a quarter of its workforce, beginning in late 2022. Former employees familiar with the matter said FAIR researchers were directed to work more closely with product teams as part of the cost-cutting measures.
Two people familiar with the matter said that earlier this year, one of FAIR’s directors, Kim Hazelwood, left the company as part of Meta’s plan to cut 5% of its workforce. The people noted that OpenAI’s launch of ChatGPT created a sense of urgency at Meta to pour more resources into large language models (LLMs).
The individuals also said that Meta Chief Product Officer Chris Cox has been overseeing FAIR to bridge the gap between research and the product-focused GenAI group. They added that under Cox, GenAI has been siphoning more computing resources and team members from FAIR because of its elevated status at the company.
Last month, Meta released security and safety tools for developers to build apps with the company’s Llama 4 AI models. Meta said that the tools help mitigate the chances of Llama 4 unintentionally leaking sensitive information or producing harmful content.