The Guardian view on AI: safety staff departures raise worries about industry pursuing profit at all costs
Editorial | Sun 15 Feb 2026 17.25 GMT
Cash-hungry Silicon Valley firms are scrambling for revenue. Regulate them now before the tech becomes too big to fail.

Hardly a month passes without an AI grandee cautioning that the technology poses an existential threat to humanity.
Many of these warnings are hazy or naive; others may be self-interested. Calm, level-headed scrutiny is needed. Some, though, are worth taking seriously.
Last week, several notable front-line AI safety researchers quit, warning that firms chasing profits are sidelining safety and pushing risky products. In the near term, this points to a rapid "enshittification" in pursuit of short-term revenue. Without regulation, public purpose gives way to profit. Surely AI's expanding role in government and daily life, as well as its billionaire owners' desire for profits, demands accountability.
The choice to use agents, or chatbots, as the main consumer interface for AI was primarily commercial. The appearance of conversation and reciprocity promotes deeper user interaction than a Google search bar. The OpenAI researcher Zoë Hitzig has warned that introducing ads into that dynamic risks manipulation. OpenAI says ads do not influence ChatGPT's answers. But, as with social media, they may become less visible and more psychologically targeted, drawing on extensive private exchanges.

It is worth noting that Fidji Simo, who built Facebook's ad business, joined OpenAI last year. And OpenAI recently fired its executive Ryan Beiermeister for "sexual discrimination". Several reports say she had strongly opposed the rollout of adult content. Together, these moves suggest that commercial pressures are shaping the firm's direction, and probably that of the wider industry. The way Elon Musk's Grok AI tools were left active long enough to generate misuse, then restricted behind paid access before finally being halted after investigations in the UK and EU, raises questions about monetising harm.
It is harder to evaluate the more specialised systems being built for social purposes such as education and government. But since the frenetic pursuit of profit tends to introduce irresistible bias into every human system we have, the same will be true of AI.
This is not a problem confined to a single company. A vaguer resignation letter from the Anthropic safety researcher Mrinank Sharma warned of a "world in peril", and that he had "repeatedly seen how hard it is to truly let our values govern our actions". OpenAI began as an ostensibly non-profit venture; after it committed to commercialisation in 2019, Anthropic emerged, promising to be the safer, more cautious alternative. Mr Sharma's departure suggests that even firms founded on restraint are struggling to resist the same pull of profits.
The cause of this realignment is clear. Firms are burning through investment capital at historic rates, their revenues aren't growing fast enough and, despite impressive technical results, it is not yet clear what AI can "do" to generate profits. From tobacco to pharmaceuticals, we have seen how profit incentives can distort judgment. The 2008 financial crisis showed what happens when essential systems are driven by short-term needs and weak oversight.
Strong state regulation is needed to solve this problem. The 2025 Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which set out principles around inclusion, sustainability, human rights and international cooperation, was endorsed by 60 countries. However, the US and UK declined to sign it. That is a worrying sign that they are choosing to shield the industry rather than bind it.
This article was amended on 16 February 2026. An earlier version referred to the 2026 International AI Safety Report, when the intended reference was to the 2025 Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
