The tech giants have an interest in AI regulation

It is a way of holding back open-source proliferation

ChatGPT is an example of “generative” AI, which creates humanlike content based on its analysis of texts, images and sounds. PHOTO: REUTERS
One of the joys of writing about business is that rare moment when you realise conventions are shifting in front of you. It sends a shiver down the spine. Vaingloriously, you start scribbling down every detail of your surroundings, as if you were drafting the opening lines of a bestseller.

It happened to your columnist recently in San Francisco, sitting in the pristine offices of Anthropic, a darling of the artificial intelligence (AI) scene. When Mr Jack Clark, one of Anthropic’s co-founders, drew an analogy between the Baruch Plan, a (failed) effort in 1946 to put the world’s atomic weapons under United Nations control, and the need for global coordination to prevent the proliferation of harmful AI, there was that old familiar tingle. When entrepreneurs compare their creations, even tangentially, to nuclear bombs, it feels like a turning point.
