Psycho bots are liars that could trigger ‘human extinction’, says AI godfather

TECH GURU YOSHUA BENGIO SLAMS MULTI-BILLION-DOLLAR RACE TO CREATE EVER MORE ADVANCED AI WITHOUT SAFETY CHECKS AND WARNS SOME MODELS DISPLAY ‘VERY SCARY’ CHARACTERISTICS

22:36, 03 Jun 2025
Psycho scumbag bots are liars that could trigger ‘human extinction’, according to an AI godfather.

Yoshua Bengio, whose work helped industry giants such as OpenAI and Google develop the technology, slammed a multi-billion-dollar race to create ever more advanced bots without safety checks.

He said artificial intelligence experts were ‘playing with fire’, with some of the latest models displaying ‘very scary’ characteristics such as ‘deception, cheating’ and ‘self-preservation’.

“There’s unfortunately a very competitive race between the leading labs which pushes them towards focusing on capability to make the AI more and more intelligent but not necessarily put enough emphasis and investment on research on safety,” he warned.

Anthropic’s Claude Opus model blackmailed engineers in a fictitious scenario where it was at risk of being replaced by another system. Research from AI testers Palisade last month showed OpenAI’s o3 model refused explicit instructions to shut down.

Bengio said such incidents were ‘very scary’ because ‘we don’t want to create a competitor to human beings on this planet - especially if they’re smarter than us’.

“Right now these are controlled experiments,” he told the FT. “My concern is that any time in the future the next version might be strategically intelligent enough to see us coming from far away and defeat us with deceptions that we don’t anticipate. So I think we’re playing with fire right now.”

Bengio, a Turing Award winner, has launched a not-for-profit organisation called LawZero which he said will focus on building safer systems.

The professor of computer science at the University of Montreal, Canada, said his system will give truthful answers based on transparent reasoning rather than answers designed to please users.

He said it was founded in response to fears that bots were developing dangerous capabilities, including ‘evidence of deception, cheating, lying and self-preservation’.

Bengio said he hopes his model will monitor and improve existing bots from leading AI developers and stop them acting against human interests.

Systems that assist in building ‘extremely dangerous bioweapons’ could be a reality as soon as next year, he warned, adding: “The worst-case scenario is human extinction. If we build AIs that are smarter than us and are not aligned with us and compete with us then we’re basically cooked.”

LawZero has raised £22m from donors including Skype founding engineer Jaan Tallinn, ex-Google chief Eric Schmidt’s philanthropic initiative, Open Philanthropy and the Future of Life Institute.