AI is your Friend | TheArticle


Dystopian scenarios whirling around Artificial Intelligence (AI) predict the end of the world, when machines will run amok and turn on us. But such stories refer to the far, far distant future, and there is no evidence for them; they are mere sci-fi.


There are far more immediate concerns – such as AI’s power to spread disinformation and fake news. As the EU competition commissioner, Margrethe Vestager, told the Guardian, “Probably the risk of extinction may exist, but I think the likelihood is quite small. I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are.”


When AI is brought in to decide who can receive a loan, be granted parole or have access to social services, it’s vital that no one suffers discrimination because of gender, colour or postcode biases which AI has leached from the internet.


Back in 2022, dystopian scenarios of machines taking over the world loomed large. But then came Large Language Models like ChatGPT and GPT-4, created by OpenAI, an AI research lab in San Francisco. Two months after ChatGPT was rolled out in late 2022, 100 million people were already using it. ChatGPT can generate human-like conversation, poetry and prose, respond to emails, write movie scripts and even code. What’s more, ChatGPT and its successor GPT-4, rolled out in 2023, can tell you why a joke is funny. This surely is a show-stopper, because to do this the machine has to get the joke.


This and other properties of ChatGPT and GPT-4 give the distinct impression that there’s something going on in them that goes way beyond numbers and probabilities. Just as plenty more goes on in our brains besides numbers and probabilities, as the neurons there fire in response to incoming perceptions, something like creativity can emerge in machines as well as in humans. As a result, these huge artificial neural networks have taken the air out of the doomsday scenarios.


The more machines develop creativity, the more we will be able to communicate with them. Machines have already exhibited creativity, such as when the program AlphaGo defeated the Go world champion in 2016. Although in 2023 a human player took revenge against a top Go-playing AI, he was able to do so only thanks to weaknesses identified by another AI programme.


Humans and machines now share language and can converse. Creativity in machines opens the door to them thinking in a more human way and perhaps even acquiring ethical values.


The big question is, whose ethical values? There is much discussion about regulating machines, particularly after the advent of ChatGPT and GPT-4, which have the ability to create utterly convincing misinformation. Their writings are often tainted with sexual and gender bias absorbed from the web, technical problems which are being addressed.


Personally, I think there is much too much criticism written about AI and not enough about the extraordinary advances and opportunities it is providing us with every day. Artists, musicians, novelists and poets are already using AI, collaborating with machines to further their creative pursuits. We will soon have machines creating art autonomously, composing music and writing operas and plays. Far from being out to get us, machines could well prove to be benevolent companions and friends.


In March 2023, some 27,000 signatories — among them Elon Musk, Sam Altman of OpenAI and Demis Hassabis of Google DeepMind — published an open letter, proposing a six-month moratorium on research on chatbots like GPT-4, in order to allow us to understand them better. The language was broad and there were conflicting relationships, with Altman speculating on what comes after GPT-4 and Musk in the process of building his own AI start-up. There were presidential statements, congressional committees and US senators were urged to get up to speed on AI.


But the real question is, who exactly will agree to a moratorium on AI research? The Chinese surely won’t, among other nations, and nor will the giants of AI who thrive on competition and need to reward their shareholders. Who will be on the overseeing committees to regulate AI and who will evaluate their findings? AI is the very symbol of capitalism and it will be hard to regulate the regulators. Regulation would involve revealing trade secrets to outsiders, while self-regulation implies the difficult concept of self-censorship.


Emily Bender, a prominent AI specialist, complained that the letter was “dripping with #AIHype,” while Andrew Ng, a leading AI researcher, wrote bluntly, “There is no realistic way to implement a moratorium.”


At the moment Altman is on a publicity tour for AI, a regular procedure among soon-to-be regulated companies asserting their freedom to innovate over the interests of society. Geoffrey Hinton, the godfather of AI, recently left his position at Google so that he can criticise AI practices freely. We await what he has to say.


The real problem with AI is not that it’s out to get us, but that there’s not enough of it. It is a key part of the internet of things, driving household appliances and driverless cars and supporting hospitals, medical diagnoses and medical research, including the development of the Covid vaccines. In a recent study of 195 questions frequently asked of doctors, evaluators preferred the chatbot’s responses to those of human doctors 79 per cent of the time: perhaps not unexpected, owing to its huge database, but, strikingly, its answers were rated empathetic nearly ten times as often.


Perhaps there’s something going on in these machines we are not yet aware of. As the great scientist Arthur Stanley Eddington wrote in another context, “Something unknown is doing we don’t know what.”


The horse is out of the stable. We will have to come to terms with AI, and quickly. One way is through education, publishing books on AI’s past and present and speculating soberly on its future. There is also a need for a body akin to the International Atomic Energy Agency that will promote the safe, secure and peaceful use of Artificial Intelligence. Crucially, as we learn to live with AI we will come to see it, not as a threat, but as a friend.


Arthur I. Miller is an emeritus professor at UCL (arthurimiller.com). His most recent book is The Artist in the Machine: The World of AI-Powered Creativity (MIT Press).

