Is the AI future closer than originally thought?
I was in high school when “The Terminator” movie was released.
I didn’t go to the theater to see it. I’m not a fan of action movies or science fiction.
Years later, when I was dating the man I eventually would marry, we were switching around TV stations in search of something good to watch when, suddenly, there it was.
The minute I heard the excitement my then-boyfriend exuded about “The Terminator,” I knew immediately what I’d be watching for the next two hours.
If you’ve never seen it, the Terminator is a “Cyborg assassin” disguised as a human who travels back in time from 2029 to 1984 to kill Sarah Connor and prevent the future birth of her son. Sent to protect the woman is a soldier in the “human resistance.” You see, in the movie’s post-apocalyptic future, most of humanity already has been wiped out by a 1997 nuclear war sparked by an artificial intelligence known as “Skynet.” Survivors, led by Sarah Connor’s grown son, are fighting extinction against the sentient computer system’s genocidal war on humanity.
Sounds very romantic and entertaining, I know.
I recall thinking how far-fetched the movie was. But then, isn’t the point of sci-fi to propel you into unlikely futuristic situations?
Brace yourself. The future is now.
Last week, scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning about perils that artificial intelligence poses to humankind.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said a statement reported by The Associated Press.
Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among hundreds of leading figures signing the statement.
Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. That rise has sent countries around the world scrambling to come up with regulations for the developing technology.
“There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority,” said Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the move. “So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other.”
More than 1,000 researchers and technologists, including Elon Musk, signed a much longer letter earlier this year calling for a six-month pause on AI development, saying it poses “profound risks to society and humanity.”
That letter was a response to OpenAI’s release of a new AI model, GPT-4, but leaders at OpenAI, its partner Microsoft and rival Google didn’t sign on and rejected the call for a voluntary industry pause, the AP reported.
Last week’s statement doesn’t propose specific remedies, but some propose an international regulator along the lines of the U.N. nuclear agency.
Granted, some critics complain that the dire warnings about existential risks voiced by the makers of AI have helped hype up the capabilities of their products while distracting from calls for more immediate regulations to rein in their real-world problems.
Hendrycks said there’s no reason why society can’t manage the “urgent, ongoing harms” of products that generate new text or images, while also starting to address “potential catastrophes around the corner.”
He compared it to nuclear scientists in the 1930s warning people to be careful even though “we haven’t quite developed the bomb yet.”
Now, you might recall I recently wrote a column discussing risks that come with AI, particularly related to the journalism industry. I pointed out, however, that despite quickly evolving technology, I doubted robots soon would be driving around gathering first-hand information like our reporters do. I also doubted they would be able to build trust and cultivate news sources allowing them to keep their “fake metal fingers on the pulse of community happenings.”
Yes, I still believe that — for now, at least. But the future just might be closer than I originally thought.