'We know how this ends': Twitter and '60 Minutes' host spooked by AI's 'mysterious' capabilities

On March 22nd, 2023, the Future of Life Institute published an open letter calling for a minimum six-month pause on the training of all artificial intelligence systems "more powerful than GPT-4."
More than thirty thousand people have signed on to the initiative, which states:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that 'At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.' We agree. That point is now.
FLI's warning has struck a chord. AI's startling new capabilities were the focus of Sunday's edition of 60 Minutes, which zeroed in on an unexpected linguistic talent demonstrated by Google's AI chatbot, Bard.
"Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren't expected to have. How this happens is not well understood. For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know," CBS News Chief White House Correspondent Scott Pelley began.
"We discovered that with very few amounts of prompting in Bengali, it could now translate all of Bengali. So now all of a sudden we now have a research effort where we are now trying to get to a thousand languages," said James Manyika, Google's first senior vice president of technology and society.
Google Chief Executive Officer Sundar Pichai explained that "there is an aspect of this, which we call, uh, all of us in the field, call it as a black box. You know, you don't fully understand, and you can't quite tell why it said this or why it got it wrong. We have some ideas, and our ability to understand this gets better over time, but that's where the state of the art is."
Pelley had an "unsettling" realization.
"You don't fully understand how it works, and yet, you've turned it loose on society?" Pelley asked.
"Yeah. Let me put it this way. I don't think we fully understand how a human mind works either," Pichai replied.
"Was it from that black box, we wondered, that Bard drew its short story that seems so disarmingly human?" Pelley noted in a voiceover.
He then pointed out to Pichai that Bard "talked about the pain that humans feel. It talked about redemption." That raised the question, "How did it do all of those things if it's just trying to figure out what the next right word is?"
According to Pichai, the answer is complicated.
"I mean, I've had these experiences, uh, talking with Bard as well. There are two views of this. You know, there are a set of people who view this as, look, these are just algorithms. They're just repeating what it's seen online. Then there is the view where these algorithms are showing emergent, properties to be creative, to reason, to plan, and so on," he said, cautioning that "personally I think we need to be, uh, we need to approach this with humility. Part of the reason I think it's good that some of these technologies are getting out is so that society, you know, people like you and others can process what's happening. And we begin this conversation and debate and I think it's important to do that."
Like Pelley, social media users were unnerved, recalling the outcomes of science fiction movies.
Luke Zaleski: "We're all gonna die. Yay."
John "Vaccinated 5G Hotspot" Grossund: "We're not quite sure what it is doing, or how it works, but we're certain it is absolutely safe and no harm will come to us."
Mattison: "AI will find the film 'Terminator,' learn from it, and SkyNet will be born. Smarter."
Sarah Burgess: "What could go wrong? We know how this ends."
Ray_ing in Richmond: "At what point are we going to call it Skynet?"
Yassine Mrabet: "Clarification: current AI systems are not yet sentient, and they cannot become sentient on their own. They always need to be instructed what to do. That said, we're soon reaching the stage where they can be 'instructed to become sentient' by a human."
Benji: "'We don't fully understand how a human mind works either' but humans aren't processing all of human knowledge and history every second and leveraging that information with the speed of computers."
Chris: "This movie never ends well."
Watch the segment below.
"One AI program spoke in a foreign language it was never trained to know. This mysterious behavior, called emergent properties, has been happening – where AI unexpectedly teaches itself a new skill. https://t.co/v9enOVgpXT" — 60 Minutes (@60Minutes) April 16, 2023