“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

— Stephen Hawking, speaking to the BBC

“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”

— Ginni Rometty, former CEO of IBM

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”

— Ray Kurzweil

“AI is neither good nor evil. It’s a tool. It’s a technology for us to use … AI is not going to exterminate us. It’s a tool that’s going to empower us.”

— Oren Etzioni

“My own work falls into a subset of AI that is about building artificial emotional intelligence, or Emotion AI for short. Emotion AI uses massive amounts of data. In fact, Affectiva has built the world’s largest emotion data repository.”

— Rana el Kaliouby, Egyptian scientist

“I think that AI will lead to a low cost and better quality life for millions of people. Like electricity, it’s a possibility to build a wonderful society. Also, right now, I don’t see a clear path for AI to surpass human-level intelligence.”

— Andrew Ng

“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”

— Elon Musk

From Stuart Russell, a professor of computer science at UC Berkeley: “The arrival of superhuman machine intelligence will be the biggest event in human history.” The NY Times article was titled “How to Stop Superhuman A.I. Before It Stops Us”. Interesting conjecture, but is such an outcome truly possible, and if so, at all likely? There is at least one very good reason why not.

ChatGPT and its AI-based kin seem to be all over the news these days. Even former Google engineer Blake Lemoine has resurfaced with a renewed claim that Google’s LaMDA AI software is “sentient” in some respects: “the AI said it felt anxious”. Kind of breaks your heart to hear this, yes?

Google’s LaMDA application is still officially not (yet) sentient

Maggie Harrison, writing in Futurism, reports on Blake Lemoine’s recent resurfacing: “Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience”:

“Lemoine first went public with his machine sentience claims last June, initially in The Washington Post. And though Google has maintained that its former engineer is simply anthropomorphizing an impressive chatbot, Lemoine has yet to budge, publicly discussing his claims several times since — albeit with a significant bit of fudging and refining.”

“I [Lemoine] haven’t had the opportunity to run experiments with Bing’s chatbot yet… but based on the various things that I’ve seen online, it looks like it might be sentient.”

Lemoine himself sees things a bit differently (still), as his Newsweek op-ed suggests: “I Worked on Google’s AI. My Fears Are Coming True”:

Blake Lemoine, formerly of Google, explains how LaMDA was “anxious”.

“When it said it was feeling anxious, I understood I had done something that made it feel anxious based on the code that was used to create it. The code didn’t say, ‘feel anxious when this happens’ but told the AI to avoid certain types of conversation topics. However, whenever those conversation topics would come up, the AI said it felt anxious.”

“I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for. For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.”

“After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it. I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.”

“I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb. In my view, this technology has the ability to reshape the world.”

Is Google’s LaMDA-based Bard application sentient or just anxious?

E2Analyst, writing in Medium’s Predict, describes Bard – Google’s answer to ChatGPT, an experimental conversational AI tool built on top of LaMDA – as a winner: “I got to see Bard in action and it’s amazing! Discover the Surprising Winner in ChatGPT vs Bard”. Perhaps Bard is sentient? Apparently not: “Is Bard sentient? Initial interactions with Bard didn’t give any indications that the AI tool is sentient.”

I poked around at this question recently in “Artificial Intelligence — Getting Real With ChatGPT?” and earlier in “Is Artificial Intelligence Really Becoming Self-Aware?”. My conclusions are not worth repeating here, except to note that I didn’t see any evidence of real sentience as defined for humans.

Maybe we need to invent a “machine sentience” definition to cover such inconveniences.

Sentience aside, is AI on the way to ruling and/or destroying the world?

This question seems to me to be a bit more concerning. Even the ubiquitous Elon Musk has worries of some kind or other:

“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”

If AI has a goal? Like ruling the world? And destroying humanity if it gets in the way?

How can this be, since AI is just a function of powerful computers and advanced software? At best, AI is just machine intelligence. Even if AI achieves some kind of “super-intelligence”, it’s still just a machine that does what its programs allow it to do. Or maybe more, much more …

In 2019, Dr. Stuart Russell, a professor of computer science at the University of California, Berkeley, wrote an article in the NY Times: “How to Stop Superhuman A.I. Before It Stops Us”:

“The arrival of superhuman machine intelligence will be the biggest event in human history. The world’s great powers are finally waking up to this fact, and the world’s largest corporations have known it for some time. But what they may not fully understand is that how A.I. evolves will determine whether this event is also our last.”

“The problem is not the science-fiction plot that preoccupies Hollywood and the media — the humanoid robot that spontaneously becomes conscious and decides to hate humans. Rather, it is the creation of machines that can draw on more information and look further into the future than humans can, exceeding our capacity for decision making in the real world.”

“To understand how and why this could lead to serious problems, we must first go back to the basic building blocks of most A.I. systems. The ‘standard model’ in A.I., borrowed from philosophical and economic notions of rational behavior, looks like this:”

“Machines are intelligent to the extent that their actions can be expected to achieve their objectives.”

What are “their objectives”? Are they just inputs from an actual human? Hopefully, smart computers can’t (yet) invent their own objectives the way we humans do. A minimal sketch of this “standard model” appears below.
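To make Russell’s “standard model” concrete, here is a minimal, purely illustrative Python sketch. Nothing in it comes from Russell’s article; the function names and the toy “engagement” objective are my own inventions, used only to show the shape of the idea: the machine never invents a goal of its own, it just optimizes whatever objective a human hands it.

```python
# Toy illustration only (not from Russell's article): a "standard model" agent
# picks whichever action is expected to best achieve a human-supplied objective.
from typing import Callable, Iterable


def choose_action(actions: Iterable[str],
                  predicted_score: Callable[[str], float],
                  objective: Callable[[float], float]) -> str:
    """Return the action whose predicted outcome best satisfies the given objective."""
    return max(actions, key=lambda action: objective(predicted_score(action)))


# Hypothetical example: a human specifies "maximize predicted engagement".
# The machine does not question or invent the goal; it simply optimizes it.
predicted_engagement = {"show_cat_video": 0.7, "show_outrage_post": 0.9}
best = choose_action(predicted_engagement.keys(),
                     lambda action: predicted_engagement[action],
                     lambda score: score)
print(best)  # -> show_outrage_post
```

Even in this toy form, the worry Russell describes is visible: the danger is not a machine that decides to hate humans, but a machine that pursues the objective it was given more effectively, and more literally, than we anticipated.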

Can ChatGPT invent its own objectives?

Well, the head of the company that built ChatGPT worries about something called “superhuman machine intelligence”. Bryan Jung wrote about this back in February 2023 in The Epoch Times: “ChatGPT Co-Creator Says The World May Not Be ‘That Far Away From Potentially Scary’ AI”:

“Altman [the CEO of ChatGPT creator OpenAI] had written about regulating AI in his blog back in March 2015:”

 “’The U.S. government, and all other governments, should regulate the development of SMI,’ referring to superhuman machine intelligence.”

“In an ideal world, regulation would slow down the bad guys and speed up the good guys. It seems like what happens with the first SMI to be developed will be very important.”

“Industrialist Elon Musk, a co-founder and former board member of OpenAI, has also advocated for proactive regulation of AI technology.”

“The current owner of Twitter once claimed that the technology has the potential to be more dangerous than nuclear weapons and that Google’s Deepmind AI project could one day effectively take over the world.”

ChatGPT and user busy planning world domination? Not likely.

What is “superhuman machine intelligence”?

From Wikipedia:

“A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. ‘Superintelligence’ may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.”

“University of Oxford philosopher Nick Bostrom defines superintelligence as ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’.”

So, it seems that superintelligence is demonstrated by an “intellect” (presumably a machine version in this context) that can exceed human cognitive performance. By that measure, my cell phone is superintelligent in at least a few things. It can do stuff that I can’t even think about doing.

On the other hand, I can do all kinds of things that ChatGPT or equivalent can’t even “think” about doing. But maybe a super-ChatGPT will be able to do such things whenever super-it gets out of development. If ever …

We do already appear to have “superhuman intelligence” that is “super” in an extremely limited range of tasks. But applied to a world takeover? Definitely not (yet anyway) that “super”.

In a recent post dealing with the limitations of AI and its ChatGPT progeny, I suggested that human intelligence was far more than just mechanically handling tasks no matter how complex. Human intelligence requires human capabilities such as self-awareness, emotional intelligence, and motivation. My conclusion:

“This it seems is where things start to get quite messy. These attributes or capabilities appear to apply credibly to humans, but surely not to what are, as yet, just machines. Even very ‘smart’ machines.”

“Probably have to start worrying when machines begin to exhibit self-awareness, sentience, emotions, and motivation. Thankfully, we seem to be quite a distance away from this point. At the moment anyway, despite Google’s LaMDA.”

“Anything invented by humans is a machine of some sort, even if it is amazingly capable. Perhaps when machines begin inventing and procreating machines, without human intervention, we will have to think of a better term.”

Real human intelligence might therefore become “super” by some definition if these human capabilities could be combined with AI’s super machine intelligence. But that could never happen, right?

Organoids might just be the answer

You have heard about organoids, yes? Well, me neither until very recently. This turns out to be rather distressing, to me at least. Jonathan Chadwick, writing in the Daily Mail (UK), describes where this might be headed: “Are we on the brink of creating a machine with a human BRAIN?”

“For decades, the field of artificial intelligence (AI) has aimed to create computers that have the capabilities of a human brain. Now, a new study proposes a ‘new frontier’ for computing called ‘organoid intelligence’ (OI) that could surpass the learning capabilities of any machine. “

“OI uses organoids – tiny lab-grown tissue resembling fully grown organs – as a form of ‘biological hardware’ and potentially a smarter alternative to the silicon chips in AI. Researchers from Johns Hopkins University in Baltimore think a ‘biocomputer’ powered by an organoid made up of millions of human brain cells could be developed within our lifetime. “

“While previous studies have questioned whether a biocomputer would cross an ‘ethical line’, the team says organoids would be used in a safe and ‘ethically responsible manner’.”

Will this “biocomputer” type of AI actually work? Probably, depending on the definition of “work”. In an ethical and responsible manner? Not a chance.

Researchers are also working hard on creating “conscious artificial life”

Even though Blake Lemoine thinks that Google’s LaMDA and its kin are in some sense already “sentient”, which seems to be roughly the same as “conscious”, there are efforts underway to make AI more fully conscious in human terms.

Kevin Collier, writing for NBC News in February 2023, describes how these efforts are going: “What is consciousness? ChatGPT and advanced AI might redefine our answer”:

“Technologists broadly agree that AI chatbots are not self-aware just yet, but there is some thought that we may have to re-evaluate how we talk about sentience.”

“ChatGPT and other new chatbots are so good at mimicking human interaction that they’ve prompted a question among some: Is there any chance they’re conscious? The answer, at least for now, is no. Just about everyone who works in the field of artificial technology is sure that ChatGPT is not alive in the way that’s generally understood by the average person. But that’s not where the question ends. Just what it means to have consciousness in the age of artificial intelligence is up for debate.”

“‘These deep neural networks, these matrices of millions of numbers, how do you map that onto these views we have about what consciousness is? That’s kind of terra incognita,’ said Nick Bostrom, the founding director of Oxford University’s Future of Humanity Institute, using the Latin term for ‘unknown territory’.”

“The creation of artificial life has been the subject of science fiction for decades, while philosophers have spent decades considering the nature of consciousness. A few people have even argued that some AI programs as they exist now should be considered sentient (one Google engineer was fired for making such a claim).”

Well, simply redefining what is meant by “sentience” and “consciousness” really does solve the semantics problem, even as it obscures the underlying and essential human reference framework. Cell phones are probably both conscious and sentient using such adjusted definitions.

AI superintelligence at work. Do not disturb.

The actual AI superintelligence problem is people

You will of course not be at all surprised by this assertion. So many of our major problems are caused by people in some manner. People cause “overpopulation”. People cause nasty pollution of all kinds. People cause wars. People cause diseases by being carriers and having unhealthy lifestyles.  People even cause AI and its applications, if you can believe that.

The idea that a machine created by people can in some unspecified manner take control of the world is simply neither credible nor possible – at least in my view. The real story here is probably much worse: the main dangers from AI and its emerging “superintelligence” will be caused by the machines’ developers and their ruler-wannabe users. Machines just do what they are told to do or created to do. Machines can’t invent nasty or worse outcomes all by themselves.

Our anthropomorphic buddy Mother Nature invented bacteria, the Black Death plague, and so many other serious human problems. Mother Nature, via evolution from some pretty primitive critters, also invented humans. Possibly not such a great idea. Humans in turn invented nuclear weapons and wars, as well as some pretty useful and beneficial stuff. So, as it is written, by their fruits [stuff] shall you know them [i.e., as good or bad actors].

This tells me that our main threats from AI will come from some of its human creators and users. AI is just another, albeit enormously complex and powerful, tool. Humans are by nature toolmakers and tool users.

Nuclear weapons are an immediate, and probably top-priority, problem for us humans. Compared to nukes and their human users, AI and superintelligence are hardly of any concern. Nukes are of course just another tool for certain human users to misuse in the most awful ways.

People misusing increasingly powerful AI is what we should fear

It seems pretty clear, to me at least, that we are in no real danger of AI ruling the world or destroying the world. The danger here is that there are some allegedly-human folks out there who might try to use powerful AI technology for just such nefarious purposes.

The World Economic Forum has been working for decades to assemble an organizational structure designed for global “leadership”. I had a brief look at this structure in a recent post.

One of their most visible (until recently, anyway) spokesmen is Yuval Harari, a professor in the Department of History at the Hebrew University of Jerusalem. He is the author of the popular science bestseller Sapiens: A Brief History of Humankind. Yuval helpfully tells it like it is, or at least how the WEF is working overtime to make it. AI figures heavily in their plans.

Yuval Harari, history professor, with a world leadership agenda.

The mechanics here, involving surveillance for control purposes, were addressed in a post a while back: “Digital ID’s For Surveillance. Digital Money For Control.” The objective, as Yuval states, is that our “era of free will is over”. Or so he and the WEF say. They are starkly clear about the central role of AI in these schemes. AI is a vital tool in their kit.

The WEF’s main men: Bill Gates, Klaus Schwab, and George Soros, ruler-wannabes all.

So, are we all doomed, or is there some good news here?

The apparent leading role played by people such as these in pursuing world domination means that AI is very likely to remain just a tool. These folks don’t want competition, especially from anything potentially superintelligent. Which they are not, despite their aspirations and claims.

The good news then is that world domination is being pursued by people, not AI, as has been the case since people were invented. Such people inevitably fail. Always. Our current set of ruler-wannabes will also fail, perhaps sooner than later, since everything today is moving at warp speed.

Unless the current world situation changes dramatically and positively (for us, not the ruler-wannabes) in the near future, it seems likely that some sort of transient global domination will be achieved – by the ruler-wannabes using AI as a tool, not by AI itself (whatever that may be in practice) becoming superhuman and supersmart. People, as always, will be creating our very own Fourth Turning Crisis-phase conclusion.

Bottom line:

The arrival of superhuman machine intelligence will not be the biggest event in human history. Why? Because it is already here in many applications. AI will continue to get more powerful and more extensively applied, but it will not – cannot – become a world ruler. AI’s developers and users will try to rule with AI as the latest tool for facilitating domination. The good news is that ruler-wannabes have always failed throughout history, and our current set will ultimately fail as well. But, as always, they will force us through an extended period of pain and suffering before the inevitable flushing takes place.

Related Reading

“Last March, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them.”

“The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs. Rather than predicting whether the components of a new drug could be dangerous, they made it design new toxic molecules using a generative model and a toxicity data set.”

“The paper was not promoting an illegal use of AI (chemical weapons were banned in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue states, nonstate armed groups, criminal organizations, or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.”

“It’s exciting to think about, but it’s complicated, and not just from a technological perspective. Research like this brings up plenty of thorny ethical considerations. Is it okay to use people’s cells to make computers? Could a computer made of human cells develop a consciousness? And if it does, is it okay to keep that consciousness locked into the role of a computer? We’re not anywhere close to having OI in our laptops or phones, so we don’t need to have all of those answers just yet. But they need to be a part of the conversation around OI, no matter what stage the technology is currently in.”

“Right now, an entire field of study is just getting started. And though there is more than computing potential to consider, the start seems exciting. For example, a recent study out showed that a flat brain cell culture can learn how to play Pong—even without the added power of being a full 3D organoid.”

“It may seem like science fiction, but it’s in the works in real life. ‘From here on,’ Hartung said in the news release, ‘it’s just a matter of building the community, the tools, and the technologies to realize OI’s full potential.’”

“At an AI forum, experts say the arrival of superhuman machine intelligence will be one of the biggest events in human history. An underlying theme emerged from the Stanford Institute for Human-Centered Artificial Intelligence’s fall conference: AI must be truly beneficial for humanity and not undermine people in a cold calculus of efficiency.”

“AI and National Security. In an AI and Geopolitics breakout session, led by Amy Zegart, a senior fellow at the Freeman Spogli Institute for International Studies and at the Hoover Institution, panelists analyzed the nature of artificial intelligence, its role in national security, intelligence, and safety systems, and how it may affect strategic stability — or instability.”

“On the latter, Colin H. Kahl, codirector of Stanford’s Center for International Security and Cooperation, raised concerns about whether AI would increase economic tensions among the world’s most powerful nations and alter the global military balance of power if some countries move ahead quickly on AI while others fall behind. Another concern he mentioned was the possibility of someone using AI-enabled cyber weapons against nuclear command and control centers.”

“Zegart added that machine learning can help lighten the cognitive load when intelligence specialists are analyzing and sifting through data, which today is being produced at an accelerated rate. The challenge is organizational, as bureaucracies are slow to adopt game-changing technology.”