“Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves.”

― Herbert A. Simon, The Sciences of the Artificial

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”

— Elon Musk, in a comment on Edge.org

“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.”

— Klaus Schwab

“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”

— Ginni Rometty, former IBM CEO

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

— Stephen Hawking

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

— Alan Turing

“I definitely fall into the camp of thinking of AI as augmenting human capability and capacity.”

— Satya Nadella

“I think a lot of people don’t understand how deep AI already is in so many things.”

— Marc Benioff

“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”

— Jean Baudrillard

Artificial intelligence (AI) is showing up everywhere today, yet not much of it seems truly intelligent despite the hype. What defines real AI? Certainly not just an ability to generate conversations and chats within the narrow confines of its data and rules. Is OpenAI’s ChatGPT intelligent by any practical definition, or just another, more powerful chatbot? How can we identify intelligence in practice?

In a post a while back, I took a look at AI becoming “self-aware” as an indication of intelligence. This was prompted by former Google engineer Blake Lemoine claiming that Google’s LaMDA technology had become in some respects “sentient”: “Google Engineer Claims AI Computer Has Become Sentient”.

This amazing accomplishment seems to have drifted into the great pool of officially-vanished claims. Interesting idea, though.

AI, however, has not vanished but instead has become central to increasing numbers of applications. Or so the developers say. My question at this point, and perhaps yours as well, is one of identifying the exact nature of the claimed “intelligence”, artificial or otherwise.

Are these applications in fact just large, highly-sophisticated, complex calculators?

And just what is “intelligence”?

Wikipedia, as always, offers a helpful starting point:

“Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. More generally, it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.”

“Intelligence is most often studied in humans but has also been observed in both non-human animals and in plants despite controversy as to whether some of these forms of life exhibit intelligence. Intelligence in computers or other machines is called artificial intelligence.”

To distinguish intelligence in animals and plants from whatever it is that we humans have, Wikipedia also has an article on “human intelligence”:

“Human intelligence is the intellectual capability of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness. High intelligence is associated with better outcomes in life.”

“Through intelligence, humans possess the cognitive abilities to learn, form concepts, understand, apply logic and reason, including the capacities to recognize patterns, plan, innovate, solve problems, make decisions, retain information, and use language to communicate. There are conflicting ideas about how intelligence is measured, ranging from the idea that intelligence is fixed upon birth, or that it is malleable and can change depending on an individual’s mindset and efforts. Several subcategories of intelligence, such as emotional intelligence or social intelligence, are heavily debated as to whether they are traditional forms of intelligence. They are generally thought to be distinct processes that occur, though there is speculation that they tie into traditional intelligence more than previously suspected.”

Self-awareness? Emotional intelligence? Motivation?

This, it seems, is where things start to get quite messy. These attributes or capabilities appear to apply credibly to humans, but surely not to what are, as yet, just machines. Even very “smart” machines.

We will probably have to start worrying when machines begin to exhibit self-awareness, sentience, emotions, and motivation. Thankfully, we seem to be quite a distance away from this point. At the moment, anyway, despite Google’s LaMDA.

Anything invented by humans is a machine of some sort, even if it is amazingly capable. Perhaps when machines begin inventing and procreating machines, without human intervention, we will have to think of a better term.

Artificial intelligence then is more accurately described as “machine intelligence”, with “intelligence” referring to a mechanical subset of human intelligence capabilities.

So, for example, OpenAI’s ChatGPT chatbot is just an intelligent machine by this definition. It is intelligent only in quite narrow terms.

There seems however to be a more subtle distinction possible here. Alan Turing (1912-1954), an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist, long ago suggested this definition of intelligence:

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

This definition gets a bit tricky: “deceive a human”? Which human? There are some folks out there who would be very hard to deceive, and then there are others, perhaps many, who could be deceived even by a friendly chatbot like ChatGPT.

Now we have a much more serious issue: what kind of machine intelligence is required to deceive the majority of people? For nefarious purposes, especially?

An X-ray of a future human or transhuman?

Transhumanism perhaps?

From Britannica:

“Transhumanism, a philosophical and scientific movement that advocates the use of current and emerging technologies—such as genetic engineering, cryonics, artificial intelligence (AI), and nanotechnology—to augment human capabilities and improve the human condition. Transhumanists envision a future in which the responsible application of such technologies enables humans to slow, reverse, or eliminate the aging process, to achieve corresponding increases in human life spans, and to enhance human cognitive and sensory capacities. The movement proposes that humans with augmented capabilities will evolve into an enhanced species that transcends humanity—the ‘posthuman.’”

“The term transhumanism was popularized by the English biologist and philosopher Julian Huxley in his 1957 essay of the same name.”

Fortunately, we have today the visionary Elon Musk to point out the way, as Nicole Becker writes in The Collector: “5 Ways in Which Transhumanism is Changing Our Lives”:

“Then we have Neuralink, Elon Musk’s neural interface technology company. This can be seen as the biggest leap toward a Transhumanist society thus far. Musk has assisted in creating a computer-like chip that would be inserted into your brain. Right off the bat, I know how this sounds: who in the world would want a chip in their brain? Is this the ‘mark of the beast’? The start of a sci-fi horror movie? We can only hope that it’s none of those things, so let’s break down what we know before we let fear outweigh scientific discovery.”

“With this chip, Musk wants to accomplish a few things:

  • To give paraplegics the ability to operate their technology using a hands-free, brain-powered control system
  • To help those with mental disorders rewire their neurological makeup and perhaps alter the way their mind works
  • To eventually outsmart Artificial Intelligence (AI)”

 “Musk has stated previously that AI is one of the biggest existential threats to mankind at this very moment. Having worked alongside AI for many years, he has seen firsthand how intelligent these machines are and how easy it would be for them to decide they have had enough human intervention.”

Transhumanism then seems to offer a neat solution to the problem of AI alone being unable to exhibit our special human capabilities: self-awareness, sentience, emotions, and motivation. Just pop some AI machinery into an actual person and you can have a true super-person. An artificial-intelligence-enhanced real live human.

Google Brain

Robert McMillan writing in Wired in 2014 described what one of the world’s major technology companies is up to, brain-wise: “Inside the Artificial Brain That’s Remaking the Google Empire”:

“AI as a Service. Google Brain—an internal codename, not anything official—started back in 2011, when Stanford’s Andrew Ng joined Google X, the company’s ‘moonshot’ laboratory group, to experiment with deep learning. About a year later, Google had reduced Android’s voice recognition error rate by an astounding 25 percent. Soon the company began snatching up every deep learning expert it could find. Last year, Google hired Geoff Hinton, one of the world’s foremost deep-learning experts. And then in January, the company shelled out $400 million for DeepMind, a secretive deep learning company.”

“With deep learning, computer scientists build software models that simulate—to a certain extent—the learning model of the human brain. These models can then be trained on a mountain of new data, tweaked and eventually applied to brand new types of jobs. An image recognition model built for Google Image Search, for example, might also help out the Google Maps team. A text analysis model might help Google’s search engine, but it might be useful for Google+ too.”

Transhumanism seems to have become passé, thankfully

George Dvorsky writing in Gizmodo in 2022 provides an explanation of transhumanism’s fading glory, plus everything and more than you ever wanted to know about it: “What Ever Happened to the Transhumanists?”:

“What was once a piercing roar has retreated to barely discernible background noise. Or at least that’s how it currently appears to me. For reasons that are both obvious and not obvious, explicit discussions of ‘transhumanism’ and ‘transhumanists’ have fallen by the wayside. The reason we don’t talk about transhumanism as much as we used to is that much of it has become a bit normal—at least as far as the technology goes, as Anders Sandberg, a senior research fellow from the Future of Humanity Institute at the University of Oxford, told me.”

“Nigel Cameron, an outspoken critic of transhumanism, said the futurist movement lost much of its appeal because the naive ‘framing of the enormous changes and advances under discussion’ got less interesting as the distinct challenges of privacy, automation, and genetic manipulation (e.g. CRISPR) began to emerge. In the early 2000s, Cameron led a project on the ethics of emerging technologies at the Illinois Institute of Technology and is now a Senior Fellow at the University of Ottawa’s Institute on Science, Society and Policy.”

“For the most part, however, transhuman-flavored technologies are understandably scary and relatively easy to cast in a negative light. Uncritical and starry-eyed transhumanists, of which there are many, weren’t of much help.”

“For Cameron, transhumanism looks as frightening as ever, and he honed in on a notion he refers to as the ‘hollowing out of the human,’ the idea that ‘all that matters in Homo sapiens can be uploaded as a paradigm for our desiderata.’ In the past, Cameron has argued that ‘if machine intelligence is the model for human excellence and gets to enhance and take over, then we face a new feudalism, as control of finance and the power that goes with it will be at the core of technological human enhancement, and democracy…will be dead in the water.’”

ChatGPT isn’t transhuman, but is it more than an “intelligent machine”?

To me, for whatever it may be worth, ChatGPT is intelligent only in a lower-level, mechanical sense. Highly capable in some quite narrow contexts, but still only a machine that can enhance some of whatever it is that humans do. This however does not stop the believers.

Athan Koutsiouroumbas via RealClear Wire and ZeroHedge explains how “generative artificial intelligence” is going to banish highly compensated professionals to obsolescence: “Can Chat GPT3 Make Pennsylvania A Red State?”:

“In the past three weeks, policymakers had their worlds rocked by generative artificial intelligence. The problem is that they don’t know it – yet.”

“First, a team of researchers demonstrated that Open AI’s Chat GPT3 can pass the stringent United States Medical Licensing Exam. Days later, Chat GPT 3 passed a bar exam. Finally, Chat GPT3 passed the prestigious Wharton Business School’s rigorous core examination.”

“The Wharton researcher writes, ‘OpenAI’s Chat GPT3 has shown a remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants.’”

“Lawyers, doctors, administrators, managers, and consultants are some of the most highly compensated professionals in the United States. Generative artificial intelligence is banishing them to obsolescence.”

“With only 375 employees, the unprofitable Chat GPT3 was acquired by behemoth Microsoft at a valuation reportedly northward of $30 billion.”

“But generative artificial intelligence is poised to inflict the same level of economic devastation on suburban elites as suffered by the working class through globalization. Some elites will undoubtedly find sure footing in the pending economy created by generative artificial intelligence. But many others will not.”

ChatGPT – a form of generative artificial intelligence, whatever that may be.

You do know just what generative artificial intelligence is, yes?

In case not (like me, until a few days ago), here is a definition from the folks at techopedia.com: “What Does Generative AI Mean?”:

“Generative AI is a broad label that’s used to describe any type of artificial intelligence (AI) that can be used to create new text, images, video, audio, code or synthetic data.”

“While the term generative AI is often associated with ChatGPT and deep fakes, the technology was initially used to automate the repetitive processes used in digital image correction and digital audio correction. Arguably, because machine learning and deep learning are inherently focused on generative processes, they can be considered types of generative AI, too.”

“Any time an AI technology is generating something on its own, it can be referred to as ‘generative AI.’ This umbrella term includes learning algorithms that make predictions as well as those that can use prompts to autonomously write articles and paint pictures.”

I would beg to quibble here if it might be worthwhile. Not.

What does an output from ChatGPT look like?

I should note in passing that …

GPT stands for “generative pre-trained transformer.” The “generative” part is obvious — the models are designed to spit out new words in response to inputs of words. And “pre-trained” means they’re trained using this fill-in-the-blank method on massive amounts of text.
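The “fill-in-the-blank” training idea can be illustrated with a toy next-word model. The sketch below is nothing like the actual transformer architecture — it is just a bigram frequency table, a minimal stand-in for the notion of predicting the next word from what came before, using a made-up ten-word corpus:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "massive amounts of text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Pre-training": count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Fill in the blank: return the most frequent continuation."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often in this corpus
```

Real language models replace the frequency table with billions of learned parameters and condition on far more than one preceding word, but the training signal — guess the next word, compare with the actual text — is the same in spirit.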

Well, the consulting giant McKinsey & Co., many of whose professionals are about to be made obsolete by GPT, had a nice short example of what GPT output looks like: “What is generative AI?”:

“Generative AI systems fall under the broad category of machine learning, and here’s how one such system—ChatGPT—describes what it can do:”

Ready to take your creativity to the next level? Look no further than generative AI! This nifty form of machine learning allows computers to generate all sorts of new and exciting content, from music and art to entire virtual worlds. And it’s not just for fun—generative AI has plenty of practical uses too, like creating new product designs and optimizing business processes. So why wait? Unleash the power of generative AI and see what amazing creations you can come up with!

“Did anything in that paragraph seem off to you? Maybe not. The grammar is perfect, the tone works, and the narrative flows.”

“Generative AI outputs are carefully calibrated combinations of the data used to train the algorithms. Because the amount of data used to train these algorithms is so incredibly massive—as noted, GPT-3 was trained on 45 terabytes of text data—the models can appear to be ‘creative’ when producing outputs. What’s more, the models usually have random elements, which means they can produce a variety of outputs from one input request—making them seem even more lifelike.”

“AI-generated art models like DALL-E (its name a mash-up of the surrealist artist Salvador Dalí and the lovable Pixar robot WALL-E) can create strange, beautiful images on demand.”

This article is an especially good one for introducing generative AI with a minimum of hype and tech-talk.
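The “random elements” McKinsey mentions usually take the form of temperature sampling: the model’s raw scores for candidate next words are turned into probabilities and then sampled, rather than always picking the top choice. A minimal sketch, with hypothetical scores for three candidate words:

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0):
    """Turn raw model scores into probabilities, then sample one index.

    Higher temperature flattens the distribution (more varied output);
    lower temperature sharpens it (more predictable output).
    """
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    # random.choices is the "random element": same input, varying output.
    return random.choices(range(len(scores)), weights=probs, k=1)[0]

# Hypothetical next-word scores for three candidate words.
scores = [2.0, 1.0, 0.5]
picks = [sample_with_temperature(scores) for _ in range(1000)]
```

Running the same prompt twice can therefore produce different text — which is exactly why one input request yields “a variety of outputs,” as the excerpt notes.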

ChatGPT is just another tool – its problem is people (users)

A hammer is just another tool also, but it can be very dangerous and damaging if used improperly or maliciously. The list of such tools is extremely long. This means that the real problem with ChatGPT and its kin does not lie with the tool itself, nor with its AI underpinnings. The automobile in the early 1900s was a new transportation “tool”, but was widely considered too dangerous to allow on horse-and-buggy streets. That view lasted only until folks discovered how useful it could be, despite its ongoing tendency to maim and kill large numbers of them.

Just as the auto became not just accepted but essential, AI-based tools like ChatGPT will almost certainly become accepted and widely used. What we can rely on for our safety here is the continued questioning and challenging of potential misuses. We fortunately have some great folks out there who are both very smart and very active in this vital respect.

Bottom line:

While artificial intelligence is showing up everywhere today, not much is truly intelligent in human terms. Missing are vital human intelligence capabilities such as self-awareness, emotional intelligence, and motivation. Nevertheless, the lower-level machine intelligence now available has some pretty amazing abilities and power. OpenAI’s ChatGPT is just one example of many out there, and under development. It seems quite capable of handling a great many routine writing chores, possibly obsoleting a sizable number of us writers, journalists, consultants, and similar wordsmithing folks.

Related Reading

“As a journalist and commentator, I have closely followed the development of Open AI, the artificial intelligence research lab founded by Elon Musk, Sam Altman, and other prominent figures in the tech industry. While I am excited about the potential of AI to revolutionize various industries and improve our lives in countless ways, I also have serious concerns about the implications of this powerful technology.”

“One of the main concerns is the potential for AI to be used for nefarious purposes. Powerful AI systems could be used to create deepfakes, conduct cyberattacks, or even develop autonomous weapons. These are not just hypothetical scenarios – they are already happening. We’ve seen instances of deepfakes being used to create fake news and propaganda, and the use of AI-powered cyberattacks has been on the rise in recent years.”

“Another concern is the impact of AI on the job market. As AI-powered systems become more sophisticated, they will be able to automate more and more tasks that were previously done by humans. This could lead to widespread job loss, particularly in industries such as manufacturing, transportation, and customer service. While some argue that new jobs will be created as a result of the AI revolution, it’s unclear whether these jobs will be sufficient to offset the losses.”

“If you aren’t worried yet, I’ll let you in on a little secret: The first three paragraphs of this column [i.e., those just above] were written by ChatGPT, the chatbot created by OpenAI. You can add ‘columnist’ to the list of jobs threatened by this new technology, and if you think there is anything human that isn’t threatened with irrelevance in the next five to 10 years, I suggest you talk to Mr. Neanderthal about how relevant he feels 40,000 years after the arrival of Cro-Magnon man.”

“My prompt was relatively simple: ‘Write a column in the style of Frank Miele of Real Clear Politics on the topic of OpenAI.’ There was no hesitation or demurral in response even though I thought it might say it didn’t have enough information about Frank Miele to process the request. But it apparently knows plenty about me – and probably about you, especially if you have a social media presence.”

“The tool, from a power player in artificial intelligence called OpenAI, lets you type natural-language prompts. ChatGPT offers conversational, if somewhat stilted, answers and responses. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. It derives its answers from huge volumes of information on the internet.”

“ChatGPT is a big deal. The tool seems pretty knowledgeable in areas where there’s good training data for it to learn from. It’s not omniscient or smart enough to replace all humans yet, but it can be creative, and its answers can sound downright authoritative. A few days after its launch, more than a million people were trying out ChatGPT.”

“ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that’s useful.”

“For example, you can ask it encyclopedia questions like, ‘Explain Newton’s laws of motion.’ You can tell it, ‘Write me a poem,’ and when it does, say, ‘Now make it more exciting.’ You ask it to write a computer program that’ll show you all the different ways you can arrange the letters of a word.”

“Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.”

“There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on. To learn something, there must be enough data with enough richness that the algorithms can produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.”

“The biggest breakthrough came in the jump from GPT2 to GPT3 in 2020. GPT2 had about 1.5 billion parameters, which would easily fit in the memory of a consumer graphics card. GPT3 was 100 times bigger, with 175 billion parameters in its largest manifestation. GPT3 was much better than GPT2. It can write entire essays that are internally consistent and almost indistinguishable from human writing.”

“But there was also a surprise. The OpenAI researchers discovered that in making the models bigger, they didn’t just get better at producing text. The models could learn entirely new behaviors simply by being shown new training data. In particular, the researchers discovered that GPT3 could be trained to follow instructions in plain English without having to explicitly design the model that way.”

“Instead of training specific, individual models to summarize a paragraph or rewrite text in a specific style, you can use GPT-3 to do so simply by typing a request. You can type ‘summarize the following paragraph’ into GPT3, and it will comply. You can tell it, ‘Rewrite this paragraph in the style of Ernest Hemingway,’ and it will take a long, wordy block of text and strip it down to its essence.”

“So instead of creating single-purpose language tools, GPT3 is a multi-purpose language tool that can be easily used in many ways by many people without requiring them to learn programming languages or other computer tools. And just as importantly, the ability to learn commands is emergent and not explicitly designed for in the code. The model was shaped by training, and this opens the door to many more applications.”
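The scale jump described in that excerpt (1.5 billion vs. 175 billion parameters) is easy to sanity-check with back-of-the-envelope arithmetic. Assuming 2 bytes per parameter, as with 16-bit floating point (the exact figure depends on the numeric precision used):

```python
BYTES_PER_PARAM = 2  # assuming 16-bit (half-precision) weights

def model_memory_gb(num_params):
    """Rough memory needed just to hold the weights, in gigabytes."""
    return num_params * BYTES_PER_PARAM / 1e9

gpt2_gb = model_memory_gb(1.5e9)  # ~3 GB: fits on a consumer graphics card
gpt3_gb = model_memory_gb(175e9)  # ~350 GB: far beyond any single consumer card
print(f"GPT-2: {gpt2_gb:.0f} GB, GPT-3: {gpt3_gb:.0f} GB")
```

About 3 GB versus about 350 GB of weights alone — which is why the smaller model “would easily fit in the memory of a consumer graphics card” while its successor could not.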

We look forward to a future in which writing about almost anything can be done so much better and faster by ChatGPT AI machines.

“KEY POINTS:

> Google is testing ChatGPT-like products that use its LaMDA technology, according to sources and internal documents acquired by CNBC.

> The company is also testing new search page designs that integrate the chat technology.

> More employees have been asked to help test the efforts internally in recent weeks.”