“The illustrious and noble ought to place before them certain rules and regulations, not less for their hours of leisure and relaxation than for those of business.”— Marcus Tullius Cicero
“There are reasons to have rules and regulations. That I understand. Authority is a different thing. Authority is to maintain its own position by increasing its power and domination over those people it is supposedly protecting.”— James Cromwell
“Control leads to compliance; autonomy leads to engagement.”— Daniel H. Pink
“We’ve built a way of life that depends on people doing what they are told because they don’t know how to tell themselves what to do.”— John Taylor Gatto
“‘Compliance’ is just a subset of ‘governance’ and not the other way around.”— Pearl Zhu
“All I want is compliance with my wishes, after reasonable discussion.”— Winston Churchill
“A coerced ‘choice’ does not reflect virtue, only compliance.”— Wendy McElroy
“A law is not a law without coercion behind it.”— James A. Garfield
“Coercion cannot but result in chaos in the end.”— Mahatma Gandhi
“Technologies can be liberating, but it can also be a tool of coercion and control.”— Noam Chomsky
Will AI rule the world? Yes and no – it depends on how you define “rule”. Defined as “control, direct, influence”, the answer seems clearly “yes”, but the reason why turns out to be far from clear or generally understood. We can be, and in fact are, ruled in some very unsuspected and invisible ways. Welcome to the future – it’s here.
There is much discussion today about whether and how artificial intelligence (AI) should be managed and generally restricted. You know, like policies, regulations, and similar measures that almost never work, or that work much differently than expected. A very recent example of just what is coming:
From The White House, no less: “Blueprint for an AI Bill of Rights. MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE”:
“Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”
“These outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.”
“Being ruled” doesn’t necessarily mean tyranny
Stalin, Hitler, Mao, and many other nasties ruled their people in a tyrannical manner. You did what you were told, or you would be punished harshly or worse. Could AI, possibly in the form of a cyborg critter, rule a country in such a manner? Why would a cyborg want to do such a thing, unless it was programmed (or trained) to do so? Programmed entities of whatever flavor are created by humans, who may thereby supply all of the goodness or nastiness they wish.
AI entities can only do what their creators and programmers build into them. They are in essence just machines, no matter how powerful and complex. Whatever they are able to do was built into them by humans – the good and the bad.
Cyborg tyranny then seems simply to be a reflection of what the creators and programmers wanted. This means that AI cannot rule unless it is designed to do some ruler stuff, with the real rulers hiding behind it and pulling the digital strings. World domination in any form is done by people, not independently by their AI machines. AI machines just do what they are told.
This seems to mean that we can never be ruled by AI except indirectly through human rulers. If the human rulers are good and do beneficial things for us non-rulers, then their AI machines will do good also. Same for bad human rulers who will direct their AI machines to do bad stuff.
Sounds like the world is moving forward as always, regardless of the machinery used by our current batch of rulers. Today, bad rulers will do bad things using AI instead of their now increasingly obsolete machinery and techniques.
Their AI machines will not rule, but instead just do whatever their builders made them able to do. These machines are tools. So we don’t have to worry about being ruled by AI creations, yes?
The real answer here seems to go much deeper. What does “ruled” actually mean in practice – today especially?
“Ruled” means controlled, directed, influenced
Are we being ruled according to this dictionary definition? Of course. And we-the-people have always been ruled in these ways. Even rulers are so ruled.
But are we ruled in this sense today by artificial intelligence in its many flavors? We sure are, as described by an enormous number of articles and papers. Again, it is not AI that is ruling us, but the folks who designed, built, and operate the huge number of AI-based machines and processes in use today.
Ruled has another meaning: coerced. Coercion means persuasion by threats or force to do something that people are otherwise unwilling to do. Wikipedia offers this definition: “Coercion involves compelling a party to act in an involuntary manner by the use of threats, including threats to use force against that party.” It appears to be a stronger version of rule than control, direction, or influence.
The “compelling” term here seems to capture the essence of ruling. It means in general that we have no choice in practice but to comply or obey. Or else.
So, if we are looking for evidence of “being ruled” by AI, we probably have to focus on coercion that forces by various means our compliance or obedience. Especially coercion that is hidden, embedded, disguised, or otherwise made hard to see and understand. Actions we take because there is no apparent choice.
What is AI up to these days in our lives?
The short answer is: almost everything. The range of active AI applications is astonishing, even to someone like me who thinks he is pretty much keeping up on such great technology. Not even close.
It is vital to keep in mind here that AI applications are machines, tools, used by people to carry out whatever it is that people do. And have always done. AI may be a more powerful toolset, but only does what it was designed to do and for human purposes.
Wikipedia has an extensive but likely far from comprehensive article on “Applications of artificial intelligence”. If you are interested in seeing the full list, the preceding link will take you there. A couple of examples might be useful in any case:
[Image captions from the Wikipedia article: an X-ray of a hand with automatic calculation of bone age by computer software; a patient-side surgical arm of the Da Vinci Surgical System]
“AI in healthcare is often used for classification, to evaluate a CT scan or electrocardiogram or to identify high-risk patients for population health. AI is helping with the high-cost problem of dosing. One study suggested that AI could save $16 billion. In 2016, a study reported that an AI-derived formula derived the proper dose of immunosuppressant drugs to give to transplant patients.”
“Microsoft’s AI project Hanover helps doctors choose cancer treatments from among the more than 800 medicines and vaccines. Its goal is to memorize all the relevant papers to predict which (combinations of) drugs will be most effective for each patient. Myeloid leukemia is one target. Another study reported on an AI that was as good as doctors in identifying skin cancers. Another project monitors multiple high-risk patients by asking each patient questions based on data acquired from doctor/patient interactions. In one study done with transfer learning, an AI diagnosed eye conditions similar to an ophthalmologist and recommended treatment referrals.”
“Another study demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel in a manner judged better than a surgeon’s.”
“Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in concept processing technology in EMR software.”
“Other healthcare tasks thought suitable for an AI that are in development include:
> Heart sound analysis
> Companion robots for elder care
> Medical record analysis
> Treatment plan design
> Medication management
> Assisting blind people
> Drug creation (e.g. by identifying candidate drugs and by using existing drug screening data such as in life extension research)
> Clinical training
> Outcome prediction for surgical procedures
> HIV prognosis
> Identifying genomic pathogen signatures of novel pathogens or identifying pathogens via physics-based fingerprints (including pandemic pathogens)
> Helping link genes to their functions, otherwise analyzing genes and identification of novel biological targets
> Help development of biomarkers
> Help tailor therapies to individuals in personalized medicine/precision medicine”
Internet and e-commerce
“Recommendation systems. See also: Netflix, Amazon (company), and YouTube. A recommendation system predicts the ‘rating’ or ‘preference’ a user would give to an item. Recommendation systems are used in a variety of areas, such as generating playlists for video and music services, product recommendations for online stores, or content recommendations for social media platforms and open web content recommendation.”
“Web feeds and posts. Machine learning is also used in Web feeds such as for determining which posts show up in social media feeds. Various types of social media analysis also make use of machine learning, and there is research into its use for (semi-)automated tagging/enhancement/correction of online misinformation and related filter bubbles.”
“Targeted advertising and increasing internet engagement. See also: AdSense and Facebook. AI is used to target web advertisements to those most likely to click or engage on them. It is also used to increase time spent on a website by selecting attractive content for the viewer. It can predict or generalize the behavior of customers from their digital footprints.”
“Online gambling companies use AI to improve customer targeting. Personality computing AI models add psychological targeting to more traditional social demographics or behavioral targeting. AI has been used to customize shopping options and personalize offers.”
“Virtual assistants. Intelligent personal assistants use AI to understand many natural language requests in other ways than rudimentary commands. Common examples are Apple’s Siri, Amazon’s Alexa, and a more recent AI, ChatGPT by OpenAI.”
“Language translation. See also: Microsoft Translator, Google Translate, and DeepL Translator. AI has been used to automatically translate spoken language and textual content. Additionally, research and development is in progress to decode and conduct animal communication.
While no system provides the ideal of fully automatic high-quality machine translation of unrestricted text, many fully automated systems produce reasonable output. The quality of machine translation is substantially improved if the domain is restricted and controlled. This enables using machine translation as a tool to speed up and simplify translations, as well as producing flawed but useful low-cost or ad-hoc translations.”
“Facial recognition and image labeling. See also: Face ID and DeepFace. AI has been used in facial recognition systems, with a 99% accuracy rate. Some examples are Apple’s FaceID and Android’s Face Unlock (both used to secure mobile devices).
Image labeling has been used by Google to detect products in photos and to allow people to search based on a photo. Image labeling has also been demonstrated to generate speech to describe images to blind people.”
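The recommendation systems quoted above rest on a simple mechanism: predict the rating a user would give an item from the ratings of similar users. A minimal sketch of that idea follows; the users, items, ratings, and similarity rule are all invented for illustration, not taken from any real service.

```python
from math import sqrt

# Made-up ratings: user -> {item: rating on a 1-5 scale}
ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 4, "film_b": 2, "film_c": 5, "film_d": 4},
    "carol": {"film_a": 1, "film_b": 5, "film_d": 2},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in shared))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item.

    Returns None when no other user has rated the item.
    """
    num = den = 0.0
    for other in ratings:
        if other != user and item in ratings[other]:
            s = similarity(user, other)
            num += s * ratings[other][item]
            den += s
    return num / den if den else None

# Alice never rated film_d; the system guesses from Bob and Carol,
# weighted by how similarly each of them rates films overall.
print(predict("alice", "film_d"))  # a value between Carol's 2 and Bob's 4
```

Production systems use far larger models, but the point stands: the output is a prediction about you, computed from data about everyone else.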
Such AI applications seem to be valuable and non-threatening
I don’t know about you, but these examples along with most of the rest of Wikipedia’s list appear to me to be potentially valuable and sensible applications of AI technology. At least from this high-level viewpoint, there is nothing evident that suggests coercion mischief. These applications surely provide direction and influence, and possibly some degree of control via the limited set of use choices typically offered.
So, maybe we aren’t ruled by AI after all?
As you probably suspected, this is far from the whole story on AI’s potential mischief. The main problem with AI, as noted above, is that it is a very powerful tool used by, shall we say, occasionally imperfect humans. AI will likely cause just as much trouble as may be intended and executed by the people who are in control of AI applications.
These imperfect folks, even though they may be relatively few, can readily mismanage the information we use for decision making. They can offer just the information that they want us to see. They can limit how we use this information via filtering – the ability to discard unwanted information and arrange what remains. All done via AI-enabled applications.
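Part of the problem is how technically trivial this kind of filtering is to build. A hypothetical sketch, with an invented blocklist and an invented "engagement" score, of a feed pipeline that silently discards some items and reorders the rest:

```python
# Invented blocklist for illustration; a real system's rules are opaque.
BLOCKED_TERMS = {"dissent"}

def curate(posts):
    """Drop posts containing blocked terms, then rank the remainder
    by an 'engagement' score the reader never sees."""
    visible = [p for p in posts
               if not any(t in p["text"].lower() for t in BLOCKED_TERMS)]
    return sorted(visible, key=lambda p: p["score"], reverse=True)

feed = [
    {"text": "Cat video compilation", "score": 0.9},
    {"text": "Why dissent matters",   "score": 0.7},
    {"text": "Local weather report",  "score": 0.4},
]

for post in curate(feed):
    print(post["text"])
# The second post simply never appears, and nothing tells the reader why.
```

A few lines of code, applied at the scale of a major platform, decide what millions of people do and do not see.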
We are in reality controlled and even coerced by the people in power who are hiding behind the AI curtain. Today, we are extensively surveilled and analyzed. Many of us already have a digital ID even if they don’t call it that. And of course they do use this mass of data for a great variety of purposes, some of which are assuredly bad, or even worse. By this means, we are largely ruled right now – not by AI, but by the always-present rulers and ruler-wannabes.
With AI embedded in virtually everything today, its creators and users effectively rule us. This can be done in ways never before imagined or possible. Hiding behind an AI tool does not change this fact.
We’re rapidly heading towards locking in a technological, economic, and legal regime of information control, censorship, surveillance, and vilification. AI is just the primary tool.
That so many are willing to trust AI is reassuring, yes?
The good news in all of this is that far from everybody trusts AI and whatever it may be doing. A recent post by Tyler Durden in ZeroHedge: “In AI We Trust”
“As data from a survey conducted by KPMG Australia and the University of Queensland shows, residents of India, China, South Africa and Brazil, the biggest so-called emerging markets, are far less critical of the continued implementation of AI systems.”
“75 percent of Indians surveyed between September and October 2022 would place their trust in AI, followed by 67 percent of Chinese and 57 percent of South African respondents.”
“According to the accompanying study, respondents claimed to trust AI used in healthcare and security contexts the most compared to other possible use cases.”
And why do so many people not trust AI systems? Our ever-helpful government may well be fueling even greater distrust.
Consider Senate Bill 686 – The Restrict Act
A very recent addition to the realm of AI applications wielded by our ever-helpful government is the aptly-named Restrict Act.
Baxter Dmitry via NewsPunch offers a pretty strong view of Senate Bill 686: “Senate Bill 686 Gives WEF Full Control Over America, Gives Citizens 20 Years in Prison For Dissenting”:
“Klaus Schwab warned us last month that whoever controls AI will control the world. And the Davos elites have wasted no time in setting the stage for their final takeover of society.”
“Senate Bill 686, also known as the TikTok Ban Bill, gives Americans 20 YEARS in prison for spreading disinformation. And what is disinformation, you ask? Disinformation is anything that the globalist elites say it is.”
“This is the most dangerous bill since the Patriot Act stripped Americans of long-held rights and freedoms in the immediate aftermath of 9/11.”
“The so-called TikTok Ban Bill is a Trojan Horse pretending to be a bill targeting ByteDance, the Chinese owners of TikTok, while it actually targets ordinary Americans who disagree with the Biden administration and the globalist elite agenda coming out of Davos.”
“How do we know they are the ones behind this bill? Because members of Congress have admitted for years that they have absolutely no clue about what is in the bills they have been passing.”
“Ever since then-House Speaker Nancy Pelosi famously admitted, ‘We have to pass the [health care] bill, so you can find out what’s in it,’ it has been clear that there is a higher power at play than Congress.”
“Nothing exemplifies government overreach and arrogance more than those 16 words, uttered by Pelosi at the Legislative Conference for the National Association of Counties in March 2010.”
“But Senate Bill 686, also known as the Restrict Act, is far worse than anything we have seen for decades. This bill has bipartisan support to turn the internet over to an AI spy system to control everything Americans say and do. [emphasis added]”
And just what is the Restrict Act, you may wisely inquire
In case you don’t enjoy reading lengthy legislative documents, like most everyone including our legislators, here is a summary and a link to the document itself:
“The summary below was written by the Congressional Research Service, which is a nonpartisan division of the Library of Congress, and was published on Mar 27, 2023.”
“Restricting the Emergence of Security Threats that Risk Information and Communications Technology Act or the RESTRICT Act.”
“This bill requires federal actions to identify and mitigate foreign threats to information and communications technology (ICT) products and services. It also establishes civil and criminal penalties for violations under the bill.”
“Specifically, the Department of Commerce must identify, deter, disrupt, prevent, prohibit, investigate, and mitigate transactions involving ICT products and services (1) in which any foreign adversary has any interest, and (2) that pose an undue or unacceptable risk to U.S. national security or the safety of U.S. persons.”
This appears to be AI-enabled coercion in one of its worst forms. We will be ruled – not by AI, but by its government users. Much if not all of the technological foundation for this bill is already in place.
Just in time, a supporting distraction appears
While we are being restricted mightily behind the Restrict Act curtain, calls for a “suspension” of “all major AI projects” are showing up. AI, they claim without evidence, is an “existential threat to human life”. We’re all going to die, says a “top AI researcher”. And of course Elon Musk.
Ethan Huff writing in Natural News … “Elon Musk signs Future of Life Institute petition calling for all major AI developments to be PAUSED”:
“The Future of Life Institute is circulating a petition that calls for an immediate end to all major artificial intelligence (AI) projects, citing their existential threat to human life.”
“Signed by Elon Musk and numerous other bigwigs in the tech industry, the petition cites ‘extensive research’ showing that ‘AI systems with human-competitive intelligence can pose profound risks to society and humanity.’”
“’Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?’ the petition further reads.”
“’Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.’”
“Only once, or if, a consensus agrees that the effects of AI system implementation ‘will be positive and their risks will be manageable’ should the world allow for the wide-scale adoption of AI as a welcome addition to the world, the petition concludes.”
“For at least the next six months, the petition states, there should be a ‘public and verifiable’ pause on all AI systems that are more powerful than the ChatGPT-4 AI robot that is all the rage in recent months. An independent oversight board with ‘rigid auditing’ capabilities must be able to ensure that these advanced AI robots are ‘safe beyond a reasonable doubt’ before such projects are ever allowed to proceed.”
Pause “all AI development”, or perhaps just that which is potentially useful in opposition to favored AI developments and in violation of restrictions now being invented?
We’re all gonna die, as usual
Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI), has written an opinion piece for TIME magazine: “‘Everyone on Earth will die,’ top AI researcher warns”:
“Humanity is unprepared to survive an encounter with a much smarter artificial intelligence, Eliezer Yudkowsky says”
“Shutting down the development of advanced artificial intelligence systems around the globe and harshly punishing those violating the moratorium is the only way to save humanity from extinction, a high-profile AI researcher has warned.”
“Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI), has written an opinion piece for TIME magazine on Wednesday, explaining why he didn’t sign a petition calling upon ‘all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,’ a multimodal large language model, released by OpenAI earlier this month.”
“Yudkowsky argued that the letter, signed by the likes of Elon Musk and Apple’s Steve Wozniak, was ‘asking for too little to solve’ the problem posed by rapid and uncontrolled development of AI.”
“’The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,’ Yudkowsky wrote.”
“Surviving an encounter with a computer system that ‘does not care for us nor for sentient life in general’ would require ‘precision and preparation and new scientific insights’ that humanity lacks at the moment and is unlikely to obtain in the foreseeable future, he argued.”
An AI computer system that doesn’t care for us? Inconceivable, yes?
Will AI rule the world? Definitely yes – if rule is defined as “control, direct, influence”, and especially coerce. Today, AI is part of nearly everything we do and use. The number of AI-based applications and processes is huge and growing explosively. But AI remains nothing more than a very powerful and versatile tool that is designed and managed by humans, for better or for worse. AI does only what it is told and what it is designed to do. Nothing more, despite popular concerns to the contrary.
AI will do good things if designed and used by good people, and bad things or worse if designed and used by bad or worse people. This is how the world has worked since the first people invented their first tools. At the present moment, however, some rather questionable humans seem to be running the show. Mostly rulers, ruler-wannabes, and their supporters.
- Brandon Smith writing in Alt-Media.us via ZeroHedge: “Governance By Artificial Intelligence: The Ultimate Unaccountable Tyranny”:
“It’s no secret that globalist institutions are obsessed with Artificial Intelligence as some kind of technological prophecy. They treat it as if it is almost supernatural in its potential and often argue that every meaningful industrial and social innovation in the near future will owe its existence to AI. The World Economic Forum cites AI as the singular key to the rise of what they call the ‘Fourth Industrial Revolution.’”
“In their view, there can be no human progress without the influence of AI algorithms, making human input almost obsolete. This delusion is often promoted by globalist propagandists. For example, take a look at the summarized vision of WEF member Yuval Harari, who actually believes that AI has creative ability that will replace human imagination and innovation. Not only that, but Harari has consistently argued in the past that AI will run the world much better than human beings ever could.”
“Harari’s examples of AI creativity might sound like extreme naivety to many of us, but he knows exactly what he is doing in misrepresenting the capabilities of algorithms. Games like chess and Go are games of patterns restricted by rules, there are only so many permutations of these patterns in any given scenario and AI is simply faster at spotting them than most humans because that is what it is designed to do by software creators. This is no different than solving a mathematical equation; just because a calculator is faster than you does not mean it is ‘creative.’”
“There is a big difference between cognitive automation and cognitive autonomy. AI is purely automation; it will play the games it is programmed to play and will learn to play them well, but it will never have an epiphany one day and create a new and unique game from scratch unless it is coded to do so. AI will never have fun playing this new game it made, or feel the joy of sharing that game with others, so why would it bother? It will never seek to contribute to the world any more than it is pre-programmed to do.”
“How is political bias possible for a piece of software unless it was programmed to display that bias? There is no objectivity to be found in AI, nor any creativity, it will simply regurgitate the personal opinions or biases of the people that created it and that engineered how it processes data.”
“The elites will present AI as the great adjudicator, the pure and logical intercessor of the correct path; not just for nations and for populations at large but for each individual life. With the algorithm falsely accepted as infallible and purely unbiased, the elites can then rule the world through their faceless creation without any oversight – For they can then claim that it’s not them making decisions, it’s the AI. How does one question or even punish an AI for being wrong, or causing disaster? And, if the AI happens to make all its decisions in favor of the globalist agenda, well, that will be treated as merely coincidental.”
- Ethan Huff has the latest story in Natural News: “Anti-TikTok legislation a thinly-disguised Patriot Act for the internet”:
“Sens. Mark Warner (D-Va.) and John Thune (R-S.D.) have introduced bipartisan legislation called the Restrict Act that is claimed to be about blocking or disrupting financial transactions and holdings linked to foreign adversaries that pose a risk to national security. In truth, the bill appears to be little more than an extension of the USA Patriot Act from the George W. Bush years.”
“If passed, the Restrict Act would hand the government enormous new powers to punish free speech. This appears to be the intent of the bill, which was concocted by someone with an extensive history of standing in direct opposition to the First Amendment to the United States Constitution.”
“Exposed by Michael Krieger in 2018 and confirmed in the recent Twitter Files drops as someone who pushed for the ‘weaponization’ of Big Tech, Warner crafted the Restrict Act to ‘take swift action against technology companies suspected of cavorting with foreign governments and spies, to effectively vanish their products from shelves and app stores when the threat they pose gets too big to ignore,’ according to Wired.”
“Listed in the legislation as bad actors are China, Russia, Cuba, Iran, North Korea, and Venezuela. Not listed in the Restrict Act is any specific mention of TikTok, which appears more as the Trojan Horse or excuse to erase even more of Americans’ constitutional rights.”