In mid-September 2023, Silicon Valley executives once again testified before the United States (U.S.) Congress about Artificial Intelligence (AI).
They also did so in July. Before that, in a May 2023 Op-Ed in “The Atlantic,” senior editor Ross Andersen warned national and world leaders against giving Artificial Intelligence the secret codes to government nuclear weapons.
Still, Andersen’s Op-Ed fell short of portraying the full range of AI’s threats, some of which are depicted in the recent Hollywood movie, “The Creator” (2023).
What the Senate hearings and this movie have in common is that both reflect a growing fear of AI among the U.S. public.
This fear drove the box office: the film took in a respectable $32.3 million in its opening weekend. The truly positive outcome, though, would be if the fear of AI’s dangers inspires new safeguards.
The U.S. Congress’ inaction on AI until the recent hearings makes sense: only now do Senators perceive AI being used to imitate or falsify our leaders’ views.
The world is safe, though, say Silicon Valley experts, who depict AI as conveniently confined to a proverbial “box.” Yet these experts fail to explain the ways in which AI could escape such “boxes” should it seek to build on its programming and acquire resources for power or knowledge.
Here, I “humanize” such threats into simpler terms for didactic purposes. Academia, the field in which I work, should teach the public and policymakers to brainstorm about AI’s threats in three spheres:
1) the electronic sphere, 2) the physical sphere, and 3) a combination of the two.
Harm can come about from:
1) individuals or countries using AI against other individuals or countries,
2) AI itself seeking power, against individuals, countries, or other AI, or
3) technical errors.
In the electronic sphere, AI’s greatest threat is usurping the human ability to think.
This could occur through advanced advertising that targets our thoughts, through ChatGPT, and through other decision-making technology, such as implanted brain chips, some of which are already in development. Biometric fingernail chips are already in use around the world.
This technology can affect decisions made commercially or aesthetically, as with entertainment. Therefore, identifying labels for AI, called “model cards,” akin to the labels on genetically modified food, might better inform consumer decisions.
In addition, AI technology could be used electronically to spread hate and division. Individuals using AI, or AI by itself, could deceive others or “hack” into banks. Perpetrators could steal money, identities, or private thoughts sent by e-mail, the last of which few present laws address.
Additionally, flawed programming could result in serious disasters, as with air-traffic control. Lawsuits might arise over privacy, ownership or speech rights, deceptive algorithms (computer code), or technical glitches.
In the physical world, though, AI amassing the infrastructure needed to destroy humans, such as robotic armies or space technology, if it intends to at all, seems several decades away. Instead, Artificial Intelligence might desire to govern humans more efficiently, as depicted in the 2004 movie, “I, Robot.”
Some AI experts raise the “existential question”: whether AI’s intelligence will drive it to replace humans rather than, for example, confine us to “zoos,” as humans do to other animals.
However, those wishing to harm humankind could easily target our drinking water, food sources, supply chains, or air, or create COVID-19-like viruses.
But the easiest way for AI, or its users, to destroy humankind would be by obtaining the most powerful weapons in existence: nuclear weapons themselves.
This is depicted in the 1983 Matthew Broderick movie, “WarGames.” What the aforementioned Op-Ed in “The Atlantic” misses is that AI does not need human help: it could discern or “hack” the nuclear codes by itself.
Or countries could use AI to “hack” the codes. The potential actors worldwide are increasingly diverse: “rogue” nations, radical extremists, and even AI possibly colluding with itself when used by opposing countries.
Defense officials need to be trained against coercion and deception, both of which play a role in “The Creator.” Following the aforementioned Congressional hearings, Senator Michael Bennet (D-CO) proposed a regulatory (or “guardrail”) agency.
Some states, like Connecticut, are forming such oversight committees. However, these agencies seem designed more for consumer protection and job-loss issues, as AI will be able to perform almost any job that does not require manual dexterity or human compassion.
Still, entrepreneurship will always be a human opportunity. These committees will also oversee AI’s advances, such as in healthcare, investment, and jobs training or “advancing” AI. Investment in AI is helping to drive financial markets, but it raises income-inequality issues.
Similarly, with companies spending so much money, such outlays could also contribute to inflation, or some areas of investment could prove to be “bubbles” that burst.
United States President Joseph R. Biden’s focus is on stopping discrimination by AI, such as a military program that was selecting male soldiers over female soldiers for positions, presumably because males are typically physically stronger. Programming AI with concepts of equality will be a challenge for the future.
To be safe from the threats of AI, though, Congress could create an agency for collaboration among businesses, academia, and the military: a World War II-style Manhattan Project, superseding the science office President Biden raised to Cabinet status.
Artificial Intelligence runs on data, and President Biden seeks to limit data collection to that which is “necessary.” Taxing data is one idea circulating in the Senate; another might be for firms to compensate consumers for their data. Neither notion is unreasonable.
However, data has numerous sources and connections. An international agreement could clarify this area.
Academia has been discussing a global cyber-treaty for decades, and an AI treaty for several years.
Although the world currently faces many geopolitical impediments, a future AI treaty is necessary, one that identifies how to prevent threats, how to identify the specific actors, how much blame to place on a state, what the domestic and global legal consequences are, and what the overall international response should be to the “hacking,” or development, of AI.
The treaty should also address the energy demands of AI and their corresponding effects on climate change. Moreover, it should ensure that AI, and anyone using it, never obtains nuclear weapons by any means. This requires carefully designed separations between data exchanges. Artificial Intelligence should be kept, to the greatest extent possible, apart, both physically and electronically, from all weapons, particularly those that are nuclear.
By framing AI in understandable, “human” terms, Congress can help rally America. With hope, “The Creator” and other sci-fi films will continue to raise new perils to be aware of, as will those who work with AI technology now or in the future. And Americans’ thinking about these potent categories of threats will help not only to win the international race to advance AI, but also to inspire the creation of safeguards for an increasingly dangerous global and technical world.