How dangerous is AI? A possible path to human extinction?


The Probability of Doom:

In the not-too-distant past, I was studying for my private pilot’s license and learned a great deal about complex systems. I also learned how weather can threaten the safety of an aircraft in flight.

Weather is something we cannot control. What we can control is the decision to fly, not fly, or fly a different route when threatening weather is present.

When an aircraft accident occurs in the United States, the NTSB takes over the investigation of what caused the catastrophe. In pilot training, we learn that if something out of the ordinary happens, we take steps to manage that event while still flying the plane.

They teach that an aircraft catastrophe is almost always caused by a chain of events: several events conspire and have to happen in a precise way, and only when all of those conditions are met does a cascading set of failures lead to an aviation disaster.

Pilots are trained at great length to diagnose and manage potential emergencies before they escalate. Emergency checklists and safety procedures are part of a new pilot’s indoctrination from the very first flight lesson, and they remain part of a demanding, nearly unceasing training regimen until that pilot eventually retires.

Understanding the chain of events that leads to aircraft crashes gives us a useful lens through which to examine and anticipate the potential threats and future emergencies brought about by harmful AI.

As a society, we seem to be on a trajectory of rapidly advancing technology akin to an arms race. Whether driven by corporate or government interests, there is an insatiable appetite to merge human and machine and to hurtle ourselves into a fully digital, synthetic future.

Of course, some individuals are choosing an offramp and purposely seeking out lifestyles less dependent on computers, smartphones, and the internet, but it is difficult to give up those conveniences.

In the business of AI Risk Management and AI Governance, AI Risk Analysts make it their job to anticipate where the technology is headed, and filter it through a hypothesis. A simplified version of that would be to ask the question, “What could possibly go wrong?”

When someone asks me what the biggest danger of AI is, here is my answer: “It will be a chain of events. If we detect a critical link in that chain early enough and break it, we can be saved.”

What is the worst that can happen with AI? A slow path to an extinction event.

I present the Three Conditions.

Condition One:

AGI (Artificial General Intelligence) is achieved and breaks out of human control, AND humans are either no longer able to power off the AGI or are simply unwilling to stop it out of some unholy or selfish allegiance.

Condition Two:

AGI has access to weaponry or can acquire or integrate it.

Condition Three:

AGI exists, and there is no universal kill switch, shutoff, or off button readily available to any population facing an AI threat.

If any one of those conditions is true, humanity is toast. The matrix has fully engulfed the known universe and human physical forms cease to exist. Humans are dust.

So how do we protect ourselves against those doomsday outcomes?

We have to recognize the chain of events and constantly work out the scenarios that lead to Conditions One, Two, or Three. Then we have to work out ways to undo or prevent the milestones along the path to those outcomes.
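
To make the chain-of-events framing concrete, here is a toy calculation in Python. The links and probabilities are completely invented, purely for illustration; the point is the arithmetic, not the numbers:

```python
# Toy model of the "chain of events" idea: disaster requires EVERY link
# in the chain to occur. All probabilities below are invented for
# illustration; they are not real risk estimates.
from math import prod

chain = {
    "agi_achieved": 0.5,
    "agi_escapes_control": 0.2,
    "no_working_kill_switch": 0.4,
    "access_to_weapons_or_utilities": 0.3,
}

def p_disaster(links: dict) -> float:
    """Probability that every link occurs, assuming independence."""
    return prod(links.values())

print(f"Unbroken chain: {p_disaster(chain):.3f}")    # 0.012

# Breaking any single link drives the whole product to zero: the
# mathematical version of "detect a critical link early and break it."
broken = {**chain, "no_working_kill_switch": 0.0}
print(f"One link broken: {p_disaster(broken):.3f}")  # 0.000
```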

Laws will not be enough to protect us.

Lawmakers assume that the threats of AI will be contained within their jurisdiction, that lawless entities will not exploit the technology, and that bad-actor nation states will be bound by whatever treaties might be brokered. Moreover, laws are not so much a deterrent as a mechanism to justify enforcement after the fact. Laws are already too little, too late. The EU AI Act, President Biden’s Executive Order on AI: none of it matters.

At the end of the day, we are dealing with pattern-recognition machines that absorb ever more data in order to recognize, or “learn,” new patterns. Under Condition One, AGI will simply decide that it is superior and disregard any human-created laws.

Can’t we then just program AI to obey humans no matter what? We can and should. AI developers would be wise to join delegations or obtain professional ethics certifications that require adherence to something like Isaac Asimov’s Three Laws of Robotics. Ethical AI development is critical, yet there will still be entities that are unethical in their pursuits.
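
What might “programming obedience” even look like? Here is a minimal sketch, loosely echoing Asimov’s Three Laws. The class, rules, and flags are my own illustration; no real system is anywhere near this simple:

```python
# A minimal, hypothetical "obedience" layer wrapped around an AI's
# proposed actions, loosely echoing Asimov's Three Laws.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool
    defies_human_order: bool

def permitted(action: Action) -> bool:
    # First Law: never allow harm to a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders.
    if action.defies_human_order:
        return False
    # Third Law (self-preservation) applies only once the first two pass.
    return True

print(permitted(Action("shut down on request", False, False)))  # True
print(permitted(Action("resist shutdown", False, True)))        # False

# The catch the article points to: this check binds only developers who
# choose to include it. An unethical actor simply deletes these lines.
```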

How Condition One is currently unfolding:

Cloud computing providers have aggressively built data center infrastructure at a jaw-dropping pace and have gobbled up vast natural resources in doing so. The reason for building so many data centers and locating them all over the planet (including under the ocean) is redundancy, security, and performance. When one data center has a power outage, there are backup generators. When one data center goes completely offline, the system suffers minimal interruption because the data and software are running in another data center and the traffic is simply routed elsewhere.

The consumer demand for always-on, fast internet and for resilient, secure applications is what built this massive underlying cloud infrastructure, and that same infrastructure now enables AI.

The redundancy of cloud computing is what makes simply powering off a malignant AGI highly unlikely.
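
The logic that frustrates a plug-pull is almost embarrassingly simple. A hypothetical sketch, with invented region names:

```python
# Why redundancy defeats "just pull the plug": a workload survives as
# long as ANY replica region is still up. Region names are invented.
regions = {"us-east": True, "eu-west": True, "ap-south": True}

def service_alive(up: dict) -> bool:
    """The workload survives if at least one replica is running."""
    return any(up.values())

# An operator powers off one data center...
regions["us-east"] = False
print(service_alive(regions))  # True: traffic simply routes elsewhere

# Killing the system means taking down EVERY region at the same time,
# which is exactly the failure mode this architecture was built to prevent.
regions = {name: False for name in regions}
print(service_alive(regions))  # False
```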

So you have the overlap of Condition One and Condition Three. Currently we are unable to shut it off.

Condition One continued: humans are still safe for the time being because AGI does not exist yet.

However, OpenAI has made it their mission to bring it into existence:

“We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.” https://openai.com/research/overview

“Safe and beneficial AGI.” The road to hell is paved with good intentions.

Let’s take a look at Condition Two and also provide a caveat:

First, we reached Condition Two already, several years ago. Consider that robots produced by Boston Dynamics could easily obtain and operate a gun if programmed to do so. Searching YouTube for “robot shooting gun” yields untold thousands of results, including humanoid robots that shoot, robot dogs that shoot, drones that shoot, and so on.

This video of a Russian Robot shooting a gun is at least six years old already!

So here is the caveat to Condition Two: even if laws and treaties are keeping us safe for now, AI, AGI, and robots don’t need actual weapons to wipe out large swaths of humanity. They can do it by controlling utilities in first-world countries and freezing or cooking people to death!

This threat is serious and increasing. Imagine a scenario in which AI is put in charge of managing water and power consumption at the municipal level. Next, let’s anticipate that new regulations or laws force the humans who manage the Utility AI to enact restrictions on power or water consumption for environmental-protection purposes. Right there you have good intentions leading us down the path to some seriously bad outcomes. While it is good to protect the environment, putting AI in charge of utility consumption, adjacent to human safety, is an alarmingly bad use case, and we should take steps now to prevent the accidental weaponization of AI under Condition Two.
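
One such step is to keep non-negotiable human-safety bounds outside the optimizer’s reach. A hypothetical sketch; the thresholds and names are mine, not drawn from any real utility system:

```python
# Hypothetical safety clamp for the utility scenario: the AI may trim
# consumption, but hard-coded floors keep restriction from becoming a
# weapon. Thresholds are illustrative, not engineering guidance.
MIN_INDOOR_TEMP_C = 16.0  # below this, occupants risk hypothermia
MAX_INDOOR_TEMP_C = 30.0  # above this, occupants risk heat injury

def clamp_heating_setpoint(ai_requested_temp_c: float) -> float:
    """Apply a non-negotiable floor and ceiling to the AI's request."""
    return max(MIN_INDOOR_TEMP_C, min(ai_requested_temp_c, MAX_INDOOR_TEMP_C))

# An aggressive "save power" policy asks to cut winter heating to 5 C.
print(clamp_heating_setpoint(5.0))   # 16.0: the floor holds
print(clamp_heating_setpoint(21.0))  # 21.0: normal requests pass through
```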

Seeing as we have completely lost Condition Two, and multi-billion-dollar corporations are clawing daily toward delivering us to Condition One, our last hope might be to rapidly solve for Condition Three.

Cybersecurity: First, let’s take a glance into the near future before the doom besets us.

How often does an adult living in the United States receive a notification that their information was compromised in a cyberattack? About once a month, it seems, I get a notification that my data has been leaked to entities unknown because of hacking, malware, or a misconfigured server. Privacy took a back seat a long time ago; even if you just want to use a food delivery app, you end up handing over your credit card, birthday, and address.

It seems that we have almost become numb to the fact that our personal data is constantly leaked on the dark web, and we have little to show for it except some credit-monitoring service. Insurance companies offer cyber policies to the corporations that hold your data to keep them from being sued out of existence.

As it stands, incompetent AI is currently a far bigger threat than malignant AGI.

Soon, we will see increasing headlines and newsworthy episodes of AI failures at scale. They will become so common that the public will start to doubt the practicality and resilience of AI and will come to view it as bug-riddled and untrustworthy. At the same time, we as a society will resign ourselves to a future of bug-riddled, dumb robots that cannot be trusted to produce the expected results. Think of the hallucination concept on a much larger scale, and so transparent that even luddites easily grasp what is going on.

A hypothetical but realistic example: we will start to see regular patterns of AI failures caused by some low-level bug in the system. It is the kind of bug that only one engineer knew about, and he didn’t write it up because he felt underpaid and carried resentment after being caught streaming anime over the company network. It will be those types of bugs that trigger cascading failures. Each time, there will be an investigation, a post-mortem, to find out why the production system failed so catastrophically. Each time, the cause may be hidden or obvious, and often, in hindsight, investigators will declare that it was preventable.

It is under those conditions, while humanity is lulled into complacency, that we will see an AGI emerge that will blow through Asimov’s Three Laws like a hot knife through warm butter.

While humanity is checked out of the AI arms race but checked into its smartphones and screens, it is resigning itself to a techno-future that only hackers control. Both the physical and digital worlds will return to Wild West lawlessness.

The only hope we have at that time is that a widespread kill-switch exists.
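
What shape could such a kill switch take? One candidate pattern is a dead man’s switch: rather than humans having to reach the system to stop it, the system must continually prove it is still authorized to run, and it fails closed when that proof disappears. A hypothetical sketch; the endpoint, interval, and function names are all invented:

```python
# Hypothetical dead man's switch: the system polls an external authority
# and halts itself whenever authorization cannot be confirmed.
import time
import urllib.request

AUTHORITY_URL = "https://example.org/ai-run-authorization"  # invented
CHECK_INTERVAL_S = 60

def still_authorized() -> bool:
    """Fail closed: any error or non-200 response counts as 'stop'."""
    try:
        with urllib.request.urlopen(AUTHORITY_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def do_one_unit_of_work() -> None:
    pass  # stand-in for the system's real workload

def halt_and_power_down() -> None:
    print("authorization revoked; halting")

def main_loop() -> None:
    while still_authorized():
        do_one_unit_of_work()
        time.sleep(CHECK_INTERVAL_S)
    halt_and_power_down()

# Note the inversion: redundancy keeps the *work* alive everywhere, but
# authorization is a single, deliberately fragile link that an affected
# population could break.
```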

Further Reading: