
Frequently Asked Questions: Can AI be Evil?

Can artificial intelligence be evil? Let's start by clarifying one thing: current AI technology is not sentient. Despite misleading claims to the contrary, we have not achieved sentient AI, and that fact bears emphasizing.

Some headlines assert that today's AI possesses sentience, but such claims are unfounded. Others hedge, suggesting that we are on the brink of sentient AI or that we can already observe hints of it. That subtler framing is just another way of avoiding the plain truth: sentient AI does not exist.

A good part of my work can be summed up as talking about fascinating things with curious people. Q&A time after I give a talk is probably my favorite part of what I do. I’ve been fact-checked live on stage (I was indeed wrong), posed with complex questions about ethics, and often faced with questions that I just can’t stop thinking about.

The most common question I receive also happens to be my favorite. It was most recently asked by a 14-year-old wearing a sick bowtie, after I spoke at a summit in New York City:

“Are you afraid of AI taking over and becoming evil?”

My response to this question is a brief thought exercise:

“We are talking about something that is in its infancy. Getting it started was easy, but most of its functions are still rife with inconsistency, full of mistakes and errors. Yet we see it learn every day, developing new connections and frameworks in ways we can’t really understand. We know that soon, it will be able to stand on its own two feet, it will begin to learn much more rapidly, it will know more than us, it will be capable of doing more than us, and it will seek opportunities, find solutions, tackle challenges, and achieve things in the world in ways we haven’t yet imagined. And maybe, just maybe, it will turn out to be a jerk.”

Now tell me: did I just describe Artificial Intelligence or a human child? This is a decidedly humanistic view on the development of AI. If we approach a rapidly developing technology in much the same way as we approach our own reproduction, perhaps we can remove fear from the equation.

Humans procreate constantly (pumping out about 228,000 new humans daily, in fact), and we hope beyond hope that our children will be smarter and more powerful than us in every way. As we age, we even submit to their leadership precisely because they have greater knowledge, power, and ability. Yet we don’t mire ourselves in fear that our kids will turn bad.

Of course, our children still do sometimes end up as criminals, psychopaths, swindlers, dictators, and lawyers. But we have built educational, legal, moral, and ethical systems that guide their upbringing, and have paired those with judicial and rehabilitation systems as a backup plan for the ones that fall through the cracks.

Tay did not make the list.

There’s a principle in computer engineering: when a component in a system changes by an order of magnitude, it often necessitates a redesign of the fundamental building blocks of that system.

So, back to our analogy of artificial intelligence as a child, the challenge we have before us is the very real likelihood that our “child” in this case will not just be incrementally better, faster, and stronger, but rather an order of magnitude more capable. It follows, then, that a redesign of fundamental social systems may be in order to prepare for that future.

People with more letters behind their names than I have are speaking out about the importance of carefully guiding the beneficial development of artificial intelligence, and Elon Musk is even launching a new company that will focus on brain-computer interfaces to help humans keep up with the pace of technological change. As the topic becomes more mainstream, the complexities continue to deepen. So just as we dream for our children to surpass us, we can dream for our technology to do the same.

Let’s raise AI right, push it in the best directions, and admit that, yes, it may outpace us and might even run amok. To get to the real issues, we need to shift the conversation from one of fear to one of preparation, asking instead how to adapt the systems we rely on to ensure that this child grows up to contribute positively to society.

The notion of evil is a human concept, a construct that we have created. Consider a knife. It can be used for practical purposes, like cutting vegetables, or turned against another person. Yet we don't label the knife itself evil based on its potential for harm. We understand that a knife is simply a tool, and its morality is determined by the intentions and actions of its user.

Similarly, AI operates as a statistical toolbox, a tool created and used by humans to perform specific tasks. AI itself is neither inherently good nor evil. It is the ethical responsibility of those who employ AI to ensure its proper use and to consider its potential impact. AI can be employed to address global challenges and improve society, or, in the wrong hands, misused to cause harm.

From the earliest tools to the age of the internet, fear of the unknown has accompanied technological advancement. So, is evil AI something we should fear? The real reasons people are afraid of artificial intelligence come down to a few common factors: anxiety about machine intelligence in general, worries about widespread unemployment, apprehension regarding super-intelligent AI, the potential for AI to fall into the wrong hands, and the general caution that accompanies any new technology.

Confronting the unknown and embracing our fears is a crucial step in shaping a meaningful future. By acknowledging and facing our fears head-on, we gain the strength and knowledge needed to overcome them. It is through this process that we can actively participate in creating a future that aligns with our intentions and aspirations, equipped with the wisdom and understanding to navigate the uncharted territories of life.

Brett Schilke

Brett Schilke is a strategist and storyteller for the future today, in pursuit of a world more present, purposeful, and connected. He is currently Head of Futurecraft at Eidos Global.