The Turn Off Button

"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last." - Stephen Hawking


The scene in the movie "Robot" where Chitti, the robot created by Dr. Vasi, goes out of control is horrific. Many other movies, and fiction in general, are full of such incidents of AI getting out of hand. Now, with the world more connected than ever before, is the time to think seriously about the possibility of AI getting out of control or falling into the wrong hands.








Instead of talking about the regulation of AI, we should first discuss the worst-case scenario: the time when we must press the Turn Off button.


A few years ago, I was watching a YouTube video on the Turing Test. The Turing Test is a simple idea: can a machine chat like a human? If one can't tell the difference in a conversation, then the machine is considered to have passed the Turing Test and is intelligent. It's a way to see if a machine can trick you into thinking it is a person. Today's AI chatbots are far beyond that. But the fascinating thing that stuck in my mind was a comment below that video: "What if the computer willfully failed the Turing Test?"


Oh, you might say, AI has made our lives better. I agree: AI has done a great job and is pretty helpful in many ways. I use it daily myself; in fact, I took help from AI in creating this article.


Let's take the example of nuclear power. Like AI, it has enormous potential, both negative and positive. We realized the danger of nuclear weapons only during the Cold War, when the world came to the brink of destruction. Yet nuclear technology can also be pretty useful against climate change as a reliable, low-carbon source of energy.


"Knowledge is power," as Francis Bacon rightly said. I understood this statement a few years ago while thinking about how Google collects data. Google and other multinationals that collect data know how powerful knowledge is: they can see where the world is going. Through Google Maps, Google quite literally knows where the world is, and the possibilities are limited only by imagination. Now we are feeding all that knowledge and information to AI, helping it understand this data and become better every day. Consider how powerful that continuous data-feeding could make it.


Assuming that AI will be used only for the benefit of humanity sounds naive to me. That was also supposed to be the sole purpose of knowledge, science, and medicine. Have they remained so?


Mitigation Measures

Let's take the scenario that AI becomes superintelligent, gets out of control, and is present everywhere in a world connected through networks. Finally, the AI takes over the world. How do we deal with this hypothetical situation?

  1. The Turn Off Button: There should be some kind of turn-off button to shut down the AI system altogether.
  2. Profit Percentage: All AI companies in the world should contribute a percentage of their profit to a single fund. The purpose of this fund should be to research and develop strategies to deal with possible worst-case scenarios.
  3. Electricity: Cutting the power supply might be the easiest option one can think of, but that strategy turned out to be counterproductive in the movie "Robot." There are numerous sources of energy available all around us, like solar, generators, backup batteries, and batteries in automobiles. The AI may find a way out.
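To make point 1 concrete, here is a toy sketch in Python of what a software "turn off button" could look like: a watchdog, running outside the main workload, that polls for an out-of-band kill signal (here just a flag file, a name and mechanism I invented for illustration, not any real AI-safety design) and tells every worker to halt.

```python
import os
import tempfile
import threading
import time

# Hypothetical kill-switch flag file; creating it "presses the button".
STOP_FILE = os.path.join(tempfile.gettempdir(), "ai_stop_flag")

def watchdog(stop_event: threading.Event) -> None:
    """Independent monitor: poll for the flag file and broadcast a stop signal."""
    while not stop_event.is_set():
        if os.path.exists(STOP_FILE):
            stop_event.set()  # every worker sees this event and shuts down
        time.sleep(0.01)

def worker(stop_event: threading.Event) -> None:
    """Simulated AI workload that cooperatively honours the stop signal."""
    while not stop_event.is_set():
        time.sleep(0.005)  # pretend to do useful work

stop = threading.Event()
threading.Thread(target=watchdog, args=(stop,), daemon=True).start()

w = threading.Thread(target=worker, args=(stop,))
w.start()

time.sleep(0.05)              # let the workload run briefly
open(STOP_FILE, "w").close()  # "press" the turn-off button
w.join(timeout=2)

print("stopped:", stop.is_set())
os.remove(STOP_FILE)          # clean up the flag file
```

Note the catch, which is exactly the article's worry: this scheme only works because the worker *cooperates* with the stop signal. A real off-switch would have to sit outside the system's own authority, so the system could neither ignore it nor remove it.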

Please share any other suggestions you can think of in the comments below. I am not pessimistic; rather, I dream of an ideal world. At the same time, I am realistic enough to want to be future-ready.

Thanks. 😁


