AIpocalypse Maybe.

My irritation with the AI issue, and my total lack of concern about it, was rooted primarily in my stance as a dualist in the philosophy of mind. It was actually Elon Musk who made me realize that one's take on consciousness meant very little; that it didn't matter whether the machine was truly conscious, truly alive. In a video of his talk at a governors' meeting in 2017, he gives an example of his AI concerns:

“I want to emphasize: I do not think this actually occurred. This is purely a hypothetical. I’m digging my grave here… But you know there was that second Malaysian airliner that was shot down on the Ukrainian-Russian border, and that really amplified tensions between Russia and the EU in a massive way? Well, let’s say you had an AI where the AI’s goal is to maximize the value of a portfolio of stocks. One of the ways to maximize value would be to go long on defense, short on consumer, start a war. How can it do that? Hack into the Malaysian airlines aircraft routing server, route it over a war zone, then send an anonymous tip that an enemy aircraft is flying overhead right now.”

Personally, I’ve always been bothered by psychopaths in society. While those who become serial killers are certainly a concern, I have been even more worried about what I’ve heard referred to as socialized psychopaths: the kind who occupy the highest levels of power in businesses and corporations. Lacking empathy, they are interested above all in gaining and maintaining power.

This bothered me for many reasons, not least because it says something about our society: that psychopathy is in fact a survival technique in the context of our culture; that it constitutes a successful adaptation within our system; that the characteristics of our culture reward that personality type. When Musk (who is most certainly not a psychopath, I should add) speaks about AI, he is basically describing technologically generated psychopathy. And it’s easy to see how his example could play out even if a machine or program is not conscious at all.

The larger point I’d been missing until now is that it wouldn’t have to reach the extremes displayed in countless movies. As Musk also stated, “until people see robots go down the street killing people, they don’t know how to react.” It doesn’t have to reach that level; it need not manifest so blatantly to constitute a threat to the survival of the human species. And, he says, we really cannot afford to delay:

“AI is a rare case where we need to be proactive in regulation instead of reactive, because if we’re reactive in AI regulation it’s too late. Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and then after many years a regulatory agency is set up to regulate that industry. That in the past has been bad, but not something that represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals but they were not harmful to society as a whole.”

He co-founded a research company, OpenAI, in an effort to regulate the inevitable, though it was recently announced that he has distanced himself from it due to conflicts with his other projects.

While AI still doesn’t rank as the greatest threat to human civilization in my mind, what he has had to say on the subject has raised my concern.

As if we didn’t have enough threats to our species to worry about…
