A.I.pocalypse, Neuralink and the Superbrain.

After I got home that evening and hopped on YouTube, I quickly discovered that The Joe Rogan Experience podcast was streaming live — with Elon Musk as the guest. I think I cheered aloud. I then sat my happy ass down for two and a half hours, watching it from start to finish. However brief the moment was, I knew from the instant he took that single, solitary puff of Mary Jane that this was what the media would jump on. Never mind the substance that could be procured from their conversation, never mind even the whiskey they were drinking — it was all about the stuttering genius partaking in a tiny hit of the Devil’s Lettuce.

Akin to Eckhart Tolle, he exhibits a characteristic behavioral trait when asked a question. He looks down, eyes flickering back and forth as he processes the question and awaits the response offered by the depths of his relentless, explosively hyperactive mind. Sometimes, like Jordan Peterson, his eyes dart to the sky, but I think the same subjective activity is at work. In the process, he seems to naturally experience the sharp distinction between inner and outer reality in much the same way that I do when I’ve smoked a sufficient amount of marijuana. Put simply, he becomes so absorbed in his internal focus, drawn into his mind to such an extreme and intense degree that all external sensory signals are drowned out, utterly lost to him. A moment of loaded silence passes. Then he changes channels, placing his focus, his target of psychological absorption, yet again on the external world, and offers his response.

It’s an abrupt switch. There is no middle ground. From the depths of the Mariana Trench to the highest point of Olympus Mons, no segue necessary. It is no wonder that, though he has experimented with meditation via mantra, he feels anxiety towards Rogan’s suggestion that he try out a sensory deprivation flotation tank. The guy’s mind is a ceaseless swarm of ideas.

When AI is inevitably brought up, he confesses that he has become less worried about it. Not because circumstances have become less dire in his eyes, but because he’s come to adopt a more fatalistic attitude.

His calls for prospective regulation have fallen on deaf ears, and while words like disappointment, frustration and depression don’t begin to capture how that makes him feel, he doesn’t seem surprised, and he explains why. This is simply not the way the process of regulation tends to come about. Shit first hits the fan and then, over a period of struggle that could take up to a decade, regulations are finally put in place. Like seat belts in cars. But if that pattern were to play out with respect to regulating AI, he says, it would be far too late.

Given what he considers to be a failure on his part to convince The Powers That Be, he has other safety measures in mind. First, though, it might be best to illustrate precisely what he thinks the danger is.

As he sees it, the danger of AI is likely to manifest, first and foremost, in one of two fashions. It will either be controlled and weaponized by a small group of people — a government faction, a terrorist organization — or it will develop a will of its own, and in either case there is potentially grave danger. If AI develops into a superintelligence, it will quickly become capable of improving upon itself, with the consequence of those improvements leading to still greater and swifter improvements. It essentially achieves a singularity, Musk says, in that the ultimate result is impossible to predict. Circumstances may not lead to our extinction, but it would certainly be well outside human control. AI would be so intelligent that even if it were benign, we’d be like pets to them. And they would be gods to us. This, Musk argues, wouldn’t exactly be the Panglossian “best of all possible worlds,” however, and I couldn’t agree more.

His solution, unfortunately, scares the shit out of me even more than the alleged dangers inherent in the rise of AI itself. Given the inevitable rise of superintelligent AI, he says, the best-case scenario for human beings would be to merge with it.

“If you can’t beat it, join it,” he offers.

It would be the only way to defend ourselves from AI, to arm ourselves against it, and to prevent us from becoming their house pets. This isn’t as drastic and divorced from our current circumstances as we may think, either, he stresses. In essence, we are already cyborgs. We have digital versions of ourselves online through email and social media. We can answer questions on Google, hold video conferences with people across the world. Our phones and laptops are extensions of us.

There is clearly a difference, of course, between this, our current relationship with technology, and what we typically think of when we imagine a cybernetic organism. He describes this distinction as our current “bandwidth problem”: our rate of input/output is far too snail-paced, particularly the output. Our vision can take in a lot of data through all the text and imagery on our screens, but we have to deliver our output through hunt-and-peck fingers or, in the case of our phones, merely our dumb, clumsy thumbs. He suggests that what we need ASAP is a high-bandwidth interface with the cortex. Just as the cortex works symbiotically with the limbic system, the AI could work symbiotically with the cortex, making us more or less what we typically imagine as cyborgs. The rate of data exchange would be so fast that, experientially, we would be one with the AI extension. We would have enhanced cognitive ability. We would interact with one another in simulated worlds, download limitless data, even back up our identities and achieve technologically-mediated immortality.
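To get a feel for how lopsided that bandwidth problem is, here’s a crude back-of-the-envelope sketch. Every number in it is an assumed, illustrative ballpark (a middling typing speed, a commonly cited rough estimate for the retina’s data rate), not a measurement:

```python
# Crude sketch of the "bandwidth problem" — all figures are rough,
# assumed numbers for illustration only.
typing_wpm = 40            # assumed: a middling typist
chars_per_word = 5         # conventional word-length convention
bits_per_char = 8          # one byte per character

# Output: what our fingers can push out per second.
output_bps = typing_wpm * chars_per_word * bits_per_char / 60
print(round(output_bps))                # ~27 bits per second out

# Input: a commonly cited ballpark for the retina's data rate.
input_bps = 10_000_000                  # assumed: ~10 megabits per second
print(round(input_bps / output_bps))    # input outpaces output ~375,000-fold
```

However fuzzy the numbers, the asymmetry is the point: our eyes drink from a fire hose while our thumbs drip out a few dozen bits per second.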

This is the aim of one of his companies, Neuralink: to blend man and machine so that we ride the AI wave rather than be submerged by it. And it fucking terrifies me.

In the midst of hearing him talk about this, I was reminded of both Delgado and Bearden, who helped me fashion nightmarish scenarios for the human future back in high school.

Ah, the good ol’ days.

I first heard of Dr. José Manuel Rodríguez Delgado back in my sophomore year, I believe. Contrary to the popular notion that physical abnormalities were the cause of mental disorders such as schizophrenia and epilepsy, he hypothesized that the underlying issue might be erratic electrical activity in the brain. After he learned of research that seemed to show that the movements of animals could be controlled by stimulating their brains with electrodes, he focused his own research there. Rather than lobotomizing people, he thought he might be able to jolt their brains into the normal manner of functioning, and to meet this end he invented the stimoceiver. It was essentially a radio receiver attached to electrodes, and he went on to implant them in the noggins of various animals including cats, dogs, monkeys, chimps and eventually human beings.

Through use of an implanted stimoceiver he could remotely control various parts of the limbic system. In the experiments he began carrying out in the 60s, he found that he could produce a wide spectrum of emotions and behavior in his subjects from the simple to the curiously complex — and all at the mere press of a button, the crank of a dial. The way we passively change channels on the television, adjust the temperature in our house or turn the dial on the radio, with just as much ease he could produce euphoria or terror in another, he could summon up a fit of rage, conjure sexual desire, or produce passivity. Armed with nothing but a remote control, he once stepped into the ring with a bull implanted with one of his stimoceivers and was able to stop the aggressive beast dead in its tracks every time it charged at him.

When his experiments moved to human beings, he found that he could induce emotional and behavioral responses they were unable to overcome by will. Even more disturbingly, in some cases the subjects mistook their remote-controlled behaviors and emotions as natural responses — as products of their own free will.

Allegedly, such behavior can also be produced through posthypnotic commands, but that’s another long, ranting article.

Aside from remote control, he also discovered he had the ability to reprogram or condition the brain. There was a chimp named Paddy, and after she was implanted he monitored her brain waves and induced a painful sensation every time her brain produced certain spindles, ultimately training her brain to stop producing them altogether in just shy of a week. Admittedly, the potential this has for preventing seizures is amazing. It could also “train the brain” against depression and countless other mental disorders. Psychiatrists’ offices are full of patients who take drugs every day to overcome these very things — drugs that often have horrible side effects, at times much worse than the symptoms they’re administered to treat. Here, a person’s neurological habits could be subject to a limited period of conditioning. No lifelong routine of popping pills necessary.

Still, it’s more than a bit horrid.

So while there are certainly beneficial aspects to this technology, it is outweighed in my mind by the potential horror that could result from its use — particularly today, where upgraded versions of his stimoceiver would be much smaller and inserted far more easily. And technologies such as Neuralink could provide an avenue for such control as well, either by terrorists, hackers, or power-hungry factions of the intelligence community.

In addition, Neuralink may potentially pave the road to something even more terrifying, and this is where we come to Thomas E. Bearden. I first heard of him in the book Silent Invasion by Ellen Crystall, which led me to his own book, The Excalibur Briefing, which I found interesting insofar as I could comprehend it at the time. Eventually I came across him again, without seeking, in the mentally nauseating New Age book Gods of Aquarius by Brad Steiger, where he was given the last word. I photocopied that part of the chapter during high school and, looking into my files two days after watching the Rogan podcast with Musk, I found I still had it.

Reading this back then constituted a turning point in my thinking, where my growing paranoia regarding the experiments of Delgado reached a peak and solidified.

In Gods of Aquarius, Bearden speculated that:

“The evolution of a life-bearing planet may be divided into stages, the first five of which are: (1) The formation of the planet itself and some billions of years of cooling, so that a primordial atmosphere and ocean are gradually evolved; (2) The fomenting of amino acid structures in the violent convulsions of the primeval sea and planet; (3) The formation of the self-replicating supermolecules, DNA and RNA; (4) The formation of one-celled organisms; (5) The formation of multicellular organisms. At the upper end of the fifth stage of evolution, the intelligent mobiles emerge, as do eventually tool-using intelligent mobiles. This is the level on which man finds himself on the planet Earth.”

It is interesting to find that much of what he said, particularly beyond the quoted portion above, falls in line with my current thinking. He states that in life, in organisms, there are two competing control systems. The first he describes as “genetically programmed,” referring to the instinctive, genetically-hardwired patterns inherited by virtue of being a member of the species; the second deals with the “genetically unprogrammed,” which is to say the patterns learned or conditioned through individual experience and cultural influence. Organisms must have some degree of both in order to survive, though a more “intelligent” species has more of the second.

In order for an intelligent organism to utilize its intelligence to its fullest potential, however, it must bear a body that provides the naturally-evolved tools or technology that make the utilization of that potential possible, which is something I’ve contemplated in depth. As I’ve written before, it may very well be that a species of octopus exists in the deep oceans under the thick surface ice of the moon Europa that bears an intelligence far greater than our own. Despite its relative superintelligence, however, it would be unable to develop spears, let alone the advanced technology our comparatively stupid selves have managed to develop — simply because it does not bear opposable thumbs or exist within an environment that would enable it to create fire.

Human beings are a tool-creating, tool-using species, however, and over the course of evolution we have developed greater and greater technology, or systems of tools. Our technology, serving as extensions of our bodies, which themselves constitute an extension of our minds, brought us to dominate all other species on earth and increase our population. It will also likely pave the way to our self-destruction, however, because our species’ in-fighting is no longer limited to our genetically-evolved technology (our bodies), nor is it any longer regulated by our genetically-hardwired instincts.

We need not strangle someone, or even shoot them personally. We can bomb them remotely. This distance, made available through our technology, gets around that limiting, hardwired sense of empathy.

Our technology is also advancing at an exponential rate. Bigger, better, faster, stronger. More destructive, at least potentially. We are on a positive feedback loop of conflict heading towards destruction. This was as central to Bearden’s concern as it was, or so it seems, to Delgado’s.

Despite the dangers inherent in going forward, we cannot go backward in evolution, Bearden concludes. Becoming a technological species was a one-way threshold. In terms of “natural” evolution, the human species has achieved its final stage, and the next step, the sixth stage of evolution, must be a conscious one — and, in Bearden’s estimation, a technological step, if we survive ourselves and are therefore capable of taking that next step at all.

(It also strikes me that this could be the filter that explains the “cosmic silence” so often spoken about when discussing the so-called Fermi Paradox.)

What would this sixth stage constitute? In the eyes of Bearden, by necessity it must be characterized by the reintroduction of internal control, preventing the kind of “destructive competition” that accelerates us towards species suicide, without relinquishing our intelligence, which is to say our “genetically unprogrammed” nature.

Over the course of recorded human history, we have struggled to achieve this.

“Law, logic, philosophy, creed, religion, practice, love, sacrifice, money, the ballot, and the bullet — all of these have empirically proven that they cannot solve the human problem for all humanity. Since none of the solutions advanced to date can solve the problem, we must discard them all and search for a new approach.”

How? What on earth could solve the problem? The solution he proposes could be seen as the inevitable trajectory of our technology, the endpoint of its allegedly exponential rate of advancement, barring self-termination. He contends that the only available solution is to unify all human brains “into one giant superbrain,” adding that “one would also hope for the ‘maximum individual freedom within the constraints of minimum essential inter-individual control.’”

“One would hope”? Seems frighteningly low on his hierarchy of values, from the smell of it.

This could be accomplished through, and would in all likelihood (at least in my own, paranoid mind) be the end result of, the kind of technology Delgado was developing and, more to the point, the kind of mind-machine interface technology that Musk was proposing as the most beneficial avenue given the inevitable rise of superintelligent AI.

Bearden envisions this as each individual or “mancell” functioning within its own personal sphere, with interactions between such mancells governed by the technologically-induced harmony of what would constitute a technologically-mediated superorganism or massmind.

Bearden hypothesizes that when, to start simply, two minds are technologically linked — at least in the kind of high-bandwidth, time-delay-free union Musk aims for in his Neuralink effort — a phenomenon occurs that is not unlike what happens naturally within the complex mesh of matter packed into the typical human skull. The cortex and the amygdala, or limbic system, are in symbiosis, causing them to identify not as individual parts, but as a whole, just as Musk explained.

The two sides of the cerebrum or cerebral cortex, the left and right hemispheres of the brain, have a similar relationship, perceiving themselves not as the dualistic aspects that they are physically, but as the singular entity they are experientially and enact behaviorally.

As Bearden explains, the right hemisphere controls the left side of the body and the left hemisphere controls the right, and one hemisphere, typically the left, tends to dominate. Despite this, we do not typically consciously experience any separation between the left and right side of our body. Both hemispheres are connected by the corpus callosum, a thick mesh of nerve fibers that transmits the messages between them at such a high rate of speed that it produces a convincing illusion of immediacy from the standpoint of conscious awareness.

“If one holds up both hands and observes them, one is perfectly aware that here are two separate hands, but is only aware of one being to whom the hands belong, even though each hand is being controlled by a different cerebral hemisphere.”

In other words, in those with functional cerebral hemispheres, the inter-cerebral bandwidth is heightened to the point where our consciousness cannot detect any time-delay between one hemisphere and the other. Whatever one hemisphere generates, the other hemisphere experiences as having generated itself. Put another way, Bearden explains:

“… when consciousness can perceive no difference, identity results, just as separate movie frames appear continuous (each two appear one) when flashed at 22 frames per second. Thus in one’s own body, two brains are integrated into one functional brain and one perceptual personality. There is no conscious separation of the two brain hemispherical perceptions, and one consciously is aware of only one being or continuity, himself.”

This is precisely what Musk appeared to be trying to convey when he spoke about the AI extension. The AI extension he appears to be aiming at through Neuralink would constitute an artificial, technological layer of the brain with which, given sufficient bandwidth, the brain would perceive itself as synonymous — just as one hemisphere of the cerebrum considers itself synonymous with the other. If such a superbrain were to be accomplished, it would look upon the singular human organism in much the same way that you or I currently look upon one of our hands.

On a positive note, this linkage, according to Bearden, would naturally eliminate competition between individuals within the network, at the very least what he describes as “destructive competition,” as such behavior would be as self-defeating as you using one of your hands to stab the other. We would be one super-brain with access to countless bodies.

Where would there be room for an individual? For privacy? For personal freedom?

While this superbrain is not necessarily what Musk proposes, it certainly seems like a step in that direction and may even be an unintended consequence of the aims of Neuralink. As with Delgado and Bearden, Musk has good intentions, but be it intentional or not, this may ultimately destroy the individuality we presently enjoy, obliterate any vague semblance of privacy and personal freedom. Liberty of the soul could meet its dead fucking end here.

And that saying, the one about what the road to hell is paved with? Call me paranoid, and I hope I am, but it still might be a good one to keep in mind.


AIpocalypse Maybe.

My irritation with the AI issue and total lack of concern about it was primarily based on my stance as a dualist in the philosophy of mind. It was actually Elon Musk who made me realize that one’s take on consciousness meant very little; that it didn’t matter whether the machine was truly conscious, truly alive. In a video of his talk at a meeting of the National Governors Association in 2017, he gives an example of his AI concerns:

“I want to emphasize: I do not think this actually occurred. This is purely a hypothetical. I’m digging my grave here… But you know there was that second Malaysian airliner that was shot down on the Ukrainian-Russian border, and that really amplified tensions between Russia and the EU in a massive way? Well, let’s say you had an AI where the AI is always to maximize the value of a portfolio of stocks. One of the ways to maximize value would be to go long on defense, short on consumer, start a war. How can it do that? Hack into the Malaysian airlines aircraft routing server, route it over a war zone, then send an anonymous tip that an enemy aircraft is flying overhead right now.”

Personally, I’ve always been bothered by psychopaths in society. While those of the type that become serial killers are certainly a concern, I have been even more worried about what I’ve heard referred to as socialized psychopaths — the kind that occupy the highest levels of power in businesses and corporations. Lacking empathy, their prime interest is in maintaining and gaining power.

It bothered me for many reasons, not least of which is the fact that it says something about our society: that psychopathy is in fact a survival technique in the context of our culture; that it constitutes a successful adaptation in our system; that the characteristics of our culture reward that personality type. When Musk (who is most certainly not a psychopath, I should add) speaks about AI, he is basically describing technologically-generated psychopathy. And it’s easy to see how his example could manifest even if a machine or program does not possess consciousness.

The larger point I’ve been missing until now is that it wouldn’t have to reach the extremes displayed in countless movies. As Musk also stated, “until people see robots go down the street killing people, they don’t know how to react.” It doesn’t have to be at that level, it need not manifest so blatantly, to constitute a threat to the survival of the human species. And, he says, we really cannot delay:

“AI is a rare case where we need to be proactive in regulation instead of reactive because if we’re reactive in AI regulation it’s too late. Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry and then after many years the regulatory agencies set up to regulate that industry. That in the past has been bad, but not something that represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals but they were not harmful to society as a whole.”

He set up a research company, OpenAI, in an effort to regulate the inevitable, though it was recently announced that he has distanced himself from it due to conflicts with his other projects.

While AI still doesn’t rank as the greatest threat to human civilization in my mind, what he has had to offer about the subject has come to raise my concerns.

As if we didn’t have enough threats to our species to worry about…

Punctuality of the White Rabbit.

“You live in the past,” he says to me, shaking his head.

Oh, fuck you. We all do. You think you’re all hip, living high and friggin’ mighty in the here and now? Let me steal one from mindful atheist Sam Harris — who, incidentally, looks like he could be Ben Stiller’s twin — and offer that you think of it in this way.

The sense data from the tip of your finger has a longer way to travel than does the sense data from the tip of your nose. With that in mind, touch the tip of your finger to the tip of your nose while mindfully noting the apparent simultaneous experience of feeling your finger touch your nose and your nose be touched by your finger.

Yeah. How the fuck do our bodies pull that shit off, right?
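To get a sense of the size of the gap the body has to paper over, here’s a crude back-of-the-envelope sketch. Every number in it is an assumed, illustrative figure — a rough path length for each signal and a ballpark conduction speed — not a physiological measurement:

```python
# Back-of-the-envelope gap between the finger's and the nose's signals.
# All values are assumed, illustrative numbers — not measurements.
arm_path_m = 1.0        # assumed: fingertip to brain, call it a meter
nose_path_m = 0.1       # assumed: nose tip to brain, call it ten centimeters
nerve_speed_m_s = 60.0  # assumed: ballpark sensory nerve conduction speed

finger_ms = arm_path_m / nerve_speed_m_s * 1000   # finger signal's travel time
nose_ms = nose_path_m / nerve_speed_m_s * 1000    # nose signal's travel time
print(round(finger_ms - nose_ms, 1))              # the finger lags by ~15 ms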

Well, there’s a time delay, but our brains are wired to first collect all the data and then present it as a whole to our conscious awareness. Otherwise our experience would be like a multi-sensory manifestation of those irritating streaming videos where the visuals fail to match up with the sound.

“That would suck. I hate those. Especially in porn. He slaps her ass and then you don’t hear that meaty skin-on-skin sound until moments later. Throws off my rhythm.”

Word. At any rate, the process of receiving, collecting, translating and displaying sense data to consciousness creates a time delay between when stuff happens and when we perceive it happening. I mean, go outside on a dark and clear night away from city lights and look up. If you think you’re seeing now up there, you are a considerable distance off.
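For the curious, the arithmetic on that last point is easy to sketch. The distances below are standard average approximations:

```python
# Light-speed delay for a few familiar sights — "now" up there is old news.
c_km_s = 299_792.458     # speed of light in km/s
moon_km = 384_400        # average Earth-Moon distance, approximate
sun_km = 149_600_000     # average Earth-Sun distance, approximate

print(round(moon_km / c_km_s, 2))       # moonlight: ~1.28 seconds old
print(round(sun_km / c_km_s / 60, 1))   # sunlight: ~8.3 minutes old
```

And that’s just the neighborhood. The light from even the nearest stars is years old by the time it hits your retina.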

So ultimately, not only is there a time delay between reality and our awareness of it, but our responses are delivered fresh, set as much as several seconds before we are consciously aware of having ‘arrived’ at that decision.

Sense data as well as our reactions to it do not reach our awareness until after they are generated.

We are all living in the past. We are memories encapsulated in memories. The here and now is always and forever truly there and later. For all we know, here and now are but a myth, for we cannot, despite all our potential might, catch up to it, share the same three-dimensional cross-section with it. We are imprisoned within the illusions woven by the nervous system of the body. It serves not as a battery for A.I. Gone Wrong but rather as our personal and, to a large extent, customized Matrix.

We are forever a step behind Here and Now, bathing in its shadow. So accept our partnership in this long and heavy realm of yesterday and shut up.