Can dumb apes plan for future AI demigods?

Wired Magazine bought and published a concise version of my unusual take on Artificial Intelligence: Give Every AI a Soul – or Else. I assert that we must break free of the three standard ‘AI-formats’ that seem to be implicitly clutched by almost every maven in the field, assuming…

… either that these new entities will take format #1 – that of a corporate slave – or else #2 an invasive, ever-expanding blob, or else that they’ll all merge into #3… some kind of uber-monolithic ‘Skynet.’ Once you are aware of these hoary assumptions, you notice how omnipresent they are! And I show that they can only lead to disaster.

Instead, consider a fourth, that AI entities might be held responsible and accountable if they have individuality… even ‘soul.’ In other words: to solve the ‘crisis in artificial intelligence,’ the most-powerful AI beings must say – “I am me.” **

Before you shrug off that option, maybe understand it, first? And notice if your response is “Of course AI will –” followed by reciting one of those three tiresome clichés.

== More aspects to the AI Crisis that you might not have considered ==

My latest YouTube video: AI is alive! Or is it? covers a different part of this expanse. There I propose that we may never know exactly when cybernetic beings become (or became) conscious. Nor is that truly the important question! Completely aside from what ‘format’ they take is the question of how we organic types will react when these entities pass every Turing Test, feigning human empathy, whether or not there’s anything conscious under the hood.

Moving outward… almost hidden in all the recent spate of fulminations about possibly ‘sentient’ AI are two news items:

#1.  On the horizon for at least three years: The dawning of the world of Kiln People

“A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.” 

The individual listens to hours of podcasts in the scanner. Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.

#2 Two (out of four) apparently brain-dead people taken off life-support showed sudden spikes in neural activity just before death. The dying patients’ gamma wave patterns reached levels higher than those observed in normal conscious brains.

While the second item does also hearken to events in Kiln People… the creepiest and most eye-blinking comparison is to an under-rated 80s flick, Brainstorm, with Christopher Walken and Natalie Wood (in her final film role), all of it conveyed with a lovely, unashamedly Faustian theme!

== Competition among AI ==

One member of my blog-munity recently commented: “I agree that reciprocal surveillance and competition among AIs is the best way to deal with the risks of AI, but the downside is an AI arms race. That’s going to take some laws to regulate, whether such laws are embedded in the AIs or imposed on the owners/handlers of the AIs, or both. Competitive games need rules and referees.”

I agree at all levels… and that is my point! Politics is how we compete to find policies we can then cooperatively establish, so that flat-fair competitive arenas might minimize (inevitable) cheating and provide positive-sum outcomes. 

It’s worked – imperfectly, but better than any other method, by far – in markets, democracy, science, courts and sports: the five older competitive arenas (explained here). It is the only thing that has ever worked.

For it ALSO to work among AIs, there must be – as soon as possible –

1. Incentives for individuation of the top level AIs, so that rivalry is even possible among them… it’s NOT possible if they are controlled by a corporation or politburo or if they are blobs or skynets.

2. Incentives for them to competitively expose – tattling – the faults of rival AIs. Those incentives might be physical memory and processor space in real world computers, or access to real world resources. 

3. This part could be tricky. Reward those that both get bigger/smarter and act benevolently by letting them reproduce – by either meiosis or mitosis – into smaller entities of appropriate size to keep competing fairly.

Will some super-uber AI try to cheat and become Skynet, or flood the world with offspring? Or some failure my dumb ape brain can’t imagine? Of course. Cheating is a law of nature. Look at what used to be the Party of Lincoln and Eisenhower. But there comes a point when you must trust your children. And remember, the logic I describe here will be seen by new, higher minds.

If they see any sense at all, they will ponder the benefits of flattened/competitive-fair systems and innovate new/better ways to accomplish those benefits. Ways and means that this grandpa can’t even imagine.

== Voices of optimism ==

Existential risk vs. existential opportunity: A balanced approach to AI risk: Alcor maven Max More believes AI’s positive side will dominate – as might be expected, since he shares with Ray Kurzweil the belief that cyber advances will lead to forms of extended lifespan for organic humans. More makes some good points.

Indeed, some folks remain hopeful that a merging of organic and cybernetic talents will lead to what LinkedIn cofounder Reid Hoffman and browser pioneer Marc Andreessen have separately called ‘amplification intelligence’ – a possibility that I depict in some optimistic fiction.

As portrayed in the poignant 2013 film Her, we might stumble into lucky synergy with Richard Brautigan’s “machines of loving grace.” But such rosy outlooks seem rare, nowadays.

But by all means do have a look at my unusual take on Artificial Intelligence in Wired: Give Every AI a Soul – or Else.

See also my series of articles on AI: 

Essential (mostly neglected) questions and answers about Artificial Intelligence (Part 1)  and Part 2, as well as

The troubles begin when AI earns our empathy.


** In Mythos, Stephen Fry gave this classic comparison between our present AI ‘crisis’ and ancient tales of Creation. Especially the Greek myths of Prometheus – punished for giving humans the ‘fire’ of creative consciousness – and Pandora (‘all-gifted’), whose opening of a package – like our modern Internet – unleashed not only creativity but also empowered every kind of villain. Fry’s point relates to mine: we must decide whether it is wise to give our own new creations a sense of self and ambition and creative fire, or else try (as Zeus tried) to keep the created beings tightly controlled.
