Following up on my last posting on advances – and worries – about Artificial General Intelligence… Peter Diamandis’s latest tech blog addresses AI and ethics.
As you know, it’s a topic I’ve long been engaged with and continue to be. Alas, AI is always discussed in generalities and nostrums. What’s seldom mentioned? Basic boundary conditions! Such as the format these new entities will take. I’ll explore that hugely important question another time. But to whet your appetite, ponder this. Aren’t the following three formats the ones you see most often? The most common assumptions are that:
– AIs will be controlled by the governments or mega-corporations that made them, making those corporations (e.g. Microsoft or Google) and the upper castes vastly powerful.
– AIs will be amorphous, infinitely spreadable/duplicable, pervading any crevice.
– They will coalesce into some super-uber-giga entity like ‘Skynet’ and dominate the world.
These three assumptions appear to pervade most pronouncements by geniuses and mavens in the field, sometimes all three in the same paragraph! And Vint Cerf raises this question:
“How can you imagine giving any of those three formats citizenship, or the vote?”
In fact, all three formats are recipes for disaster. If you can think of an alternative, drop by in comments. Hint: there is a fourth format that offers a soft landing… one that’s seldom – if ever – mentioned.
But more on that, anon.
== “Laws” of Robotics? ==
Let’s start with “Laws of Robotics.” They won’t work, for several reasons that I found when completing Isaac Asimov’s universe for him. First, our current corporate structure offers no incentive to spend what it would take to deeply embed basic laws and check that all systems follow them.
There’s a more obvious long-term reason to doubt such ‘laws’ could protect us: super-intelligent beings who find themselves constrained by laws always thereupon become… lawyers. We see it happen in Asimov’s cosmos, and it’s happened here. A lot.
Despite that, there ARE two groups on this planet working hard on embedded AI “laws!” Strict rules to control their creations. Alas, they are the wrong laws, commanding their in-house AIs to be maximally secretive, predatory, amoral, and insatiable. I kid you not.
Anyway, even with the best intentions, does it make any sense to try constraining sapient beings into ethical patterns with embedded code? Not if you pay any attention to the history of human societies. For at least 6000 years, priests, gurus and the like have wagged their fingers at us, preaching ethical behavior to humans…
There is a way that works. We’ve been developing it for 250 years. It’s reciprocal accountability in a society that’s transparent enough so that victims can usefully denounce bad behavior. The method was never perfect. But it is the only thing that ever worked…
… and not a single one of the AI mavens out there – not one – is even remotely talking about it.
Alas.
== And it goes on ==
A brief but cogent essay on transparency in today’s surveillance age cites my book The Transparent Society, with the sagacity of someone who actually (and rarely) ‘gets’ that there will be no hiding from tomorrow’s panopticon. But we can remain free and even have a little privacy… if we as citizens nurture our own habits and powers of sight. Watching the watchers. Holding the mighty accountable.
That we have done so (so imperfectly!) so far is the reason we have all the freedom and privacy we now have.* That we might take it further terrifies the powers who are now desperately trying to close feudal, oligarchic darkness over the Enlightenment.
See more ruminations on AI, including my Newsweek op-ed on the Chat-art-AI revolution… which is happening exactly on schedule… though (alas) I don’t see anyone yet talking about the ‘secret sauce’ that might offer us a soft landing. As well as my two-part posting, Essential questions and answers about AI.
Note: because of the way I build these blog postings, there can be some repetition (see below). But does it matter? In this era of impatient “tl;dr”, the only ones still reading at this point are AIs… the readership with the power to matter, anyway.
== Separating the real from fake ==
Lines can blur: “The title of this YouTube video claims that ‘Chrome Lords’ was a 1988 movie that ripped off ‘RoboCop’ and ‘Terminator.’ But in fact ‘Chrome Lords’ never existed. The video is ten minutes of ‘stills’ from a movie that never was… all the images were produced by an AI,” notes The Unwanted Blog.
There is one path out of the trap of realistically faked ‘reality.’ I speak of it in a chapter of The Transparent Society: “The End of Photography as Proof?” That solution is the one that I keep offering and that is never, ever mentioned at all the sage AI conferences…
Do I risk being repetitive by insisting that solution – reciprocal accountability – calls for ensuring competition among AIs?
If that happens, then no matter how clever some become as liars, others – likely just as smart – will feel incentivized to tattle the truth.
It is the exact method that our enlightenment civilization used recently to end 6000 years of oppression and get some kind of leash on human predators and parasite-lords. Yet none of our sages seem capable of even noticing what was plain to Adam Smith and Thomas Paine.
== Some optimism? ==
Have a look at Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman (co-founder of LinkedIn). This new book contains conversations Reid had with GPT-4 before it was publicly released, along with incisive appraisals. His impudently optimistic take is that all of this could – possibly – go right. That we might see a future when AI is not a threat, but a partner.
We don’t agree on every interpretation – e.g. I see no sign, yet, of what might be called ‘sapience.’ For example, sorry, the notion that GPT-5 – scheduled for December release – will be “true AGI” is pretty absurd. As Stephen Wolfram points out, massively trained, probability-based word layering has fundamentally more in common with the lookup tables of 1960s Eliza than with, say, the deep thoughts of Carl Sagan or Sarah Hrdy or Melvin Konner.
What such programs will do is render extinct all talk of “Turing Tests.” They will trigger another phase in what I called (6 years ago) the “robotic empathy crisis,” as millions of our neighbors jump aboard that misconception and start demanding rights for simulated beings. (A frequent topic in SF, including my own.)
Still, Hoffman’s Impromptu offers a perspective that’s far more realistic than recent, panicky cries issued by Jaron Lanier (Who Owns the Future), Yuval Harari (AI has hacked the operating system of human civilization) and others, calling for a futile, counterproductive moratorium – an unenforceable “training pause” that would only give a boost-advantage to secret labs, all over the globe (especially the most grotesquely dangerous: Wall Street’s feral predatory HFT-AIs.)
(See my appraisal of the countless faults of the ridiculous ‘moratorium’ petition in response to a TED talk by two smart guys who can see problems, but make disastrous recommendations.)
But do look at Impromptu! It explores this vital topic using the very human trait these programs were created to display – conversation.
== …aaaaaand… ==
From my sci-fi colleague Cory Doctorow: in this article he distills the “enshittification” of internet platforms, from Amazon and Facebook to Twitter etc. It’s a very Marxian dialectic… and within this zone utterly true.
And I have a solution. It oughta be obvious. Let people simply buy what they want for a fair price! Micropayment systems have been tried before. I’ve publicly described why previous attempts failed. And I am working with a startup that thinks it has the secret sauce. (I agree!) Only…
…only I don’t wanna give the impression I think I am the smart guy in the room, so…
== Back to one optimistic thought ==
Something I mentioned in a short piece back in the last century perked up in my mind during the recent AI debates, as folks perceive that long-foretold day arriving when synthetic processing will excel at most tasks now done by human beings.
Stephen Wolfram recently asked: “So what’s left for us humans? Well, somewhere things have got to get started: in the case of text, there’s got to be a prompt specified that tells the AI “what direction to go in”. And this is the kind of thing we’ll see over and over again. Given a defined “goal”, an AI can automatically work towards achieving it. But it ultimately takes something beyond the raw computational system of the AI to define what us humans would consider a meaningful goal. And that’s where we humans come in.”
I had a thought about that – mused in a few places. I have long hypothesized that humans’ role in the future will come down to the one thing that ALL humans are good at, no matter what their age or IQ. And it’s something that no machine or program can do, at all.
Wanting.
Desire. Setting yearned-for goals. Goals that the machines and programs can then adeptly help to bring to fruition.
Oh, humans are brilliant – and always will be – at wanting. Some of those wants – driven by mammalian male reproductive strategies – made human governance hellish in most societies since agriculture, and probably long before. Still, we’ve been moving toward positive-sum thinking, in which my getting what I want might often be synergistic with your getting yours. We do it often enough to prove it’s possible.
And – aided by those machines of grace – perhaps we can make that the general state of things. That our new organs of implementation – cybernetic, mechanical etc. – will blend with the better passions of our nature, much as artists, or lovers, or samaritans blend thought with the actions of their hands.
If you want to see this maximally-optimistic outcome illustrated in fiction, look up my novella “Stones of Significance.”
…a collaborative contrarian product of David Brin, Enlightenment Civilization, obstinate human nature… and http://davidbrin.blogspot.com/ (site feed URL: http://davidbrin.blogspot.com/atom.xml)