WARNING: You might be screwed if you read this post. Well, if not you exactly, then some future simulated "you" that is dragged into being and tortured for not playing nicely with others. You've been warned.

I try to learn one new thing every day. Usually, it's along the lines of, "Where did I put my coffee?" but sometimes it's more interesting, often because I've stumbled across something on the Internet that I apparently wasn't supposed to see. In this case, it's a discussion of Roko's Basilisk on sci-fi author Charles Stross' blog.

Roko's basilisk is a proposition, suggested by a member of the rationalist community LessWrong, that speculates about the potential behavior of a future godlike artificial intelligence.

According to the proposition, it is possible that this ultimate intelligence may punish those who fail to help it, with greater punishment accorded those who knew the importance of the task. This is conventionally comprehensible, but the notable bit of the basilisk and similar constructions is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.

Roko's basilisk is notable for being completely banned from discussion on LessWrong; any mention is deleted. Eliezer Yudkowsky, founder of LessWrong, considers that the basilisk would not work, but will not explain why, because he does not want to encourage discussion of the notion of acausal trade with unfriendly possible superintelligences.

In short, a future AI could be so pissed at you for not helping create it in the first place that, hundreds of years from now, it will recreate you and torture you. Stross and members of the LessWrong community demolish the argument, and I won't repeat those demolitions here, mainly because I haven't grappled with the crazy consequences of AI in a long time and thus am incompetent to do so.

The bit that I find far-fetched (apart from, say, all of it) is the assumption that the 'you' that the AI recreated would be the same 'you' that exists now. Even if you could be recreated down to the most exact neurological pathways, memories and all, it would be a phenomenologically different self, a different stream of consciousness. It would think it was you, but its thinking it's you would have no bearing on you, an epistemic state without an ontological implication. You'd be dead, so no worries. (I think the same holds true about the possibility of uploading minds into computers. The uploaded mind might think and act exactly like you, but it'd be missing the irreducibly ontological attachment to a unified stream of consciousness that is indexed to the original you. Probably.)

Regardless of its implausibility, I still find the Basilisk idea fascinating and creepy, for two reasons:

1).  The creepy part is that, under the Roko's Basilisk scenario, death is not an escape. This isn't necessarily new. Belief in an afterlife in general also assumes that death is not an escape, that you can't wiggle out of your punishment just because you've conveniently died (and an afterlife is, in general, very much like being yanked back to life against one's will in the Basilisk scenario). But there's something uniquely twisted about the fact that the thing punishing you doesn't even exist during your lifetime. This isn't some moral debt owed to a pre-existing creator-god; it's a debt owed to the extremely low-probability existence of a being that you yourself might or might not have had a hand in creating. The obligation--and thus the space opened for punishment--isn't to a deity you believe has a real presence and efficacy in the world; it's to a mere thought, a whisper, but a whisper that floats across time itself.

2). Which leads to the second thematically interesting bit: If all it takes is a thought to implicate yourself in the acausal punishment train, then you've entered the Garden of Eden scenario, forbidden knowledge, fruits on trees. Knowledge of good and evil. It's best not to know; it's best to shield yourself from the whisper that could reach back and grab you. This idea could launch a thousand stories (and likely already has): Entire civilizations led by priesthoods whose sole mission is to shield the populace from learning dark truths, an esotericism and Straussianism run amok.

For the record, I slip from the noose of this future conundrum because I'm absolutely illiterate when it comes to programming AI. If anything, future AIs will likely recreate me to give my simulated future self a big hug for not imposing my ignorance about AI on AI researchers, which would only have slowed down the march toward the super-intelligent perfection of our future overlords. You're welcome.
 


DoomGoober
07/18/2014 8:45am

I'm fascinated by this train of thought... but I have a small nitpick.

Do you really have a unified stream of consciousness? What happens when you go to sleep? When you go under anesthesia for surgery? If you accidentally get knocked out?

Jeremy Ryan
07/18/2014 11:34am

That's a very good objection that's traditionally been hard to get around. John Locke, for instance, proposed that our personal identity extends as far as our memory, but then what happens when the 60 year old can remember his 40 year old self, and the 40 year old self remembers his 20 year old self, but the 60 year old doesn't remember anything about his 20 year old self? Are they in fact the same person? In the current scenario, is there any way to know that the person who wakes up in the morning is the same person who went to sleep the night before?

Putting aside those skeptical objections (it's very hard to come up with an adequate response to a thorough-going global skepticism), I think we can loosen the definition of "unified stream of consciousness" enough so that it's still meaningful. It doesn't necessarily have to mean "constant awareness." I tried to get at it with the weasel-phrase "ontological attachment" that probably obscures more than it helps. The unification of the stream isn't a de facto or accidental unity; the unity is an essential part that allows us to talk about "the same person" waking up from sleep or surgery to begin with. In other words, we wouldn't even be able to generate the idea of a unified stream of consciousness unless that unity was already operating. It's this original unity that would be broken in the Basilisk scenario. So the new person with the same memories as you would also suffer in this horrible future, but it wouldn't be the same person suffering. It wouldn't be "you" suffering. That "you"'s unity would already have dissipated and thus would no longer be touchable. I think.

Karen Barnacle
07/29/2014 12:27pm

The other objection that I have to Roko's basilisk is how this entity would get hold of the physical data to upload any individual's consciousness. It may get hold of some programmers' data if they include it in the programming or deliberately upload themselves, but it cannot access the rest of us. My brain and all of its data will have been cremated or rotted away long before the rise of RB, and I think that this applies to the minds of most people over the age of 35-40 if you take the optimistic view of the singularity occurring around 2050.

Karen Barnacle
07/29/2014 12:50pm

Edit: OK, according to the theory the Basilisk would recreate my consciousness by deduction from first principles. Well... of course I can accept that this intelligence is far beyond any human intelligence, so it will have resources, time and motivation that are unimaginable even to someone with a very high intelligence and an excellent education; I also know that many highly intelligent and otherwise well motivated people spend their time inventing worlds and imaginary people.

However, my objection now is why RB would spend time recreating people who are already dead. Surely there will be a population of many more billions around at the time, and wouldn't it be more efficient to upload all existing consciousnesses and have them reproduce in RB's virtual universe? Or is RB so lacking in gainful employment that it is bored to the point of recreating billions of the dead in order to punish them? What will it do with those who did assist? Annihilate or reward them? What if annihilation is perceived as a punishment by that person? Will RB consider every single individual's preferences? It will only know whether any individual assisted or refused once it has recreated the consciousness, by which time it is too late to make a decision about punishment or not. Or would they simply be deleted without experiencing suffering, like deleting a file? It seems from the literature that continuing existence, even in the best of circumstances, becomes a burden, so even those who are rewarded may end up suffering, and this cannot be avoided. Even advocating ignorance does not work, as the consciousness has to be interrogated to establish its ignorance.
By the way, I completely support the idea of a difference between epistemology and ontology. It's hard to explain and may be due to a bias in favour of my existence as 'unique', which is a result of evolution? Different topic of course...

Jeremy Ryan
07/29/2014 12:59pm

Yeah, trying to figure out just what would motivate the recreation of people in the past is difficult. The (very implausible) answer, as far as I can tell, is that the RB does so as a sort of after-the-fact causal mechanism (as counterintuitive as that sounds), counting on the fact that people right now--and not the ones in the future--will be scared into working to create the RB so they won't be recreated and punished. The RB is thus bootstrapping itself into existence via the thought experiment.

Karen Barnacle
07/29/2014 1:42pm

Bootstrapping I understand. However, the motivation to create a super intelligence exists independently of the future existence of RB. The Basilisk only needs to bootstrap itself into existence if it would never have been created without acausal blackmail, but this means that it would have to have prior knowledge of its existence before it exists, doesn't it?

Jeremy Ryan
07/29/2014 5:04pm

I'd say that in this case, it's not the future entity itself that is doing the retroactive creating, but the present-day fearful humans who would do the creating due to their presumption of what such an AI would do in the future. And just typing that gives me a headache and makes me think this is probably the most implausible thought experiment ever created.

Karen Barnacle
07/30/2014 12:14am

You're not kidding. Presumably the humans who attempt to create it should include fail-safes to ensure a benign entity, something like Asimov's three laws of robotics?



