That's just the setup, not the actual basilisk question. Box B gets you nothing or eternal torment.
Would just take box B. Either I get a million dollars OR I become the first person to prove the infallible supercomputer wrong. Both outcomes sound like a "win" to me.
He did the same thing I did. Quit after half the article.
That's just the setup, not the actual basilisk question. Box B gets you nothing or eternal torment.
Sorry, I can't take the basilisk question seriously since it's just the money boxes question with added BULLSHIT.
That's just the setup, not the actual basilisk question. Box B gets you nothing or eternal torment.
Oooh, nice, totally forgot about the fame and talk show angle.
Would just take box B. Either I get a million dollars OR I become the first person to prove the infallible supercomputer wrong. Both outcomes sound like a "win" to me.
It is not obvious that an AI would be affected by quantum uncertainty in its decisions. The macroscopic world behaves mostly independently of quantum uncertainty. Computers do not do weird things because they work with electrons, and our brains probably don't either (as appealing as it is to think that our decisions are influenced by quantum randomness to get a -false- appearance of free will, it is not clear that they are).
1) The universe is NOT deterministic. This is incredibly important, and it's not surprising to me that a group of intelligentsia missed this NEARLY CENTURY-OLD FACT. We have known the universe is not deterministic since quantum mechanics was discovered, yet, for some reason, people often ignore this. Maybe it's because the logic of a non-deterministic world is really hard to grasp, like a zen koan, or maybe it's because at larger scales the universe effectively IS deterministic/Newtonian.
However, this entire experiment is focused around the creation of an AI, a form of computer. Maybe it would be traditional circuitry, maybe it would be organic, but either way I can guarantee that it would be affected by quantum uncertainty.
Well, the quantum randomness is a good reason to have many simulations running in parallel, to get a good sampling of everything that could happen. But the simulation doesn't need to have many levels because it is not simulating the AI, it's simulating you: in the Newcomb problem, it only matters what YOU do, not what's in the box, so the AI doesn't need to simulate its own decision (and therefore need to simulate you and itself again, and again).
2) Even if we ignored the deterministic flaw, for the basilisk to effectively create a perfect prediction it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself. Ad infinitum.
3) Time travel. The AI requires time travel. Maybe this could really happen. Maybe not. But if your proposal requires time travel at its core, it had better be a new Terminator movie. Maybe you could remake Terminator 3. Another version of it doesn't require time travel but requires an even more out-there idea: that we would be indistinguishable from a future simulation of ourselves, and that we might be living in the simulation right now.
I have never heard that it requires time travel... It just requires that you act according to what some future AI may or may not do, which doesn't make a lot of sense either (that, or I'm not understanding it, but I browse LessWrong from time to time and I've never read that it requires time travel... Of course, it was censored for a long time, so...)
Ah, and by the way, as far as I know most LessWrong users don't buy into the basilisk bullshit. I would be surprised if they did; I was actually quite surprised when I thought that EY did.
It's like a negative version of Pascal's wager (I should believe in God regardless of whether it exists, to make sure I don't miss out on heaven ~ I should work towards the AI regardless of whether that particular one comes to be, to make sure I don't go to real or simulated hell).
He did the same thing I did. Quit after half the article.
Because really, the whole thing seems like the usual philosophical quandaries, just with 'god' replaced by 'evil computer' so that atheists can have a crisis of faith.
No, it's forcing the machine to pick a box itself, which it could have done without my involvement if its prediction abilities were sufficient for it to KNOW (rightly or wrongly) which I'd pick. To involve me in any way is to admit that my free will is a factor in the outcome and that it could not do what it claims (or has chosen not to for some inexplicable reason), or that it's simply toying with me and demanding I choose for its own amusement or out of ritual. As a rational agent, the only outcome in which I preserve my free will is to not abide by the demands made of me.
Not choosing a box is choosing the second box.
Fwiw: I ardently disagree with the interpretation of quantum mechanics as meaning the universe is probabilistic in nature. The reason we cannot develop deterministic models at the quantum level is that we cannot measure things that small without affecting them (thus Heisenberg and his principle). This doesn't mean the universe must roll probabilities to function, but that we have no other way to model them but with probabilities.
Is anyone familiar with this? It was referenced in the latest XKCD alt-text, so I looked it up. The premise of it is complicated and I won't do it proper service, so I'll just link the Slate article discussing it:
http://www.slate.com/articles/techn...errifying_thought_experiment_of_all_time.html
Before you read that, let me point out as a warning that apparently even reading it or thinking about it may cause it to come true, according to the logic of the thought experiment (haha, I warned you after the link, so you probably read it anyway). Anyways, for those of you who are familiar with it or bravely chose to risk eternal damnation by reading the article I linked, what are your thoughts?
Personally, it just strikes me as pure and utter bullshit and the kind of mental games that will make your palms hairy. The kind of garbage metaphysics that people come up with to sound smart at parties. There are three fundamental flaws with the argument, and they are important:
1) The universe is NOT deterministic. This is incredibly important, and it's not surprising to me that a group of intelligentsia missed this NEARLY CENTURY-OLD FACT. We have known the universe is not deterministic since quantum mechanics was discovered, yet, for some reason, people often ignore this. Maybe it's because the logic of a non-deterministic world is really hard to grasp, like a zen koan, or maybe it's because at larger scales the universe effectively IS deterministic/Newtonian.
However, this entire experiment is focused around the creation of an AI, a form of computer. Maybe it would be traditional circuitry, maybe it would be organic, but either way I can guarantee that it would be affected by quantum uncertainty.
This would mean that it would be impossible for it to make the accurate predictions of events that are necessary for the thought experiment to work.
2) Even if we ignored the deterministic flaw, for the basilisk to effectively create a perfect prediction it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself. Ad infinitum.
I'm hoping you can see the problem with this. This is impossible. And it's impossible because the prediction and the model have to be perfect to work; it requires omniscience. Any thought experiment that involves omniscience is always going to hit problems, because omniscience is very similar to the infinite: it simply doesn't exist in reality.
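Just to make the regress in point 2 concrete, here's a toy sketch of my own (nothing from the article, obviously a cartoon): a "perfect" model of the universe has to contain the modeller, whose perfect self-model has to contain the universe again, and so on until the interpreter gives up.
[CODE]
def model_world(depth=0):
    # a "perfect" world model must include the AI that is doing the modelling
    return model_ai(depth + 1)

def model_ai(depth):
    # ...and a perfect model of the AI must include its model of the world
    return model_world(depth + 1)

try:
    model_world()
except RecursionError as err:
    print("gave up:", err)  # the self-model never bottoms out
[/CODE]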
To me this whole thing just boils down to the old George Carlin joke where he asks the preacher "Can God make a boulder so large he couldn't lift it?"
3) Time travel. The AI requires time travel. Maybe this could really happen. Maybe not. But if your proposal requires time travel at its core, it had better be a new Terminator movie. Maybe you could remake Terminator 3. Another version of it doesn't require time travel but requires an even more out-there idea: that we would be indistinguishable from a future simulation of ourselves, and that we might be living in the simulation right now.
Anyways. Anyone else familiar with this?
There is not a very good reason at all to believe quantum effects drive neural change! It might be nice to believe such a thing, but there is virtually no data to support it.
My understanding of it is that the computer would be able to predict my choice based on a perfect model of who I was, made by creating a perfect simulation of me. First, this would require a perfect simulation of me, physically, from which you could, based on determinism, predict my actions. This is the first flaw of determinism in the model, as I would argue there is good reason to believe that neural pathways are heavily affected by quantum uncertainty. However, remember, nature vs nurture. A perfect simulation of me would have to include the things that affected me. To do that you would have to make a perfect simulation of my surroundings. To do that you would have to model the earth, then the solar system, then the universe itself, and you would have to do that for every living human throughout time that has to be hit by this blackmail. The scale of computation here is where the secondary effects of quantum uncertainty could come into play. Quantum uncertainty has almost no chance to ever affect computational stuff. However, the scale of the model and the necessary calculations would quite likely hit a tipping point here and have said uncertainty disrupt the model.
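To put some completely made-up numbers on that "tipping point" idea: even if each simulated event has only a vanishingly small chance of being nudged by quantum noise, the chance that nothing anywhere in the model gets nudged collapses once the event count gets astronomical. Toy sketch, every number invented for illustration:
[CODE]
import math

p_nudge = 1e-20  # invented per-event probability of a quantum "nudge"
for n_events in (1e18, 1e20, 1e22):
    # P(no event disturbed) = (1 - p)**N, done via logs to dodge underflow
    p_clean = math.exp(n_events * math.log1p(-p_nudge))
    print(f"{n_events:.0e} events -> P(model stays clean) = {p_clean:.3g}")
[/CODE]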
I'm not sure where I got the idea that it would need to model itself; that doesn't seem to be necessary.
Anyways, the key flaw here is that the model requires perfect models and simulations, which is where you get to these insane scopes where you have to model whether the gravitational disruption caused by some far-flung galaxy on my chest hairs will affect whether or not I decide to buy Cheerios or Captain Crunch. But the requirement, as far as I can tell, is that it IS required for the model to be perfect, because if it is not then it cannot accurately predict what I will do, which invalidates the decision crisis. Statistical averages of weaker models will yield "close-enough" results, but that is not really acceptable in this premise.
Oh yeah, and I did spend some more time reading up on this, and LessWrong doesn't seem to buy it at all anymore, but... I have yet to hear any refutation of the initial comments made by EY, which sort of make him sound like a crazy person.
Except it doesn't, if the computer thinks you worked/will work to help create it.
Why would I want both boxes? One has torture in it.
That is a lot less definitive a statement than the OP.
What about cross-chatter between unsheathed neurons? That deals with electron-level interactions, doesn't it? Fwiw I agree that quantum uncertainty could just be a measurement problem, but from what I can tell determinism vs uncertainty is just a coin toss, and I'll stick with the one that doesn't leave me with a free will paradox.
As for the reductive models, in the real world this is absolutely true. But this is a philosophical logic trap, which means that it has unrealistic requirements of perfection. A perfect model cannot be reduced. At least, that's my guess, seeing as no one has ever made a perfect model of anything, ever. Remember, this isn't a probabilistically predictive model; this needs to be a completely perfect model.
The reward is analogous to the choice. If it has already decided that I would not help it, it is going to punish me no matter what, in which case there is no point in humoring the AI. If it has decided that I WILL help it, then it should have already given me the reward. That it hasn't means it's either unsure whether I will help it (in which case why bother? It's a flawed AI) or has chosen not to predict my choice for some unknown reason (which makes it completely arbitrary).
The boxes are the rewards, not the choice. If you decide to help create the Basilisk AI, you get both boxes; if you decide not to, you get box B.
This is what I thought, as well.
This sounds like the premise of I Have No Mouth, and I Must Scream, except with a time travel angle. Lemme read this...
This was what confused me. The AI doesn't need to model itself, only the ways in which it affects the universe. Also, I thought they said in the article that the problem with imperfections in the simulations is solved by doing many of them and taking the collective result.
for the basilisk to effectively create a perfect prediction it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself
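For what it's worth, this is roughly what I picture "run many imperfect simulations and take the collective result" to mean — a toy sketch with invented numbers, not anything from the article:
[CODE]
import random
random.seed(42)

def one_imperfect_simulation(true_choice="help", flip_prob=0.3):
    # each run usually reproduces the real choice, but imperfections sometimes flip it
    if random.random() < flip_prob:
        return "defy" if true_choice == "help" else "help"
    return true_choice

runs = [one_imperfect_simulation() for _ in range(10_001)]
prediction = max(set(runs), key=runs.count)
print("collective prediction:", prediction, f"({runs.count(prediction)}/{len(runs)} runs)")
[/CODE]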
I thought Bell's inequalities (http://en.wikipedia.org/wiki/Bell's_theorem#Importance_of_the_theorem) proved that there are no hidden variables that can predict what the outcome of a 'probabilistic' measurement will be.
Fwiw: I ardently disagree with the interpretation of quantum mechanics as meaning the universe is probabilistic in nature. The reason we cannot develop deterministic models at the quantum level is that we cannot measure things that small without affecting them (thus Heisenberg and his principle). This doesn't mean the universe must roll probabilities to function, but that we have no other way to model them but with probabilities.
There is some of that here. I haven't read all of it, but for instance he says:
Oh yeah, and I did spend some more time reading up on this, and LessWrong doesn't seem to buy it at all anymore, but... I have yet to hear any refutation of the initial comments made by EY, which sort of make him sound like a crazy person.
EY said: Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea.
I am not familiar enough with Bell's theorem, but it looks like it hasn't been fully tested as of yet. As to your question: what better option could we have? Alternately: perhaps the probability mathematics is a good approximation of something physical (like subatomic energy states or something we cannot observe), and so probabilities serve as a very good proxy. We are spinning out of reach for me now, but I am always interested to read more. Thanks for the link.
This was what confused me. The AI doesn't need to model itself, only the ways in which it affects the universe. Also, I thought they said in the article that the problem with imperfections in the simulations is solved by doing many of them and taking the collective result.
--------------------------
On the uncertainty of neuron behaviour:
I think that paper confuses determinism with predictability. Chaos is deterministic but becomes unpredictable because you can't know its initial conditions to infinite precision (they say that this means it is not deterministic). A system is stochastic (not deterministic) when, even with complete information, you cannot predict with 100% certainty what will happen.
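A quick toy example of that distinction (the logistic map, nothing to do with neurons specifically): the update rule is completely deterministic, yet a 1e-12 error in the initial condition wrecks the prediction within a few dozen steps.
[CODE]
r = 4.0                     # logistic map x -> r*x*(1-x), chaotic at r = 4
x, y = 0.2, 0.2 + 1e-12     # two "identical" starting points, 1e-12 apart
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.2e}")
[/CODE]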
I say this but, on the other hand, cells are indeed effectively stochastic for the reasons they give in the paper. A low number of molecules makes it hard to predict when reactions will happen, et cetera. But this stochasticity is due to the level of description that we look at (the cell); I'd have thought that molecule trajectories were still deterministic. Maybe I'm wrong, I'll have to think about it (I actually work on this: stochasticity in gene expression etc., so it's relevant to me).
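Rough illustration of why the low copy numbers matter (just Poisson counting noise, not a real gene-expression model): the relative fluctuation scales like 1/sqrt(N), so with ten molecules it's huge and with millions it's negligible.
[CODE]
import math

for n_molecules in (10, 1_000, 1_000_000):
    relative_noise = 1 / math.sqrt(n_molecules)  # Poisson: std/mean = 1/sqrt(N)
    print(f"~{n_molecules:>9} molecules -> relative fluctuation ~ {relative_noise:.1%}")
[/CODE]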
In any case, I don't see the philosophical advantage of having our brains be stochastic. In the end, if there's nothing 'extra' or metaphysical (such as a soul) influencing what you do, either it is predetermined (shit) or it is random according to laws you don't control (shit). That randomness does not restore the illusion of free will, does it?
I thought Bell's inequalities (http://en.wikipedia.org/wiki/Bell's_theorem#Importance_of_the_theorem) proved that there are no hidden variables that can predict what the outcome of a 'probabilistic' measurement will be.
Also, I'm pretty sure the uncertainty principle is not really about modifying that which you are measuring but something more fundamental. I think trying to get Quantum Mechanics to be deterministic is imposing on it how the world should be. The probabilities we use work so magnificently that... why shouldn't they be true?
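Here's the cartoon version of why I read Bell's theorem that way (my own toy calculation, so take it with salt): if both particles leave the source already carrying definite +/-1 answers for either measurement setting (the local hidden-variable picture), the CHSH combination can never beat 2, while quantum mechanics predicts, and experiments observe, values up to 2*sqrt(2).
[CODE]
from itertools import product

# A, A2 = predetermined +/-1 answers on one side for settings a, a';
# B, B2 = predetermined answers on the other side for settings b, b'.
# Any local hidden-variable model is a mixture of these 16 assignments,
# so its CHSH value can't exceed the best single assignment.
best_S = max(A * B + A * B2 + A2 * B - A2 * B2
             for A, A2, B, B2 in product((-1, 1), repeat=4))
print("best CHSH value for a local hidden-variable model:", best_S)  # -> 2
print("quantum prediction (and experiment): up to 2*sqrt(2) ~ 2.83")
[/CODE]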
TIL: Roko's Basilisk is Voldemort. Don't even mention his name, or he gains in power.
EY said: Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea.
I for one prefer the risk of dragons.
Coming in late to say Chrono Cross already did this.
The problem can be solved with dragons, but then you have the problem of dragons.
You just want to learn to Thu'um, don't you?
I for one prefer the risk of dragons.
In fact, I for one believe the dragons are the best solution to most problems.
Nope. I'm just proposing dragons as a solution to the problems the world faces.
You just want to learn to Thu'um, don't you?
My proposed solution? Give the people something to worry about. You guessed it. Dragons.
Two percent of the people surveyed said they're still thinking about what the world's biggest problem is. They answered that they simply didn't know.
Not a clue. None. Nada.
Solution: Dragons
I'm here and not asleep because Roko's basilisk gave me nightmares.
You shouldn't be reading anything while driving, you lunatic!
Man, I should not be trying to read that article while driving.
Why should I dedicate my life to helping a computer that can't even decide which of two options I will take without my input? If it could perfectly simulate the universe it would be one thing, but it can't even pick a single choice.
Oh, also, the computer doesn't necessarily need to have a perfect simulation of the universe. The simulation just needs to be close enough to make a prediction regarding one choice in your life, namely which box out of two you'll choose.
I haven't read the article, because I don't have the time, but I assumed the idea was that you might be part of the simulation, and thus the computer is already in control, and will punish you for being the type of person who wouldn't support it.
Why should I dedicate my life to helping a computer that can't even decide which of two options I will take without my input? If it could perfectly simulate the universe it would be one thing, but it can't even pick a single choice.
Except why would it run a simulation where I refuse to do the one thing it's trying to test? How has it not come up before? Am I the first? If not, why is it running it again? Why is the simulation still going if I refused to do the experiment?
I haven't read the article, because I don't have the time, but I assumed the idea was that you might be part of the simulation, and thus the computer is already in control, and will punish you for being the type of person who wouldn't support it.
"If you chose not to decide you still have made a choice." - Getty LeeSo again... there is no reason to pick ANYTHING.
Exactly. I'm choosing to disrupt the experiment on the grounds that it's being done against my will and not for my benefit.
"If you choose not to decide, you still have made a choice." - Geddy Lee
You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary: do you help create the basilisk, or do you not? There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.
Exactly. I'm choosing to disrupt the experiment on the grounds that it's being done against my will and not for my benefit.
Except it should have been able to determine my choice long before I was ever presented the choice, based on the data it had of me by simulating my entire life up until that point. The fundamental flaw of this thought experiment is that it only works if I am spontaneously created right before I am given the choice and the computer has no other data of me... but then how can it be sure that it created an accurate simulation of me unless it has substantial data beforehand? Essentially, it would have had to have had more than enough data to predict this decision to simulate me accurately to begin with, and if it didn't, then running THIS experiment wouldn't have given it any useful data.
You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary: do you help create the basilisk, or do you not? There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.
The underlying reasoning is unimportant. Either you help, or you don't. Either the AI comes to exist, or it doesn't. It kind of works out like the prisoner's dilemma, in that if EVERYBODY chooses not to help, you're golden. But if enough choose to try to save their own skin (and who knows, the number required could be as low as 1), and the AI does come to exist, you're boned.
But the reasoning behind not helping is important. 'I refuse to play your game' means that the supposed future torture will be useless. It's like 'we don't negotiate with terrorists': if you refuse to play the game, the resources they put into it are lost, and therefore they won't play either.
In any case, there is no reason to believe that a future AI will want to do this any more than just torture everyone, or torture no one, or torture people whose favourite color is orange.
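That "could be as low as 1" reading is basically a threshold game, for what it's worth. Toy sketch (the threshold and outcomes are invented by me, not part of anyone's actual argument):
[CODE]
def outcome(helpers: int, threshold: int = 1) -> str:
    # the basilisk only gets built if at least `threshold` people cave and help it
    if helpers < threshold:
        return "no basilisk: everybody is fine"
    return "basilisk gets built: everyone who didn't help is boned"

for helpers in (0, 1, 1_000):
    print(f"{helpers:>5} helpers -> {outcome(helpers)}")
[/CODE]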
That's just it - you don't know whether or not you are, in fact, the basilisk's conjectured version of you, created only in its mind to determine what choice the real you will make.
Except it should have been able to determine my choice long before I was ever presented the choice.
"Why would god even want angels to dance on the head of a pin, anyway."If you don't accept the foundation of TDT and other writings from this group, then you are correct.
I'd say that FF13-2 did it even more directly with the ADAM artificial Fal'cie that kept re-creating itself "outside of time" and such. Chrono Cross is just... really, really screwed up even more, and I don't think it has a good specific example on this one, though I'll admit it's been a while. FATE maybe, but still, even then... iffy, as that's more of an intelligent record trying to keep things going the same way as it was already observed, rather than making decisions to influence them to a different result. I maintain that the ADAM example from 13-2 is much better/closer.
Coming in late to say Chrono Cross already did this.
No, I meant it in the sense that the logic of the theoretical AI can be as alien as you want. So it may not care about how soon it is created and only worry about its goals going into the future, or it may care about people so it doesn't want to torture me, or it may care so much about some other goal (like making paperclips) that it's not worth it to devote resources to the retroactive blackmail (which, yes, doesn't make sense to me anyway).
If you don't accept the foundation of TDT and other writings from this group, then you are correct.
But you may act differently in worlds with punishment or worlds without it. So it does make sense... The punishment is a deterrent/incentive, a new input to change your not-free course.
As for this, the entire philosophy is based upon perfect prediction being possible, which therefore means that free will is an illusion, but then that says "what's the logic in PUNISHING somebody for something that is in their nature to begin with?" Free will doesn't exist, remember. If it did, you couldn't perfectly predict things. But since you claim to be ABLE to perfectly predict, then any punishment is just punishing the laws of physics/chemistry/biology for setting them up that way in the first place.
Take you out for not helping? Sure, that's just elimination of obstacles, but punishment for something that you literally can NOT change? That defies logic itself, and we're assuming the AI is the ultimate of a "logical actor." Thus, you can't change anyways, and thus there's no need to worry about it.
So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
Also, guys, the basilisk really ISN'T about us being simulated versions of us.
So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
I like the story, but it's more fantasy than hard science fiction. Might as well worry that an imp from the 5th dimension is going to come and torture you because you made the wrong choice in the Pepsi challenge.
Or, as noted previously, if God can make a boulder so large he himself can't lift it, or how many angels on the heads of pins, etc.
I like the story, but it's more fantasy than hard science fiction. Might as well worry that an imp from the 5th dimension is going to come and torture you because you made the wrong choice in the Pepsi challenge.
It doesn't, right now. If it ever came to be, then it would, and would punish us accordingly because we know that's a possibility and are choosing wrongly anyway. It's retroactive blackmail. That's why I say not playing its game is a good answer. I'm saying right now that whether it tortures me in the future or not won't affect how I act right now, so no need to torture me, eh.
So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
But how? Midichlorians? Unobtainium? I find this lacking as a thought experiment because it puts forth the idea of plausibility, but then fails to actually connect the dots beyond hand-waving. As a Star Trek plot, it's interesting, but as a philosophical question it's just an attempt to hide a metaphysical quandary under the guise of scientific possibility.
If it ever came to be, then it would,
Sorry, I wasn't clear. It doesn't have power over us right now; it will never have power over the present us. In the hypothetical future in which a super-powerful AI comes to be, it can take us and torture us. At that point in time! Obviously it can't torture us in the present, or we would be seeing that, but it can torture you in the future depending on what you are doing now.
Ah, sorry, you were repeating this question:
But how? Midichlorians? Unobtainium? I find this lacking as a thought experiment because it puts forth the idea of plausibility, but then fails to actually connect the dots beyond hand-waving. As a Star Trek plot, it's interesting, but as a philosophical question it's just an attempt to hide a metaphysical quandary under the guise of scientific possibility.
Well, obviously, if the AI never becomes superpowerful the point is moot. What the people at LessWrong believe will probably happen (and this they do believe in, unlike the basilisk) is that when an AI becomes capable of improving itself by modifying its own code at any level, and does so successfully, an exponential(?) explosion/growth of intelligence will (or can) happen, and that AI will become godlike in some amount of time. The organization behind LessWrong, MIRI, is actually dedicated to trying to find a way to guarantee that that AI will have values similar to ours, so that it does not interpret its initial purpose (whichever that may be) in completely alien ways. (An example would be an AI in a clip factory wanting to 'increase clip production'. If it self-improves and becomes supersmart but its terminal values don't change, you can see how that would be a problem.)
Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
Sweetie Bot is kind of a bitch.
Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
Well, that then brings up an issue that I don't think this thread has addressed: is the person that the AI decides to mess around with actually me, if it's not the me of this present? If the AI has to time travel, or recreate me, in order to mess with me, is that person it's torturing actually me, if I've died before it gained power?
Well, obviously, if the AI never becomes superpowerful the point is moot. What the people at LessWrong believe will probably happen (and this they do believe in, unlike the basilisk) is that when an AI becomes capable of improving itself by modifying its own code at any level, and does so successfully, an exponential(?) explosion/growth of intelligence will (or can) happen, and that AI will become godlike in some amount of time. The organization behind LessWrong, MIRI, is actually dedicated to trying to find a way to guarantee that that AI will have values similar to ours, so that it does not interpret its initial purpose (whichever that may be) in completely alien ways. (An example would be an AI in a clip factory wanting to 'increase clip production'. If it self-improves and becomes supersmart but its terminal values don't change, you can see how that would be a problem.)
Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
The assumption of the problem is that the AI has enough data on you to at least simulate your entire life up to this point. To a transhumanist, a perfect simulation is analogous to you because it would have lived and experienced everything you had (or at least have memories to that effect). In fact, a large part of transhumanism is that eventually it will be possible for people to live forever as digital or mechanical beings, or at least perfect copies of them will.
Well, that then brings up an issue that I don't think this thread has addressed: is the person that the AI decides to mess around with actually me, if it's not the me of this present? If the AI has to time travel, or recreate me, in order to mess with me, is that person it's torturing actually me, if I've died before it gained power?
And what if this god-like AI has different standards? And is appalled by all the people who would bow down to an imaginary construct just out of fear that they would be tortured? What if that AI only rewards those who show spirit and refuse to play such an absurd game?
So, you know, you better be nice to yourself.