Post by vincent on Feb 7, 2021 8:51:47 GMT -5
vincent I just noticed something. I THINK my reference to Roko's Basilisk implied to you that I was talking about the horror of an AI's potential "personality," so to speak. That's not really what I meant, though; I was trying to say that regardless of the AI's "personality." I think I may have also accidentally "shifted the goalposts," due to the language I used and the way I explained my last few points after referencing the Basilisk. My bad.
Well, I addressed the "potential personality" aspect when I mentioned the Fi inf sulking.
But it's not just that.
The thing is, Roko's Basilisk (and my Pink Unicorn too) are both "distant retroactive blackmailers," so to speak.
A pretty specific species of theorized decision-maker.
My (other) point was that this specific kind of decision-maker is bogus.
Not bogus as in "could lead to dire consequences if implemented," but bogus as in "can't be implemented, because the decision-making model leads to an infinite regress, not to an actual decision."
Again, the argument tricks you into thinking "why not?". The real question is "why this rather than anything else?". Without an actual answer to that, the AI can't decide anything.
And without an answer to that, you're not a proper blackmail target either.
But yeah, I agree with that:
except we already know it won't be anything remotely like a Pink Unicorn or a Basilisk.^^
Post by anthony on Feb 7, 2021 9:15:15 GMT -5
"The real question is "why this rather than anything else ?". Without an actual answer to that, the AI can't decide anything. And without an answer to that, you're not a proper blackmail target either."
Ohhhhhhhhhh, I see now. I agree.
Post by anthony on Feb 7, 2021 9:22:06 GMT -5
WAIT...but why couldn't it decide anything?
I understand that the argument tricks you into thinking "why not?" when the real question is "why this rather than anything else?", but aren't there loads of information the AI could pool from to develop a capacity for decision-making? Or does this just lead back to the same question, "why would the AI make decisions to begin with? who says it would?" -- since a 'decision' is required to begin the decision-making process. The only way to circumvent that (if we hypothetically wanted an AI to lord over us) is if WE implemented the AI and trained it until it learned to train itself, all in a specific way, I think.
Post by vincent on Feb 7, 2021 9:36:09 GMT -5
WAIT...but why couldn't it decide anything? I understand that the argument tricks you into thinking "why not?" when the real question is "why this rather than anything else?", but aren't there loads of information the AI could pool from to develop a capacity for decision-making? Or does this just lead back to the same question, "why would the AI make decisions to begin with? who says it would?" -- since a 'decision' is required to begin the decision-making process. The only way to circumvent that (if we hypothetically wanted an AI to lord over us) is if WE implemented the AI and trained it until it learned to train itself, all in a specific way, I think.
Well, the short answer is this:
it can't decide anything because YOU can't.
The Basilisk/Unicorn thing is based on prisoner's dilemma cooperation stuff.
The whole idea is that merely knowing about it makes you a target for some distant retroactive blackmail. It will torture you tomorrow to incentivize you to donate today.
So the AI makes its own decisions based on your predicted decision.
The problem here is that there is an infinite number of such potential blackmailers and blackmails. So when you learn about the possibility of one, you ALSO learn about the possibility of all the others.
And facing an infinite number of equally optimized but mutually exclusive threats, you simply can't choose. And then you can't be incentivized into anything, by definition.
And the AI will know that too. So the whole thing falls apart.
See?
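If it helps, here's a toy sketch of that symmetry in Python (my own illustration; the credence and punishment numbers are arbitrary stand-ins, nothing canonical): for every basilisk that would punish you for NOT doing X, you can posit a mirror blackmailer that would punish you FOR doing X, and the expected utilities cancel exactly.

```python
# Toy sketch of the symmetry argument (illustrative only; the numbers
# are arbitrary stand-ins, not part of the actual thought experiment).
# For every basilisk that punishes you for NOT doing X, posit a mirror
# blackmailer that punishes you FOR doing X. With nothing to break the
# symmetry, both get the same credence and the same stakes.

CREDENCE = 1e-9      # prior you assign to any one far-fetched blackmailer
PUNISHMENT = -1e12   # disutility of being tortured by the one that's real

def expected_utility(do_x: bool) -> float:
    """Expected utility of doing (or not doing) X under both threats."""
    basilisk = 0.0 if do_x else CREDENCE * PUNISHMENT       # punishes defiance
    anti_basilisk = CREDENCE * PUNISHMENT if do_x else 0.0  # punishes compliance
    return basilisk + anti_basilisk

# Both options score identically, so the threat can't move you either way.
print(expected_utility(True) == expected_utility(False))  # True
```

Scale that from two threats to infinitely many mutually exclusive ones and the tie only gets worse: there's no best option to act on, so there's nothing for the blackmail to grab.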
Post by anthony on Feb 7, 2021 9:51:39 GMT -5
I do.
Post by anthony on Feb 7, 2021 19:02:44 GMT -5
If an AI of the aforementioned level of granularity and specificity did come into existence and eventually grew to be truly intelligent, wouldn't it then accept its own position of being less than us anyway? It wouldn't stay "atheistic," so to speak; it'd subject itself -- even if it wiped out humanity before it came to that realization.