|
Post by Roshan on Feb 4, 2021 22:33:56 GMT -5
Ted Talk video from 2017 below. Pretty positive she's a female SeTi. Her area of study seems to be concerned with, and actually to reflect, the structure of her own cognition: Se-->Ti-->Fe, how the maTrIx is wielded by power as SErtion frame to impact collective FEeling. At first she underscores how the algorithm is no longer really in 'our' control--the SeTi nightmare of loss of mental control. Btw, according to wiki her Masters thesis was called "Mental Deskilling in the Age of the Smart Machine", and in 2017, the same year as this video, she wrote a book called "Twitter and Tear Gas: The Power and Fragility of Networked Protest".

@16'16": "Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won't be Orwell's authoritarianism. This isn't 1984. You know, if authoritarianism is using overt fear to terrorize us, we'll be scared but we'll know it, we'll hate it and we'll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web and we may not even know we're in it."

I have more to say about her, but atm I'll just note she seems so hellbent on protecting her vulnerable function--personal individual weakness, control of her valuation process due to being isolated from the collective by exterior forces--that she goes into a Te demo mode (here of 'what if/thens', but in the video also of data display) that creates Si-ignoring category fails, sounding at times almost Ti PolR. And I have seen all this before, with 3D glasses on steroids.

www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?language=en
|
|
|
Post by vincent on Feb 5, 2021 15:34:52 GMT -5
I'm not done with that talk yet (and i have a pretty bad case of Friday Night Fried Brain right now, so i'll watch the rest tomorrow). But i agree already with SeTi for her.
With quite a lot of demonstrative Te indeed.
This seems absolutely spot on to me.
She is afraid she won't be able to dissolve her F in the Tribe anymore and then her unprocessed Fi will be seen by the Machine.
Massive phobic Fi polr.
And the whole "at least with authoritarian regimes we can resist" thing is also a pretty good example of Beta subservience complex
tbcd.
|
|
anthony
Terra9Incognita
Posts: 1,537
Enneagram Core Fix: 9w1
|
Post by anthony on Feb 6, 2021 2:17:50 GMT -5
At one point in the talk, she mentions that YouTube algorithms will entice you by recommending increasingly "hardcore" videos pertaining to the topic you're interested in at that moment. She's right, but people forget that we're the ones [largely] training those algorithms, and when content creators themselves repeatedly use "clickbait" video titles (which DOES get them more views), YouTube will algorithmically sort the videos in their database based on those sorts of features ("hardcore" video titles, amount of views, data pooled from other sources, etc). So, we can theoretically create "collective resistance" if we so desired. What she seems hung up on is the authoritarian power of the companies who deploy those algorithms and allow it to occur, when, IMO, there are multiple ways to think about it. The maTrIx is within our reach and our control as well, and there are numerous ways to combat its threats.
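As a toy illustration of that feedback loop (all feature names and weights below are invented for illustration, not YouTube's actual system): rank videos by a crude engagement score, where every click raises the view count that the ranking itself rewards, so viewers end up training the thing that feeds them.

```python
# Toy sketch of engagement-driven ranking: videos whose features
# (view counts, clickbait-style titles) historically drew clicks
# float to the top. Hypothetical markers and weights; real
# recommender systems are vastly more complex.
import math

CLICKBAIT_MARKERS = {"shocking", "exposed", "you won't believe", "hardcore"}

def clickbait_score(title: str) -> int:
    """Count clickbait-style markers appearing in a title."""
    lower = title.lower()
    return sum(1 for marker in CLICKBAIT_MARKERS if marker in lower)

def rank(videos: list) -> list:
    """Sort videos by log of views plus a bonus per clickbait marker.
    Because every click raises 'views', viewers collectively train
    the ranking they then consume."""
    return sorted(
        videos,
        key=lambda v: math.log1p(v["views"]) + 2.0 * clickbait_score(v["title"]),
        reverse=True,
    )

catalog = [
    {"title": "Calm lecture on statistics", "views": 100_000},
    {"title": "SHOCKING truth EXPOSED!", "views": 5_000},
]
# The low-view clickbait video outranks the popular calm one,
# because its title markers outweigh the view-count gap.
top = rank(catalog)[0]
```

The point of the sketch is that nothing in it is adversarial by design: it just rewards whatever got clicked, which is exactly why "collective resistance" (clicking differently) would, in principle, retrain it.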
SeTi makes sense for her. In this talk alone, what she seems to be doing [in essence] is mobilizing the crowd (the 'tribe') via Fe to take action (Se) against the forces we can't directly control. "These structures are organizing how we function and controlling what we can and cannot do. And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold." In a certain way, she's forgetting to account for the maTrIx in its entirety, instead creating an "us vs. them" dichotomy (beta quadra) in which we're the ones fighting against the authoritarian demagogues who hoard and sell our information. We're the gears of that techno-maTrIx; we have power too.
|
|
|
Post by vincent on Feb 6, 2021 8:56:21 GMT -5
But anthony, there IS an "us vs them" dichotomy here. At least between the demand side and the supply side of this information economy. The fact that more and more people are both part of the demand and part of the supply doesn't really change that.
Those algorithms are based on neurobiological and behavioral models. They are designed to induce neurotransmitter cascades and behavioral changes.
Fighting them is fighting human nature and human biology. It's actually fighting against ourselves.
And it's an arms race. Every time we build some "collective resistance", the algorithms will adapt and will generously offer us new opportunities to build more "individual tolerance".
Of course, in an ideal world, mature content consumers would realize their power and could train and retrain the algorithm and content producers would change their ways. Smoothly and without a beta fight.
But we are not in such an ideal world. And demand side is a LOT less likely to spontaneously, globally self-regulate than supply side is likely to GET regulated by political forces, at global scale.
Rallying "us" is a first step to create those political forces.
Identifying "them" is a first step to enforce some supply side regulation.
And maybe more importantly her "us vs them" is a first step to get out of the current "me vs It" situation.
Sure, we are all part and gear of the Matrix. But i'm afraid the Invisible Brain of alpha quadra won't do much better than the Invisible Hand of delta.
|
|
|
Post by anthony on Feb 6, 2021 9:41:54 GMT -5
You're right. I didn't mean to imply there wasn't an "us vs. them" here, there certainly is. The only way there wouldn't be is an ideal world where content consumers are mature and producers change their ways.
But, are content producers themselves not also subject to the same technological Matrix we are? If they are, wouldn't the true "them" in "us vs. them" be the artificial intelligence itself?
Although we don't live in an ideal world, and this is perhaps a more-than-optimistic viewpoint, even though there is a demand-side and a supply-side, I don't imagine that it would take a whole lot of mature content consumers to incentivize tech-companies to change their ways -- that is, if those companies know what's good for them. Artificial intelligence DOES 'work toward the goals' of tech-companies, but perhaps only NOW, since AI also trains itself while delivering to and serving the masses, and a lot of people seem to be 'catching on.'
But now I'm heading into Roko's Basilisk territory, and I [much to my horror] cannot tell whether or not this line of thinking is legitimate, both in terms of being realistic AND in terms of "what will AI do? who will be 'victimized?'" I can't imagine there's any REAL reason to think it isn't.
|
|
|
Post by vincent on Feb 6, 2021 11:06:30 GMT -5
"You're right. I didn't mean to imply there wasn't an "us vs. them" here, there certainly is. The only way there wouldn't be is an ideal world where content consumers are mature and producers change their ways. But, are content producers themselves not also subject to the same technological Matrix we are? If they are, wouldn't the true "them" in "us vs. them" be the artificial intelligence itself? Although we don't live in an ideal world, and this is perhaps a more-than-optimistic viewpoint, even though there is a demand-side and a supply-side, I don't imagine that it would take a whole lot of mature content consumers to incentivize tech-companies to change their ways -- that is, if those companies know what's good for them. Artificial intelligence DOES 'work toward the goals' of tech-companies, but perhaps only NOW, since AI also trains itself while delivering to and serving the masses, and a lot of people seem to be 'catching on.'"
Of course, sometimes, it doesn't take a whole lot of consumer side pressure to achieve some results.
But those results are not necessarily the right ones either.
It's already happening btw. And "Cancel Culture" is an example of that. But that's the thing: there isn't a single "us" nor a single "them" here.
It's messy. Way messier than most Si-ignoring SeTi care to admit. But that's exactly why we need to be smart and wise when we delineate "us" and "them". In other words, that's exactly why we need healthy Fe tertiary people. And politics.
Look, that Basilisk stuff is completely bogus, really.
And the thing is you don't HAVE to find "any real reasons" for it to be bogus btw.
That Basilisk stuff is an extraordinary and extraordinarily specific claim.
The argument itself is trying to guilt-trip you into believing the burden of proof is on your shoulders. But it's not.
It's on the people making the claim.
And good luck with that.
The whole thing is based on the anthropomorphic projection of some weirdly specific psychological motives onto the Basilisk AI.
It's obviously, blatantly mythological in nature.
And those weirdly specific psychological motives aren't even realistic BEFORE the projection.
I mean, even if AI turned out to be some very bad case of unhealthy counterphobic Fi inf (which is basically what is claimed here), it would be more likely to go isolate itself in some corner and sulk forever.
Now the thing is, you can make any number of weirdly specific claims of the same type, about any number of future AIs.
And there will be "no reason" to be 100% sure it can't happen either.
You can claim, for example, that the future will ALSO give birth to another AI, let's call it the Pink Unicorn, that will oppose the Basilisk at every turn, and save us.
But only if we send her some love TODAY in the form of lolcat memes and pictures.
And if you follow that line of thought, you will end up convinced that you MUST spend all of your resources, energy and time supporting an infinite number of future AIs against another infinite number of future AIs.
Which is obviously absurd.
Now the thing is, we already KNOW that whatever will happen will be similar to the invention of writing, or the invention of printing.
It will break history in half so to speak.
It's already happening, and there ARE (and We are) victims of all this already.
Ultimately all this Basilisk stuff is just a diversion away from that (political) fact.
|
|
|
Post by vincent on Feb 6, 2021 11:36:55 GMT -5
@3.27 "they can target, infer, understand and be deployed one by one by figuring out our weaknesses"
Well, in my case, there are two kinds of online ads that seem to follow me around.
Or used to, at least.
"Become a prison guard" and "Become an expatriate"
That's the hardcore candy the algorithms have in stock for me.
Sounds random and pretty off, right? ... but... what if the AI somehow figured out that i'm that close to having some weird counterphobic Se inf swing, and is just waiting for it to happen?
|
|
|
Post by anthony on Feb 6, 2021 13:00:23 GMT -5
I didn't really mean I was fearing THE Basilisk, as much as that my own claims were entering into Basilisk-level claim territory. My bad.
With all this AI that scrapes, models, and interprets data -- from the same "place" where discussions about the Basilisk and psychological motives are being had -- there are countless ways that an AI could develop an artificial psychology, or just "develop" in general, which is what you said and what I was trying to say. But what reason is there to think it's actually absurd, as though it's not going to happen, other than "infinite possibilities," the 'novelty' of it all, and its [currently] mythological nature? Like, this type of technology is already within our reach (and we're already 'victims' like you said), so perhaps trying to defend ourselves from an infinite amount of potentially dangerous future AI is absurd, but is the AI itself absurd?
Even if it does end up being Pink Unicorn and it likes lolcat memes, that's still acutely distressing somehow, at least to me.
Anyways, you're right that going down that line of thought is absurd, though I don't think disregarding the possibility of "scary future AI" necessarily follows, especially because it is so... close. But right now, it is a diversion from the political aspect of this technology, which is probably what WILL determine the mechanisms of the AI itself before [or if] it "detaches from us".
|
|
|
Post by vincent on Feb 6, 2021 13:32:23 GMT -5
"But what reason is there to think it's actually absurd, as though it's not going to happen, other than "infinite possibilities," the 'novelty' of it all, and its [currently] mythological nature? Like, this type of technology is already within our reach (and we're already 'victims' like you said), so perhaps trying to defend ourselves from an infinite amount of potentially dangerous future AI is absurd, but is the AI itself absurd?"

The point of the infinite possibilities argument is to show that this "line of thinking" has no real reason to end there, or anywhere. Ultimately, it's based on a "hidden" infinite regress, and because of this, it's illegitimate by definition.
The distressing power of the argument lies in the conjunction of the scary nature of the topic itself WITH the vertiginous nature of the infinite regress.
In your case, i suspect your Ti frame tells you the same thing my tertiary Ti tells you : nothing to see here, move along.
But your Ni 6th/Se polr still get sucked in and won't listen^^.
(Technically, the "proper use" of that Basilisk thing is to use it as a reductio ad absurdum against some consequentialist models of utilitarianism.
What it actually shows is that "intuitively bogus results and infinite regress arise if you try to use those models in a context where cause/consequence linearity has already been ruined".)
It won't be a Basilisk or a Unicorn; it won't be anything we can foresee with THAT level of granularity and specificity.
Which doesn't mean it's not scary, just not that metaphysical kind of scary.
|
|
|
Post by anthony on Feb 6, 2021 14:03:08 GMT -5
"In your case, i suspect your Ti frame tells you the same thing my tertiary Ti tells you : nothing to see here, move along. But your Ni 6th/Se polr still get sucked in and won't listen^^."
Yes, precisely this.
|
|
|
Post by Roshan on Feb 7, 2021 1:24:01 GMT -5
"But, are content producers themselves not also subject to the same technological Matrix we are? If they are, wouldn't the true "them" in "us vs. them" be the artificial intelligence itself?"

Well, see, she has Si category fails, so she stops distinguishing between what is the artificial intelligence itself (autonomous, not really subject to our manipulation anymore) and what is manipulation with intent; also what is commercial and what is political intent. So I don't think we can really figure this out with reference to her, because she doesn't really make sense consistently; rather, she appears to because 'we get the picture'. What she's doing, in a way, is railing at Threat itself with existential dread, and I really feel her because she's like a few health levels higher TPAS (which doesn't make her all that healthy). In a way, youtube AI is her jooz. And what I get out of this exchange here, in part (first part), is sort of like anthony calling her out a bit has gotten vincent to defend his tribe. (Or at least defend his dual in a duel)
|
|
|
Post by anthony on Feb 7, 2021 2:28:29 GMT -5
vincent: "It won't be a Basilisk or a Unicorn, it won't be anything we can foresee with THAT level of granularity and specificity. Which doesn't mean it's not scary, just not that metaphysical kind of scary."

Sure, we can't foresee it. I didn't mean to imply it was foreseeable in the first place; rather, that we're already moving in a certain direction technologically, and there's enough "material" in our tech-environment to facilitate the existence of something like Pink Unicorn -- this alone wouldn't mean anything, because there's also enough "material" out there in our solar system to facilitate an asteroid strike on earth, so no need to sit there worrying about it. It's the fact that we are already moving in a certain direction, as though there were nearly enough meteorological "signs" to predict a higher-than-completely-unreasonable probability of an asteroid strike on earth, given that one is actually headed in the direction of our planet. Because of this, I'm not exactly sure if it IS indeed a fallacious infinite regress.
|
|
|
Post by vincent on Feb 7, 2021 7:45:12 GMT -5
"Well, see, she has Si category fails so she stops distinguishing between what is the artificial intelligence itself (autonomous, not really subject to our manipulation anymore) and what is manipulation with intent; also what is commercial and what is political intent. So I don't think we can really figure this out with reference to her because she doesn't really make sense consistently; rather she appears to because 'we get the picture'. What she's doing in a way is railing at Threat itself with existential dread and I really feel her because she's like a few health levels higher TPAS (which doesn't make her all that healthy). In a way, youtube AI is her jooz. And what I get out of this exchange here in part (first part) is sort of like anthony calling her out a bit has gotten vincent to defend his tribe. (Or at least defend his dual in a duel)"

Yes, there was certainly an aspect of dual defense on my part. But really, the important point here is that the "us vs them" perspective of Beta isn't illegitimate per se.
There is just no politics without it.
Alpha perspective, expressed very clearly by anthony, is something like "us vs us". We are all part of it. We are the ones who train it. It's ultimately (just) a reflection of us.
It leads to bigger wonder ("we, the Invisible Brain, will figure this out") and bigger dread (in the shape of the Basilisk). And it also kind of dissolves the antagonisms.
But i absolutely agree she has Si category fails. Major ones.
And the thing is, she isn't fully grasping Ni either.
Actually, she isn't really doing sociology anymore here. In this case, the sociological approach would be to frame the societal changes induced by the algorithms in terms of atomization (everyone stuck in an information bubble, isolated from others, discouraged from forming any kind of real collective) and in terms of polarization.
And to study the whole fucking mess of actors and factors on a case by case basis.
Instead she (Fi polrly) focuses on political radicalization and manipulation, and she collapses it all into a big bad Them that is indeed her version of the Jooz.
|
|
|
Post by anthony on Feb 7, 2021 8:05:27 GMT -5
vincent: I just noticed something. I THINK my reference to Roko's Basilisk implied to you that I was talking about the horror of an AI's potential "personality," so to speak. That's not really what I meant, though. I was trying to say that regardless of the AI's "personality," the probability of an "AI takeover" (whether it's Pink Unicorn or the Basilisk, or it decides to adopt the psychology of a small toddler) is high enough to be concerned about, since we're already headed in that direction AND we have the techno-environmental circumstances to facilitate one. I think I may have also accidentally "shifted the goalposts" only due to the language I used and the way I actually explained my last few points after referencing the Basilisk. My bad.
|
|
|
Post by anthony on Feb 7, 2021 8:07:40 GMT -5
"Alpha perspective, expressed very clearly by anthony, is something like "us vs us". We are all part of it. We are the ones who train it. It's ultimately (just) a reflection of us."
Yep, that's exactly what I did and how I tend to think about most things in general.
|
|