I wonder....
What if you just made it illegal to sell people's attention? Or at least made it illegal to do so without compensating them?
Facebook, YouTube, Twitter, TikTok. All of them work on a model where they sell your attention to someone else. That means your attention is their product, and their resource. Which then means that capturing your attention is critically important.
This means that they are all incentivized to exploit what are in many cases the worst traits of human nature: our ease of outrage, our desire to hate someone else, our tribalism, our cognitive dissonance and irrationality. When we demand that Twitter shitcan the President for lying, or that Facebook vet content for bullshit, we are basically demanding that they act against their own algorithms, models, and the basic foundation of their product. They might do so, but only in the margins, and as little as they can get away with.
And for what, exactly? What is the actual benefit to humans of this being the funding model for these services?
Note that I am not asking what the benefit of the services are - I think there are great benefits to those services.
I am asking (other than making their owners giant, monstrous piles of cash) what are the benefits of the market model where social media outlets make money from advertising, rather than simply charging the actual users for the use of the service itself?
How different would this be if Facebook was a subscription service, and was not allowed to sell advertising at all?
To be devil's advocate (almost literally in this case), online ads are a more efficient way to advertise. If you think that advertising serves a useful purpose, then you should conclude the same thing about online ads. When you blast ads on TV, you're engaging in carpet bombing, whereas online ads coupled with smart algorithms will target audiences more appropriately.
Quote from: Berkut on June 22, 2021, 12:28:14 PM
I am asking (other than making their owners giant, monstrous piles of cash) what are the benefits of the market model where social media outlets make money from advertising, rather than simply charging the actual users for the use of the service itself?
I don't have to pay.
Quote from: Admiral Yi on June 22, 2021, 01:05:37 PM
Quote from: Berkut on June 22, 2021, 12:28:14 PM
I am asking (other than making their owners giant, monstrous piles of cash) what are the benefits of the market model where social media outlets make money from advertising, rather than simply charging the actual users for the use of the service itself?
I don't have to pay.
OK, that is true. You don't have to pay.
I would argue that while you might *think* that is a benefit, what you are actually paying instead of a fee is both more valuable, and vastly more destructive to yourself and society at large.
It is, I will argue, basically mining human beings' worst tendencies in order to sell their attention to someone else: a form of exploitation where the human is up against power they cannot possibly be equipped to rationally contend with. It's you against the smartest people in the world devising complex data-driven algorithms that you cannot even begin to understand or rationally evaluate.
It is, I would argue, akin to European settlers "buying" New York with a bunch of beads. The transaction is based on a complete imbalance in power and understanding of what is actually being transacted.
Quote from: DGuller on June 22, 2021, 12:39:37 PM
To be devil's advocate (almost literally in this case), online ads are a more efficient way to advertise. If you think that advertising serves a useful purpose, then you should conclude the same thing about online ads. When you blast ads on TV, you're engaging in carpet bombing, whereas online ads coupled with smart algorithms will target audiences more appropriately.
This is easier for me to address.
I don't think advertising serves a *practical* useful purpose at all. (By this I mean it doesn't serve a purpose to society in and of itself - obviously it is useful to the companies that engage in it).
Indeed, in the modern world, the only theoretical purpose it had has been supplanted anyway: informing consumers about products, which gives them the information necessary to make the informed choices that drive a functioning free market.
The problem is that advertising is only very loosely linked to actual information. Since it is funded by businesses that by and large don't care to actually inform you, but rather to convince you, it has always been a pretty shitty way to deliver the "inform the consumer" service anyway.
Now? I don't see how it helps me at all to have ads thrown at me all the time. I make decisions based on available customer reviews, mostly.
So yeah, I don't think advertising itself has any particular intrinsic social value that needs protecting. It's not like if you took it out of the social media model, there would not still be other avenues of advertising open.
Quote from: Berkut on June 22, 2021, 01:33:52 PM
It is, I will argue, basically mining human beings' worst tendencies in order to sell their attention to someone else: a form of exploitation where the human is up against power they cannot possibly be equipped to rationally contend with. It's you against the smartest people in the world devising complex data-driven algorithms that you cannot even begin to understand or rationally evaluate.
Yet you yourself, when faced with this power which humans cannot possibly contend with, managed to contend with it.
I think your thesis is severely overcooked. We all know what clickbait is and how to contend with it. We also know that Facebook and Twitter (maybe less than Facebook) can create cocoons of agreement and lead to full retard QAnon conspiracy theories. Would this change if Facebook went to a subscription model? Not sure that's a given.
Your thesis sounds ridiculous when I apply it to my relationship with Google search and YouTube. I can browse YouTube clips with ads or pay a honking big fee to get rid of the ads. Are the ads I see for Volkswagen's electric car and the new iPhone destroying society? Are the clips of the US Open and the Euro Cup that YouTube's algorithms bring up destroying society?
Before we "fix" social media, it would help to identify what you think the problems are.
As for 'making it illegal to sell people's attention', that's already the business model for a lot of traditional media as well.
Quote from: Admiral Yi on June 22, 2021, 02:09:05 PM
Quote from: Berkut on June 22, 2021, 01:33:52 PM
It is, I will argue, basically mining human beings' worst tendencies in order to sell their attention to someone else: a form of exploitation where the human is up against power they cannot possibly be equipped to rationally contend with. It's you against the smartest people in the world devising complex data-driven algorithms that you cannot even begin to understand or rationally evaluate.
Yet you yourself, when faced with this power which humans cannot possibly contend with, managed to contend with it.
I think your thesis is severely overcooked. We all know what clickbait is and how to contend with it. We also know that Facebook and Twitter (maybe less than Facebook) can create cocoons of agreement and lead to full retard QAnon conspiracy theories. Would this change if Facebook went to a subscription model? Not sure that's a given.
Your thesis sounds ridiculous when I apply it to my relationship with Google search and YouTube. I can browse YouTube clips with ads or pay a honking big fee to get rid of the ads. Are the ads I see for Volkswagen's electric car and the new iPhone destroying society? Are the clips of the US Open and the Euro Cup that YouTube's algorithms bring up destroying society?
You are missing the point - the ads are fine.
The need to keep your attention on YouTube so they can sell those ads means that YouTube MUST deploy some very serious and sophisticated algorithms to capture your attention. What they do with your attention is not the problem; how they capture it is.
It's like saying strip mining is terrible, it destroys the environment! And you responding that copper isn't so bad, it does all kind of cool things.
And I don't think I manage to contend with it that well, and even if I did, I don't think that is relevant. We cannot structure society on the presumption that all humans contend with YouTube as well as those of us best able to do so. Again, that's like saying children laboring in mines is no big deal: why, just look at Joey here! *He* didn't die or even get black lung!
You say "we all know what clickbait is and how to contend with it". Oh?
Then why is it that we have the problems we do have? If we ALL knew how to contend with it, then it wouldn't work, and the entire ad-driven attention model would not work, or would not work very well.
To torture my analogy some more, this is like you saying "Hey, we all know how to deal with black lung and the occasional mine shaft collapse! Those miners are all volunteers... kind of!" in response to actual data showing that lots of miners die of black lung and are killed in mine shaft collapses.
Your argument appears to be that the problem really isn't any problem at all. Which is a good argument against my solution, of course. Just say that the problem isn't worth the solution, and we should be fine with social media operating exactly as it does today.
I'm not convinced that there is an illness, and even if there is my guess is that the cure would be worse.
Quote from: Barrister on June 22, 2021, 02:22:48 PM
Before we "fix" social media, it would help to identify what you think the problems are.
Sure, but I would rather just discuss this with people who recognize the problem. If you think there isn't any problem, then clearly there is no need for a solution. I think the problems of social media information siloing, of greater and greater polarization of thought, and of the routine rise of extremist groups are pretty well understood, and not really interesting to the solution discussion.
But certainly if you don't agree that there is a problem to begin with, then this isn't a very interesting discussion.
Quote
As for 'making it illegal to sell people's attention', that's already the business model for a lot of traditional media as well.
Indeed it is, which is why it just kind of naturally morphed into the social media business model.
And traditional media had problems with this as well. I would contend that those problems, due to the nature of the technology and delivery systems involved, were not nearly as profound as what we are seeing now. Of course, this is back to the "Is this even a problem?" question.
Quote from: Berkut on June 22, 2021, 02:30:33 PM
Quote from: Barrister on June 22, 2021, 02:22:48 PM
Before we "fix" social media, it would help to identify what you think the problems are.
Sure, but I would rather just discuss this with people who recognize the problem. If you think there isn't any problem, then clearly there is no need for a solution. I think the problems of social media information siloing, of greater and greater polarization of thought, and of the routine rise of extremist groups are pretty well understood, and not really interesting to the solution discussion.
I didn't say there isn't a problem with social media. I would agree there is something unsettling going on with it.
But I don't think I could answer "What is the problem?" either. So I wondered if you could, since after all you just named three different, albeit related, problems.
You could also mention the problem of social media hollowing out traditional media (in particular newspapers). The rampant intellectual property theft. Various privacy problems / selling your data.
Let me take a stab at it. I think the problem with social media is the same problem we have with drugs: some people are highly susceptible to falling into addiction, to the point that, for those people, the assumption that they are rational actors, which we use to justify various freedoms, is demonstrably unreasonable.
We choose to outlaw some drugs completely, and tightly control other drugs, in recognition of the reality that an addict does not have much in the way of free will, and society is better off with measures in place to keep people from getting addicted.
Quote from: Berkut on June 22, 2021, 02:30:33 PM
I think the problems of social media information siloing, of greater and greater polarization of thought, and of the routine rise of extremist groups are pretty well understood, and not really interesting to the solution discussion.
Those are one set of problems associated with social media; there are others. And there are still others with Big Tech - which includes other significant domains.
There are many potential solutions to the insular experience of social media, but that's only one part of the picture.
I think the issue is less about the "consumer is the product" model of advertising, and more about the fact that the most virulent propaganda can be served, that there is no control for truth, and that there are no professional standards applied to the content that's created and promoted; that, and that the AI "you liked this, so try that" algorithms create effective funnels into extremist content.
That butts into the free speech issue and the kind of social engineering that's dangerous (and that Americans are particularly allergic to), but that's where the actual issue lies, IMO. The problem is not, as Yi rightly points out, the VW and iPhone and cereal ads particularly.
It is a problem with social media but is it really a problem of social media?
It's always been true that people tend to associate with like-minded people, and the resulting insularity can foster extremism and vulnerability to conspiracy theories. For example, most white people living in, say, Mississippi in the 1850s would have held objectively extreme and conspiratorial opinions. It was common to believe that chattel slavery was a positive good, to the benefit of the slaves, and that the South was being targeted by horrific conspiracies from abolitionist zealots seeking to spread miscegenation and destroy the virtue of Southern womanhood. Even by the standards of the times, those were extreme views, and we know they led to great violence. That is perhaps an extreme example, but one can pretty easily find other examples of extreme polarization in other periods, such as the Gilded Age and the civil violence that came with it.
It may be that the political polarization and extremism we see today are closer to the norm, and that the seemingly less partisan era after WW2 was the departure from it. Broad-based participation in the war fostered the mixing of different kinds of people from different backgrounds and brought them together in a common pursuit. The nature of mass media in the postwar period - e.g. the dominance of TV by a few corporatist networks - also worked against polarization. (Even so, there were serious outbursts of political extremism, from McCarthyism and John Birchism on the Right to some of the excesses of the New Left.)
The decentralized nature of social media helps foster insularity, but it is not unique in this regard, as the rise of politicized talk radio and cable news also makes it easier for people to choose their preferred reality. Social media is less coherent in its messaging, but it does draw some additional power from its direct peer-to-peer contact and the impact of emotive imagery ("memes" and videos). But the broader point is that even if social media disappeared overnight, the underlying problem doesn't go away.
Quote from: Berkut on June 22, 2021, 02:25:43 PM
Your argument appears to be that the problem really isn't any problem at all. Which is a good argument against my solution, of course. Just say that the problem isn't worth the solution, and we should be fine with social media operating exactly as it does today.
For a guy who bitches about strawmen as much as you do, you sure do a lousy job of characterizing others' positions.
I specifically said Facebook echo chambers are a problem. Which is not quite the same thing as saying the problem isn't any problem at all.
I then asked if a subscription model would solve this problem. Which is not quite the same thing as saying the problem isn't worth the solution.
And none of this can be characterized as "we should be fine with social media operating exactly as it does today."
If you're not going to bother responding to what I actually post, please tell me so that I don't waste my time responding.
Quote from: Admiral Yi on June 22, 2021, 03:43:04 PM
Quote from: Berkut on June 22, 2021, 02:25:43 PM
Your argument appears to be that the problem really isn't any problem at all. Which is a good argument against my solution, of course. Just say that the problem isn't worth the solution, and we should be fine with social media operating exactly as it does today.
For a guy who bitches about strawmen as much as you do, you sure do a lousy job of characterizing others' positions.
I apologize - I thought I was accurately characterizing your position. It seemed to be that there wasn't a problem worthy of solving, in that you said I was overcooked or something.
Quote
I specifically said Facebook echo chambers are a problem. Which is not quite the same thing as saying the problem isn't any problem at all.
Exactly - that was my point, which you cut out. You say there is a problem, then say there isn't a need for a solution, because...why exactly? That is what I meant by the analogy with children in coal mining. If you say it's fine because nothing bad really happens, then remark about the bad things that happen, I find that somewhat confusing when I propose a solution.
Quote
I then asked if a subscription model would solve this problem. Which is not quite the same thing as saying the problem isn't worth the solution.
I think it would - or rather, I think it is nearly certain to be a lot better.
A subscription model would result in social media trying to appeal to the people who are using it, rather than just trying to capture their attention so it can be sold to someone else.
If I subscribe to Facebook, you don't need me to be on Facebook for another 15 minutes so you can sell those 15 minutes to someone. You just need to provide an experience that keeps me paying my monthly fee. Whether I use it 10 hours a month or 50 hours a month doesn't matter to you, so your engagement algorithms won't need to be tuned to constantly answering the question "HOW DO WE GET BERKUT TO STAY ON ANOTHER 120 SECONDS???? AND ANOTHER??? AND ANOTHER???"
That might have its own problems, but I think it would definitely be much better than what we have now, and what we are going to have in the future.
Remember those algorithms? They are like first-generation technology right now - literally less than a decade old. And they are learning, and they get more and more and more data every single day.
This is like, I don't know, the first decade after someone invented cocaine, but with literally billions being invested by the very smartest people in our society in figuring out how to make better and better and better cocaine.
Quote from: Berkut on June 22, 2021, 04:03:02 PM
You say there is a problem, then say there isn't a need for a solution, because...why exactly?
:mellow:
You can't help yourself, can you?
Quote from: Jacob on June 22, 2021, 03:13:55 PM
I think the issue is less about the "consumer is the product" model of advertising, and more about the fact that the most virulent propaganda can be served, that there is no control for truth, and that there are no professional standards applied to the content that's created and promoted; that, and that the AI "you liked this, so try that" algorithms create effective funnels into extremist content.
The model is the problem *because* it means that serving up virulent propaganda is (due to human nature) often the best way to create more of the product. Take away the need to capture as much of the consumer's attention span as possible (because that is the product) and you take away the incentive to create algorithms that exploit the worst parts of our own nature.
Quote
That butts into the free speech issue and the kind of social engineering that's dangerous (and that Americans are particularly allergic to), but that's where the actual issue lies, IMO. The problem is not, as Yi rightly points out, the VW and iPhone and cereal ads particularly.
The ads are fine, actually.
I do agree there are serious free speech issues at play here, and my idea is definitely rather disturbingly "nanny state" in the worst way.
Quote from: Admiral Yi on June 22, 2021, 04:07:28 PM
Quote from: Berkut on June 22, 2021, 04:03:02 PM
You say there is a problem, then say there isn't a need for a solution, because...why exactly?
:mellow:
You can't help yourself, can you?
Fuck man, I am trying. I post these long responses in which I try to explain why I am asking and responding the way I do; then you cut out all the explanation, leave one line, and get all put out as if that is the only thing I said. Is that the most interesting part of my post?
Probably better that we just don't engage. I can't do much better myself. If your goal is to score some points by figuring out how to select a single line and respond to that, I am willing to concede that victory to you. The Yicratic method?
Quote from: The Minsky Moment on June 22, 2021, 03:36:10 PM
It is a problem with social media but is it really a problem of social media?
It's always been true that people tend to associate with like-minded people, and the resulting insularity can foster extremism and vulnerability to conspiracy theories. For example, most white people living in, say, Mississippi in the 1850s would have held objectively extreme and conspiratorial opinions. It was common to believe that chattel slavery was a positive good, to the benefit of the slaves, and that the South was being targeted by horrific conspiracies from abolitionist zealots seeking to spread miscegenation and destroy the virtue of Southern womanhood. Even by the standards of the times, those were extreme views, and we know they led to great violence. That is perhaps an extreme example, but one can pretty easily find other examples of extreme polarization in other periods, such as the Gilded Age and the civil violence that came with it.
It may be that the political polarization and extremism we see today are closer to the norm, and that the seemingly less partisan era after WW2 was the departure from it. Broad-based participation in the war fostered the mixing of different kinds of people from different backgrounds and brought them together in a common pursuit. The nature of mass media in the postwar period - e.g. the dominance of TV by a few corporatist networks - also worked against polarization. (Even so, there were serious outbursts of political extremism, from McCarthyism and John Birchism on the Right to some of the excesses of the New Left.)
The decentralized nature of social media helps foster insularity, but it is not unique in this regard, as the rise of politicized talk radio and cable news also makes it easier for people to choose their preferred reality. Social media is less coherent in its messaging, but it does draw some additional power from its direct peer-to-peer contact and the impact of emotive imagery ("memes" and videos). But the broader point is that even if social media disappeared overnight, the underlying problem doesn't go away.
A fair set of observations.
Quote from: Jacob on June 22, 2021, 03:13:55 PM
I think the issue is less about the "consumer is the product" model of advertising, and more about the fact that the most virulent propaganda can be served, that there is no control for truth, and that there are no professional standards applied to the content that's created and promoted; that, and that the AI "you liked this, so try that" algorithms create effective funnels into extremist content.
I don't disagree about algorithms funnelling people.
But I think there is something more to the "consumer is the product" angle, because that ecosystem is huge and out of control. There's no real transparency over what data is collected, how it's used, or where it goes - across billions of daily transactions. When you speak to people who work in adtech it is concerning, and it's an ungoverned space on the internet.
Privacy laws are beginning to impose limits, but the existing players have such a huge advantage that new entrants are difficult to imagine, and it's an invisible market with minimal transparency. Google, for example, gets the space from publishers on exchanges, owns the exchange, and sells it on the buy side to advertisers - that sort of market dynamic is problematic for all sorts of reasons. There is - rightly - minimal trust in this, so many publishers have to use Google but are basically certain that Google deliberately directs advertising revenue to its own properties like YouTube. Similarly, advertisers are far from sure they're getting the best eyes on their ad (when they get human eyes at all).
It's a really fucked up, non-transparent, self-dealing part of the economy, and it's a market worth billions where our data is the product. It slightly concerns me on a sort of social/global level that a few of the biggest companies in the world basically sell ad space. I don't think advertising has ever been that big before.
In terms of how you fix it, I don't think the growing body of privacy laws is enough; I think it needs to be treated as a market on which billions of dollars are traded. So I'd suggest basically banning the owners of the exchange from buying and selling ad space on their own behalf. In an ideal world I'd also move towards a "data trust" model, so the people best able to monetise our data are those with the best models/algorithms (including new entrants), not just the companies with the most data to begin with, while also enhancing individual control over our own data.
The privacy push is leading to interesting solutions that may help - but I'm fairly dubious about what they'll actually look like.
The trouble with the world today is the speed at which misinformation can spread, be updated, and come with a huge data stack to claim authenticity.
For a historic comparison, see the later chapters of Homage to Catalonia. The communists are moving towards purging the anarchists. Nobody really knows what's happening. Rumours are flying all over the place about what is going on; soldiers return from months on the front line and march straight into jail, their organisation having been outlawed weeks ago.
Reading this, I couldn't help but wonder how it might be different in an age of social media, if official channels were cut off and it was left to personal networks to disseminate and sort out rumours.
I do have faith that digital natives and the new generation will be a lot more capable of handling social media than the boomers have proven to be. Nonetheless, we can't just wait for the boomers to die. People are suffering in the here and now, so something does need to be done.
As I've said in the past, I do think China had something right with linking real IDs to online IDs. Some sort of government ID API needs to be set up so you can validate yourself as a real person without having to actually trust Facebook and the like with it.
Perhaps limit this to public platforms rather than one-to-one communication platforms. They do say WhatsApp misinformation is an issue, but I don't think it's quite so harmful, and it must be tackled in a different and rather more traditional way.
Also, seeing anti-vaxxers, I grow increasingly keen on criminalising the deliberate spread of harmful misinformation. A delicate one for sure, which would have to be set up very precisely so as not to harm free speech too much. But those arsehole doctors making a fortune from exploiting the ignorant at the least need a slap.
Oh, also micro-targeting. That needs to stop. It should be outlawed, especially when you're telling different groups incompatible things, e.g. Brexit and its end-to-immigration / easier-immigration dichotomy.
Honestly I often just think we need a healthy dose of shame and guilt.
Maybe we went too far in the past but I feel like we could do with a modicum of shame and guilt, especially in public life :lol: :weep:
Quote from: Sheilbh on June 22, 2021, 04:41:35 PM
But I think there is something more to the "consumer is the product" angle, because that ecosystem is huge and out of control.
The product is information: data about the consumer and their behavior and interests.
The difficulty in designing a response stems in part from the multiple effects and implications: (1) the conflict with principles of privacy; (2) the fairness of a consumer transaction where valuable data is being acquired without the consumer understanding the nature or true value of what is being sold; (3) the competitive implications of companies that enjoy significant network effects obtaining proprietary and exclusive rights over consumer data. Three quite different sets of effects with different policy implications. The pat answers to these kinds of questions that have developed over the years in the context of tangible transactions do not necessarily work the same way in the virtual context.
Quote from: Sheilbh on June 22, 2021, 05:00:55 PM
Honestly I often just think we need a healthy dose of shame and guilt.
Maybe we went too far in the past but I feel like we could do with a modicum of shame and guilt, especially in public life :lol: :weep:
I would strenuously disagree. If you go on Twitter, it's clear that what we need is orders of magnitude less shaming and guilting, and much more grace and charity.
I mean guilt and shame as internally felt impulses/emotions, not as something you do to others.
Quote from: Barrister on June 22, 2021, 05:17:19 PM
I would strenuously disagree. If you go on Twitter, it's clear that what we need is orders of magnitude less shaming and guilting, and much more grace and charity.
I'm pretty sure Shelf meant shame and guilt in the sense of introspection and self-control, not in the sense of scolding and berating others.
I do agree with him that there seems to be a correlation between the increase in hectoring and the decrease in self-reflection. I would go further and say the nagging of the church lady is a coping mechanism to avoid the pain of pondering the morality of your own actions.
Quote from: Berkut on June 22, 2021, 01:38:35 PM
This is easier for me to address.
I don't think advertising serves a *practical* useful purpose at all. (By this I mean it doesn't serve a purpose to society in and of itself - obviously it is useful to the companies that engage in it).
Indeed, in the modern world, the only theoretical purpose it had has been supplanted anyway: informing consumers about products, which gives them the information necessary to make the informed choices that drive a functioning free market.
The problem is that advertising is only very loosely linked to actual information. Since it is funded by businesses that by and large don't care to actually inform you, but rather to convince you, it has always been a pretty shitty way to deliver the "inform the consumer" service anyway.
Now? I don't see how it helps me at all to have ads thrown at me all the time. I make decisions based on available customer reviews, mostly.
So yeah, I don't think advertising itself has any particular intrinsic social value that needs protecting. It's not like if you took it out of the social media model, there would not still be other avenues of advertising open.
I don't disagree that there are plenty of undesirable things that come with advertising, and the problem isn't new, but I do think that advertising serves some practical purpose. Oftentimes ads make you aware of solutions to problems that you never knew you had (and I don't mean that ironically). I find that often the most difficult thing is not finding a solution to a problem, but rather identifying the problem in the first place. Fixing a bug in your program is usually easy. Realizing you have a bug in the first place can be very hard.
Quote from: The Minsky Moment on June 22, 2021, 03:36:10 PM
It is a problem with social media but is it really a problem of social media?
It's always been true that people tend to associate with like-minded people and the resulting insularity can foster extremism and vulnerability to conspiracy theories.
...not really. Unless by always, you mean, only relatively recently, in an age of relatively increased mobility where certain people were entitled to voice their opinions relatively freely.
Historically, insularity tended to foster conformism, because the price to pay for dissension could be high. Small villages made up mostly of your relatives could not so easily be split by quarrel, and straight up leaving was onerous (and often life-threatening). Dissension therefore mostly expressed itself in factionalism - i.e., a confederacy of malcontents against the existing order. "Bipartism" is a historical artefact of that time. Conspiracies, meanwhile, are generally considered sociological coping mechanisms against powerlessness. Things must be coordinated by a few hidden people, because historical agency is limited to the few.
It takes gigantic, revolutionary, universalist ideas that require total commitment to upend this sort of baseline. Religious reformation, liberalism, emancipation, socialism... These make demands that explicitly go beyond the small communities. Polarization isn't the natural state of communities: it's actually quite an important crisis. What democracy did was to raise the threshold of acceptable dissent. You didn't simply have the choice between silence or civil war. You could express some measure of dissent legitimately. How much dissent was a question always worked out politically.
But institutions always had a really important role to play, whether by offering a venue that structured or channelled political debate or dissent, supplying vocabulary to an ill-defined sense of "wrong", or relaying ideas. The moments of deep polarization in the US are moments of institutional crisis - the rise of abolitionism, for instance, didn't happen because anti-slavery people settled in the same area; sectionalism took root because of the institutional power and role of states in the US constitution. Indeed, we have lots of testimonies of anti-slavery people who ended up defending slavery *after* having lived in slave societies.
There clearly is a political crisis in the US, as there has been in the past. That crisis is both fueled by, and fueling, international debate, as American crises have done in the past - and the way out of that crisis is liable to be ugly before it is good. What's new is that there are new institutions that structure the debate, and we are woefully underprepared to deal with them as such.
Social media are no longer just corporations. They have become institutions. They structure the political debate. They relay ideas and vocabularies. Yet, we have instituted no claim on them - the way we have done so with justice, school, or even with the classical concept of "corporation". Because unlike classical corporations, social media create dynamic communities of dissent, and we haven't really had to deal with the emergence of these sorts of things for a long, long time.
Quote from: Oexmelin on June 22, 2021, 06:16:22 PM
Social media are no longer just corporations. They have become institutions. They structure the political debate. They relay ideas and vocabularies. Yet, we have instituted no claim on them - the way we have done so with justice, school, or even with the classical concept of "corporation". Because unlike classical corporations, social media create dynamic communities of dissent, and we haven't really had to deal with the emergence of these sorts of things for a long, long time.
I often think about things as systems.
I look at the information revolution and the rise of social media, and what I note is that these are profound changes in our economic and social fabric.
I know people say things like "Oh, this isn't really a change! There was biased media and polarization and lowest common denominator propaganda before!" And that is true. But it is true the same way someone could counter concerns about the danger of these new cars by pointing out that a car isn't really different from a horse, and horses and carriages kill people all the time! Or that nuclear weapons are not really that different from conventional ones - why, more people died in the bombing of Tokyo than at Hiroshima!
Differences in scale matter, and create different needs for how society wants to engage in that technology. We have entire volumes of new regulations and rules to deal with cars that were never needed before we had cars. And yet, there is basically zero new real regulation on how social media and Google and Apple and all of them interact - we seem to have this idea that nothing really has changed, maybe because the new digital entities are, well...digital. It doesn't FEEL like something really new.
Quote from: Oexmelin on June 22, 2021, 06:16:22 PM
Quote from: The Minsky Moment on June 22, 2021, 03:36:10 PM
It is a problem with social media but is it really a problem of social media?
It's always been true that people tend to associate with like-minded people and the resulting insularity can foster extremism and vulnerability to conspiracy theories.
...not really. Unless by always, you mean, only relatively recently, in an age of relatively increased mobility where certain people were entitled to voice their opinions relatively freely.
Always as in American political history which in the sense you mean is relatively recently. Even small towns in America were penetrated by mass media well before the Civil War.
Quote from: Oexmelin on June 22, 2021, 06:16:22 PM
Social media are no longer just corporations. They have become institutions. They structure the political debate. They relay ideas and vocabularies. Yet, we have instituted no claim on them - the way we have done so with justice, school, or even with the classical concept of "corporation". Because unlike classical corporations, social media create dynamic communities of dissent, and we haven't really had to deal with the emergence of these sorts of things for a long, long time.
The language in this passage ("they", "social media are . . .") is loose. It's true that "social media" refers to much more than the corporate entities and managers that operate and control social media platforms; it's true that social media has assumed a powerful role in the way that people perceive the world and structure their lives. But to use active-voice verbs - "They structure the political debate. They relay ideas and vocabularies." - glosses over the reality of what is happening. It wrongly implies the same kind of agency exercised by the press barons of the old media world - from Hearst to Murdoch. Part of what makes social media so powerful and intoxicating is the delegation of content creation and distribution to users. Admittedly that is within a framework (TOS, interface, design) created by the social media company, but that framework is an imprecise and indirect form of control. The relative lack of regulation of social media is not due to American cultural techtoxication or even Washington gridlock - there is real interest and will to regulate that crosses party lines. It is a recognition of the facts that old models of regulation don't easily apply and that the new models are still being developed.
You are reading into my post something that isn't there, and imputing singular agency where none was implied. Institutions structure debate even when there is no singular will behind them. This is true of the 19th century press taken collectively, as it is of current social media. Facebook does things that go well beyond what Zuckerberg wants - even if what he wants does have much more of a singular impact than what you or I can have on Facebook. That language may lead to generalizations, but it is not qualitatively different from the one we use as shorthand to refer to all sorts of other collective institutions, like the state - without ever implying that there is a single will behind them all.
I agreed fundamentally with your point: we don't quite know what to do with social media. Maybe it's because it's new. But I suggested that part of the problem is that, unlike many of the more recent institutions (and generally unlike other corporations), they fundamentally create communities - and that is a political issue. Our current predicament is that this puzzle accompanies, or heightens, a current (American) political crisis, but also a more generalized crisis of democratic legitimacy. We are at a loss as to how to deal with political crisis and legitimacy right now.
Quote from: Berkut on June 22, 2021, 12:28:14 PM
How different would this be if Facebook was a subscription service, and was not allowed to sell advertising at all?
I will not use it. It is that simple. I won't subscribe to anything where payment is needed.
Quote from: Monoriu on June 23, 2021, 01:27:41 AM
Quote from: Berkut on June 22, 2021, 12:28:14 PM
How different would this be if Facebook was a subscription service, and was not allowed to sell advertising at all?
I will not use it. It is that simple. I won't subscribe to anything where payment is needed.
So?
People say that like it is some kind of travesty.
Most services in life people have to choose whether or not to pay for it, and there is no presumption that it ought to be free.
If you choose not to pay for your cell phone, you don't have a cell phone.
Oh well.
If society decides that access to social media is so important that everyone should have it, then society can subsidize the cost for low incomes, the same way we might do so for other "necessary" utilities.
Taking away ad revenue eats into the profits of the social media companies, but it doesn't fix the problem of social insularity and virtual echo chambers.
Of course, for all the negative effects in the US and other rich countries, I think the risk for Facebook etc. is sufficiently high that they maintain some degree of control and moderation in the relevant languages.
I think the biggest negative impacts of Facebook have been in Sri Lanka and Myanmar where Facebook has been directly linked to ethnic cleansing propaganda and encouraging violence against protesters. They are a huge internet provider in large parts of Africa. I think because those markets are not a focus for Facebook and they may not have local teams with appropriate language skills etc to actually police any content - that there's a risk that an American social media company will become the sort of RTLM of ethnic or political violence. It may not even be noticed at the time in HQ in the US but when people look back they will identify propaganda primarily travelling and being amplified on, for example, Facebook and WhatsApp.
In the UK we've also seen the rise of WhatsApp as a source for the really dark stuff in political campaigns - so misogyny, anti-semitism, racism, Islamophobia, homophobia - all stuff that may not be coming from the campaign but is going around WhatsApp a bit like chain emails in the late 90s. I believe the same has been observed in India as well.
Quote from: The Minsky Moment on June 23, 2021, 09:00:50 AM
Taking away ad revenue eats into the profits of the social media companies, but it doesn't fix the problem of social insularity and virtual echo chambers.
It does not fix it, but it does remove the pressure for social media companies to use any means necessary to capture the attention of their users.
We don't have to solve ALL problems at once!
The term "Big Tech" covers a diverse selection of companies which, if they do indeed need to be fixed, would require different fixes. Even among social media platforms, I feel each individual platform would require a different solution.

Twitter, in my opinion, is the worst platform and has done the most harm in the real world. Twitter facilitates the spread of misinformation and rewards obnoxious and puerile behavior (indeed, if one is obnoxious and puerile enough, often enough, one can become President of the United States). This, I believe, is due to the brevity of the posting allowed, which does not allow for well-thought-out arguments. Such a platform could be used for epigrams, of course, but the vulgar populace seems to have a taste for much coarser text.

It seems unfair, though, to punish the epigrammatists or haiku poets along with the bullies and ignoramuses. Therefore, I propose we take everyone with over 100,000 followers on Twitter and put them on a proscription list (preferably written in Courier font). While this may net a few unfortunate innocent people, like Bentham I think that allowing thousands of the guilty to go unpunished for the sake of one innocent is pushing principle too far.

In addition to the justice that would be served, this plan has many benefits. The most obvious is that depriving the worst individuals of their opportunity to reach a wide audience (or any audience) will certainly create a politer society and slow the spread of misinformation. Even beyond the realm of social media there are benefits: as many wealthy celebrities and magnates have a large Twitter following, the sale of their goods would provide the government with a badly needed additional revenue stream. This would also reduce income inequality; and, as elite schools could no longer rely on the proscribeds' generous grants, perhaps reduce the toxic impact of privilege, paving the way for a more egalitarian society.
Step #1 for doing something about Twitter is just removing anonymity in most cases. Allow it in exceptions that have to be individually approved. Every account must be linked to an actual person in some fashion.
And get rid of ads. :P
The way Twitter works just results in its toxicity I think. Or at least that was my conclusion when I used it. Sometimes the media really is the message.
Quote from: Valmy on June 23, 2021, 05:07:10 PM
The way Twitter works just results in its toxicity I think. Or at least that was my conclusion when I used it.
:zipped:
Quote from: Berkut on June 23, 2021, 04:12:45 PM
Step #1 for doing something about Twitter is just removing anonymity in most case. Allow it in exceptions that have to be individually approved. Every account must be linked to an actual person in some fashion.
And get rid of ads. :P
I'm not saying it's not a good step, but I've lost quite a few IQ points accidentally reading people's comments on LinkedIn, and I rarely visit it. If people can't keep their political idiocy to themselves on a professional site while posting under their real names, then at the very least it indicates that your idea is just a first step of many.
Quote from: DGuller on June 23, 2021, 05:33:27 PM
I'm not saying it's not a good step, but I've lost quite a few IQ points accidentally reading people's comments on LinkedIn, and I rarely visit it. If people can't keep their political idiocy to themselves on a professional site while posting under their real names, then at the very least it indicates that your idea is just a first step of many.
I don't think I've ever seen politics on LinkedIn except very "safe" opinions in my circle (City/London/Lawyers/Tech) - so occasionally the odd exasperated comment about Brexit or for pride/diversity etc.
However I would add the sheer number of #blessed posts I saw from recruiters walking me through their morning yoga, smoothie, run, time with family routine during the pandemic as another reason we all need a stronger sense of shame :lol:
I get that they basically suddenly had no market (things have since more than recovered) and they just needed to keep themselves in front of their contacts in HR/candidates. But, God, it was awful :lol:
Quote from: DGuller on June 23, 2021, 05:33:27 PM
Quote from: Berkut on June 23, 2021, 04:12:45 PM
Step #1 for doing something about Twitter is just removing anonymity in most case. Allow it in exceptions that have to be individually approved. Every account must be linked to an actual person in some fashion.
And get rid of ads. :P
I'm not saying it's not a good step, but I've lost quite a few IQ points accidentally reading people's comments on LinkedIn, and I rarely visit it. If people can't keep their political idiocy to themselves on a professional site while posting under their real names, then at the very least it indicates that your idea is just a first step of many.
It is only a first step, but a necessary first step.
Quote
I don't think I've ever seen politics on LinkedIn except very "safe" opinions in my circle (City/London/Lawyers/Tech) - so occasionally the odd exasperated comment about Brexit or for pride/diversity etc.
I've never seen it from anyone I know (really wouldn't class diversity stuff as politics) but have seen it a lot from strangers. Some of which I heavily suspect were fake shit stirrer accounts who just saw LinkedIn as another front in their culture war.
Quote from: Valmy on June 23, 2021, 05:07:10 PM
The way Twitter works just results in its toxicity I think. Or at least that was my conclusion when I used it. Sometimes the media really is the message.
Yes. Enforced short sharp interactions with strangers rated on a binary like/not like scale naturally breeds polarisation.
Quote from: Berkut on June 23, 2021, 07:27:54 AM
Quote from: Monoriu on June 23, 2021, 01:27:41 AM
Quote from: Berkut on June 22, 2021, 12:28:14 PM
How different would this be if Facebook was a subscription service, and was not allowed to sell advertising at all?
I will not use it. It is that simple. I won't subscribe to anything where payment is needed.
So?
People say that like it is some kind of travesty.
Most services in life people have to choose whether or not to pay for it, and there is no presumption that it ought to be free.
If you choose not to pay for your cell phone, you don't have a cell phone.
Oh well.
You asked a question. I gave you my answer :contract:
If Facebook chooses to operate with the subscription model, I will leave. That is their choice, and my choice, and that's fair. But if governments step in and say they have to use the subscription model, it is a different matter.
Quote from: Monoriu on June 23, 2021, 10:46:46 PM
Quote from: Berkut on June 23, 2021, 07:27:54 AM
Quote from: Monoriu on June 23, 2021, 01:27:41 AM
Quote from: Berkut on June 22, 2021, 12:28:14 PM
How different would this be if Facebook was a subscription service, and was not allowed to sell advertising at all?
I will not use it. It is that simple. I won't subscribe to anything where payment is needed.
So?
People say that like it is some kind of travesty.
Most services in life people have to choose whether or not to pay for it, and there is no presumption that it ought to be free.
If you choose not to pay for your cell phone, you don't have a cell phone.
Oh well.
You asked a question. I gave you my answer :contract:
If Facebook chooses to operate with the subscription model, I will leave. That is their choice, and my choice, and that's fair. But if governments step in and say they have to use the subscription model, it is a different matter.
It's rather amusing to see where you are happy to tolerate government interference, and where it is beyond the pale.
I think that the solution is to declare that all information that is linkable to an individual is the property of that individual. Anyone who wants to use or access that information must pay compensation for that access. That compensation may come in the form of some services provided, but the compensation and the access allowed must be explicit, and must be revocable at any time, by either side.
So if Google, say, or Twitter, wants to use my data they have to offer me a contract written in plain English that describes what they are giving me and what I am giving them. And if I don't like the results, I can end the relationship and all of their ability to use my data. They cannot sell my data under any circumstances; if someone else wants my data, they have to come to me. My data is mine.
I'd note that I don't use social media at all. It has long been a recommendation that teachers never use it, as it only leads to trouble, because your students WILL find you on social media.
Quote from: grumbler on June 24, 2021, 09:36:32 PM
I think that the solution is to declare that all information that is linkable to an individual is the property of that individual. Anyone who wants to use or access that information must pay compensation for that access. That compensation may come in the form of some services provided, but the compensation and the access allowed must be explicit, and must be revocable at any time, by either side.
So if Google, say, or Twitter, wants to use my data they have to offer me a contract written in plain English that describes what they are giving me and what i am giving them. And if i don't like the results, I can end the relationship and all of their ability to use my data. They cannot sell my data under any circumstances; if someone else wants my data, they have to come to me. My data is mine.
I'd note that I don't use social media at all. It has long been a recommendation that teachers never use it, as it only leads to trouble, because your students WILL find you on social media.
Languish is social media :contract:
Quote from: Monoriu on June 24, 2021, 09:42:37 PM
Languish is social media :contract:
If it makes you feel better to think so, then think away.
Upon further reflection, I've come to the conclusion that social media might not actually be the problem:
People on TikTok Are Shoving Garlic Up Their Noses to Clear Sinuses. (https://www.msn.com/en-us/health/medical/people-on-tiktok-are-shoving-garlic-up-their-noses-to-clear-sinuses-does-it-really-work/ar-AALYtvr)
There's nothing left to believe in:
Frances Haugen: Facebook whistleblower revealed on '60 Minutes,' says the company prioritized profit over public good (https://www.msn.com/en-us/news/politics/facebook-whistleblower-revealed-on-60-minutes-says-the-company-prioritized-profit-over-public-good/ar-AAP6A1a)
:o :o :o
Whoa! First Pandora, now this!?!
Bloomberg has the solution. :)
(https://preview.redd.it/cv498xv2iar71.jpg?width=640&crop=smart&auto=webp&s=e7d4f7ae1589bed7aa84a9732a4cd2c3de39b4f4)
Someone's been playing too much Shadowrun.
I agree FB and Amazon should be held accountable, but that does not mean they should have any of the rights of nation states including seats at the UN.
Was that a serious suggestion by Bloomberg, or was it a tongue-in-cheek way of drawing attention to their influence?
Quote from: DGuller on October 04, 2021, 12:09:58 PM
Was that a serious suggestion by Bloomberg, or was it a tongue-in-cheek way of drawing attention to their influence?
https://www.bloomberg.com/opinion/articles/2021-10-03/give-amazon-and-facebook-a-seat-at-the-united-nations
QuoteGive Amazon and Facebook a Seat at the United Nations
Given the scope of their ambitions and our dependence on them, behemoth brands should be treated, and held to account, for what they really are: commercial superpowers.
It's getting harder to distinguish brands from nation-states.
The resemblance is not simply semiotics:
logos (flags), anthems (jingles), taglines (mottoes), mission statements (constitutions), founder stories (official histories), terms and conditions (legal codes)
... structures:
customers (citizens), shareholders (legislators), boards of directors (executives), chairmen (monarchs), CEOs (presidents), and oversight boards (judges)
... or even size, though the comparisons are startling:
Walmart Inc. employs roughly the population of Botswana; Microsoft Corp.'s market cap is greater than Brazil's GDP; FedEx Corp. has five times more planes than Air India Ltd.
The more substantial resemblance derives from the breadth of brand ambition, and the depth of our brand dependence.
Take the current global chip crisis. Nowadays, ever-increasing tracts of the world economy rely on the pure wafer foundry market, of which 55% is controlled by the Taiwan Semiconductor Manufacturing Co. Not only does TSMC pick which businesses get the chips they need, and which countries get its new "fab" factories, the company's leadership has warned nation-states against competitive onshoring, and advised China not to invade:
"As to an invasion by China [said TSMC's chairman, Mark Liu] let me tell you everybody wants to have a peaceful Taiwan Strait. Because it is to every country's benefit, but also because of the semiconductor supply chain in Taiwan — no one wants to disrupt it."
If this all sounds a tad whimsical — comparing gadflies to goliaths — it may be because of the cultural dominance of the "Westphalian system," under which the global balance of power has been envisioned, since 1648, as a mosaic of centrally controlled and culturally unified nation-states, each wielding a monopoly of force inside mutually recognized borders.
In reality, of course, these Westphalian axioms have always been more or less fluid: State sovereignty is regularly pooled militarily, legally and economically; central control has repeatedly yielded to demands for democracy and regional autonomy; cultural unity has been fragmented by religious freedom, mass migration and globalization; the monopoly of force is increasingly challenged by stateless terror and non-state cyber attacks; and the sanctity of state borders has been violated by the miasmas of pollution, climate change, invasive species and infectious diseases.
That said, for all its limitations, the Westphalian vision of sovereignty remains a useful lens through which to assess the emergence of brands as global players in their own right.
Defense and the Realm
External defense and internal assistance have historically been the province — the raison d'être — of developed nation-states. Earthquakes, outbreaks, hurricanes, heatwaves, floods, fires, insurrections and invasions demand a scale of command and control typically available only to state military and civil agencies.
Covid-19 tested this truism, even in the most advanced nations, both because of the pandemic's unprecedented global impact, and because the highly specialized responses it required could often only be provided expeditiously by brands.
In many ways the initial scramble for personal protective equipment, and the subsequent jostling for tests and vaccines, respected conventional (if questionable) public/private supply chains — even if the pandemic's severity meant deploying emergency powers (the 1950 Defense Production Act); seeking legal coercion (the European Union sued AstraZeneca Plc); and providing liability shields (invoking the 2005 PREP Act to protect vaccine manufacturers from prosecution).
More novel was how nation-states turned to private companies to solve the unprecedented challenge of contact-tracing entire populations.
In May 2020, Apple Inc. and Alphabet Inc.'s Google set aside their rivalry to launch an "exposure notification" API, available to official public health bodies via iOS and Android. By September 2021, more than 40 countries (and 25 U.S. states) had plugged into this API — including initially resistant nations like England, Finland, Germany and Norway. Significantly, many of the independent state-led solutions — notably Singapore's BlueTrace, Israel's HaMagen and France's TousAntiCovid — were still obliged to interact with Google and Apple, if only to use their app store protocols to get their technology onto their citizens' phones.
Such embedded reliance helps explain why, even in the depths of Covid's first winter, Edelman's Trust Barometer indicated that:
CEOs were more trusted than government leaders, religious leaders and journalists
Business was more trusted than government in 18 of 27 countries
68% thought CEOs should step in when governments fail to fix societal problems
The international dependence on far-from-neutral brands did not go unnoticed, as one anonymous German official told Politico:
"We need to have a discussion on how Silicon Valley is increasingly taking over the job of a nation-state ... But we don't need to have it amid a pandemic."
If this unnamed German is right about the discussion, she's wrong about the timing. As history teaches us, a Pandora's box of profit-seeking seldom fails to open for the apocalyptic horsemen: whether it's Coca-Cola Co. achieving a global presence on the coattails of World War Two, or the explosion of mercenary groups during the Global War on Terror. (In 2006, the Private Security Company Association of Iraq estimated there were more than 48,000 mercenaries inside Iraq, a combined force greater than three U.S. Army divisions.)
And, lest we forget, the flag of McDonald's Corp. still flutters proudly over Gitmo:
(https://assets.bwbx.io/images/users/iqjWHBFdfxIU/i0S5pA9juO9c/v1/800x-1.jpg)
Brand Ambassadors
The revolving door between politics and business has spun for generations. On one side of the transom are commercial companies, snapping up the power, prestige and phone books of former heads of state (John Major to Carlyle Group, Nicolas Sarkozy to Accor SA, Malcolm Turnbull to KKR & Co.). On the other side, are global statesmen eager to build own-branded ventures, with more and less charitable aims (Kissinger Associates, the Clinton Foundation, Tony Blair Associates).
Such second acts are seldom subtle — from the announcement that former Canadian Prime Minister Brian Mulroney would join the U.S. cannabis company Acreage Holdings Inc. on the day Canada legalized pot, to David Cameron's cack-handed (if lucrative) shilling for Greensill Capital. And while the term "brand ambassador" may be little more than a title bump from "celebrity endorser," it takes on a darker meaning when the CEO of ExxonMobil becomes Donald Trump's secretary of state, or when the owner of the New York Jets is appointed America's ambassador to the U.K.
But even by these low standards, something seems to have slipped.
When in 2018 Facebook Inc. hired the former UK deputy prime minister, Nick Clegg, as its vice president of global affairs, the appointment was widely viewed not as an old-school sinecure but as something more sinister: a political "heat shield" for a behemoth accused of enabling evils from fraud to genocide — to say nothing of its anti-democratic impact on elections across the globe. However, if it's true that in January Mark Zuckerberg deferred to Nick Clegg on whether to allow Donald Trump back onto Facebook and Instagram, it would mark a pivotal spin of the revolving door: the CEO of the world's sixth largest company delegating to a former British deputy PM the power to influence the political future of a one-term American president.
So much for Clegg's pre-election preening:
"Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don't believe it would be."
Change is also afoot on the freelance side of K Street. On leaving office, Barack Obama followed well-worn convention by establishing the Obama Foundation ("Our mission is to inspire, empower, and connect people to change their world"). More unexpected was his decision to create Higher Ground — an independent production company that immediately struck multi-year deals with Netflix Inc. and Spotify Technology SA, and executive-produced "American Factory," the very on-brand Oscar-winning documentary exploring labor relations and trade with China.
The Obama playbook of a branded-content second act (beyond books and speeches) will inevitably attract a new cohort of youthful former statesmen keen to wield soft power (and earn small fortunes) unfettered by orthodox gatekeepers. On quitting the royal family, Prince Harry and Meghan Markle lost no time in establishing their own indie shop — Archewell Productions — inking copycat deals with Netflix and Spotify, and collaborating on an Apple TV+ series with the brand powerhouse Oprah Winfrey.
Naturally, such stellar brands don't twinkle alone. Off they rocket to a galaxy of VIP constellations — Aspen, Alfalfa, Davos, Jackson Hole, Munich, Milken, Sicily and Sun Valley (to say nothing of Bohemian Grove and Bilderberg) — where they signal-boost alongside the new "nomenkooltura" of Bono, Bill (x2), Blair and Bezos (to name just some of the B's).
Move Fast and Break Laws
Although the ability to enact and enforce legislation is a defining characteristic of the nation-state, private companies have long been powerful enough to shape public laws to their liking behind closed doors. Now, however, brands have also become far more willing — eager, even — to take overt positions on specific laws. In April, hundreds of companies (including Amazon, Google, Netflix and Starbucks Corp.) protested against restrictive voting laws in Georgia, Texas and other states. And in September, Lyft Inc. and Uber Technologies Inc. pledged to pay the legal fees of any of their drivers sued under Texas's draconian new abortion legislation. The "women-founded and women-led" dating app of Bumble Inc. went further, creating a "relief fund" for anyone seeking an abortion in Texas — a move that suggests a novel form of corporate jury nullification.
Of course, brands wield their greatest legal power at the tabula rasa stage of innovation — before relevant rules and regulations are even envisaged. Lumbering legislators in dozens of nation-states are confronting the consequences of a disruption culture that has learned it's better to sue for forgiveness than petition for permission.
Many of our buzziest brands have blitzscaled to vast market caps by leapfrogging their industry's rules and regulations and embedding themselves into society's fabric before elected politicians have uncapped their pens.
As we see from legal challenges across the globe, the most brazen examples are to be found in the sphere of peer-to-peer platforms, where accommodation companies sidestep the taxes paid by estate agents, renters and hoteliers; and ride-sharing brands circumvent the norms of employment and qualification. (It takes London's licensed cabbies three to four years to acquire "the Knowledge.")
However, innovation is outpacing legislation across a swathe of new industries — not least vaping, virtual reality, fintech, foodtech, drones and 3D-printing. As the U.S. Transportation Secretary Pete Buttigieg admitted to Axios, the technology powering self-driving vehicles has left Model T regulation in the dust:
"We've got a lot of very detailed authorities for regulating where a mirror ought to go. They don't even contemplate a scenario where we're talking about sensors doing the work that mirrors or human beings used to."
Finally, because taxation is just legislation with its hand out, multinationals have eagerly exploited global gaps and mismatches in nation-state tax codes to pay low or no corporate tax. According to the Organization for Economic Cooperation and Development, such "base erosion and profit shifting" strategies cost nation-states between $100 billion and $240 billion a year in lost revenue, or 4-10% of the global corporate income tax take.
Lebrandsraum
Space exploration was once the strategic preserve, technical province and patriotic pride of nation-states. Even if brands like IBM, Whirlpool and Hasselblad were central to the 1969 Apollo 11 mission, there was no question that the Moon's first flag would be that of the United States — and not the then-Grumman Corporation, which made the lunar module.
Fast-forward 50 years, and the answer is not so obvious.
Today's space race has been joined by Blue Origin, Space X and Virgin Orbit — the competitive brand-children of billionaire businessmen who are motivated less by romantic stargazing than rapacious freebooting.
As the Earth floods, burns, chokes and shakes, so Atlas shrugs — and the world's richest brandleaders set a course for Planet B as a place to dump heavy industry, lure tourists and escape the thin air of tax and regulation. The mission statement of Google's Lunar XPRIZE could hardly be blunter:
"Space exploration had been in the exclusive domain of governments. Commercial exploration had not been a viable option, restricting creativity and resources of private markets."
In our gilded age of cosmo-capitalism, the "final frontier" offers a libertarian terra nova beyond the wildest dreams of "seasteading," onto which cashtronauts can project their commercial cupidity unfettered by hidebound laws, dead-handed bureaucracy, parochial ideologies, arts-degree intellects and GAAP accounting.
Fast-forward another 50 years, and private brands may well be planting extraterrestrial territorial logos alongside the flags of nation-states. Elon Musk is already there: The terms and conditions of his satellite internet service Starlink boldly state:
"For Services provided on Mars, or in transit to Mars via Starship or other colonization spacecraft, the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities. Accordingly, Disputes will be settled through self-governing principles, established in good faith, at the time of Martian settlement."
Truly, the ego has landed.
UN-Branded
The United Nations currently grants "observer status" to some 120 intergovernmental organizations and specialized agencies — from the Sovereign Order of Malta to the International Seabed Authority. Doubtless, these bodies do admirable work, but is the Parliamentary Assembly of the Mediterranean vastly more vital to global democracy than Twitter Inc.? Or the International Telecommunication Union more unifying of international telecoms than Google?
And, if such brands might be granted observer status, why not full membership?
In 2011, the Republic of South Sudan became the latest country to join the UN after its people voted overwhelmingly to secede from the north. Last year, South Sudan's GDP was $4 billion — roughly equivalent to 2% of Jeff Bezos's current net worth, or 0.2% of Amazon's market cap. (A pandemic that made countries poorer made CEOs richer). Of course, such comparisons are not simply about relative sums of money, but power. If Amazon decided to follow Ben & Jerry's lead and pull its products from the Israeli-occupied territories — or indeed from any of the 188 countries with which Amazon does business — what might the impact be, and who would be able to stop it?
No third party has seriously impeded Facebook's growth, influence or toxicity — certainly not its much-vaunted Oversight Board, nor even Congress, which regularly beclowns itself in the presence of disruptive CEOs:
https://www.youtube.com/watch?v=n2H8wx1aBiQ
(Senator Asks How Facebook Remains Free, Mark Zuckerberg Smirks: 'We Run Ads' | NBC News)
This is not to say that South Sudan shouldn't be the UN's 193rd member — but why not make Amazon the 194th?
Of course, in some ways, it already is.
Despite the UN Secretary-General's rabble-rousing condemnation of "billionaires joyriding to space while millions go hungry on Earth," Antonio Guterres would doubtless jump at a meeting with Jeff Bezos with at least the same alacrity as he'd sit down with South Sudan's President Salva Kiir. Equally, if President Kiir could address either the General Assembly or Sun Valley, is it obvious which he would choose?
Also:
https://www.theverge.com/2021/10/4/22708989/instagram-facebook-outage-messenger-whatsapp-error
Quote: Facebook is down, along with Instagram, WhatsApp, Messenger, and Oculus VR
Wow - Facebook is down. I don't know if I've ever seen that before.
Quote from: Barrister on October 04, 2021, 12:26:48 PM
Wow - Facebook is down. I don't know if I've ever seen that before.
As are Insta and WhatsApp (which makes sense), and for some people/geographies Google too :hmm:
Oh, no!
Quote from: Barrister on October 04, 2021, 12:26:48 PM
Wow - Facebook is down. I don't know if I've ever seen that before.
It has happened before.
Apparently there are outages on Amazon Web Services too, which is pretty huge, and in the US there seem to be issues with accessing the internet from AT&T, T-Mobile and Verizon phones :mellow:
Oh, shit!
Quote from: Sheilbh on October 04, 2021, 01:42:13 PM
Apparently there are outages on Amazon Web Services too, which is pretty huge, and in the US there seem to be issues with accessing the internet from AT&T, T-Mobile and Verizon phones :mellow:
China preparing to attack Taiwan? :ph34r:
Is Languish up?
Considering the massive DNS issues Facebook and its affiliates have, including key card authorizations and phones not working (apparently they dispatched a team to reset the servers), I wouldn't be surprised if this was sabotage by a pissed off employee.
Did someone try turning it off and then back on yet?
Quote from: Syt on October 04, 2021, 04:05:48 PM
Considering the massive DNS issues Facebook and its affiliates have, including key card authorizations and phones not working (apparently they dispatched a team to reset the servers), I wouldn't be surprised if this was sabotage by a pissed off employee.
They'd have to cover their tracks pretty well though. There's gonna be an army of forensic tech investigators looking over this.
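As an aside, you can check from the outside whether an outage like this is a DNS failure (names not resolving at all) rather than the sites just being overloaded. A minimal sketch in Python — the hostnames are only examples, and the helper function is mine, not anything Facebook publishes:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        # getaddrinfo queries the system resolver (IPv4 and IPv6).
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        # Raised on NXDOMAIN, SERVFAIL, or no network at all.
        return False

if __name__ == "__main__":
    for host in ("facebook.com", "instagram.com", "whatsapp.com"):
        print(host, "resolves" if resolves(host) else "does NOT resolve")
```

During the outage all three reportedly failed to resolve, which points at DNS (or the routes to the DNS servers) rather than the web servers themselves.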
Facebook's products 'harm children, stoke division and weaken our democracy', whistleblower tells Congress (https://www.cnn.com/business/live-news/facebook-senate-hearing-10-05-21/index.html)
I think the most shocking part of Frances Haugen's testimony is that most news sources are treating it as a revelation rather than something that's blindingly obvious. For this reason I don't think Congress will do anything more than grandstand. It cannot possibly be news to them that Facebook stokes division.
It's weird how something that started to get college students laid turned into something for old people and angry conservatives.
Quote from: HVC on October 05, 2021, 04:00:43 PM
It's weird how something that started to get college students laid turned into something for old people and angry conservatives.
People grow up.
Quote from: Savonarola on October 05, 2021, 03:49:56 PM
Facebook's products 'harm children, stoke division and weaken our democracy', whistleblower tells Congress (https://www.cnn.com/business/live-news/facebook-senate-hearing-10-05-21/index.html)
I think the most shocking part of Frances Haugen's testimony is that most news sources are treating it as a revelation rather than something that's blindingly obvious. For this reason I don't think Congress will do anything more than grandstand. It cannot possibly be news to them that Facebook stokes division.
Well Republicans think FB foments divisions by censoring conservative opinions. :P
Quote from: Savonarola on October 05, 2021, 03:49:56 PM
Facebook's products 'harm children, stoke division and weaken our democracy', whistleblower tells Congress (https://www.cnn.com/business/live-news/facebook-senate-hearing-10-05-21/index.html)
I think the most shocking part of Frances Haugen's testimony is that most news sources are treating it as a revelation rather than something that's blindingly obvious. For this reason I don't think Congress will do anything more than grandstand. It cannot possibly be news to them that Facebook stokes division.
Yes and no.
I think that when we are talking about how big organizations - and indeed countries - "know" things collectively and act on them there's a sort of slow progress that goes something like this:
[People directly affected experience it]
--> [people with an interest have convincing theories and specific proofs]
--> [it is widely and hotly debated (often with plenty of inaccurate information in the mix)]
--> [it is no longer hotly debated but is generally "widely known"]
--> [it is officially registered as known and actionable by the decision-makers in the organization].
Sometimes that progress goes very fast. More frequently it's quite drawn out. And in many cases it gets stuck and never progresses.
I think this "testifies in front of Congress" thing is part of the process for the last step. Whether or not they act on it is a different matter of course.
Quote from: The Brain on October 05, 2021, 04:08:30 PM
Quote from: HVC on October 05, 2021, 04:00:43 PM
It's weird how something that started to get college students laid turned into something for old people and angry conservatives.
People grow up.
But it only took ten years.
Quote from: Eddie Teach on October 05, 2021, 05:07:26 PM
Quote from: The Brain on October 05, 2021, 04:08:30 PM
Quote from: HVC on October 05, 2021, 04:00:43 PM
It's weird how something that started to get college students laid turned into something for old people and angry conservatives.
People grow up.
But it only took ten years.
Closer to 18 years. After the first million, these kids became conservative very quickly.
Facebook is 18 years old *now*. It's been the domain of old folks for a while.
Quote from: Eddie Teach on October 05, 2021, 05:07:26 PM
Quote from: The Brain on October 05, 2021, 04:08:30 PM
Quote from: HVC on October 05, 2021, 04:00:43 PM
It's weird how something that started to get college students laid turned into something for old people and angry conservatives.
People grow up.
But it only took ten years.
Probably all the GM food.
Quote from: The Brain on October 05, 2021, 08:10:10 PM
Probably all the GM food.
You think player food would be better?
Quote from: Jacob on October 05, 2021, 10:25:37 PM
Quote from: The Brain on October 05, 2021, 08:10:10 PM
Probably all the GM food.
You think player food would be better?
Well, yeah. The GM doesn't have time to prepare tasty nosh, being busy preparing the campaign.
Don't worry, in Hungary they managed to make online content/news providers largely responsible for the contents of user comments under their articles. It has successfully saved most Hungarian online spaces from the worrying influence of unfiltered personal opinions. I am sure there's a way in America as well to suffocate online public discourse.
(https://pbs.twimg.com/media/FBBFLtqVIAEgUH9?format=png&name=small)
Can we get Kirk to engage Zuckerbot in a debate and cause it to short-circuit? :P
Quote from: Jacob on October 05, 2021, 05:06:26 PM
Quote from: Savonarola on October 05, 2021, 03:49:56 PM
Facebook's products 'harm children, stoke division and weaken our democracy', whistleblower tells Congress (https://www.cnn.com/business/live-news/facebook-senate-hearing-10-05-21/index.html)
I think the most shocking part of Frances Haugen's testimony is that most news sources are treating it as a revelation rather than something that's blindingly obvious. For this reason I don't think Congress will do anything more than grandstand. It cannot possibly be news to them that Facebook stokes division.
Yes and no.
I think that when we are talking about how big organizations - and indeed countries - "know" things collectively and act on them there's a sort of slow progress that goes something like this:
[People directly affected experience it]
--> [people with an interest have convincing theories and specific proofs]
--> [it is widely and hotly debated (often with plenty of inaccurate information in the mix)]
--> [it is no longer hotly debated but is generally "widely known"]
--> [it is officially registered as known and actionable by the decision-makers in the organization].
Sometimes that progress goes very fast. More frequently it's quite drawn out. And in many cases it gets stuck and never progresses.
I think this "testifies in front of Congress" thing is part of the process for the last step. Whether or not they act on it is a different matter of course.
That's a good point. With the exception of the cybersecurity issues, everything in Haugen's testimony was previously covered in the news. One wouldn't have had to be a Facebook employee to give her testimony (and she was a cybersecurity manager, so she wouldn't have had direct knowledge of most of the things she testified about.) The significance, though, is that she provided internal memos demonstrating that Facebook was aware of the problems it causes and chose to bury them. While it would have been much more shocking to learn that Facebook was unaware of the problems it was causing, the memos might be a significant step toward changing public perception.
And, though it is against my natural inclinations, to be fair to Congress this is new territory for them. Can they regulate social media algorithms? Is it desirable that they have this ability? If we regulate, say, Instagram because it damages the self esteem of young women, shouldn't we also regulate other things that exacerbate body issues, like Hollywood movies, Barbie dolls and Glamour magazines? This isn't something they can act quickly on.
I do think that we're in novel territory here, so precedents and ideals should at least not be uncritically accepted. Sometimes technological abilities upset the balance of issues so much that old arguments cannot be recycled.
For example, when it comes to privacy, some people claim that one has no right to privacy in a public space. Maybe it made sense in 1950, when a network of CCTV cameras with facial recognition software couldn't essentially map out all your movements, and the only invasion of privacy most people potentially faced was being a person of interest for a PI. Now that the government could retroactively be that PI on anyone they choose to take interest in, the very debate changes.
The same concept applies to social media and their algorithms. Human minds have obviously always had vulnerabilities, and humans fell prey to manipulators all the time. That didn't happen frequently enough to outweigh the risks of controlling speech that could zombify people. However, manipulation of such vulnerabilities has literally become a science in the last decade, and now there are ways to create an alternative reality for people through microtargeting that doesn't require a monopoly on information. Democracy relies on a sufficient number of people having a sufficient grasp on reality; you can't have debates without commonly accepted facts. Maybe it's time to be worried about more than just yelling "fire" in crowded theaters.
Better to have the cries of oppression be baseless.
Quote from: DGuller on October 06, 2021, 09:01:36 AM
The same concept applies to social media and their algorithms. Human minds have obviously always had vulnerabilities, and humans fell prey to manipulators all the time. That didn't happen frequently enough to outweigh the risks of controlling speech that could zombify people. However, manipulation of such vulnerabilities has literally become a science in the last decade, and now there are ways to create an alternative reality for people through microtargeting that doesn't require a monopoly on information. Democracy relies on a sufficient number of people having a sufficient grasp on reality; you can't have debates without commonly accepted facts. Maybe it's time to be worried about more than just yelling "fire" in crowded theaters.
The new tools are available to all. The forces of good (however those are defined) can use them to reach exactly those individuals who are in hate echo-chambers with targeted messages. I can't shake the suspicion, however, that in many cases they have nothing to say to them.
Quote from: The Brain on October 06, 2021, 09:42:52 AM
The new tools are available to all. The forces of good (however those are defined) can use them to reach exactly those individuals who are in hate echo-chambers with targeted messages. I can't shake the suspicion, however, that in many cases they have nothing to say to them.
Just because a tool is available to all doesn't mean it's equally effective for all. If a tool is aiding you in inspiring hate, then it's not very useful to people who want to inspire tolerance. Unfortunately, tolerance does not inspire as strong an emotional response as hate does, so any tool that appeals to the basest emotions will have disparate utility for the forces of evil.
Quote from: DGuller on October 06, 2021, 09:54:46 AM
Quote from: The Brain on October 06, 2021, 09:42:52 AM
The new tools are available to all. The forces of good (however those are defined) can use them to reach exactly those individuals who are in hate echo-chambers with targeted messages. I can't shake the suspicion, however, that in many cases they have nothing to say to them.
Just because a tool is available to all doesn't mean it's equally effective for all. If a tool is aiding you in inspiring hate, then it's not very useful to people who want to inspire tolerance. Unfortunately, tolerance does not inspire as strong an emotional response as hate does, so any tool that appeals to the basest emotions will have disparate utility for the forces of evil.
I am not convinced that limiting people's communication is a tool that gives the advantage to tolerance.
Quote from: The Brain on October 06, 2021, 10:32:45 AM
I am not convinced that limiting people's communication is a tool that gives the advantage to tolerance.
Not in any circumstance?
Quote from: DGuller on October 06, 2021, 10:51:45 AM
Quote from: The Brain on October 06, 2021, 10:32:45 AM
I am not convinced that limiting people's communication is a tool that gives the advantage to tolerance.
Not in any circumstance?
In most fields you can think up specific scenarios where going against best practice gives a desired outcome.
(https://pbs.twimg.com/media/FBB__DeWQAI5Z_E?format=png&name=medium)
I love that China's social media platform is Qzone; that's what the intranet was called at my previous employer. :lol:
Quote from: DGuller on October 06, 2021, 09:01:36 AM
I do think that we're in novel territory here, so precedents and ideals should at least not be uncritically accepted. Sometimes technological abilities upset the balance of issues so much that old arguments cannot be recycled.
For example, when it comes to privacy, some people claim that one has no right to privacy in a public space. Maybe it made sense in 1950, when a network of CCTV cameras with facial recognition software couldn't essentially map out all your movements, and the only invasion of privacy most people potentially faced was being a person of interest for a PI. Now that the government could retroactively be that PI on anyone they choose to take interest in, the very debate changes.
The same concept applies to social media and their algorithms. Human minds have obviously always had vulnerabilities, and humans fell prey to manipulators all the time. That didn't happen frequently enough to outweigh the risks of controlling speech that could zombify people. However, manipulation of such vulnerabilities has literally become a science in the last decade, and now there are ways to create an alternative reality for people through microtargeting that doesn't require a monopoly on information. Democracy relies on a sufficient number of people having a sufficient grasp on reality; you can't have debates without commonly accepted facts. Maybe it's time to be worried about more than just yelling "fire" in crowded theaters.
The thing to realize about this is that the science is still in its infancy.
This is the brick suitcase mobile phone level of this technology. It will get better, and it will get better exponentially fast. This is just the tip of the iceberg.
Quote from: The Brain on October 06, 2021, 09:42:52 AM
The new tools are available to all. The forces of good (however those are defined) can use them to reach exactly those individuals who are in hate echo-chambers with targeted messages. I can't shake the suspicion, however, that in many cases they have nothing to say to them.
I think the reality of market forces and the lack of controls to prevent monopolies (local or more widespread) means that the new tools are indeed not available to all.
Quote from: The Brain on October 06, 2021, 09:42:52 AM
Quote from: DGuller on October 06, 2021, 09:01:36 AM
The same concept applies to social media and their algorithms. Human minds have obviously always had vulnerabilities, and humans fell prey to manipulators all the time. That didn't happen frequently enough to outweigh the risks of controlling speech that could zombify people. However, manipulation of such vulnerabilities has literally become a science in the last decade, and now there are ways to create an alternative reality for people through microtargeting that doesn't require a monopoly on information. Democracy relies on a sufficient number of people having a sufficient grasp on reality; you can't have debates without commonly accepted facts. Maybe it's time to be worried about more than just yelling "fire" in crowded theaters.
The new tools are available to all. The forces of good (however those are defined) can use them to reach exactly those individuals who are in hate echo-chambers with targeted messages. I can't shake the suspicion, however, that in many cases they have nothing to say to them.
That is like saying nuclear weapons are available to the good guys as well.
There is nothing about technology or tools that demands that they theoretically be equally useful for both good and bad under any particular set of specific circumstances.
You might be right, but there isn't any reason to assume that you are right absent some very conscious effort by us humans to think hard about those tools, how they are used, and regulate them appropriately so their negative utility is minimized and their positive utility maximized. Just like we do with other dangerous tools. We don't just assume that guns will naturally be equally useful to good and bad actors. We don't assume that cars will be net positive to human society. Rather we structure our laws around how to minimize the negative effects of tools and maximize the positive.
Looking at the map above I think I'll take my chances with non-state-controlled social media.
Quote from: The Brain on October 06, 2021, 01:00:00 PM
Looking at the map above I think I'll take my chances with non-state-controlled social media.
There may not be a choice in the matter. It may well be that social media will either be controlled by a state seeking to protect its democracy, or it will be controlled by a state seeking to protect its autocratic rule once it successfully toppled democracy.
I am fairly pessimistic about the future of democracy. I hope it will somehow weather the current crisis, but it may well be the case that the experiment will come to an end (modern democracy has only existed for a century). Freedom of speech and the rule of law could in theory survive the fall of democracy, but my guess is they will be gone too.
Most major Western countries are still democracies. Ie their peoples decide their fate. I have neither obligation nor inclination to protect peoples of democracies from the effects of their actions. If they want to end democracy it will end. Hopefully with a whimper and not with a bang.
I think freedom of speech is a very important component of a good society. AFAICT the state telling people that they are wrongspeaking or telling them how they should communicate is not the way forward. There are limits to freedom of speech in place, and I think they are sufficient (in Sweden the limits are too strict IMHO). The state telling haters "honey, you're wrongfeeling again" is not the way forward.
I am fully aware that few people are in favor of democracy, freedom of speech, or the rule of law when they go against them. Even if they all go against me I will consider them better than the alternative until I see evidence of a better option. I may well see such evidence at some point in the future, but at present I do not.
Those who have read my crap over the years know that I don't like Communism. Sweden has had a Communist party in parliament for many decades. I think their presence has poisoned Swedish political discourse and has led to worse political outcomes for the country. But I have NEVER been of the opinion that they shouldn't be allowed to freely spew their hateful garbage. Because it is not the way.
The Brain, what do you think of rules that:
1. Put some level of limit on the types of lies that can be presented as the truth?
2. Require some level of "fair access" for different perspectives on a market level (i.e. prevent a monopoly or duopoly from pushing specific political viewpoints while freezing out all others)?
3. Provide some set of limits on how private actors can engage with and/or spend money on elections?
As I understand it democracies have had varying levels of strict laws on those topics at different times and places. From my perspective, it looks like democracy tends to be weaker once those rules get too weak. That is not to say that autocrats and worse can't use similar arguments to stifle free expression and their opposition, but to my eyes that's typically not associated merely with regulation.
Quote from: Jacob on October 06, 2021, 05:19:58 PM
The Brain, what do you think of rules that:
1. Put some level of limit on the types of lies that can be presented as the truth?
2. Require some level of "fair access" for different perspectives on a market level (i.e. prevent a monopoly or duopoly from pushing specific political viewpoints while freezing out all others)?
3. Provide some set of limits on how private actors can engage with and/or spend money on elections?
As I understand it democracies have had varying levels of strict laws on those topics at different times and places. From my perspective, it looks like democracy tends to be weaker once those rules get too weak. That is not to say that autocrats and worse can't use similar arguments to stifle free expression and their opposition, but to my eyes that's typically not associated merely with regulation.
1. I don't have a problem with the present situation where there's a bunch of situations where lies are illegal for specific reasons. For instance declared contents of food etc etc etc. As for some kind of general ban on lying I think it's a horrible concept. In the words of Pontius Pilate: "And what is truth? Is truth unchanging law? We both have truths. Are mine the same as yours?". Even if you somehow restrict it to harmful lies it would pretty much kill off political and religious communication, among others.
2. I'm fine with them when there is an actual monopoly, for instance back in the day when Sweden had a state monopoly on radio and TV news. In the age of the internet of course there is very rarely an actual monopoly. For instance Facebook isn't even close to having a monopoly on information on the internet.
3. I don't have a problem with the normal limits that exist in modern democracies (my uninformed impression is that there are significant differences in details between different countries). NB I don't even know what they look like in Sweden.
Quote from: The Brain on October 06, 2021, 03:54:03 PM
I am fairly pessimistic about the future of democracy. I hope it will somehow weather the current crisis, but it may well be the case that the experiment will come to an end (modern democracy has only existed for a century). Freedom of speech and the rule of law could in theory survive the fall of democracy, but my guess is they will be gone too.
Most major Western countries are still democracies. Ie their peoples decide their fate. I have neither obligation nor inclination to protect peoples of democracies from the effects of their actions. If they want to end democracy it will end. Hopefully with a whimper and not with a bang.
I think freedom of speech is a very important component of a good society. AFAICT the state telling people that they are wrongspeaking or telling them how they should communicate is not the way forward. There are limits to freedom of speech in place, and I think they are sufficient (in Sweden the limits are too strict IMHO). The state telling haters "honey, you're wrongfeeling again" is not the way forward.
I am fully aware that few people are in favor of democracy, freedom of speech, or the rule of law when they go against them. Even if they all go against me I will consider them better than the alternative until I see evidence of a better option. I may well see such evidence at some point in the future, but at present I do not.
Those who have read my crap over the years know that I don't like Communism. Sweden has had a Communist party in parliament for many decades. I think their presence has poisoned Swedish political discourse and has led to worse political outcomes for the country. But I have NEVER been of the opinion that they shouldn't be allowed to freely spew their hateful garbage. Because it is not the way.
I don't think I am talking about the same thing you are talking about. I am not suggesting any kind of controls on free speech in the manner you are rejecting. At least...I don't think that I am.
I mean (take that, Yi), are there individual state actions regarding this problem that in isolation wouldn't be a huge problem and that I could be OK with as isolated actions? Probably. But I am very wary of the state taking actions against ways of communicating that the state thinks have resulted in wrongthink. In addition to other considerations there are huge numbers of people on the "good" side of the aisle who have very little regard for freedom of speech or understanding of the deeper advantages of it; if they smell blood, a feeding frenzy isn't far away.
I have observed in Sweden a trend that I find disturbing. As part of fighting the good fight against the rising Sverigedemokraterna (SD, the xenophobic nutjob party) some established parties have dismantled a bunch of valuable taboos. They have given detailed political directives regarding what state museums should say about their fields. They have done the same for government-funded arts, science and similar. They are shamelessly discussing changing the constitution to make it harder for SD to mess with it. In short, they have gone out of their way to kill off taboos that would have slowed down the SD if/when they get in power (and also made museums and others say weird stuff cheapening them etc etc which in itself is quite harmful). Telling museums and others that they should now talk about the glorious history of white Swedes if they want funding? Changing the constitution to limit the power of non-SD parties? Yeah sure go ahead, these are all normal and established practice now. I think this kind of strategy is unsound.
I share some of your concerns (I mean), and that's why I think it's a huge blessing that private companies are trying to correct misinformation without the government's involvement.
If the US had the modern equivalent of the Fairness Doctrine then perhaps people would not have to rely on private actors with a near monopoly doing the right thing.
Quote from: The Brain on October 06, 2021, 05:49:08 PM
Quote from: Jacob on October 06, 2021, 05:19:58 PM
The Brain, what do you think of rules that:
1. Puts some level of limit on the type of lies that can be presented as the truth?
2. Require some level of "fair access" for different perspectives on a market level (i.e. prevent a monopoly or duopoly from pushing specific political viewpoints while freezing out all others)?
3. Provides some set of limits to how private actors can engage with and/ or spend money on elections?
As I understand it democracies have had varying levels of strict laws on those topics at different times and places. From my perspective, it looks like democracy tends to be weaker once those rules get too weak. That is not to say that autocrats and worse can't use similar arguments to stifle free expression and their opposition, but to my eyes that's typically not associated merely with regulation.
1. I don't have a problem with the present situation where there's a bunch of situations where lies are illegal for specific reasons. For instance declared contents of food etc etc etc. As for some kind of general ban on lying I think it's a horrible concept. In the words of Pontius Pilate: "And what is truth? Is truth unchanging law? We both have truths. Are mine the same as yours?". Even if you somehow restrict it to harmful lies it would pretty much kill off political and religious communication, among others.
2. I'm fine with them when there is an actual monopoly, for instance back in the day when Sweden had a state monopoly on radio and TV news. In the age of the internet of course there is very rarely an actual monopoly. For instance Facebook isn't even close to having a monopoly on information on the internet.
3. I don't have a problem with the normal limits that exist in modern democracies (my uninformed impression is that there are significant differences in details between different countries). NB I don't even know what they look like in Sweden.
1. "Hillary Clinton is involved with a pedo ring run out of a pizza shop" was a lie. Under US law such lies are somehow allowed. It never becomes a truth. Just a lie which continues to be told as truth.
2. The problem is when a person taking in the information does not see other points of view. If the bar is set so high that nothing should be done until one entity dominates all information it would become impossible to have a sane regulatory environment. The simple solution is that no mode of communication can have only one point of view - regulated balance.
Quote from: crazy canuck on October 08, 2021, 04:10:14 PM
If the US had the modern equivalent of the Fairness Doctrine then perhaps people would not have to rely on private actors with a near monopoly doing the right thing.
If people only understood the meaning of words like "monopoly" and phrases like "the modern equivalent of the Fairness Doctrine" they wouldn't say silly things like this.
The problem with Big Tech and Social media isn't the providers, it is the users.
No one can force users to see alternate points of view, nor even force them to understand the importance of seeing opposing points of view. One can only educate them on the dangers of seeing only those POVs that they already agree with, and hope that some, at least, will break away from the cults. Deprogramming is hard, but it generally works.
Quote from: grumbler on October 08, 2021, 06:56:55 PM
The problem with Big Tech and Social media isn't the providers, it is the users.
No one can force users to see alternate points of view, nor even force them to understand the importance of seeing opposing points of view. One can only educate them on the dangers of seeing only those POVs that they already agree with, and hope that some, at least, will break away from the cults. Deprogramming is hard, but it generally works.
Who are the ones who will engage in the deprogramming? And how do they go about it?
Quote from: grumbler on October 08, 2021, 06:56:55 PM
No one can force users to see alternate points of view, nor even force them to understand the importance of seeing opposing points of view. One can only educate them on the dangers of seeing only those POVs that they already agree with, and hope that some, at least, will break away from the cults. Deprogramming is hard, but it generally works.
Who's going to force the users to undergo deprogramming, though? From what I understand, it's generally not an activity one chooses for themselves to undergo, and conceptually it probably can't be.
Quote from: grumbler on October 08, 2021, 06:56:55 PM
The problem with Big Tech and Social media isn't the providers, it is the users.
No one can force users to see alternate points of view, nor even force them to understand the importance of seeing opposing points of view. One can only educate them on the dangers of seeing only those POVs that they already agree with, and hope that some, at least, will break away from the cults. Deprogramming is hard, but it generally works.
Facebook is probably looking for some good PR people at the moment. "Big Tech isn't the problem, it's the users" may be exactly the slogan they are looking for. Might even fool some people into thinking the user gets to choose whatever it is they want to see and that specific information isn't actually being fed to them.
Quote from: Jacob on October 08, 2021, 07:13:31 PM
Who are the ones who will engage in the deprogramming? And how do they go about it?
Big Tech and Social Media. They just need incentives, like freedom from foundations dedicated to making them conform.
Quote from: grumbler on October 08, 2021, 09:02:39 PM
Big Tech and Social Media. They just need incentives, like freedom from foundations dedicated to make them conform.
I suspect big tech and social media will - absent external pressure - prioritize 1) generating profit for shareholders; and 2) wielding political influence to serve their business interests.
I still think the novelty of people falling into echo chambers and the evil influences is overstated, so I am with The Brain on this one.
Sure, there are people who actually believe Hillary Clinton ran a pedo ring from a pizza place but are they really a bigger portion of the populace than, say, followers of Scientology? And the "milder" form of Trumpism where everything not supporting their ignorant narrative is a lie - I hate to break it to you people but that's how a lot of people thought about politics before the Internet. You just weren't aware of it. If this level of ignorance can overtake half your country the problem perhaps is not with Facebook.
And making very sure The Truth is labelled as such and distributed won't help much on its own. When Protestantism started to take hold, official channels of communication very forcefully branded it fake news, to no avail. In communist Hungary right-wing and nationalistic thoughts were -sometimes actively, sometimes passively- discouraged and hounded and considered offensive for 50 years, but came out to the open the moment it was safe to do so. Etc.
If anything, what should be done -like my esteemed Swedish colleague mentioned- is to utilise the tools to spread the facts or even your own echo chambers' thoughts, instead of trying to suppress those tools. The state wants to spread fact sheets to people exclusively watching Stop the Steal channels? Tag their fact videos with "#stopthesteal". When radio and TV came around the answer (most) states found wasn't to fight it and ban it but to set up channels which would spread their versions of events, which in better countries was mostly the truth. I can't see any other viable option here either. If they are really worried they can force Facebook to put their state channel into all users' news feeds. The ones you really want to reach will still ignore these but then they'd do so with everything else.
Then Trump's spiritual successor with actual brains will come around and turn all this against you.
I really have a hard time seeing how things are the same as they always have been. I'm not laboring under the illusion that the average voter was a hyper-rational human being in the past, but their critical thinking weaknesses weren't intentionally or unintentionally abused to the max by the algorithms. Both anecdotally and statistically, it seems like people are far more entrenched in their echo chambers than they've ever been, to the point that many are openly discussing a coup or civil war.
As I said previously, Trump's competent successor won't need anything in place to turn it against us, he'll build something up from scratch. This is the same kind of thinking that keeps Democrats on the backfoot politically, as they refuse to challenge the norms, ceding the ground to Republicans who have no such inhibitions.
Quote from: Tamas on October 09, 2021, 03:48:25 AM
I still think the novelty of people falling into echo chambers and the evil influences is overstated, so I am with The Brain on this one.
Sure, there are people who actually believe Hillary Clinton ran a pedo ring from a pizza place but are they really a bigger portion of the populace than, say, followers of Scientology? And the "milder" form of Trumpism where everything not supporting their ignorant narrative is a lie - I hate to break it to you people but that's how a lot of people thought about politics before the Internet. You just weren't aware of it. If this level of ignorance can overtake half your country the problem perhaps is not with Facebook.
And making very sure The Truth is labelled as such and distributed won't help much on its own. When Protestantism started to take hold, official channels of communication very forcefully branded it fake news, to no avail. In communist Hungary right-wing and nationalistic thoughts were -sometimes actively, sometimes passively- discouraged and hounded and considered offensive for 50 years, but came out to the open the moment it was safe to do so. Etc.
If anything, what should be done -like my esteemed Swedish colleague mentioned- is to utilise the tools to spread the facts or even your own echo chambers' thoughts, instead of trying to suppress those tools. The state wants to spread fact sheets to people exclusively watching Stop the Steal channels? Tag their fact videos with "#stopthesteal". When radio and TV came around the answer (most) states found wasn't to fight it and ban it but to set up channels which would spread their versions of events, which in better countries was mostly the truth. I can't see any other viable option here either. If they are really worried they can force Facebook to put their state channel into all users' news feeds. The ones you really want to reach will still ignore these but then they'd do so with everything else.
Radio and TV were both very regulated industries. That was partially a result of the limitations of their technology - the government controlled the bandwidth of both, and hence could mandate restrictions on content.
Internet bandwidth is effectively unlimited, and there is zero control on content.
Referring to "history" here makes no damn sense. Historically, the government regulated media companies extensively. What we are talking about now is regulating this industry as well.
I think the current attempts to identify and "ban" content that some person or group considers "fake news" are an overly blunt, restrictive, and eventually fruitless endeavor. The problem is not the content, the problem is the delivery and using technology to craft more and more extreme messages in order to capture a viewership. Fox News did this even without the internet, and the bad faith actors (and the neutral actors who have perverse incentives to assist them) are only going to get better and better and better at crafting that content.
Hell, you don't even have to lie to do this - you can craft an outrage machine while never technically saying something that is strictly a lie.
Did you guys see how apparently OAN is basically funded by AT&T?
QuoteAT&T is pushing back. In a statement to Deadline's Dade Hayes, AT&T said it "never had a financial interest in OAN's success and does not 'fund' OAN. When AT&T acquired DirecTV, we refused to carry OAN on that platform, and OAN sued DirecTV as a result. Four years ago, DirecTV reached a commercial carriage agreement with OAN, as it has with hundreds of other channels and as OAN has done with the other TV providers that carry its programming."
https://www.poynter.org/commentary/2021/is-att-backing-one-america-news/
I had not so I looked it up.
I own AT&T stock. :)
I wonder if this is less about big tech and social media (I think there are other issues there) - and if it's maybe the mainstreaming of internet/very online culture.
I am too old to use Tik Tok - but I saw this article on "couch guy":
https://www.cnet.com/news/who-is-couch-guy-on-tiktok-the-internets-latest-obsession-explained/
Basically there was a Tik Tok of a girl surprising her college boyfriend with some sappy music on top - he looks a bit shocked and gets up slowly to hug her. Since then #couchguy hashtagged videos have over 650 million views. It's broken into two broad streams: one is second-by-second breakdowns of the video with all couch guy's questionable behaviour and red flags (he's sat on a couch with three girls, does the girl he's next to slip him his phone? why?); the other is parodies (guy who's clearly not overjoyed to see his girlfriend, guy surrounded by women cooing over him greeting his girlfriend; and, inevitably, meta-parodies of the true crime sleuthing). There's now also people doing their own surprise reunion videos as well as push back against the "couch guy is cheating scum" discourse.
This isn't directly related to politics or anything you guys have been talking about. But the first video was posted about two weeks ago and has already blown up, and (as someone smarter than me pointed out on Twitter) something about social media is that it creates the content and the theories and the analysis (hello languish! :o) and the conspiracies that you used to need an entire dedicated community to do- 4chan, subreddits etc. This is created more or less spontaneously and instantly. And if you go back beyond the 4chan/subreddit communities you'd find the fringe UFO or Jim Garrison JFK world where images of a UFO or the Zapruder film were microanalysed by obsessives.
I think when social media emerged the expectation of its creators and all of us as users was that it would be a new way of communicating but it would be our world going online. What, I think, has happened is actually we have mainstreamed the very online culture that was once primarily populated by shut-ins and other internet eccentrics. That is now a large part of online discourse and I think that drives a lot of these issues - trolling, disinformation, extremism, conspiracy-mindedness etc. It's what used to be a very niche and quite guarded online world (like Languish) becoming one of the main streams of the internet.
I don't think it's very healthy for us generally, but I do regularly think that stuff I see reminds me of a subreddit gone rogue except it's now massive.
Quote from: Jacob on October 09, 2021, 02:56:28 PM
Did you guys see how apparently OAN is basically funded by AT&T?
Yes. And I'm not convinced this report is entirely fact-based. Not without some more evidence than the testimony of a known liar.
Quote from: Tamas on October 09, 2021, 03:48:25 AM
I still think the novelty of people falling into echo chambers and the evil influences is overstated, so I am with The Brain on this one.
Sure, there are people who actually believe Hillary Clinton ran a pedo ring from a pizza place but are they really a bigger portion of the populace than, say, followers of Scientology? And the "milder" form of Trumpism where everything not supporting their ignorant narrative is a lie - I hate to break it to you people but that's how a lot of people thought about politics before the Internet. You just weren't aware of it. If this level of ignorance can overtake half your country the problem perhaps is not with Facebook.
Yes, the number of people who believe that Hillary Clinton is a bloodthirsty cannibal is much greater than the number who believe in Scientology. 15 percent of the American public believe that the world is run by satanic pedophiles. There are about 40,000 Scientologists worldwide.
Ok, but the Scientologists have more power and possibly more money than that 15%. ;)
Someone really should have told Mark Zuckerberg that Neuromancer is supposed to be a dystopian science fiction novel, not a business plan:
QuoteFacebook is planning to rebrand the company with a new name
Mark Zuckerberg wants to be known for building the METAVERSE!
Facebook is planning to change its company name next week to reflect its focus on building the METAVERSE!, according to a source with direct knowledge of the matter.
The coming name change, which CEO Mark Zuckerberg plans to talk about at the company's annual Connect conference on October 28th, but could unveil sooner, is meant to signal the tech giant's ambition to be known for more than social media and all the ills that entail. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.
Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, "we will effectively transition from people seeing us as primarily being a social media company to being a METAVERSE! company."
A rebrand could also serve to further separate the futuristic work Zuckerberg is focused on from the intense scrutiny Facebook is currently under for the way its social platform operates today. A former employee turned whistleblower, Frances Haugen, recently leaked a trove of damning internal documents to The Wall Street Journal and testified about them before Congress. Antitrust regulators in the US and elsewhere are trying to break the company up, and public trust in how Facebook does business is falling.
Facebook isn't the first well-known tech company to change its company name as its ambitions expand. In 2015, Google reorganized entirely under a holding company called Alphabet, partly to signal that it was no longer just a search engine, but a sprawling conglomerate with companies making driverless cars and health tech. And Snapchat rebranded to Snap Inc. in 2016, the same year it started calling itself a "camera company" and debuted its first pair of Spectacles camera glasses.
I'm told that the new Facebook company name is a closely-guarded secret within its walls and not known widely, even among its full senior leadership. A possible name could have something to do with Horizon, the name of the still-unreleased VR version of Facebook-meets-Roblox that the company has been developing for the past few years. The name of that app was recently tweaked to Horizon Worlds shortly after Facebook demoed a version for workplace collaboration called Horizon Workrooms.
FACEBOOK HAS BEEN LAYING THE GROUNDWORK FOR A BRANDING CHANGE
Aside from Zuckerberg's comments, Facebook has been steadily laying the groundwork for a greater focus on the next generation of technology. This past summer it set up a dedicated METAVERSE! team. More recently, it announced that the head of AR and VR, Andrew Bosworth, will be promoted to chief technology officer. And just a couple of days ago Facebook announced plans to hire 10,000 more employees to work on the METAVERSE! in Europe.
The METAVERSE! is "going to be a big focus, and I think that this is just going to be a big part of the next chapter for the way that the internet evolves after the mobile internet," Zuckerberg told The Verge's Casey Newton this summer. "And I think it's going to be the next big chapter for our company too, really doubling down in this area."
Complicating matters is that, while Facebook has been heavily promoting the idea of the METAVERSE! in recent weeks, it's still not a concept that's widely understood. The term was coined originally by sci-fi novelist Neal Stephenson to describe a virtual world people escape to from a dystopian, real world. Now it's being adopted by one of the world's largest and most controversial companies — and it'll have to explain why its own virtual world is worth diving into.
But don't worry Facebook (or whatever it will be called) will build a responsible
METAVERSE! (https://about.fb.com/news/2021/09/building-the-metaverse-responsibly/) :)
I hope the exclamation marks are yours...
Quote from: Eddie Teach on October 20, 2021, 10:06:30 AM
I hope the exclamation marks are yours...
They're implied in the original article and Facebook page.
And inspired by Darren Aronofsky's mother!
I'm doubting that the name change in the article is real. First of all, it's incredibly dumb-sounding, and secondly, it is unlikely that a new name not even known to the senior executives is going to be leaked to The Verge.
I don't think that Metaverse is going to be the new brand name (it seems to come from the Zuck quote mentioned in the article).
It might indeed be a move like Google, where everything is under Alphabet, but the existing brands are what the average consumer interacts with.
Yeah, as I understand it "metaverse" (or "METAVERSE!") is the buzz-word describing the new... uh... tech paradigm? Like Web 2.0, IoT, Cyberspace, Virtual Reality et. al.
Personally, I'm expecting the rebranded name to be METAFACE, with MetaZUCK being the runner up.
I actually kind of hope they go for MEGACORP or something. :D
Zuckerberg is a big enough nerd he should go with Compu Global Hyper Meganet.
Quote from: Syt on October 20, 2021, 12:49:31 PM
I actually kind of hope they go for MEGACORP or something. :D
MetaCorp – run by MetaZuck who desires to rule
THE METAVERSE! Only a dedicated group of metallurgists and metaphysical poets (a holdout from The Protectorate) dare to defy MetaZuck. With the power of metamaterials and hyperlinear poems (which some may term "Meta-verse") they've hindered him. In order to triumph MetaZuck realizes that he must become
THE METAVERSE! (This is a terrible fate as MetaZuck must always sit still and so must subsist solely on a diet of Metamucil). While MetaZuck has been able to program
THE METAVERSE! so that it doesn't explode when exposed to paradoxes; alas he failed to consider metaphor or metaphysical wit.
Exciting climactic scene:
The chair MetaZuck sat upon, like an iron throne, glittered in the pale blue light of a thousand monitors that filled the cavernous chamber. Minions, clad head to toe in black, stood as statues in front of the black-as-mourning doors. From the base of MetaZuck's skull and from his arms and legs, wires and cables ran in profusion across the slate floor behind the rows of monitors. In the dim light only MetaZuck's eyes showed; those eyes, those terrible eyes, those lidless yellow eyes which seemed to know every secret, every thought.
"Bring him" he sneered, his voice filled with cold command.
His minions left and swiftly returned bearing the body of a man in colorful clothes.
He looked upon the limp body for a moment and chuckled. "Wake him."
Another minion brought out a bucket of water and doused the man.
"At my back I always hear," he muttered as he shook his head and slowly sat up.
"So this is the great Marvell," said MetaZuck.
"Illustrious is the term I prefer," replied Marvell.
"Illustrious or otherwise you have failed. My minions have terminated Donne and Herbert, only you remain."
"What about Crashaw?"
"Who?"
"Never mind."
"You shall bear witness to my triumph. All knowledge is mine. I know all ; your deepest, darkest secrets are all mine. Your secret support of the monarchy, your flirtation with Catholicism, I know it all."
"So now you reign as lord of THE METAVERSE!"
"I am more than lord. I do not rule THE METAVERSE! I am THE METAVERSE!"
"But you are doomed to sit there through deserts of vast eternity."
"There's no need to walk in THE METAVERSE!"
"Then you shall have naught but quaint memories of vegetable love, a smile, the stroke of the fur of a cat."
"Very funny, I am no Blofeld and I do not have a cat."
"The cat is a metaphor."
"ERROR" said MetaZuck, mechanically "CANNOT PROCESS, A cat is a cat."
"Not necessarily, not always, just as you are a metaphoric king for you rule a land of barren hillocks and sterile orchids where the hum of amorous birds is never heard."
"ERROR" repeated MetaZuck, "CANNOT PROCESS."
"What need is there to process in your parched land where shoots and stalks hang limp in the breeze never to bloom?"
"ERROR," MetaZuck said, "ERROR, ERROR" his eyes flickered and then shut off, lifeless, cold and dead. One by one the monitors shut off. In the distance there was an explosion, and then another.
"It looks like he didn't have but world enough and time," said Marvell. The minions stared and then alarms started to sound. They looked at each other and turned to run.
The only applicable metaphysical poetry I know is from early 80s pop
I'm coding all the things that I know you'll like
Making good programming
I gotta handle it just right
You know what I mean
I took you to an intimate Starbucks
Then to a suggestive VR meet
There's nothing left to develop
Unless it's binarily
Let's get metaphysical, metaphysical
I wanna get metaphysical
Let's get into metaphysical
Let me hear your ousia talk, your ousia talk
Let me hear your ousia talk
Let's get metaphysical, metaphysical
I wanna get metaphysical
Let's get into metaphysical
Let me hear your ousia talk, your ousia talk
Let me hear your ousia talk
It's hard to think of a company I trust less than Facebook to do this.
Quelle surprise here.
Though somewhat amusing to me as in recent months I have been seeing far right folk referring to "the twitter crowd" and taking it as read that this obviously means the left.
BBC News - Twitter's algorithm favours right-leaning politics, research finds
https://www.bbc.co.uk/news/technology-59011271
Quote from: Tyr on October 22, 2021, 12:10:40 PM
Quelle surprise here.
Though somewhat amusing to me as in recent months I have been seeing far right folk referring to "the twitter crowd" and taking it as read that this obviously means the left.
BBC News - Twitter's algorithm favours right-leaning politics, research finds
https://www.bbc.co.uk/news/technology-59011271
There's other research that about 70% of Twitter users identify as on the centre or extreme left - I think especially among frequent tweeters.
I don't know about amplifying tweets and I don't use "Home" view because I hate it :blush: But in terms of users I think Twitter is definitely left while Facebook is far more right - which might just reflect the user base: Twitter is younger, more educated, with (in my experience) a lot more media/arts/journalist people.
Quote from: Sheilbh on October 22, 2021, 12:17:55 PM
Quote from: Tyr on October 22, 2021, 12:10:40 PM
Quelle surprise here.
Though somewhat amusing to me as in recent months I have been seeing far right folk referring to "the twitter crowd" and taking it as read that this obviously means the left.
BBC News - Twitter's algorithm favours right-leaning politics, research finds
https://www.bbc.co.uk/news/technology-59011271
There's other research that about 70% of Twitter users identify as on the centre or extreme left - I think especially among frequent tweeters.
I don't know about amplifying tweets and I don't use "Home" view because I hate it :blush: But in terms of users I think Twitter is definitely left while Facebook is far more right - which might just reflect the user base: Twitter is younger, more educated, with (in my experience) a lot more media/arts/journalist people.
Twitter is also much, much smaller.
Twitter has 200 million daily active users. That's pretty good. But Facebook has 1.9 billion daily active users.
From tech reporter, Mark Di Stefano:
QuoteMark Di Stefano
@MarkDiStef
On Monday, more than a dozen news organisations including AP, CNN, USA Today and Fox (!) are planning to drop stories on Facebook's head. They've been coordinating on the whistleblower's documents through a private Slack group. Story with @SylviaVarnham
:hmm: :mmm:
Here's hoping it's something good!
Nick Clegg doing a sterling job yet again :lol:
A guy I know has just made a lot of money, I suspect several thousand, from the pump and dump of stocks in trumps social media thingy.
Nice to see the trumpy wealth being milked by normal people a bit.
Just wish I'd been paying attention :lol: :(
Quote from: Tyr on October 23, 2021, 04:55:22 PM
A guy I know has just made a lot of money, I suspect several thousand, from the pump and dump of stocks in trumps social media thingy.
Nice to see the trumpy wealth being milked by normal people a bit.
Just wish I'd been paying attention :lol: :(
So it's commendable to defraud Trump supporters. :)
Not sure commendable is the right word. Funny maybe?
What happened?
I can't help being a bit sour seeing governments and "traditional" media being in such agreement over curbing and censoring social media. It might be necessary, but it's quite clear why these actors are eager to do it.
Quote from: Eddie Teach on October 24, 2021, 12:46:56 AM
What happened?
A Trump-aligned social media platform just merged with a SPAC and went public via the back door. The stock (DWAC) shot up from ~$10 to ~$130, then slumped down to ~$90, all in the course of about three days.
Not exactly a traditional pump and dump, but there's plenty of reason for skepticism. This ain't the pink sheets, though.
Quote from: Admiral Yi on October 23, 2021, 10:42:38 PM
Quote from: Tyr on October 23, 2021, 04:55:22 PM
A guy I know has just made a lot of money, I suspect several thousand, from the pump and dump of stocks in Trump's social media thingy.
Nice to see the trumpy wealth being milked by normal people a bit.
Just wish I'd been paying attention :lol: :(
So it's commendable to defraud Trump supporters. :)
Trump was the one defrauding them.
He just took some of Trump's crumbs.
Quote from: Habbaku on October 24, 2021, 08:19:25 AM
Quote from: Eddie Teach on October 24, 2021, 12:46:56 AM
What happened?
A Trump-aligned social media platform just merged with a SPAC and went public via the back door. The stock (DWAC) shot up from ~$10 to ~$130, then slumped down to ~$90, all in the course of about three days.
Not exactly a traditional pump and dump, but there's plenty of reason for skepticism. This ain't the pink sheets, though.
It's an interesting move because the Trump-oriented company that is going to be part of the merger doesn't even exist yet. When it will be created and what it will be is unknown, as are the ownership stakes. This is a complete pig in a poke. MAGAts and their money are soon parted.
Quote from: Tyr on October 24, 2021, 10:22:59 AM
Trump was the one defrauding them.
He just took some of Trump's crumbs.
I misunderstood. I thought your buddy was the one pumping.
Quote from: Admiral Yi on October 24, 2021, 03:32:31 PM
Quote from: Tyr on October 24, 2021, 10:22:59 AM
Trump was the one defrauding them.
He just took some of Trump's crumbs.
I misunderstood. I thought your buddy was the one pumping.
Well yeah, technically he was, along with everyone else jumping in to buy it. Though I don't think he's rich enough to substantially impact the stock price much on his own.
The blame ultimately lies with the big-money folks who actively encourage people to buy in and pump up the price, expecting to jump out at the top and leave the suckers holding it. Those who see this happening and slice off something for themselves, without actively encouraging others to buy, I see no issue with.
Quote from: Tyr on October 24, 2021, 03:49:10 PM
Well yeah, technically he was, along with everyone else jumping in to buy it. Though I don't think he's rich enough to substantially impact the stock price much on his own.
The blame ultimately lies with the big-money folks who actively encourage people to buy in and pump up the price, expecting to jump out at the top and leave the suckers holding it. Those who see this happening and slice off something for themselves, without actively encouraging others to buy, I see no issue with.
My understanding of a pump and dump is 1) you buy cheap 2) you talk it up then 3) you sell it. If your friend just bought cheap then sold high it wouldn't fit my understanding of pump and dump.
By the same token anyone can talk up a stock. You don't have to be big money folks.
On a completely different note, I learned what synthetic CDOs are the other day. They're securities that entitle you to a payment stream from credit default swap premiums.
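As a toy sketch of that payment stream (all numbers hypothetical): while no reference credit defaults, the synthetic CDO investors simply collect the CDS premiums on the reference pool; a default would trigger payouts in the other direction instead.

```python
# Toy illustration of a synthetic CDO payment stream (all numbers hypothetical).
# Investors in a synthetic CDO don't own the underlying bonds; they receive the
# premiums paid on a pool of credit default swaps, and pay out when reference
# credits default. This sketch only models the no-default premium income.

notional = 10_000_000   # hypothetical notional per CDS contract
premium_rate = 0.02     # 200 bps annual CDS premium
n_contracts = 5         # size of the reference pool
years = 3

# Annual premium income flowing to the CDO investors while no defaults occur
annual_income = notional * premium_rate * n_contracts
total_income = annual_income * years

print(f"Annual premium stream: ${annual_income:,.0f}")                    # $1,000,000
print(f"Income over {years} years (no defaults): ${total_income:,.0f}")   # $3,000,000
```

The asymmetry this illustrates is the whole point of the instrument: a steady trickle of premium income against a large contingent liability if the reference pool starts defaulting.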
How much substance is there to the Google anti-trust case?
Court docs: https://storage.courtlistener.com/recap/gov.uscourts.nysd.564903/gov.uscourts.nysd.564903.152.0_1.pdf
Twitter take that says it's a big deal, but hey it's social media so I don't know: https://twitter.com/fasterthanlime/status/1452053938195341314
Running list of Facebook Files/Facebook Papers stories:
https://www.protocol.com/facebook-papers
I think separate to big tech and social media there is now a big question of what to do about Facebook.
Quote from: Sheilbh on October 25, 2021, 03:50:02 PM
Running list of Facebook Files/Facebook Papers stories:
https://www.protocol.com/facebook-papers
I think separate to big tech and social media there is now a big question of what to do about Facebook.
Yeah... you know, I'd never even thought about how FB impacts places like India or Ethiopia or Myanmar.
Quote from: Jacob on October 25, 2021, 04:53:21 PM
Yeah... you know, I'd never even thought about how FB impacts places like India or Ethiopia or Myanmar.
It's huge and in some countries - it is the internet.
It's like AOL used to be. You get (basic) data for free if you sign up for Facebook, but you access the internet through Facebook. Its power is really extraordinary - it is infrastructure in large parts of the world.
I'd read about Myanmar and the Rohingya a while ago - I think on Buzzfeed - and as well as the general Facebook issues there just weren't enough people with language skills to moderate the content being flagged. In that case it's almost like they own both the airwaves for transmission and, say, Radio Television Libre des Mille Collines.
Edit: And, frankly, I think the stuff that's gone wrong in the US is a fraction of the problems Facebook has caused in other countries such as India.
Yeah... I'm thinking there's a real risk we'll see a Rwanda-style genocide driven by Facebook if they don't get their shit under control. And I don't think they will, unless they're made to.
If we're going to demand that Facebook shut down incitement, don't we have to do the same for telephone, email and snailmail?
Quote from: Admiral Yi on October 25, 2021, 05:47:11 PM
If we're going to demand that Facebook shut down incitement, don't we have to do the same for telephone, email and snailmail?
If we want to dismiss false equivalencies, don't we also have to do the same for all comparisons?
Quote from: Admiral Yi on October 25, 2021, 05:47:11 PM
If we're going to demand that Facebook shut down incitement, don't we have to do the same for telephone, email and snailmail?
That's a fair point in relation to Facebook Messenger and WhatsApp, which are problematic. I don't think it really applies to the rest of Facebook.
Quote from: Sheilbh on October 25, 2021, 05:49:37 PM
That's a fair point in relation to Facebook Messenger and WhatsApp, which are problematic. I don't think it really applies to the rest of Facebook.
I think it does. In both cases they are conduits, not creators of incitement.
I think that it is reasonable to ask that companies not monetize incitement by amplifying it, but I have yet to see any suggestion by critics of, say, Facebook that actually proposes a solution to the problem of assholes having access to the internet. What, precisely, are such critics proposing be done?
Quote from: Admiral Yi on October 25, 2021, 06:27:20 PM
I think it does. In both cases they are conduits, not creators of incitement.
WhatsApp, Messenger, telephone, email, normal mail are all direct messages to an individual/location. They are (arguably - I think it's questionable around WhatsApp and Messenger at least) just the pipes. In the UK and, I imagine, in the US as well it is a crime to intercept someone's phone calls (and arguably emails) and listen in, or to tamper with their mail. Although I'd note in the US you did have the Comstock laws, so it's not without precedent.
Facebook is a way of broadcasting to the world and their entire business model is based on listening in and working out what generates the most engagement - plus developing profiles of us based on those public broadcasts. That is not the business model of Royal Mail or the telcos - they're regulated infrastructure.
I can get behind the argument that they are conduits - but then we should regulate and treat them like conduits, as a form of infrastructure and public good. Alternatively they're not a conduit, in which case it's a different argument: demanding that they be responsible for shutting down content, or simply liable for the content on their platforms (in the same way as other publishers or broadcasters are liable).
I need to read up on the latest status and I don't have a settled view on it, but there's "online harms" legislation which has been working through the system here - lots of white papers and consultations - for about two years (I think it was prompted by the Christchurch terrorist attack being livestreamed on Facebook).
I see your point Shelf.
Quote from: grumbler on October 25, 2021, 06:39:01 PM
I think that it is reasonable to ask that companies not monetize incitement by amplifying it, but I have yet to see any suggestion by critics of, say, Facebook that actually proposes a solution to the problem of assholes having access to the internet. What, precisely, are such critics proposing be done?
I think a lot comes down to defining what the problem is. I don't think the problem is "assholes having access to the internet" but rather, as you say, monetizing incitement by amplifying it.
QuoteIn February 2019, not long before India's general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company's largest market. They made a profile of a 21-year-old woman, a resident of North India, and began to track what Facebook showed her.
At first, her feed filled with soft-core porn and other, more harmless, fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp.
Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech. "300 dogs died now say long live India, death to Pakistan," one post said, over a background of laughing emoji faces. "These are pakistani dogs," said the translated caption of one photo of dead bodies lined-up on stretchers, hosted in the News Feed.
An internal Facebook memo, reviewed by The Washington Post, called the dummy account test an "integrity nightmare" that underscored the vast difference between the experience of Facebook in India and what U.S. users typically encounter. One Facebook worker noted the staggering number of dead bodies.
About the same time, in a dorm room in northern India, 8,000 miles away from the company's Silicon Valley headquarters, a Kashmiri student named Junaid told The Post he watched as his real Facebook page flooded with hateful messages. One said Kashmiris were "traitors who deserved to be shot." Some of his classmates used these posts as their profile pictures on Facebook-owned WhatsApp.
Junaid, who spoke on the condition that only his first name be used for fear of retribution, recalled huddling in his room one evening as groups of men marched outside chanting death to Kashmiris. His phone buzzed with news of students from Kashmir being beaten in the streets — along with more violent Facebook messages.
"Hate spreads like wildfire on Facebook," Junaid said. "None of the hate speech accounts were blocked."
For all of Facebook's troubles in North America, its problems with hate speech and disinformation are dramatically worse in the developing world. Internal company documents made public Saturday reveal that Facebook has meticulously studied its approach abroad — and was well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes.
"The painful reality is that we simply can't cover the entire world with the same level of support," Samidh Chakrabarti, then the company's civic integrity lead, wrote in a 2019 post on Facebook's message board, adding that the company managed the problem by tiering countries for investment.
https://www.washingtonpost.com/technology/2021/10/24/india-facebook-misinformation-hate-speech/
... I accidentally hit post before I got anywhere with this :blush:, so if you're wondering what the point is there isn't much of one other than to show that FB is doing notably worse in some markets (India) than others (the US), and that they're aware of it.
Quote from: Admiral Yi on October 25, 2021, 05:47:11 PM
If we're going to demand that Facebook shut down incitement, don't we have to do the same for telephone, email and snailmail?
I think such comparisons that don't take into account the efficacy of different methods are sophistry. The fact of the matter is that Facebook is a far more effective conduit of propaganda than two cans tied with a string. Therefore, it may be reasonable to subject Facebook to regulations that two cans tied with a string are not subjected to.
That's something I don't get.
Facebook misinformation - micro-targeting, large communities, interlinking all the people in the world to various degrees of separation. I get how this is screwing up the world in a new way.
WhatsApp however...
Being largely person to person, it's fascinating how it is managing to fill the same role. I recall reading that in Asian communities WhatsApp is worse than Facebook for spreading dangerous nonsense.
I wonder how much you could accomplish by simply making it illegal to sell advertising on social media. Break the incentive that social media sites have to keep users engaged at all costs.
Quote from: Tyr on October 26, 2021, 09:14:56 AM
That's something I don't get.
Facebook misinformation - micro-targeting, large communities, interlinking all the people in the world to various degrees of separation. I get how this is screwing up the world in a new way.
WhatsApp however...
Being largely person to person, it's fascinating how it is managing to fill the same role. I recall reading that in Asian communities WhatsApp is worse than Facebook for spreading dangerous nonsense.
John Oliver had a segment about that, focusing on the spread of misinformation among non-English speaking communities: https://youtu.be/l5jtFqWq5iU
Really good piece on the global side/issues of Facebook - one of so many. And the line about the Urdu rules is incredible. This is much of the world's internet:
QuoteHow Facebook neglected the rest of the world, fueling hate speech and violence in India
A trove of internal documents show Facebook didn't invest in key safety protocols in the company's largest market.
By Cat Zakrzewski, Gerrit De Vynck, Niha Masih and Shibani Mahtani
October 24, 2021 at 7:00 a.m. EDT
In February 2019, not long before India's general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company's largest market. They made a profile of a 21-year-old woman, a resident of North India, and began to track what Facebook showed her.
At first, her feed filled with soft-core porn and other, more harmless, fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp.
Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech. "300 dogs died now say long live India, death to Pakistan," one post said, over a background of laughing emoji faces. "These are pakistani dogs," said the translated caption of one photo of dead bodies lined-up on stretchers, hosted in the News Feed.
An internal Facebook memo, reviewed by The Washington Post, called the dummy account test an "integrity nightmare" that underscored the vast difference between the experience of Facebook in India and what U.S. users typically encounter. One Facebook worker noted the staggering number of dead bodies.
About the same time, in a dorm room in northern India, 8,000 miles away from the company's Silicon Valley headquarters, a Kashmiri student named Junaid told The Post he watched as his real Facebook page flooded with hateful messages. One said Kashmiris were "traitors who deserved to be shot." Some of his classmates used these posts as their profile pictures on Facebook-owned WhatsApp.
Junaid, who spoke on the condition that only his first name be used for fear of retribution, recalled huddling in his room one evening as groups of men marched outside chanting death to Kashmiris. His phone buzzed with news of students from Kashmir being beaten in the streets — along with more violent Facebook messages.
"Hate spreads like wildfire on Facebook," Junaid said. "None of the hate speech accounts were blocked."
For all of Facebook's troubles in North America, its problems with hate speech and disinformation are dramatically worse in the developing world. Internal company documents made public Saturday reveal that Facebook has meticulously studied its approach abroad — and was well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes.
"The painful reality is that we simply can't cover the entire world with the same level of support," Samidh Chakrabarti, then the company's civic integrity lead, wrote in a 2019 post on Facebook's message board, adding that the company managed the problem by tiering countries for investment.
This story is based on those documents, known as the Facebook Papers, which were disclosed to the Securities and Exchange Commission by whistleblower Frances Haugen, and composed of research, slide decks and posts on the company message board — some previously reported by the Wall Street Journal. It is also based on documents independently reviewed by The Post, as well as more than a dozen interviews with former Facebook employees and industry experts with knowledge of the company's practices abroad.
The SEC disclosures, provided to Congress in redacted form by Haugen's legal counsel and reviewed by a consortium of news organizations including The Post, suggest that as Facebook pushed into the developing world it didn't invest in comparable protections.
Facebook whistleblower Frances Haugen tells lawmakers that meaningful reform is necessary 'for our common good'
According to one 2020 summary, although the United States comprises less than 10 percent of Facebook's daily users, the company's budget to fight misinformation was heavily weighted toward America, where 84 percent of its "global remit/language coverage" was allocated. Just 16 percent was earmarked for the "Rest of World," a cross-continent grouping that included India, France and Italy.
Facebook spokesperson Dani Lever said that the company had made "progress" and had "dedicated teams working to stop abuse on our platform in countries where there is heightened risk of conflict and violence. We also have global teams with native speakers reviewing content in over 70 languages along with experts in humanitarian and human rights issues."
Many of these additions had come in the past two years. "We've hired more people with language, country and topic expertise. We've also increased the number of team members with work experience in Myanmar and Ethiopia to include former humanitarian aid workers, crisis responders, and policy specialists," Lever said.
Meanwhile, in India, Lever said, the "hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems."
Globally there are over 90 languages with over 10 million speakers. In India alone, the government recognizes 122 languages, according to its 2001 census.
In India, where the Hindu-nationalist Bharatiya Janata Party — part of the coalition behind Modi's political rise — deploys inflammatory rhetoric against the country's Muslim minority, misinformation and hate speech can translate into real-life violence, making the stakes of these limited safety protocols particularly high. Researchers have documented the BJP using social media, including Facebook and WhatsApp, to run complex propaganda campaigns that scholars say play to existing social tensions against Muslims.
Members from the Next Billion Network, a collective of civil society actors working on technology-related harms in the global south, warned Facebook officials in the United States that unchecked hate speech on the platform could trigger large-scale communal violence in India, in multiple meetings held between 2018 and 2019, according to three people with knowledge of the matter, who spoke on the condition of anonymity to describe sensitive matters.
How misinformation on WhatsApp led to a mob killing in India
Despite Facebook's assurances it would increase moderation efforts, when riots broke out in Delhi last year, calls to violence against Muslims remained on the site, despite being flagged, according to the group. Gruesome images, claiming falsely to depict violence perpetrated by Muslims during the riots, were found by The Post. Facebook labeled them with a fact check, but they remained on the site as of Saturday.
More than 50 people were killed in the turmoil, the majority of them Muslims.
"They were told, told, told and they didn't do one damn thing about it," said a member of the group who attended the meetings. "The anger [from the global south] is so visceral on how disposable they view our lives."
Facebook said it removed content that praised, supported or represented violence during the riots in Delhi.
India is the world's largest democracy and a growing economic powerhouse, making it more of a priority for Facebook than many other countries in the global south. Low-cost smartphones and cheap data plans have led to a telecom revolution, with millions of Indian users coming online for the first time every year. Facebook has made great efforts to capture these customers, and its signature app has 410 million users according to the Indian government, more than the entire population of the United States.
The company activated large teams to monitor the platform during major elections, dispatched representatives to engage with activists and civil society groups, and conducted research surveying Indian people, finding many were concerned about the quantity of misinformation on the platform, according to several documents.
But despite the extra attention, the Facebook that Indians interact with is missing many of the key guardrails the company deployed in the United States and other mostly-English-speaking countries for years. One document stated that Facebook had not developed algorithms that could detect hate speech in Hindi and Bengali, despite them being the fourth- and seventh-most spoken languages in the world, respectively. Other documents showed how political actors spammed the social network with multiple accounts, spreading anti-Muslim messages across people's news feeds in violation of Facebook's rules.
The company said it introduced hate-speech classifiers in Hindi in 2018 and Bengali in 2020; systems for detecting violence and incitement in Hindi and Bengali were added in 2021.
Pratik Sinha, co-founder of Alt News, a fact-checking site in India that routinely debunks viral fake and inflammatory posts, said that while misinformation and hate speech proliferate across multiple social networks, Facebook sometimes doesn't take down bad actors.
"Their investment in a country's democracy is conditional," Sinha said. "It is beneficial to care about it in the U.S. Banning Trump works for them there. They can't even ban a small-time guy in India."
'Bring the world closer together'
Facebook's mission statement is to "bring the world closer together," and for years, voracious expansion into markets beyond the United States has fueled its growth and profits.
Social networks that let citizens connect and organize became a route around governments that had controlled and censored centralized systems like TV and radio. Facebook was celebrated for its role in helping activists organize protests against authoritarian governments in the Middle East during the Arab Spring.
For millions of people in Asia, Africa and South America, Facebook became the primary way they experience the Internet. Facebook partnered with local telecom operators in countries such as Myanmar, Ghana and Mexico to give free access to its app, along with a bundle of other basic services like job listings and weather reports. The program, called "Free Basics," helped millions get online for the first time, cementing Facebook's role as a communication platform all around the world and locking millions of users into a version of the Internet controlled by an individual company. (While India was one of the first countries to get Free Basics in 2015, backlash from activists who argued that the program unfairly benefited Facebook led to its shutdown.)
In late 2019, the Next Billion Network ran a multicountry study, separate from the whistleblower's documents, of Facebook's moderation and alerted the company that large volumes of legitimate complaints, including death threats, were being dismissed in countries throughout the global south, including Pakistan, Myanmar and India, because of technical issues, according to a copy of the report reviewed by The Post.
It found that cumbersome reporting flows and a lack of translations were discouraging users from reporting bad content, the only way content is moderated in many of the countries that lack more automated systems. Facebook's community standards, the set of rules that users must abide by, were not translated into Urdu, the national language of Pakistan. Instead, the company flipped the English version so it read from right to left, mirroring the way Urdu is read.
In June 2020, a Facebook employee posted an audit of the company's attempts to make its platform safer for users in "at-risk countries," a designation given to nations Facebook marks as especially vulnerable to misinformation and hate speech. The audit showed Facebook had massive gaps in coverage. In countries including Myanmar, Pakistan and Ethiopia, Facebook didn't have algorithms that could parse the local language and identify posts about covid-19. In India and Indonesia, it couldn't identify links to misinformation, the audit showed.
In Ethiopia, the audit came a month after its government postponed federal elections, a major step in a buildup to a civil war that broke out months later. In addition to being unable to detect misinformation, the audit found Facebook also didn't have algorithms to flag hate speech in the country's two biggest local languages.
After negative coverage, Facebook has made dramatic investments. For example, after a searing United Nations report connected Facebook to an alleged genocide against the Rohingya Muslim minority in Myanmar, the region became a priority for the company, which began flooding it with resources in 2018, according to interviews with two former Facebook employees with knowledge of the matter, who, like others, spoke on the condition of anonymity to describe sensitive matters.
Facebook took several steps to tighten security and remove viral hate speech and misinformation in the region, according to multiple documents. One note, from 2019, showed that Facebook expanded its list of derogatory terms in the local language and was able to catch and demote thousands of slurs. Ahead of Myanmar's 2020 elections, Facebook launched an intervention that promoted posts from users' friends and family and reduced viral misinformation, employees found.
A former employee said that it was easy to work on the company's programs in Myanmar, but there was less incentive to work on problematic issues in lower-profile countries, meaning many of the interventions deployed in Myanmar were not used in other places.
"Why just Myanmar? That was the real tragedy," the former employee said.
'Pigs' and fearmongering
In India, internal documents suggest Facebook was aware of the number of political messages on its platforms. One internal post from March shows a Facebook employee believed a BJP worker was breaking the site's rules to post inflammatory content and spam political posts. The researcher detailed how the worker used multiple accounts to post thousands of "politically-sensitive" messages on Facebook and WhatsApp during the run-up to the elections in the state of West Bengal. The efforts broke Facebook's rules against "coordinated inauthentic behavior," the employee wrote. Facebook denied that the operation constituted coordinated activity, but said it took action.
A case study about harmful networks in India shows that pages and groups of the Rashtriya Swayamsevak Sangh, an influential Hindu-nationalist group associated with the BJP, promoted fearmongering anti-Muslim narratives with violent intent. A number of posts compared Muslims to "pigs" and cited misinformation claiming the Koran calls for men to rape female family members.
The group had not been flagged, according to the document, given what employees called "political sensitivities." In a slide deck in the same document, Facebook employees said the posts also hadn't been found because the company didn't have algorithms that could detect hate speech in Hindi and Bengali.
Facebook in India has been repeatedly criticized for a lack of a firewall between politicians and the company. One deck on political influence on content policy from December 2020 acknowledged the company "routinely makes exceptions for powerful actors when enforcing content policy," citing India as an example.
"The problem which arises is that the incentives are aligned to a certain degree," said Apar Gupta, executive director of the Internet Freedom Foundation, a digital advocacy group in India. "The government wants to maintain a level of political control over online discourse and social media platforms want to profit from a very large, sizable and growing market in India."
Facebook says its global policy teams operate independently and that no single team's opinion has more influence than the other.
Earlier this year, India enacted strict new rules for social media companies, increasing government powers by requiring the firms to remove any content deemed unlawful within 36 hours of being notified. The new rules have sparked fresh concerns about government censorship of U.S.-based social media networks. They require companies to have an Indian resident on staff to coordinate with local law enforcement agencies. The companies are also required to have a process where people can directly share complaints with the social media networks.
But Junaid, the Kashmiri college student, said Facebook had done little to remove the hate-speech posts against Kashmiris. He went home to his family after his school asked Kashmiri students to leave for their own safety. When he returned to campus 45 days after the 2019 bombing, the Facebook post from a fellow student calling for Kashmiris to be shot was still on their account.
Regine Cabato in Manila contributed to this report.
And it's worth noting the Vietnam story on the risks-to-free-speech side. There Facebook formally agreed with the government to take down and report "anti-state" content from 2020, but had been taking it down since 2018, just without a formal agreement. It's a weird situation where we defend Facebook in the name of promoting free speech, when free speech is something they will absolutely sacrifice if it opens a potential market in an authoritarian country.
Jesus, what a mess.
I look at the Western world aghast at Facebook, grave concerns shared by all their governments and traditional media, and then I look at Hungary, where social media is the only medium not controlled and/or dominated by the government. I don't know what pro-democracy people would do without it. Well, they could rely on the few barely-visited online news sites they have left I guess, but even most of those have stopped allowing comment sections because of the laws in place which seek to protect the populace from extreme and offending comments and content.
It's not social media per se, but it is very much targeted content: I should do a longer post about how YouTube has effectively turned into independent TV in Hungary. Multiple channels have sprung up by now doing the political and cultural interviews and analysis shows which in normal countries would be on TV. The non-government-approved part of Hungarian political life has moved entirely to YouTube and Facebook.
Get a grip, people. Banning advertising on, i.e. shutting down, social media? Orban most certainly approves. Considering Mein Kampf was printed as a book, I'd also consider banning ads in print media as well.
Quote from: Tamas on October 27, 2021, 03:31:02 AM
I look at the West, aghast at Facebook, with grave concerns shared by all their governments and traditional media, and then I look at Hungary, where social media is the only medium not controlled and/or dominated by the government. I don't know what pro-democracy people would do without it. Well, they could rely on the few barely-visited online news sites they have left, I guess, but even most of those have stopped allowing comment sections because of the laws in place which purport to protect the populace from extreme and offensive comments and content.
It's not social media per se, but it is very much targeted content: I should do a longer post about how YouTube has effectively turned into independent TV in Hungary. Multiple channels have sprung up by now doing the political and cultural interviews and analysis shows which in normal countries would be on TV. The non-government-approved part of Hungarian political life has moved entirely to YouTube and Facebook.
Get a grip, people. Banning advertising on, i.e. shutting down, social media? Orban most certainly approves. Considering Mein Kampf was printed as a book, I'd also consider banning ads in print media as well.
Was Mein Kampf written in an attempt to sell ad revenue? I had no idea!
Nobody has said anything about shutting down social media. Total red herring.
Right now social media is by and large completely unregulated. We are talking about how to regulate it so that its incentives line up with society's needs from modern communication. This is the same problem every new media technology has run into, ever.
If advertising on social media is made illegal, how are they to maintain themselves financially?
Quote from: Tamas on October 27, 2021, 09:25:18 AM
If advertising on social media is made illegal, how are they to maintain themselves financially?
I don't know. But lots of products manage to survive without advertising.
Are you arguing that the only possible way for social media to work is through advertising? That is a rather different argument, isn't it?
Quote from: Berkut on October 27, 2021, 09:36:11 AM
Quote from: Tamas on October 27, 2021, 09:25:18 AM
If advertising on social media is made illegal, how are they to maintain themselves financially?
I don't know. But lots of products manage to survive without advertising.
Are you arguing that the only possible way for social media to work is through advertising? That is a rather different argument, isn't it?
If you want to be grumbler about it then yes I skipped to the only logical conclusion of your suggestion without outlining the obvious logical process to get there.
I haven't encountered an online product that didn't have either ads or another method of generating revenue / covering running costs.
Additionally, I don't think even enacting such a law would be feasible. You would have to define "social media" in a way that's not trivial to weasel out of, without banning ads on things like online newspapers with comment sections, or forums etc.
But I am happy to concede that there might be a way involving no money that would allow social media to exist, because it doesn't matter.
More importantly, I just disagree with the basic premise that "solving" social media would solve the underlying political issues to which social media exposes us. We could more easily go back to the times when more extreme opinions were ignored, festering in their own corners, but that's it. And yes, those corners would have a slightly harder time growing and combining, but that would not make them go away.
And there are great, great political benefits to social media from the points of view of enabling and maintaining free speech. Yes, it allows the nazis and other assorted scum to get together and form their bubbles, but it also gives the same tools to liberals and pro-democracy forces otherwise deliberately kept isolated and without a platform. The same algorithms that feed people viewing far right content with more far right content also feed people looking for moderate content in fascist regimes with more moderate content.
Yes there are risks and negatives and maybe we are not yet at the optimum level of regulation. But I feel like there's way too much focus given to social media. People see things they don't like on social media, be it racism, gay people, ethnic hatred, liberal views, political conmen, etc, and the first reaction is "this needs to be regulated!". Which is of course something that will be happily parroted by government and the more traditional media because it's in their interest. And suddenly, you find yourself on the same platform as the Hungarian PM.
https://www.theguardian.com/commentisfree/2021/oct/24/society-blame-big-tech-online-regulation
QuoteThe push for online regulation risks absolving the right of responsibility for the toxicity they continually stoke
Every time a dramatic, unforeseen political event happens, there follows a left-field fixation that some out-of-control technology created it. Whenever this fear about big tech comes around we are told that something new, even more toxic, has infiltrated our public discourse, triggering hatred towards politicians and public figures, conspiracy theories about Covid and even major political events like Brexit. The concern over anonymity online becomes a particular worry – as if ending it will somehow, like throwing a blanket at a raging house fire, subdue our fevered state.
You may remember that during the summer's onslaught of racist abuse towards black players in the England football team, instead of reckoning with the fact that racism still haunts this country, we busied ourselves with bluster about how "cowards" online would be silenced if we only just demanded they identify themselves.
We resort to this explanation, that shadowy social media somehow stimulate our worst impulses, despite there being little evidence that most abuse is from unidentifiable sources. After England's defeat in the Euro 2020 final, Twitter revealed that 99% of the abuse on its site directed at England footballers was not anonymous.
The same arguments were made in the aftermath of MP David Amess's killing – that doing something about online abuse would make politicians safer. It was a rehash of a 2018 moment when Theresa May pledged to regulate online behaviour because a "tone of bitterness and aggression has entered into our public debate".
Good old social media, always there to paper over the giant cracks of our political failures. Bad tech is a convenient fall guy for a whole gang of perpetrators. It has been particularly useful in recent years, when Brexit has enabled rightwing politicians and press to engage in the most divisive, dangerous rhetoric, particularly towards the country's political and legal institutions, then point to social media when that rhetoric serves its purpose of eroding tolerance and trust.
But when parliament and the supreme court – attacked by the media and politicians for variously being saboteurs, traitors and opponents of the will of the people – come under fire from members of the public, that is an entirely different matter. The faceless public becomes the only protagonist. This allows everyone, from the mainstream press to publishers of far-right conspiracy theories, to distance themselves from the scene of the crime and innocently propose earnest-sounding solutions to our country's crises of racism and loss of faith in our politics.
The corrupting influence of technology companies is also a compelling explanation for them because it means that something can be done. This is partly down to a sort of dominant liberal technocratic sensibility that reaches for a tool kit to fix social and political problems, as one would approach a broken machine. The result is "solutionism", the belief there is a technological remedy for most issues, because human behaviour is essentially rational and can be mapped out, analysed and then adjusted.
It's all so much easier than squaring up to the gnarly facts that the world is messy; humans are infinitely suggestible and manipulable; and most of the time our political behaviour is a manifestation of long-term currents spread by political parties and dominant economic ideologies. This reluctance to trace how we arrived at a place we don't like was clearly demonstrated by the stubbornness with which so many people held to the belief that Brexit was an aberration. Not acknowledging that it was, in fact, a culmination of a campaign that lasted years, and the result of our failed economic model and of decades of anti-immigration obsession. Someone must have cheated, these people told themselves, so a sort of tech calamity thesis carried the day. And the perfect culprit presented itself in the form of Cambridge Analytica and a convenient cartoon cast including shady Russian powers, Nigel Farage and Dominic Cummings.
The right, too, loves a tech panic to explain away unhappy results. Tech growing faster than it can be controlled and then turning on its creators is a universal bogeyman, a nervousness captured in Isaac Asimov's first law of robotics in 1942: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
When companies reach the scale and reach of Facebook, they can appear, to the right, a little too much like big governments infringing on individual privacy and freedoms. This fear is then easily capitalised on, and all sorts of unlikely victims can claim they are silenced by platforms biased against their politics. When Donald Trump intends to launch a new social media network to "stand up to the tyranny of big tech", he is echoing the whine of many across the political spectrum. Those who, rather than admit their thinking is less popular than they would like, prefer to believe they are simply conspired against.
Social media companies do regularly fail in their responsibilities to manage the kind of hate speech and abuse that poses a danger for everyone from vulnerable children to ethnic minorities and members of parliament. It is clear that the management of harmful content online cannot be left to tech platforms themselves and that some form of regulation is now long overdue. One hopes the current UK online safety bill will now address that.
But fixating solely on reforming big tech risks turning into a huge displacement exercise. While we rightly focus on the excesses of tech platforms that have turned abuse and lies into lucre, we must also realise that the bad robot theory is tempting because it places the problem not only outside of our institutions, but outside of our very selves. There are other anonymous players who need to be named in this crisis of discord – those parties in our politics and our media who have created so much discontent and hostility that it all regularly overflows in the sewers of social media.
Interesting that you posted an article that directly disagrees with you.
QuoteSocial media companies do regularly fail in their responsibilities to manage the kind of hate speech and abuse that poses a danger for everyone from vulnerable children to ethnic minorities and members of parliament. It is clear that the management of harmful content online cannot be left to tech platforms themselves and that some form of regulation is now long overdue.
At the least, that rather nicely aligns with what those of us who want some kind of "fix" have been saying.
Quote from: Berkut on October 27, 2021, 11:35:17 AM
Interesting that you posted an article that directly disagrees with you.
QuoteSocial media companies do regularly fail in their responsibilities to manage the kind of hate speech and abuse that poses a danger for everyone from vulnerable children to ethnic minorities and members of parliament. It is clear that the management of harmful content online cannot be left to tech platforms themselves and that some form of regulation is now long overdue.
At the least, that rather nicely aligns with what those of us who want some kind of "fix" have been saying.
It also rather nicely ignores the rest of a long article. But I don't have the energy to stand in the way of regulating zeal.
I now kinda wish fbook had gone with Metaverse!
Quote from: Tamas on October 27, 2021, 09:25:18 AM
If advertising on social media is made illegal, how are they to maintain themselves financially?
The same way most other products and services do so: by getting the ostensible users of the service to pay for it based on the value that it provides to them. Or by giving away the base product for free but offering special capabilities, "skins", etc. to premium users. I think LinkedIn does the latter, and it seems to work for them.
Tamas, the philosophers have decided that it leads to wrongthink. What more reason for a ban could you possibly need?
Vice, predictably, is not impressed by the new meta. :P
https://www.vice.com/en/article/qjb485/zuckerberg-facebook-new-name-meta-metaverse-presentation
QuoteZuckerberg Announces Fantasy World Where Facebook Is Not a Horrible Company
Facebook's new name is "Meta," and its new mission is to invent a 'metaverse' that will make us all forget what it's done to our existing reality.
Moments before announcing Facebook is changing its name to "Meta" and detailing the company's "metaverse" plans during a Facebook Connect presentation on Thursday, Mark Zuckerberg said "some people will say this isn't a time to focus on the future," referring to the massive, ongoing scandal plaguing his company relating to the myriad ways Facebook has made the world worse. "I believe technology can make our lives better. The future will be built by those willing to stand up and say this is the future we want."
The future Zuckerberg went on to pitch was a delusional fever dream cribbed most obviously from dystopian science fiction and misleading or outright fabricated virtual reality product pitches from the last decade. In the "metaverse"—an "embodied" internet where we are, basically, inside the computer via a headset or other reality-modifying technology of some sort—rather than hang out with people in real life you could meet up with them as Casper-the-friendly-ghost-style holograms to do historically fun and stimulating activities such as attend concerts or play basketball.
These presentations had the familiar vibe of an overly-ambitious video game reveal. In the concert example, one friend is present in reality while the other is not; the friend joins the concert inexplicably as a blue Force ghost and the pair grab "tickets" to a "metaverse afterparty" in which NFTs are for sale. This theme continued throughout as people wandered seamlessly into virtual fantasy worlds over and over, and the presentation lacked any sense of what this so-called metaverse would look like in practice. It was flagrantly abstract, even metaphorical, showing more the dream of the metaverse than anything resembling reality. We're told that two real people, filmed with real cameras on real couches, are in a "digital space." When Zuckerberg reveals that Facebook is working on augmented reality glasses that could make any of this even a remote possibility, it doesn't show any actual glasses, only "simulated footage" of augmented reality from a first-person perspective.
"We have to fit hologram displays, projectors, batteries, radios, custom silicon chips, cameras, speakers, sensors to map the world around you, and more, into glasses that are five millimeters thick," Zuckerberg says.
Whatever the metaverse does look like, it is virtually guaranteed to not look or feel anything like what Facebook showed on Thursday.
While Zuckerberg was pitching this Black Mirror-ass future he claims we all want, it is worth checking out what was happening on Facebook's own platform at the present.
About 19,000 people were watching Zuckerberg cosplay as James Halliday, the fictional metaverse creator from the bleak Ready Player One series on Facebook's Live platform. Facebook's algorithm, meanwhile, was recommending that users also watch a Latina woman dominate and lick the stomach of a little person (11,000 viewers); it also recommended people watch a video game livestream pitched with a thumbnail of two CGI men fucking each other (4,000 viewers).
There's nothing wrong with wanting to watch either of those things, of course. But Zuckerberg's pitch of living, working, playing, and generally existing in a utopian, fake, Facebook-developed virtual world loaded with fun and friendly people, concerts where you can always be in the front row, seamless mixed-reality basketball games where you feel like you are actually playing basketball, and kickass, uhh, NFTs you can use to modify your metaverse avatar, is a far cry from the disinformation, conspiracy theories, genocide-related content, self-esteem-destroying content, spam, and general garbage that exists on the platforms Facebook has already built.
There is no universe, meta-or-otherwise, in which people will not spread conspiracy theories, hate speech, and make threats online. In the metaverse, they will try to show each other their dicks, though it's worth noting right now that Facebook's current metaverse avatars do not have bodies that exist below their waists.
Zuckerberg repeatedly said Facebook alone won't build the metaverse. But the metaverse Facebook is building will be and has been built with Facebook developers to run on Facebook servers using Facebook hardware, which are connected to Facebook accounts.
About halfway through the delusional fever dream that was Facebook's biggest product announcement of all time, Mark Zuckerberg said that "the last few years have been humbling for me and our company in a lot of ways," as Facebook has nominally had to grapple with the harm it's done to this world. It's hard to find anything "humble" about a proposal to fundamentally remake human existence using technology that currently does not and may not ever exist and that few are currently clamoring for.
But Facebook's problems are too numerous to list, and so he is pitching products that don't exist for a reality that does not exist in a desperate attempt to change the narrative as it exists in reality, where we all actually live.
QuoteBut Zuckerberg's pitch of living, working, playing, and generally existing in a utopian, fake, Facebook-developed virtual world loaded with . . . seamless mixed-reality basketball games where you feel like you are actually playing basketball
Quoteit's worth noting right now that Facebook's current metaverse avatars do not have bodies that exist below their waists.
I think I may have spotted a flaw.
Behold THE METAVERSE! (https://www.youtube.com/watch?v=SAL2JZxpoGY) a world of unending happiness where you can play cards with your friends IN SPACE! and post videos of your dog running around, day or night.
I've never heard the Zuck's voice before. He sounds a little like Keanu Reeves.
From the IEEE Spectrum (https://spectrum.ieee.org/meta-offers-nothing-new-to-the-metaverse)
QuoteMeta Offers Nothing New to THE METAVERSE!
Facebook's got a new name, but a tired business model
Matthew S. Smith | 17 Dec 2021 | 3 min read
YOU MAY HAVE HEARD that Facebook is owned by a company that is no longer called Facebook but, instead, Meta (officially Meta Platforms). CEO Mark Zuckerberg detailed the name change in a CGI-laden presentation that spanned an hour and 17 minutes. THE METAVERSE!, he says, is what's next for the Internet.
"The next platform and medium will be even more immersive, an embodied Internet where you're in the experience, not just looking at it," he said.
The announcement was not well received. Journalists, pundits, and politicians saw it as an attempt to deflect attention from Facebook's real, present problems by focusing on a better, imagined future. I share this view. As a consumer-technology journalist, however, I have a different problem with Facebook's vision of THE METAVERSE!: It's not new. Not even close.
Zuckerberg's demo was basically a virtual reality hangout. It depicted a small group of people playing a game of cards in VR before one of Zuckerberg's friends, taking the form of a robot, teleports him to a fantastic virtual forest. Zuckerberg is also shown admiring a VR art installation and using a video call to speak with friends in the "meataverse."
The show might have impressed those who are new to augmented and virtual reality. But the tech-savvy certainly know that everything shown by Facebook—sorry, Meta—is possible right now and has been for several years. It's not even that expensive.
An excellent VR setup with a Valve Index and fast PC costs about US $3,000. A passable setup with a Vive Pro 2 and a midrange PC is around $1,500. Or you can hop in with Meta's Oculus Quest 2, which doesn't require a PC and starts at $299.
The experience doesn't fall far below Meta's demo. VRChat, a popular platform, has thousands of attractive 3D levels. The software can, if you opt for the more expensive VR setups, detect the movement of your limbs and face and animate your avatar to mimic your gestures and expressions. You can hang out with friends, play basic games, or explore virtual landscapes, as depicted in Meta's demonstration.
VRChat is used by tens of thousands of people every day, and the number continues to grow. A spike of users during New Year's 2021 temporarily took down VRChat because the company's service provider thought it was experiencing a distributed denial-of-service attack.
Zuckerberg knows this. Facebook bought Oculus in 2014 and has released several iterations of Oculus hardware since. The company has its own virtual reality chat platform, Facebook Horizon, which more-or-less does what's shown in Meta's first demo but with less fidelity. VRChat is available on Oculus headsets along with competitors such as AltspaceVR and Rec Room. Still, the user base for VRChat and its ilk is tiny compared with Facebook's 2.91 billion active users.
Meta tried to set itself apart from the competition in a 10-minute segment toward the end of Zuckerberg's presentation. Michael Abrash, chief scientist of Meta's Reality Lab, showed prototype technology that will make avatars photorealistic, create lifelike interactive environments, and let users control input with subtle hand gestures.
All interesting stuff, to be sure, and it will require a massive R&D effort. This, in part, is the reason for Facebook's change of name. The company plans to spend a lot of money on THE METAVERSE!. That could be hard to justify without signaling a shift away from Facebook.
Yet by focusing on what THE METAVERSE! can be in the future, Meta deflected from a question that cuts to the core of Zuckerberg's vision. If that VR vision is already accessible on hardware that costs as little as a midrange Android smartphone, why aren't consumers already eager to experience it?
That's the trillion-dollar question—and I don't think Meta has the answer.
I know some people here have VR goggles; does anyone use VRChat or the augmented reality features that MetaZuck was touting in his demonstration of THE METAVERSE!?
I saw one thing the other day that claimed Intel figures computation power needs to increase by a factor of roughly 1,000 to support Zuck's version. So there's a bit of additional R&D that needs to happen.
There need to be a few killer apps for the Metaverse to take off, I reckon. If you can do sex / porn / prostitution on the Metaverse, that may do it. Other than that I'm out of ideas... good thing I'm not in charge over at Metaface.
Something to keep you up at night: AI's Six Worst Case Scenarios (https://spectrum.ieee.org/ai-worst-case-scenarios)
QuoteHOLLYWOOD'S WORST-CASE scenario involving artificial intelligence (AI) is familiar as a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieving sentience, and inevitably turn into evil overlords that attempt to destroy the human race. This narrative capitalizes on our innate fear of technology, a reflection of the profound change that often accompanies new technological developments.
However, as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, "AI doesn't have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem."
"We are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications."
—Andrew Lohn, Georgetown University
In interviews with AI experts, IEEE Spectrum has uncovered six real-world AI worst-case scenarios that are far more mundane than those depicted in the movies. But they're no less dystopian. And most don't require a malevolent dictator to bring them to full fruition. Rather, they could simply happen by default, unfolding organically—that is, if nothing is done to stop them. To prevent these worst-case scenarios, we must abandon our pop-culture notions of AI and get serious about its unintended consequences.
1. When Fiction Defines Our Reality...
Unnecessary tragedy may strike if we allow fiction to define our reality. But what choice is there when we can't tell the difference between what is real and what is false in the digital world?
In a terrifying scenario, the rise of deepfakes—fake images, video, audio, and text generated with advanced machine-learning tools—may someday lead national-security decision-makers to take real-world action based on false information, leading to a major crisis, or worse yet, a war.
Andrew Lohn, senior fellow at Georgetown University's Center for Security and Emerging Technology (CSET), says that "AI-enabled systems are now capable of generating disinformation at [large scales]." By producing greater volumes and variety of fake messages, these systems can obfuscate their true nature and optimize for success, improving their desired impact over time.
The mere notion of deepfakes amid a crisis might also cause leaders to hesitate to act if the validity of information cannot be confirmed in a timely manner.
Marina Favaro, research fellow at the Institute for Research and Security Policy in Hamburg, Germany, notes that "deepfakes compromise our trust in information streams by default." Both action and inaction caused by deepfakes have the potential to produce disastrous consequences for the world.
2. A Dangerous Race to the Bottom
When it comes to AI and national security, speed is both the point and the problem. Since AI-enabled systems confer greater speed benefits on their users, the first countries to develop military applications will gain a strategic advantage. But what design principles might be sacrificed in the process?
Things could unravel from the tiniest flaws in the system and be exploited by hackers. Helen Toner, director of strategy at CSET, suggests a crisis could "start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control."
Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI), in Sweden, warns that major catastrophes can occur "when major powers cut corners in order to win the advantage of getting there first. If one country prioritizes speed over safety, testing, or human oversight, it will be a dangerous race to the bottom."
For example, national-security leaders may be tempted to delegate decisions of command and control, removing human oversight of machine-learning models that we don't fully understand, in order to gain a speed advantage. In such a scenario, even an automated launch of missile-defense systems initiated without human authorization could produce unintended escalation and lead to nuclear war.
3. The End of Privacy and Free Will
With every digital action, we produce new data—emails, texts, downloads, purchases, posts, selfies, and GPS locations. By allowing companies and governments to have unrestricted access to this data, we are handing over the tools of surveillance and control.
With the addition of facial recognition, biometrics, genomic data, and AI-enabled predictive analysis, Lohn of CSET worries that "we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications."
Michael C. Horowitz, director of Perry World House, at the University of Pennsylvania, warns "about the logic of AI and what it means for domestic repression. In the past, the ability of autocrats to repress their populations relied upon a large group of soldiers, some of whom may side with society and carry out a coup d'etat. AI could reduce these kinds of constraints."
The power of data, once collected and analyzed, extends far beyond the functions of monitoring and surveillance to allow for predictive control. Today, AI-enabled systems predict what products we'll purchase, what entertainment we'll watch, and what links we'll click. When these platforms know us far better than we know ourselves, we may not notice the slow creep that robs us of our free will and subjects us to the control of external forces.
4. A Human Skinner Box
The ability of children to delay immediate gratification, to wait for the second marshmallow, was once considered a major predictor of success in life. Soon even the second-marshmallow kids will succumb to the tantalizing conditioning of engagement-based algorithms.
Social media users have become rats in lab experiments, living in human Skinner boxes, glued to the screens of their smartphones, compelled to sacrifice more precious time and attention to platforms that profit from it at their expense.
Helen Toner of CSET says that "algorithms are optimized to keep users on the platform as long as possible." By offering rewards in the form of likes, comments, and follows, Malcolm Murdock explains, "the algorithms short-circuit the way our brain works, making our next bit of engagement irresistible."
To maximize advertising profit, companies steal our attention away from our jobs, families and friends, responsibilities, and even our hobbies. To make matters worse, the content often makes us feel miserable and worse off than before. Toner warns that "the more time we spend on these platforms, the less time we spend in the pursuit of positive, productive, and fulfilling lives."
5. The Tyranny of AI Design
Every day, we turn over more of our daily lives to AI-enabled machines. This is problematic since, as Horowitz observes, "we have yet to fully wrap our heads around the problem of bias in AI. Even with the best intentions, the design of AI-enabled systems, both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them. And we all have our biases."
As a result, Lydia Kostopoulos, senior vice president of emerging tech insights at the Clearwater, Fla.–based IT security company KnowBe4, argues that "many AI-enabled systems fail to take into account the diverse experiences and characteristics of different people." Since AI solves problems based on biased perspectives and data rather than the unique needs of every individual, such systems produce a level of conformity that doesn't exist in human society.
Even before the rise of AI, the design of common objects in our daily lives has often catered to a particular type of person. For example, studies have shown that cars, hand-held tools including cellphones, and even the temperature settings in office environments have been established to suit the average-size man, putting people of varying sizes and body types, including women, at a major disadvantage and sometimes at greater risk to their lives.
When individuals who fall outside of the biased norm are neglected, marginalized, and excluded, AI turns into a Kafkaesque gatekeeper, denying access to customer service, jobs, health care, and much more. AI design decisions can restrain people rather than liberate them from day-to-day concerns. And these choices can also transform some of the worst human prejudices into racist and sexist hiring and mortgage practices, as well as deeply flawed and biased sentencing outcomes.
6. Fear of AI Robs Humanity of Its Benefits
Since today's AI runs on data sets, advanced statistical models, and predictive algorithms, the process of building machine intelligence ultimately centers around mathematics. In that spirit, said Murdock, "linear algebra can do insanely powerful things if we're not careful." But what if people become so afraid of AI that governments regulate it in ways that rob humanity of AI's many benefits? For example, DeepMind's AlphaFold program achieved a major breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. This milestone will provide a fruitful foundation for the rapid advancement of the life sciences. Consider the benefits of improved communication and cross-cultural understanding made possible by seamlessly translating across any combination of human languages, or the use of AI-enabled systems to identify new treatments and cures for disease. Knee-jerk regulatory actions by governments to protect against AI's worst-case scenarios could also backfire and produce their own unintended negative consequences, in which we become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world.
Good article - all of those points are of concern.
Came across https://en.wikipedia.org/wiki/The_Metamorphosis_of_Prime_Intellect the other day (thanks Atun-Shei) about a super powerful AI that runs the world on the basis of Asimov's Three Laws. It's heavily dystopian, I suppose?
I haven't been paying much attention, but has Berkut fixed Social Media yet?
Quote from: Razgovory on January 10, 2022, 04:47:12 PM
I haven't been paying much attention, but has Berkut fixed Social Media yet?
Nearly. A major breakthrough was banning ALL CAPS words from social media.
Shit, was that my job?
Quote from: crazy canuck on January 10, 2022, 02:57:19 PM
Good article - all of those points are of concern.
Yes. And virtually none of them are new. You could have written essentially the same article in the early 20th century based on the technology of that era.
Quote from: Berkut on January 11, 2022, 08:42:17 AM
Shit, was that my job?
Would you rather handle Big Tech?
Quote from: The Minsky Moment on January 11, 2022, 09:32:48 AM
Quote from: crazy canuck on January 10, 2022, 02:57:19 PM
Good article - all of those points are of concern.
Yes. And virtually none of them are new. You could have written essentially the same article in the early 20th century based on the technology of that era.
For various technologies yes - and in fact the article itself makes the same point. But this is all now directly relevant to one pervasive technology.
Quote from: The Minsky Moment on January 11, 2022, 09:32:48 AM
Quote from: crazy canuck on January 10, 2022, 02:57:19 PM
Good article - all of those points are of concern.
Yes. And virtually none of them are new. You could have written essentially the same article in the early 20th century based on the technology of that era.
This ignores that not all technology is the same. It's like arguing that nukes are really no different from bombs, which were no different from cannonballs, which were no different from...
But let's put that aside for now.
Arguing that this is no big deal because we have seen technological innovation before really misses the point. Technological innovation in the past has often, maybe even almost always, brought massive disruption of human society along with it.
The printing press was an incredible technological advance. Invented in the 1440s in Europe, it really took off and revolutionized the ability of everyone to engage in mass communication.
Europe then entered a period of decades of almost constant war that saw somewhere around 10 million people killed. Obviously, it is incorrect to say that this was solely the result of technological change - these things are always complex - but my point is simply that history is replete with bloody revolutions, upheaval, death and destruction. So saying "Well, that's nothing new!" doesn't make me feel any better about the threat. I would rather my kids not live through the next French or Russian revolution, personally, even if we accept that in the long run the outcome was positive. I would rather figure out a way to get the benefits of new technology without the massive butchery and chaos that is so common throughout human history.