How to fix Big Tech and Social Media

Started by Berkut, June 22, 2021, 12:28:14 PM

Sheilbh

Quote from: Admiral Yi on October 25, 2021, 06:27:20 PM
I think it does.  In both cases they are conduits, not creators of incitement.
WhatsApp, Messenger, telephone, email, normal mail are all direct messages to an individual/location. They are (arguably - I think it's questionable around WhatsApp and Messenger at least) just the pipes. In the UK and, I imagine, in the US as well it is a crime to intercept someone's phone calls (and arguably emails) and listen in, or to tamper with their mail. Although I'd note that in the US you did have the Comstock laws, so it's not without precedent.

Facebook is a way of broadcasting to the world, and their entire business model is based on listening in and working out what generates the most engagement - plus developing profiles of us based on those public broadcasts. That is not the business model of Royal Mail or the telcos - they're regulated infrastructure.

I can get behind the argument that they are conduits - but then we should regulate them and treat them like conduits, as a form of infrastructure and public good. Alternatively they're not conduits, in which case it's a different argument: demanding that they be responsible for shutting down, or at least liable for, the content on their platforms (in the same way as other publishers or broadcasters are liable).

I need to read up on the latest status and I don't have a settled view on it, but there's "online harms" legislation which has been working through the system here - lots of white papers and consultations - for about two years (I think it was prompted by the Christchurch terrorist attack being livestreamed on Facebook).
Let's bomb Russia!

Admiral Yi


Jacob

Quote from: grumbler on October 25, 2021, 06:39:01 PM
I think that it is reasonable to ask that companies not monetize incitement by amplifying it, but I have yet to see any suggestion by critics of, say, Facebook that actually proposes a solution to the problem of assholes having access to the internet.  What, precisely, are such critics proposing be done?

I think a lot of it comes down to defining what the problem is. I don't think the problem is "assholes having access to the internet" but rather, as you say, monetizing incitement by amplifying it.

Quote
In February 2019, not long before India's general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company's largest market. They made a profile of a 21-year-old woman, a resident of North India, and began to track what Facebook showed her.

At first, her feed filled with soft-core porn and other, more harmless, fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp.

Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech. "300 dogs died now say long live India, death to Pakistan," one post said, over a background of laughing emoji faces. "These are pakistani dogs," said the translated caption of one photo of dead bodies lined-up on stretchers, hosted in the News Feed.

An internal Facebook memo, reviewed by The Washington Post, called the dummy account test an "integrity nightmare" that underscored the vast difference between the experience of Facebook in India and what U.S. users typically encounter. One Facebook worker noted the staggering number of dead bodies.

About the same time, in a dorm room in northern India, 8,000 miles away from the company's Silicon Valley headquarters, a Kashmiri student named Junaid told The Post he watched as his real Facebook page flooded with hateful messages. One said Kashmiris were "traitors who deserved to be shot." Some of his classmates used these posts as their profile pictures on Facebook-owned WhatsApp.

Junaid, who spoke on the condition that only his first name be used for fear of retribution, recalled huddling in his room one evening as groups of men marched outside chanting death to Kashmiris. His phone buzzed with news of students from Kashmir being beaten in the streets — along with more violent Facebook messages.

"Hate spreads like wildfire on Facebook," Junaid said. "None of the hate speech accounts were blocked."

For all of Facebook's troubles in North America, its problems with hate speech and disinformation are dramatically worse in the developing world. Internal company documents made public Saturday reveal that Facebook has meticulously studied its approach abroad — and was well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes.

"The painful reality is that we simply can't cover the entire world with the same level of support," Samidh Chakrabarti, then the company's civic integrity lead, wrote in a 2019 post on Facebook's message board, adding that the company managed the problem by tiering countries for investment.

https://www.washingtonpost.com/technology/2021/10/24/india-facebook-misinformation-hate-speech/

... I accidentally hit post before I got anywhere with this  :blush:, so if you're wondering what the point is, there isn't much of one other than to show that FB is doing notably worse in some markets (India) than others (the US), and that they're aware of it.

DGuller

Quote from: Admiral Yi on October 25, 2021, 05:47:11 PM
If we're going to demand that Facebook shut down incitement, don't we have to do the same for telephone, email and snailmail?
I think such comparisons that don't take into account the efficacy of different methods are sophistry. The fact of the matter is that Facebook is a far more effective conduit of propaganda than two cans tied with a string. Therefore, it may be reasonable to subject Facebook to regulations that two cans tied with a string are not subjected to.

Josquius

That's something I don't get.
Facebook misinformation - micro-targeting, large communities, interlinking all the people in the world to various degrees of separation. I get how this is screwing up the world in a new way.
WhatsApp however...
Being largely person to person, it's fascinating how it manages to fill the same role. I recall reading that among Asian communities WhatsApp is worse than Facebook for spreading dangerous nonsense.
██████
██████
██████

Berkut

I wonder how much you could accomplish by simply making it illegal to sell advertising on social media. Break the incentive that social media sites have to keep users engaged at all costs.
"If you think this has a happy ending, then you haven't been paying attention."

select * from users where clue > 0
0 rows returned

Syt

Quote from: Tyr on October 26, 2021, 09:14:56 AM
That's something I don't get.
Facebook misinformation - micro-targeting, large communities, interlinking all the people in the world to various degrees of separation. I get how this is screwing up the world in a new way.
WhatsApp however...
Being largely person to person, it's fascinating how it manages to fill the same role. I recall reading that among Asian communities WhatsApp is worse than Facebook for spreading dangerous nonsense.

John Oliver had a segment about that, focusing on the spread of misinformation among non-English-speaking communities: https://youtu.be/l5jtFqWq5iU
I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
—Stephen Jay Gould

Proud owner of 42 Zoupa Points.

Sheilbh

Really good piece on the global side/issues of Facebook - one of so many. And the line about the Urdu rules is incredible. This is much of the world's internet:
Quote
How Facebook neglected the rest of the world, fueling hate speech and violence in India
A trove of internal documents show Facebook didn't invest in key safety protocols in the company's largest market.
By Cat Zakrzewski, Gerrit De Vynck, Niha Masih and Shibani Mahtani
October 24, 2021 at 7:00 a.m. EDT

In February 2019, not long before India's general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company's largest market. They made a profile of a 21-year-old woman, a resident of North India, and began to track what Facebook showed her.

At first, her feed filled with soft-core porn and other, more harmless, fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp.

Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech. "300 dogs died now say long live India, death to Pakistan," one post said, over a background of laughing emoji faces. "These are pakistani dogs," said the translated caption of one photo of dead bodies lined-up on stretchers, hosted in the News Feed.

An internal Facebook memo, reviewed by The Washington Post, called the dummy account test an "integrity nightmare" that underscored the vast difference between the experience of Facebook in India and what U.S. users typically encounter. One Facebook worker noted the staggering number of dead bodies.


About the same time, in a dorm room in northern India, 8,000 miles away from the company's Silicon Valley headquarters, a Kashmiri student named Junaid told The Post he watched as his real Facebook page flooded with hateful messages. One said Kashmiris were "traitors who deserved to be shot." Some of his classmates used these posts as their profile pictures on Facebook-owned WhatsApp.

Junaid, who spoke on the condition that only his first name be used for fear of retribution, recalled huddling in his room one evening as groups of men marched outside chanting death to Kashmiris. His phone buzzed with news of students from Kashmir being beaten in the streets — along with more violent Facebook messages.

"Hate spreads like wildfire on Facebook," Junaid said. "None of the hate speech accounts were blocked."


For all of Facebook's troubles in North America, its problems with hate speech and disinformation are dramatically worse in the developing world. Internal company documents made public Saturday reveal that Facebook has meticulously studied its approach abroad — and was well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes.

"The painful reality is that we simply can't cover the entire world with the same level of support," Samidh Chakrabarti, then the company's civic integrity lead, wrote in a 2019 post on Facebook's message board, adding that the company managed the problem by tiering countries for investment.

This story is based on those documents, known as the Facebook Papers, which were disclosed to the Securities and Exchange Commission by whistleblower Frances Haugen, and composed of research, slide decks and posts on the company message board — some previously reported by the Wall Street Journal. It is also based on documents independently reviewed by The Post, as well as more than a dozen interviews with former Facebook employees and industry experts with knowledge of the company's practices abroad.

The SEC disclosures, provided to Congress in redacted form by Haugen's legal counsel and reviewed by a consortium of news organizations including The Post, suggest that as Facebook pushed into the developing world it didn't invest in comparable protections.

According to one 2020 summary, although the United States comprises less than 10 percent of Facebook's daily users, the company's budget to fight misinformation was heavily weighted toward America, where 84 percent of its "global remit/language coverage" was allocated. Just 16 percent was earmarked for the "Rest of World," a cross-continent grouping that included India, France and Italy.

Facebook spokesperson Dani Lever said that the company had made "progress" and had "dedicated teams working to stop abuse on our platform in countries where there is heightened risk of conflict and violence. We also have global teams with native speakers reviewing content in over 70 languages along with experts in humanitarian and human rights issues."

Many of these additions had come in the past two years. "We've hired more people with language, country and topic expertise. We've also increased the number of team members with work experience in Myanmar and Ethiopia to include former humanitarian aid workers, crisis responders, and policy specialists," Lever said.

Meanwhile, in India, Lever said, the "hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems."


Globally there are over 90 languages with over 10 million speakers. In India alone, the government recognizes 122 languages, according to its 2001 census.

In India, where the Hindu-nationalist Bharatiya Janata Party — part of the coalition behind Modi's political rise — deploys inflammatory rhetoric against the country's Muslim minority, misinformation and hate speech can translate into real-life violence, making the stakes of these limited safety protocols particularly high. Researchers have documented the BJP using social media, including Facebook and WhatsApp, to run complex propaganda campaigns that scholars say play to existing social tensions against Muslims.

Members from the Next Billion Network, a collective of civil society actors working on technology-related harms in the global south, warned Facebook officials in the United States that unchecked hate speech on the platform could trigger large-scale communal violence in India, in multiple meetings held between 2018 and 2019, according to three people with knowledge of the matter, who spoke on the condition of anonymity to describe sensitive matters.

Despite Facebook's assurances it would increase moderation efforts, when riots broke out in Delhi last year, calls to violence against Muslims remained on the site, despite being flagged, according to the group. Gruesome images, claiming falsely to depict violence perpetrated by Muslims during the riots, were found by The Post. Facebook labeled them with a fact check, but they remained on the site as of Saturday.

More than 50 people were killed in the turmoil, the majority of them Muslims.

"They were told, told, told and they didn't do one damn thing about it," said a member of the group who attended the meetings. "The anger [from the global south] is so visceral on how disposable they view our lives."

Facebook said it removed content that praised, supported or represented violence during the riots in Delhi.

India is the world's largest democracy and a growing economic powerhouse, making it more of a priority for Facebook than many other countries in the global south. Low-cost smartphones and cheap data plans have led to a telecom revolution, with millions of Indian users coming online for the first time every year. Facebook has made great efforts to capture these customers, and its signature app has 410 million users according to the Indian government, more than the entire population of the United States.

The company activated large teams to monitor the platform during major elections, dispatched representatives to engage with activists and civil society groups, and conducted research surveying Indian people, finding many were concerned about the quantity of misinformation on the platform, according to several documents.

But despite the extra attention, the Facebook that Indians interact with is missing many of the key guardrails the company deployed in the United States and other mostly-English-speaking countries for years. One document stated that Facebook had not developed algorithms that could detect hate speech in Hindi and Bengali, despite them being the fourth- and seventh-most spoken languages in the world, respectively. Other documents showed how political actors spammed the social network with multiple accounts, spreading anti-Muslim messages across people's news feeds in violation of Facebook's rules.

The company said it introduced hate-speech classifiers in Hindi in 2018 and Bengali in 2020; systems for detecting violence and incitement in Hindi and Bengali were added in 2021.


Pratik Sinha, co-founder of Alt News, a fact-checking site in India that routinely debunks viral fake and inflammatory posts, said that while misinformation and hate speech proliferate across multiple social networks, Facebook sometimes doesn't take down bad actors.

"Their investment in a country's democracy is conditional," Sinha said. "It is beneficial to care about it in the U.S. Banning Trump works for them there. They can't even ban a small-time guy in India."

'Bring the world closer together'

Facebook's mission statement is to "bring the world closer together," and for years, voracious expansion into markets beyond the United States has fueled its growth and profits.

Social networks that let citizens connect and organize became a route around governments that had controlled and censored centralized systems like TV and radio. Facebook was celebrated for its role in helping activists organize protests against authoritarian governments in the Middle East during the Arab Spring.

For millions of people in Asia, Africa and South America, Facebook became the primary way they experience the Internet. Facebook partnered with local telecom operators in countries such as Myanmar, Ghana and Mexico to give free access to its app, along with a bundle of other basic services like job listings and weather reports. The program, called "Free Basics," helped millions get online for the first time, cementing Facebook's role as a communication platform all around the world and locking millions of users into a version of the Internet controlled by an individual company. (While India was one of the first countries to get Free Basics in 2015, backlash from activists who argued that the program unfairly benefited Facebook led to its shutdown.)

In late 2019, the Next Billion Network ran a multicountry study, separate from the whistleblower's documents, of Facebook's moderation and alerted the company that large volumes of legitimate complaints, including death threats, were being dismissed in countries throughout the global south, including Pakistan, Myanmar and India, because of technical issues, according to a copy of the report reviewed by The Post.

It found that cumbersome reporting flows and a lack of translations were discouraging users from reporting bad content, the only way content is moderated in many of the countries that lack more automated systems. Facebook's community standards, the set of rules that users must abide by, were not translated into Urdu, the national language of Pakistan. Instead, the company flipped the English version so it read from right to left, mirroring the way Urdu is read.

In June 2020, a Facebook employee posted an audit of the company's attempts to make its platform safer for users in "at-risk countries," a designation given to nations Facebook marks as especially vulnerable to misinformation and hate speech. The audit showed Facebook had massive gaps in coverage. In countries including Myanmar, Pakistan and Ethiopia, Facebook didn't have algorithms that could parse the local language and identify posts about covid-19. In India and Indonesia, it couldn't identify links to misinformation, the audit showed.

In Ethiopia, the audit came a month after its government postponed federal elections, a major step in a buildup to a civil war that broke out months later. In addition to being unable to detect misinformation, the audit found Facebook also didn't have algorithms to flag hate speech in the country's two biggest local languages.

After negative coverage, Facebook has made dramatic investments. For example, after a searing United Nations report connected Facebook to an alleged genocide against the Rohingya Muslim minority in Myanmar, the region became a priority for the company, which began flooding it with resources in 2018, according to interviews with two former Facebook employees with knowledge of the matter, who, like others, spoke on the condition of anonymity to describe sensitive matters.


Facebook took several steps to tighten security and remove viral hate speech and misinformation in the region, according to multiple documents. One note, from 2019, showed that Facebook expanded its list of derogatory terms in the local language and was able to catch and demote thousands of slurs. Ahead of Myanmar's 2020 elections, Facebook launched an intervention that promoted posts from users' friends and family and reduced viral misinformation, employees found.

A former employee said that it was easy to work on the company's programs in Myanmar, but there was less incentive to work on problematic issues in lower-profile countries, meaning many of the interventions deployed in Myanmar were not used in other places.

"Why just Myanmar? That was the real tragedy," the former employee said.

'Pigs' and fearmongering

In India, internal documents suggest Facebook was aware of the number of political messages on its platforms. One internal post from March shows a Facebook employee believed a BJP worker was breaking the site's rules to post inflammatory content and spam political posts. The researcher detailed how the worker used multiple accounts to post thousands of "politically-sensitive" messages on Facebook and WhatsApp during the run-up to the elections in the state of West Bengal. The efforts broke Facebook's rules against "coordinated inauthentic behavior," the employee wrote. Facebook denied that the operation constituted coordinated activity, but said it took action.

A case study about harmful networks in India shows that pages and groups of the Rashtriya Swayamsevak Sangh, an influential Hindu-nationalist group associated with the BJP, promoted fearmongering anti-Muslim narratives with violent intent. A number of posts compared Muslims to "pigs" and cited misinformation claiming the Koran calls for men to rape female family members.

The group had not been flagged, according to the document, given what employees called "political sensitivities." In a slide deck in the same document, Facebook employees said the posts also hadn't been found because the company didn't have algorithms that could detect hate speech in Hindi and Bengali.


Facebook in India has been repeatedly criticized for a lack of a firewall between politicians and the company. One deck on political influence on content policy from December 2020 acknowledged the company "routinely makes exceptions for powerful actors when enforcing content policy," citing India as an example.

"The problem which arises is that the incentives are aligned to a certain degree," said Apar Gupta, executive director of the Internet Freedom Foundation, a digital advocacy group in India. "The government wants to maintain a level of political control over online discourse and social media platforms want to profit from a very large, sizable and growing market in India."


Facebook says its global policy teams operate independently and that no single team's opinion has more influence than the other.

Earlier this year, India enacted strict new rules for social media companies, increasing government powers by requiring the firms to remove any content deemed unlawful within 36 hours of being notified. The new rules have sparked fresh concerns about government censorship of U.S.-based social media networks. They require companies to have an Indian resident on staff to coordinate with local law enforcement agencies. The companies are also required to have a process where people can directly share complaints with the social media networks.

But Junaid, the Kashmiri college student, said Facebook had done little to remove the hate-speech posts against Kashmiris. He went home to his family after his school asked Kashmiri students to leave for their own safety. When he returned to campus 45 days after the 2019 bombing, the Facebook post from a fellow student calling for Kashmiris to be shot was still on their account.

Regine Cabato in Manila contributed to this report.

And it's worth noting the Vietnam story on the risks-to-free-speech side. There Facebook formally agreed with the government in 2020 to take down and report "anti-state" content, but it had been taking such content down since 2018, just without a formal agreement. It's a strange situation where we can defend Facebook as a promoter of free speech when that is something it will absolutely sacrifice if doing so opens a potential market in an authoritarian country.
Let's bomb Russia!

Berkut

"If you think this has a happy ending, then you haven't been paying attention."

select * from users where clue > 0
0 rows returned

Tamas

I look at the West, aghast at Facebook, with grave concerns shared by all its governments and traditional media, and then I look at Hungary, where social media is the only medium not controlled and/or dominated by the government. I don't know what pro-democracy people would do without it. Well, they could rely on the few barely-visited online news sites they have left, I guess, but even most of those have stopped allowing comment sections because of the laws in place which seek to protect the populace from extreme and offensive comments and content.

It's not social media, but it is very much targeted content: I should do a longer post about how Youtube has effectively turned into independent TV in Hungary. Multiple channels have sprung up by now doing the political and cultural interviews and analyst shows which in normal countries would be on TV. The non-government-approved part of Hungarian political life has entirely moved to Youtube and Facebook.

Get a grip, people. Banning advertising on, i.e. shutting down, social media? Orban most certainly approves. Considering Mein Kampf was printed as a book, I'd consider banning ads in print media as well.

Berkut

Quote from: Tamas on October 27, 2021, 03:31:02 AM
I look at the West, aghast at Facebook, with grave concerns shared by all its governments and traditional media, and then I look at Hungary, where social media is the only medium not controlled and/or dominated by the government. I don't know what pro-democracy people would do without it. Well, they could rely on the few barely-visited online news sites they have left, I guess, but even most of those have stopped allowing comment sections because of the laws in place which seek to protect the populace from extreme and offensive comments and content.

It's not social media, but it is very much targeted content: I should do a longer post about how Youtube has effectively turned into independent TV in Hungary. Multiple channels have sprung up by now doing the political and cultural interviews and analyst shows which in normal countries would be on TV. The non-government-approved part of Hungarian political life has entirely moved to Youtube and Facebook.

Get a grip, people. Banning advertising on, i.e. shutting down, social media? Orban most certainly approves. Considering Mein Kampf was printed as a book, I'd consider banning ads in print media as well.

Was Mein Kampf written in an attempt to sell ad revenue? I had no idea!

Nobody has said anything about shutting down social media. Total red herring.

Right now social media is by and large completely unregulated. We are talking about how to regulate it so that its incentives line up with society's needs from modern communication. This is the same problem every new media technology has ever run into.
"If you think this has a happy ending, then you haven't been paying attention."

select * from users where clue > 0
0 rows returned

Tamas

If advertising on social media is made illegal, how are they to maintain themselves financially?

Berkut

Quote from: Tamas on October 27, 2021, 09:25:18 AM
If advertising on social media is made illegal, how are they to maintain themselves financially?

I don't know. But lots of products manage to survive without advertising.

Are you arguing that the only possible way for social media to work is through advertising? That is a rather different argument, isn't it?
"If you think this has a happy ending, then you haven't been paying attention."

select * from users where clue > 0
0 rows returned

Tamas

Quote from: Berkut on October 27, 2021, 09:36:11 AM
Quote from: Tamas on October 27, 2021, 09:25:18 AM
If advertising on social media is made illegal, how are they to maintain themselves financially?

I don't know. But lots of products manage to survive without advertising.

Are you arguing that the only possible way for social media to work is through advertising? That is a rather different argument, isn't it?

If you want to be grumbler about it, then yes, I skipped to the only logical conclusion of your suggestion without outlining the obvious logical steps to get there.

I haven't encountered an online product that didn't have either ads or another method of generating revenue / covering running costs.

Additionally, I don't think such a law would even be feasible to enact. You would have to define "social media" in a way that's not trivial to weasel out of, without also banning ads on things like online newspapers with comment sections, forums, etc.

But I am happy to concede that there might be a way involving no money that would allow social media to exist, because it doesn't matter.

More importantly, I just disagree with the basic premise that "solving" social media would solve the underlying political issues that social media exposes us to. We could more easily go back to the times when more extreme opinions were ignored and left to fester in their own corners, but that's it. And yes, those corners would have a slightly harder time growing and combining, but it would not make them go away.

And there are great, great political benefits to social media from the point of view of enabling and maintaining free speech. Yes, it allows the nazis and other assorted scum to get together and form their bubbles, but it also gives the same tools to liberals and pro-democracy forces who are otherwise deliberately kept isolated and without a platform. The same algorithms that feed people viewing far-right content with more far-right content also feed people looking for moderate content in fascist regimes with more moderate content.

Yes, there are risks and negatives, and maybe we are not yet at the optimum level of regulation. But I feel like there's way too much focus given to social media. People see things they don't like on social media - be it racism, gay people, ethnic hatred, liberal views, political conmen, etc. - and the first reaction is "this needs to be regulated!", which is of course something that will be happily parroted by governments and the more traditional media because it's in their interest. And suddenly, you find yourself on the same platform as the Hungarian PM.

Tamas

https://www.theguardian.com/commentisfree/2021/oct/24/society-blame-big-tech-online-regulation

Quote
The push for online regulation risks absolving the right of responsibility for the toxicity they continually stoke


Every time a dramatic, unforeseen political event happens, there follows a left-field fixation that some out-of-control technology created it. Whenever this fear about big tech comes around we are told that something new, even more toxic, has infiltrated our public discourse, triggering hatred towards politicians and public figures, conspiracy theories about Covid and even major political events like Brexit. The concern over anonymity online becomes a particular worry – as if ending it will somehow, like throwing a blanket at a raging house fire, subdue our fevered state.

You may remember that during the summer's onslaught of racist abuse towards black players in the England football team, instead of reckoning with the fact that racism still haunts this country, we busied ourselves with bluster about how "cowards" online would be silenced if we only just demanded they identify themselves.


We resort to this explanation, that shadowy social media somehow stimulate our worst impulses, despite there being little evidence that most abuse is from unidentifiable sources. After England's defeat in the Euro 2020 final, Twitter revealed that 99% of the abuse on its site directed at England footballers was not anonymous.

The same arguments were made in the aftermath of MP David Amess's killing – that doing something about online abuse would make politicians safer. It was a rehash of a 2018 moment when Theresa May pledged to regulate online behaviour because a "tone of bitterness and aggression has entered into our public debate".

Good old social media, always there to paper over the giant cracks of our political failures. Bad tech is a convenient fall guy for a whole gang of perpetrators. It has been particularly useful in recent years, when Brexit has enabled rightwing politicians and press to engage in the most divisive, dangerous rhetoric, particularly towards the country's political and legal institutions, then point to social media when that rhetoric serves its purpose of eroding tolerance and trust.

But when parliament and the supreme court – attacked by the media and politicians for variously being saboteurs, traitors and opponents of the will of the people – come under fire from members of the public, that is an entirely different matter. The faceless public becomes the only protagonist. This allows everyone, from the mainstream press to publishers of far-right conspiracy theories, to distance themselves from the scene of the crime and innocently propose earnest-sounding solutions to our country's crises of racism and loss of faith in our politics.

The corrupting influence of technology companies is also a compelling explanation for them because it means that something can be done. This is partly down to a sort of dominant liberal technocratic sensibility that reaches for a tool kit to fix social and political problems, as one would approach a broken machine. The result is "solutionism", the belief there is a technological remedy for most issues, because human behaviour is essentially rational and can be mapped out, analysed and then adjusted.

It's all so much easier than squaring up to the gnarly facts that the world is messy; humans are infinitely suggestible and manipulable; and most of the time our political behaviour is a manifestation of long-term currents spread by political parties and dominant economic ideologies. This reluctance to trace how we arrived at a place we don't like was clearly demonstrated by the stubbornness with which so many people held to the belief that Brexit was an aberration. Not acknowledging that it was, in fact, a culmination of a campaign that lasted years, and the result of our failed economic model and of decades of anti-immigration obsession. Someone must have cheated, these people told themselves, so a sort of tech calamity thesis carried the day. And the perfect culprit presented itself in the form of Cambridge Analytica and a convenient cartoon cast including shady Russian powers, Nigel Farage and Dominic Cummings.


The right, too, loves a tech panic to explain away unhappy results. Tech growing faster than it can be controlled and then turning on its creators is a universal bogeyman, a nervousness captured in Isaac Asimov's first law of robotics in 1942: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

When companies reach the scale and reach of Facebook, they can appear, to the right, a little too much like big governments infringing on individual privacy and freedoms. This fear is then easily capitalised on, and all sorts of unlikely victims can claim they are silenced by platforms biased against their politics. When Donald Trump intends to launch a new social media network to "stand up to the tyranny of big tech", he is echoing the whine of many across the political spectrum. Those who, rather than admit their thinking is less popular than they would like, prefer to believe they are simply conspired against.

Social media companies do regularly fail in their responsibilities to manage the kind of hate speech and abuse that poses a danger for everyone from vulnerable children to ethnic minorities and members of parliament. It is clear that the management of harmful content online cannot be left to tech platforms themselves and that some form of regulation is now long overdue. One hopes the current UK online safety bill will now address that.

But fixating solely on reforming big tech risks turning into a huge displacement exercise. While we rightly focus on the excesses of tech platforms that have turned abuse and lies into lucre, we must also realise that the bad robot theory is tempting because it places the problem not only outside of our institutions, but outside of our very selves. There are other anonymous players who need to be named in this crisis of discord – those parties in our politics and our media who have created so much discontent and hostility that it all regularly overflows in the sewers of social media.