How much longer is Twitter going to last?
A few weeks
A few months
A few years
About as long as the rest of humanity
Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Since it's about to become even more acutely relevant, here's one of the key recommendations from my OP material in the Media Analysis and Criticism thread:

quote:

Oh my god, log off of Twitter
Are you and your friends getting your news from twitter? You’re hosed in the head. No, seriously, it has hosed you up on levels you cannot recognize. Twitter is brain poison, and the medium is taking every bit as smelly a dump in your brain as every internet-poisoned racist or hot take artist you’ve ever encountered on there. The comedians you follow aren’t more insightful, the journos you follow aren’t giving you inside scoops, and the information you’re getting is virtually never reaching you before everyone else.

The power of twitter to gently caress up brains is not just that it gives you material that you agree with, or makes you angry at things you disagree with. Twitter makes the information it gives you seem as if it reflects the world. Feeds are fishbowls. The tiny, myopic, ultratailored worldview that twitter gives you fills up your vision and gives you the illusion of understanding much larger, more complex issues. That’s the real danger of social media - not just being wrong, but being certain. If you get your information from twitter, if you have an account and log in and use it regularly - or even if you just socialize with a set of people who pass you information from the site, well, they’re just mediators for the exact same phenomenon.

Even before Musk's purchase, twitter was already very close to the worst-case scenario for accurately communicating about or discussing things. It's practically optimized to strip information of context and drive people down alienating, radicalizing, ingroup-outgroup oriented approaches to information.

I'm going to work up a post on social network analysis methods for that thread and I'll crosspost it here- I've been putting it off because it's a difficult subject to explain concisely without a bunch of example images.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Discord was, for a period, notorious as a recruiting platform for the alt-right; the siloed servers and the gaming focus made it highly effective for this. I don't have much info on how the company tried to address this, though I recall that at least some tightening of policies occurred.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Jeffrey should be running ads for SA right now, drinking that milkshake.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Main Paineframe posted:

Maybe planting physical hardware to maintain network access or something? I don't know. He closed the offices for the initial layoff wave, too. I guess with layoffs at this scale, it's not like you can just have security show up and walk everyone out.

It really does stand out how much Musk deeply distrusts Twitter employees, too. We saw a lot of that during the transition too - nearly the entire moderation staff was locked out of their mod tools for a few days, and he sent managers scouring their payrolls to make sure that their teams existed and that there weren't any nonexistent "ghost employees" drawing a salary.

It's bizarre - I haven't heard of him being nearly this paranoid in his other companies. Is it because he inherited the workforce and didn't have a chance to stuff it with toadies from the very beginning? Or is he just scared that it's full of liberal operatives out to destroy free speech at any cost? Who the gently caress knows, but it's honestly impressive how quickly he's tanked employee morale.

It may be to find some gossamer-thin pretext for a narrative of internal sabotage.

Out of curiosity, what has all of this done to Musk’s net worth?

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Tuxedo Gin posted:

I've never in my life given a poo poo or clicked on anything about Tesla stock, and yet Twitter now is constantly pushing tweets about how I need to go all in on Tesla to my feed.

I know nothing about regulation in this area; would that be legal?

vvv this, then, might be the purpose for which Musk ultimately sought to buy Twitter.

Discendo Vox fucked around with this message at 02:53 on Nov 27, 2022

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

koolkal posted:

He could also just buy the loan himself or pay it off and he would own it debt free. He would have to sell more Tesla stock though

Is this closer to the leveraged buyout/firesale scenario?

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

evilweasel posted:

this would be doubling down on the purchase, throwing an additional $13b (maybe he can secure a discount) after the previous bad money. but it would delever the company because it would eliminate all funded debt, so as long as twitter could generate positive cash flow it wouldn't need to go bankrupt

I see, thanks. I think I’m still struggling to comprehend how thoroughly Musk’s brain is boiled. A competent monster would be able to do so, so much harm with a privately held twitter…

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Twitter dissolves Trust and Safety Council
Meanwhile, a former top Twitter official fled his home amid attacks following Musk tweets

quote:

Twitter on Monday night abruptly dissolved its Trust and Safety Council, the latest sign that Elon Musk is unraveling years of work and institutions created to make the social network safer and more civil.

Members of Twitter’s Trust and Safety Council received an email with the subject line, “Thank You,” that informed them the council was no longer “the best structure” to bring “external insights into our product and policy development work.”

The dissolution email arrived less than an hour before members of the council were expecting to meet with Twitter executives via Zoom to discuss recent developments, according to people familiar with the matter who spoke on the condition of anonymity to discuss the plans.

Dozens of civil rights leaders, academics and advocates from around the world had volunteered their time for years to help improve safety on the platform.

“We are grateful for your engagement, advice and collaboration in recent years and wish you every success in the future,” said the email, which was simply signed “Twitter.”

In less than two months, Musk has undone years of investments in trust and safety at Twitter — dismissing key parts of the workforce and bringing back accounts that previously had been suspended.

The Trust and Safety Council unraveled after Musk himself had pitched the creation of a content moderation council that would have weighed in on key content moderation decisions, but later appeared to change his mind about introducing such a body.

And in the process, he has exposed some of the company’s current and former employees to online harassment.

Yoel Roth, Twitter’s former head of trust and safety, and his family were forced from their home after Elon Musk’s tweets misrepresented Roth’s academic writing about sexual activity and children. The online mob also sent threats to people Roth had replied to on Twitter, forcing some of Roth’s family and friends to delete their Twitter accounts, according to a person familiar with Roth’s situation who spoke on the condition of anonymity due to concerns about Roth’s safety.

[article continues at link]

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
When you think about it, has anyone ever seen Elon and Jeffrey in the same place at once?

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Electric Phantasm posted:

What the gently caress does that even mean? This is like the Dilbert guy going "everything is on the table" talking like some lovely movie villain.

Educated guess: it's another "Twitter Files" revelation.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

And this in a thread with Kim Dotcom.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
The folks Musk was posing with at the World Cup final were basically a who's who of evil.
https://www.washingtonpost.com/investigations/2022/12/20/elon-musk-spotted-world-cup-final/

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Can you explain this for the rest of us? I get there are errors happening, but do they represent anything specific?

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Any ramifications for SA embedding?

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
https://mobile.twitter.com/s2pidfuck/status/1622769797955227648

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Sounds like the public would benefit from a free add-on that solely blocks Musk.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Young Freud posted:

So, there's a tweet about an excerpt from a Chinese Passenger Car Association criticizing Tesla that has been getting attention because Twitter pops up this message when anyone interacts with it...
https://twitter.com/ObeseChess/status/1627840456091668480?s=20

It's not just likes, but retweets, bookmarks, etc.

I think we're heading into 320 liability territory.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Rocko Bonaparte posted:

What is section 320?

Sorry, that should be 230 (Section 230 of the Communications Decency Act).

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Even from the available info it's easy to see how this drives homophily within groups previously identified by the algorithm, and the degree to which its extant language model dictates the structure of that homophily. These rules reflect and reinforce informational feedback into existing groups.

...really wish I had time to finish my SNA effortpost for the media lit thread.
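
In the meantime, here's a rough illustration of the kind of measurement I mean. This is just a toy sketch in Python using networkx - the accounts, edges, and cluster labels are all invented - showing how you'd quantify homophily in a follow graph as attribute assortativity: how much more often accounts connect within their assigned group than random mixing would predict.

code:

# Toy sketch: measuring homophily in a follow graph with networkx.
# The accounts, edges, and group labels below are invented for illustration.
import networkx as nx

# Directed follow graph: an edge (a, b) means "a follows b".
G = nx.DiGraph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "alice"), ("alice", "carol"),
    ("carol", "bob"), ("dave", "erin"), ("erin", "dave"),
    ("frank", "erin"), ("dave", "frank"), ("carol", "dave"),
])

# Group labels a platform's models might already have assigned (cluster IDs).
groups = {
    "alice": "cluster_1", "bob": "cluster_1", "carol": "cluster_1",
    "dave": "cluster_2", "erin": "cluster_2", "frank": "cluster_2",
}
nx.set_node_attributes(G, groups, "group")

# Attribute assortativity: +1 means accounts only connect within their own
# group (maximal homophily), 0 means mixing looks random, negative means
# connections mostly cross group lines.
r = nx.attribute_assortativity_coefficient(G, "group")
print(f"group assortativity: {r:.2f}")

If recommendation rules preferentially surface within-cluster content and penalize cross-cluster interaction, you'd expect that number to creep upward over time - that's the feedback loop in miniature.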

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Twitter appears to be systematically blocking functions on tweets that link to or promote Substack pages.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

WarpedLichen posted:

Was this bill already talked about?

https://www.cnn.com/2023/04/12/tech/arkansas-social-media-age-limit/index.html

Is this going to do anything? Was the sole purpose to generate bribes for carve outs?

I've not got time atm to read and research it, but there are a lot of social media restriction and regulation bills floating through every state legislature, R and D, in addition to activity from various federal agencies in the area.

Here's a link to the bill itself:
https://www.arkleg.state.ar.us/Bills/Detail?id=SB396&ddBienniumSession=2023%2F2023R

As a general rule at the level of state legislation, whatever else will be true about a bill like this, every draft and revision will also be incompetently drafted. That complicates reading hidden motives into it.

Discendo Vox fucked around with this message at 20:32 on Apr 15, 2023

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Adenoid Dan posted:

Twitter is telling researchers to pay $42,000 or delete all the data they've collected. A provision that allows twitter to require them to delete the data was in the contracts they signed, but presumably they assumed it wasn't put there just to extort them later.

This is going to gently caress up a lot of research and grad programs.

I'll note here the decahose was always of limited utility because it was deliberately not predictably representative, and incomplete for the purposes of network modeling. Prior to its current trashfire ownership, Twitter already had a strong incentive to obscure how it operated and what its effects were (because they were bad), and it was more sophisticated in presenting the illusion of access.
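
To make the "incomplete for network modeling" point concrete: even if the decahose had been a clean uniform 10% sample (it wasn't), any structure that needs both ends of an interaction in the sample - say, a reply plus the full tweet it responds to - survives at roughly 10% of 10%, about 1%. A quick made-up simulation of that effect, with all numbers invented for illustration:

code:

# Toy simulation: why a 10% tweet sample recovers far fewer than 10% of
# two-tweet structures like reply pairs. All numbers here are invented.
import random

random.seed(0)
N_PAIRS = 100_000      # hypothetical (original tweet, reply) pairs
SAMPLE_RATE = 0.10     # idealized decahose: uniform 10% of all tweets

recovered_replies = 0  # the reply itself landed in the sample
recovered_pairs = 0    # both the reply AND its parent landed in the sample

for _ in range(N_PAIRS):
    parent_sampled = random.random() < SAMPLE_RATE
    reply_sampled = random.random() < SAMPLE_RATE
    if reply_sampled:
        recovered_replies += 1
        if parent_sampled:
            recovered_pairs += 1

print(f"replies seen:      {recovered_replies / N_PAIRS:.1%}")  # ~10%
print(f"full reply pairs:  {recovered_pairs / N_PAIRS:.1%}")    # ~1%

And that's the optimistic, uniform-sampling case; a sample that isn't predictably representative is worse, because you can't even correct for the bias.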

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

FlamingLiberal posted:

Not sure if anyone has been following this dumb bullshit, but a bunch of Blue Checks, Joe Rogan, and some rear end in a top hat who works for The Blaze have been harassing a famous immunologist who works at a children's hospital and created a free Covid vaccine for poor countries because he didn't want to debate loving RFK Jr on Rogan's show. This started on Twitter but then this morning the same rear end in a top hat who harassed Brittney Griner at the airport a week or so ago showed up at the doctor's house to harass him.

Could you name the doc or the harasser? I want to look into the organizations involved a bit more.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Okay, interesting, thanks. It doesn't appear to be coordinated by any larger group so much as that Rogan and Stein (not, notably, higher-ups at the Blaze; there's no broader coverage push) have realized there's synergy in feeding each other's viewer groups.

I've not covered RFK in detail, but I've got some effortposts on the origins, causes and methods of the antivax movement over in the covid thread that may be of interest.

Discendo Vox fucked around with this message at 02:05 on Jun 19, 2023

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Much of the "stickiness" of twitter is itself a product of design; it's the same "but everyone else is on..." dynamic that constitutes the market-power exploitation at the core of all the "dominant" social media platforms that seek large-scale advertising as part of their business model. We had this conversation about Facebook.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Neo Rasa posted:

Is this some weird thing where the same way Musk's fans think he's going to save human civilization or whatever, he's expecting some rando fan of his/his fans to save Twitter and his rep or something? Like it's just jaw dropping to me to, like you say, just casually drop something like that to whoever.

Musk is simultaneously a sociopath, thoroughly poisoned by all the same things that make twitter bad for everyone else, further brainwormed by whatever weird PR bot poo poo he pulls (which includes drawing attention by saying things like this), and frequently high off his gourd.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
I suspect that facebook treats even scrolling over a post as a read for the purposes of interest analysis, meaning that the user is very rapidly shunted into a feedback loop on however a topic is tagged.
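
Purely speculative sketch of the mechanism I mean - this is not any platform's actual ranking code, just an invented illustration of how counting a mere impression ("scrolled over it") as weak interest, then ranking the feed by accumulated interest, chases its own tail:

code:

# Speculative sketch of an impression-driven interest feedback loop.
# The topics, weights, and probabilities are all invented for illustration.
import random

random.seed(1)
TOPICS = ["cars", "politics", "cooking", "sports"]
interest = {t: 1.0 for t in TOPICS}   # start with no real preference

IMPRESSION_WEIGHT = 0.05  # a post merely scrolled past still counts a little
CLICK_WEIGHT = 1.0        # explicit engagement counts much more

def pick_feed(k=10):
    # Sample the next feed in proportion to accumulated interest scores.
    return random.choices(TOPICS, weights=[interest[t] for t in TOPICS], k=k)

for session in range(50):
    for topic in pick_feed():
        interest[topic] += IMPRESSION_WEIGHT           # "scrolled over" = read
        if topic == "cars" and random.random() < 0.3:  # user genuinely likes cars
            interest[topic] += CLICK_WEIGHT

total = sum(interest.values())
print({t: f"{interest[t] / total:.0%}" for t in TOPICS})
# The clicked topic dominates quickly, and impression-only topics drift with
# whatever the feed happened to show - exposure and "interest" feed each other.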

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Bar Ran Dun posted:

Do you think on phones or tablets that they can track at the level of what one’s eyes are looking at?

Edit: I think I remember seeing patent stuff for that posted at some point. No idea when that was.

Not generally; I think something just coming onscreen is how it's worked. In practice it may be possible to track mouseover, and more broadly, one could use existing research to make assumptions about where onscreen someone looks most frequently.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Oh, that- well sure, with camera access, gaze tracking tech exists.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Imagine being that rich and that insecure.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

I AM GRANDO posted:

What was The Metaverse? Facebook is the same piece of garbage it’s been for ~8 years.

This video lays it out.

e:f;b

Discendo Vox fucked around with this message at 05:36 on Jul 24, 2023

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Young Freud posted:

Quick, while he's abandoned the trademark, someone register Twitter and just have it be Twitter: A Something Awful Company. That way, when he has to go back because everyone else is a step ahead of him regarding the X copyright, he has to pay all of us off.

You have to actually be conducting business in commerce using the mark. I'm not sure we'd want to sully the Something Awful brand with that association.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
From the Washington Post:

Under India’s pressure, Facebook let propaganda and hate speech thrive

The whole thing is worth a read- it's got excellent internal links to additional sources, and I've not got time atm to copy them all in. Of note, the propaganda campaigns under discussion passed into other countries and userbases. Gosh, if only there was some sort of media literacy thread to discuss how to handle this sort of thing.

quote:

Nearly three years ago, Facebook’s propaganda hunters uncovered a vast social media influence operation that used hundreds of fake accounts to praise the Indian army’s crackdown in the restive border region of Kashmir and accuse Kashmiri journalists of separatism and sedition.

What they found next was explosive: The network was operated by the Indian army’s Chinar Corps, a storied unit garrisoned in the Muslim-majority Kashmir Valley, the heart of Indian Kashmir and one of the most militarized regions in the world.

But when the U.S.-based supervisor of Facebook’s Coordinated Inauthentic Behavior (CIB) unit told colleagues in India that the unit wanted to delete the network’s pages, executives in the New Delhi office pushed back. They warned against antagonizing the government of a sovereign nation over actions in territory it controls. They said they needed to consult local lawyers. They worried they could be imprisoned for treason.

Those objections staved off action for a full year while the Indian army unit continued to spread disinformation that put Kashmiri journalists in danger. The deadlock was resolved only when top Facebook executives intervened and ordered the fake accounts deleted.

Facebook’s cautious approach to moderating pro-government content in India was often exacerbated by a long-standing dynamic: Employees responsible for rooting out hackers and propagandists — often based in the United States — frequently clashed with executives in India who were hired for their political experience or relationships with the government, and who held political views that aligned with the BJP’s.

Interviews with more than 20 current and former employees and a review of newly obtained internal Facebook documents illustrate how executives repeatedly shied away from punishing the BJP or associated accounts. The interviews and documents show that local Facebook executives failed to take down videos and posts of Hindu nationalist leaders, even when they openly called for killing Indian Muslims.

In 2019, after damning media reports and whistleblower disclosures, Facebook’s parent company, now named Meta, bowed to pressure and hired an outside law firm to examine its handling of human rights in India. That probe found that Facebook did not stop hate speech or calls for action ahead of violence, including a bloody religious riot in Delhi in 2020 that was incited by Hindu nationalist leaders and left more than 50 people, mostly Muslims, dead. Meta never published the document, strictly limited which executives saw it and issued a public summary that emphasized the culpability of “third parties.”

Social media companies today do not lose much when they call out the Russian or Chinese governments for propaganda or dismantle networks of fake accounts tied to those countries. Most U.S. social media platforms are banned in those countries, or they do not generate significant revenue there.

[Vox sez: I think we can think of a few exceptions here!]

But India is at the forefront of a worrying trend, according to Silicon Valley executives from multiple companies who have dealt with the issues. The Modi administration is setting an example for how authoritarian governments can dictate to American social media platforms what content they must preserve and what they must remove, regardless of the companies’ rules. Countries including Brazil, Nigeria and Turkey are following the India model, executives say. In 2021, Brazil’s then president, Jair Bolsonaro, sought to prohibit social networks from removing posts, including his own, that questioned whether Brazil’s elections would be rigged. In Nigeria, then-President Muhammadu Buhari banned Twitter after it removed one of his tweets threatening a severe crackdown against rebels.

The day before May’s tight election in Turkey, Twitter agreed to ban accounts at the direction of the administration of President Recep Tayyip Erdogan, including that of investigative journalist Cevheri Guven, an Erdogan critic.

[...]
In public, Indian officials argued that Kashmir’s Muslims would benefit from closer integration with India. Meanwhile, the Chinar Corps covertly spread its messaging. Jibran Nazir, a Kashmiri journalist working in central India, said he was “shocked” to one day find his photo adopted as the avatar of two anonymous Twitter accounts spreading the #NayaKashmir, or “New Kashmir” hashtag, which touted Kashmir’s prosperity under New Delhi’s control.

“They were recently created accounts that had more than 1,000 followers each,” Nazir recalled. “The accounts wanted to show Kashmiris are doing well, which they’re not.”

The Chinar Corps’ stealth operation kept pushing that line — but also went further. It singled out independent Kashmiri journalists by name, disclosing their personal information and attacking them using the anonymous Twitter accounts @KashmirTraitors and @KashmirTraitor1, according to Stanford’s analysis and The Post’s review.

One target was journalist Qazi Shibli and his publication, the Kashmiriyat.

“@TheKashmiriyat posts #fake news on the various operations conducted by the #IndianArmy causing hate amongst people for the #Army,” @KashmirTraitors wrote in a series of tweets. “Even the positive things like ration distribution that are happening in #Kashmir are shown in a negative prospect in posts of @TheKashmiriyat.”

“The #traitor behind this account and website is @QaziShibli (born in 1993) who has been detained numerous times under various charges for cybercrimes and posting content against national security.”

Shibli’s home was raided, and he was jailed repeatedly on charges including violation of the Public Safety Act, according to the Committee to Protect Journalists. The pressure online was crippling, Shibli said.

These problems were systemic- I've not copied in the material, but it was driven by politically connected execs who, even when replaced, were replaced by people even closer to Modi.

quote:

After U.S. Facebook employees in 2020 warned that Indian Hindu nationalist groups were spreading the hashtag #coronajihad, implying that Indian Muslims were intentionally spreading the coronavirus in a conspiracy to wage holy war, a content policy staffer for the region pushed back, arguing that the meme didn’t amount to hate speech because it wasn’t explicitly targeting a people, two former employees recalled. (Facebook eventually barred searches for that hashtag, but searching for just “coronajihad” returns accusatory posts.)

In late 2019, Facebook data scientist Sophie Zhang tried to remove an inauthentic network that she said included the page of a BJP member of Parliament. She was repeatedly stymied by the company’s special treatment of politicians and partners, known as Xcheck or “cross check.” Facebook later said many of the accounts were taken down though it could not establish that the BJP member of parliament’s page had been part of the network.

The following year, documents obtained by Facebook employee-turned-whistleblower Frances Haugen show, Kashmiris were deluged with violent images and hate speech after military and police operations there. Facebook said it subsequently removed some “borderline content and civic and political Groups from our recommendation systems.”

In one internal case study on India seen by The Post, Facebook found that pages with ties to the Hindu nationalist umbrella organization Rashtriya Swayamsevak Sangh (RSS) compared Muslims to “pigs” and falsely claimed that the Quran calls for men to rape female family members. But Facebook employees did not internally nominate the RSS — with which the BJP is affiliated — for a hate group designation given “political sensitivities,” the case study found.

Facebook knew the problems were severe and chose to bury the results.

quote:

As the controversy over its handling of hate in India grew in 2019, Facebook hired the law firm Foley Hoag to study and write about its performance there in what is called a human rights impact assessment. Some rights groups worried that the firm would go easy on Facebook because one of its human rights lawyers at the time, Brittan Heller, was married to Gleicher, Facebook’s head of security policy.

But the firm interviewed outside experts and Facebook employees and found that dozens of pages that were calling Muslims rapists and terrorists and describing them as an enemy to be eliminated had not been removed, even after being reported.

Foley Hoag cited multiple underlying issues, including the lack of local experts in hate speech, the application of U.S. speech standards when Indian laws called for greater restriction of attacks on religion, and a legalistic approach that, for example, withheld action if a subject of threats was not explicitly targeted for their ethnicity or religion.

Foley Hoag found that the company allowed incendiary hate speech to spread in the lead-up to deadly riots in Delhi in 2020 and violence elsewhere, according to people briefed on its lengthy document. It recommended that the company publish the report, name a vice president for human rights and hire more people versed in Indian cultures.

Instead of releasing the findings, Facebook wrote a mostly positive four-page summary and buried it toward the end of an 83-page global human rights report in July 2022. That readout said the law firm “noted the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties.” It said the actual report had made undisclosed recommendations, which the company was “studying.”

These problems are ongoing and in all countries.

quote:

Facebook executives similarly downplayed problems reported by outside groups. The London Story, a Netherlands-based human rights group, reported hundreds of posts that it said violated the company’s rules. Facebook asked for more information, and then asked for it in a different format, then said it would work to improve things if the group stayed quiet. When nothing happened, the group succeeded in getting a meeting with the company’s Oversight Board, created to handle a small number of high-profile content disputes.

It took more than a year to remove a 2019 video with 32 million views, according to the London Story’s executive director, Ritumbra Manuvie.

In the video, Yati Narsinghanand, a right-wing cleric, says in Hindi to a crowd: “I want to eliminate Muslims and Islam from the face of earth.” Facebook took it down just before the London Story released a report on the issue in 2022.

Versions were then posted again. One remained visible as of Monday, but on Tuesday, after Facebook was asked for comment, it was no longer available.

Oh hey did you think this was just a facebook thing?

quote:

When Facebook’s investigators brought their Kashmir findings to the India office, they expected a chilly response. The India team frequently argued that Facebook policies didn’t apply to a particular case. Sometimes, they argued that they didn’t apply to sovereign governments.

But this time, their rejection was strident.

“They said they could be arrested and charged with treason,” said a person involved in the dispute.

[...]


Blocked by their own colleagues, Facebook’s U.S. threat team passed the Chinar Corps information to their counterparts at Twitter. Facebook employees said they had been hoping that Twitter would follow the leads and root out the parallel operation on that platform. The team’s members also hoped that Twitter would do the first takedown, giving Facebook political cover so it wouldn’t have to face government retribution alone and its internal dispute could be resolved.

Twitter, which had been more forceful in pushing back against the Indian government, took no action. It told Facebook staff that it was having technical issues.

In truth, the San Francisco company was changing direction.

The Indian police raids and public comments from government officials criticizing the company had scared off firms that Twitter had planned to use for promotion, former Twitter employees said. “We saw a very obvious slowdown in user growth,” one former policy leader said. “The government is very influential there.”

The former executive added: “We had just promised [Wall] Street 3x user growth, and the only way that was going to be possible was with India.”

Another former policy staffer said Twitter’s bigger problem was physical threats to employees, while former safety chief Yoel Roth wrote in the New York Times this month that Twitter’s lawyers had warned that workers in India might be charged with sedition, which carries a death penalty.

In any case, Twitter was tired of leading the way with takedowns, and it changed how it treated the government overall. The company did not respond to a request for comment.

As has been reported elsewhere, the nonprofit groups trying to investigate and stop this stuff have limited ability, and are now under legal threat:

quote:

As Facebook’s India team delayed acting on the Chinar inauthentic network, the propaganda investigators in Washington and California worked on less controversial subjects.

“You have only so much time in the day, and if you know you are going to run into political challenges, you might spend your time investigating in Azerbaijan or somewhere else that won’t be an issue. Call it a chilling effect. That dynamic is real,” said Fishman, the former senior Facebook executive.

Even after the India campaign was addressed, the execs continued to gently caress with it. Note this is after the campaign had been allowed to spread and get socialized, maximizing this effect:

quote:

The impasse continued until the U.S. team demanded action from Nick Clegg, then Meta’s powerful vice president of global affairs, who had been put in charge of India public policy. Clegg was later named president of global affairs.

Finally, after discussions with Facebook’s top lawyers, Clegg ruled in favor of the threat team, employees said.

But the India executives had a request: They asked that Facebook at least break with past practice and not disclose the takedown.

Since coming under fire for failing to spot Russian propagandists using its platform during the 2016 U.S. presidential campaign, Facebook has routinely announced significant removals of disinformation. It often describes what the campaign was trying to do and how it did it, and there is frequently direct attribution to a national government or enough detail for readers to guess.

The idea is to increase transparency that could help disinformation hunters and deter its spreaders from trying again. Smaller takedowns are described more briefly in quarterly summaries.

This time, the India side argued that it would be unwise to embarrass the Indian military and that doing so would increase the likelihood of legal action.

Clegg and Facebook chief legal officer Jennifer Newstead agreed, staffers said. At their direction, Facebook changed its policy to state that it would disclose takedowns unless doing so would endanger employees.

[Vox sez: Why yes, this does incentivize governments conducting propaganda operations to threaten social media employees.]

Following standard practice, Facebook removed the fake accounts, and the official Chinar Corps pages they had been working with on Facebook and Instagram, on Jan. 28, 2022. (After the Indian army publicly complained about the takedown of the official pages, they were reinstated.)

That March, Twitter followed Facebook and quietly removed the Chinar Corps’ parallel network on its platform and shared it with researchers. In private meetings with Facebook and Twitter executives, the army defended its fake accounts and said they were necessary to combat Pakistani disinformation.

Facebook didn’t disclose the takedown, and Twitter hasn’t issued what had been twice-yearly summaries of its enforcement actions since one for the period that ended in December 2021.

A month later, Facebook issued a quarterly “adversarial threat report” that listed takedowns of inauthentic networks targeting users in Iran, Azerbaijan, Ukraine, Brazil, Costa Rica, El Salvador and the Philippines.

It said nothing about India.

(USER WAS PUT ON PROBATION FOR THIS POST)

Discendo Vox fucked around with this message at 21:33 on Sep 30, 2023

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
That was actually a misassigned payment; Visa had just purchased an SA account and sent the check to the wrong place.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Young Freud posted:

You know, I've read the transcripts from this Dealbook interview and they haven't caught how weird his voice is. He's like croaking out words and repeating phrases in this weird cadence. He looks like he's about to transform into a Deep One on-stage.

I do think this is a pattern of behavior that's gotten much worse over time. Dude's the ultimate antidrug PSA.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Nervous posted:

You're not wrong but I never understood the appeal of Something Awful. It's just another poorly moderated Internet forum that will inevitably outrage you and deteriorate your mental health.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Nervous posted:

I mean yeah, but the average level of discourse is better here imo.

Agreed; it's worth digging into the why of it at some point, particularly if we want that to continue to be true.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

Boris Galerkin posted:

I just use 1Password to generate my work 2FA codes though. I already use 1Password as my personal password/2FA manager so it's literally something I already use. iOS's built-in password manager now does 2FA and I'm assuming Android's default version does too, if you don't want an app. I'd understand you guys' opinions on this if this required a dedicated app, but it doesn't, and this way doesn't give work IT any access to your device either. You just scan a QR code in your 2FA app and it's set up.

I also use 1Password, then if I am forced to change it I use 2Password, then 3Password, etc
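
More seriously, for anyone wondering what scanning the QR code actually does: it just hands the app a shared secret, and the rotating six-digit codes are derived from that secret plus the current time, which is why any TOTP app (1Password included) can generate them. A minimal sketch using the pyotp library, with a made-up secret:

code:

# Minimal TOTP sketch with the pyotp library; the secret below is made up.
# A 2FA setup QR code just encodes an otpauth:// URI carrying a secret like this.
import pyotp

secret = "JBSWY3DPEHPK3PXP"     # base32 shared secret (illustrative only)
totp = pyotp.TOTP(secret)

code = totp.now()               # current six-digit code, rotates every 30s
print(code)
print(totp.verify(code))        # True - the server just does the same math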

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.

I AM GRANDO posted:

What could any government do that’s more insidious than what the corporations profiting from the apps do with them every day? It’s like saying the CIA can get into the walk-ins at McDonalds.

The Chinese government has an omnipresent system of combined surveillance, propaganda and censorship directed both inward and outward. McDonalds does not. McDonalds also doesn't have police power.
