|
oliveoil posted:I got the impression the founder thinks search shouldn't be allowed to reveal it though, which makes me wonder how much of it was designed with support for this material in mind. Suppose for a moment that it did reveal the scope of the issue; the only response would be people thinking the search function is broken because it keeps spitting out CP, and then there would be requests to revert it back to what it was before. This might be an extreme example, but it is hardly a different problem from other types of spam and undesirable content cluttering things up. I don't think it is really possible to infer whether the Mastodon creators are secretly pro-CP or actually rabidly anti-CP on this matter, because their actions would be the same in either instance. Even Twitter had functionality to exclude certain accounts from search results, which they frequently applied to people posting NSFW materials on the reg.
|
# ? Jan 4, 2023 01:46 |
|
Cabbit posted:I'm sure whatever big name advertisers are left are going to love having their ads show up alongside a political ad for some white supremacist dickbag. Yeah, they're kinda vague about which causes they're thinking of... Morroque posted:Suppose for a moment that it did reveal the scope of the issue; the only response would be people thinking the search function is broken because it keeps spitting out CP, and then there would be requests to revert it back to what it was before. I like to think the increased awareness would lead to arrests. Am I naive?
|
# ? Jan 4, 2023 02:12 |
|
In my experience, submitting CSAM activity reports is usually handled by the moderators specifically, and usually only after it is either reported to them by other users or detected automatically via an anti-virus-like tool such as PhotoDNA -- after which the reporting process is usually automated. Whether the police follow up on it is usually handled by a law enforcement clearinghouse whose acronym I forget, so it is all rather standardized. (In the US, anyway.) The public availability of the CP in question afterwards is subject to the whim of the Internet Janitors on duty, so it wouldn't be available long enough to really benefit from the general public being able to report on it. This stuff is the mod team's job, not the general user's.
|
# ? Jan 4, 2023 02:27 |
|
Could you not say the same about the Internet as a whole, or a common protocol like e-mail? These systems are used to facilitate the trafficking of CSAM all the time, and the options of what YOU personally can do about it are basically: ignore it, and report it to authorities if it comes to your attention. The fact that Google (I hope) makes it purposefully difficult to search for that poo poo isn't evidence that they're complicit in hiding the fact it exists. It's a very reasonable decision to limit its spread and avoid people seeing disgusting, vile poo poo. If there are individual servers that are not doing due diligence in preventing the sharing of CSAM, then absolutely gently caress them, I hope the operators go to prison. But I don't think you can blame the software, any more than you can blame a search engine or a web browser for the presence of CSAM on the wider internet, or a P2P protocol for pedos using it.
|
# ? Jan 4, 2023 02:30 |
|
A bit about the Mastodon issues referenced in that article: the two instances they're referring to are Pawoo, an instance maintained by Pixiv, which is basically a Japanese equivalent of DeviantArt, and mstdn JP, which is essentially the main Japanese server. Both are based in Japan and run based on Japanese law, and thus have tons of horny drawings of underage anime girls, which is both pretty popular and legal there. Mastodon in general was majority Japanese users prior to the recent Twitter surge. TLDR: close ADTRW (USER WAS PUT ON PROBATION FOR THIS POST)
|
# ? Jan 4, 2023 03:24 |
|
As objectionable as underage cartoons are, they are not CSAM in that no children were harmed in their production.
|
# ? Jan 4, 2023 04:22 |
|
This is such a lovely article. I'm having trouble even really articulating how badly it misrepresents Mastodon and its devs. Their linked evidence about "hiding CSAM" is literally just a project issue where they're discussing how to avoid caching that exact objectionable content from other servers that they can't control. I can't even argue that they're coming from a lack of understanding because there's a reasonably well done article on the same site that discusses Mastodon's / the fediverse's structure with a clear understanding of it.
|
# ? Jan 4, 2023 05:09 |
Yeah, that article is not prioritizing good faith journalism.Clarste posted:As objectionable as underage cartoons are, they are not CSAM in that no children were harmed in their production. Simulated CSAM is still CSAM under your average Western legal doctrine. oliveoil posted:I like to think the increased awareness would lead to arrests. Am I naive? You are. If you follow through the references in the material you posted, you'll see that the specific problem here is Pixiv, a Japanese website, running a Mastodon instance, probably hosted on a server in Japan, where users post loli porn, which in itself is not illegal in Japan. Mastodon founder/developers themselves, on the other hand, would be about as liable for this as the inventor of email is for every case of illegal poo poo being emailed. Other server admins could be liable for allowing reposts, or, e.g., caching search results with such content, though. cinci zoo sniper fucked around with this message at 05:26 on Jan 4, 2023 |
|
# ? Jan 4, 2023 05:23 |
|
It's pretty much Jack Thompson level 'criticism', yeah. On another note, just me or is Twitter barely working today? Most embeds aren't showing up and not just for deleted tweets.
|
# ? Jan 4, 2023 06:29 |
|
nitter/libredirect is breaking way more in the last few days. the GBS musk thread also says that birdsite is very broken for AU/NZ.
|
# ? Jan 4, 2023 06:49 |
|
For about two years, I was using a browser plug-in that automatically removed all algorithm-suggested content and bruteforced the timelines into chronological sorting with retweets and regular tweets in a separate tab. The reason I installed it was because my friends posted so irregularly, and the suggestions were so near-constant, that it buried the connections I cared about, and I didn't learn that a friend of mine was sick in hospital until two weeks later. (Despite him posting about it.) It was also "suggesting" to me all the porn that one specific contact of mine was Liking, even after I had their retweets turned off. Still don't know why it was snitching on them specifically and not anyone else. The algorithm was getting some weird ideas and I couldn't get it to stop, so I got a plug-in which just nuked it all. Anyway, because of that plug-in, web Twitter has remained remarkably stable for me in the face of all of this. Either everything getting affected is the stuff I never liked, or the Github developer running it is doing some black magic to keep fixing all the mistakes client-side. It's ironic, really; for all Musk bloviates about wanting to remove all the bloat, there does theoretically exist a pared-down version of Twitter that actually is much nicer to use. ... it's just too bad that version is the one where paid-promotions and advertising do not exist, and all forms of engagement-manipulation are surreptitiously removed. ... and those forms of engagement-manipulation did have legs. My Twitter usage went from being on constantly to only needing to check the site once a day.
|
# ? Jan 4, 2023 07:01 |
|
Clarste posted:As objectionable as underage cartoons are, they are not CSAM in that no children were harmed in their production. This discussion got me curious, so I looked it up. Turns out US law has other ideas: 18 U.S. Code § 1466A posted:
Granted, the US is not the entire world (turns out shota/loli is legal in Germany because no real children are "harmed" in its production), but it makes me wonder how the gently caress so many American artists get away with openly advertising that they produce this poo poo on social media.
|
# ? Jan 4, 2023 20:34 |
|
Kith posted:This discussion got me curious, so I looked it up. Turns out US law has other ideas: I imagine that's because it would be a waste of time and resources going after someone drawing some dubious pics when that isn't going to do a whole lot. They would also need to prove the work is both obscene and lacks any serious value, which could be difficult. I only know of a single person charged under 1466A, and they also had child porn of real children.
|
# ? Jan 4, 2023 21:19 |
Please create a separate thread for this discussion, if you feel like going in detail.
|
|
# ? Jan 4, 2023 21:56 |
|
lol Ain't nobody going to make a thread about the legality of loli but the tech nightmare thread might work
|
# ? Jan 4, 2023 23:41 |
|
Xand_Man posted:lol Ain't nobody going to make a thread about the legality of loli but the tech nightmare thread might work libertarian thread, duh.
|
# ? Jan 4, 2023 23:46 |
|
I feel like it's pretty on-topic to wonder why underage content on Social Media isn't being more aggressively prosecuted in the Social Media thread, but I'm also not about to get involved with a thread that's almost certainly going to turn into a hideous clusterfuck, thanks all the same
|
# ? Jan 5, 2023 01:24 |
|
Xand_Man posted:lol Ain't nobody going to make a thread about the legality of loli but the tech nightmare thread might work Reopen the DeviantArt thread, just for this discussion.
|
# ? Jan 5, 2023 01:31 |
|
GoutPatrol posted:libertarian thread, duh. Too bad "First off I'm not a pedophile" is not still the thread title
|
# ? Jan 5, 2023 01:35 |
|
Kith posted:I feel like it's pretty on-topic to wonder why underage content on Social Media isn't being more aggressively prosecuted in the Social Media thread, but I'm also not about to get involved with a thread that's almost certainly going to turn into a hideous clusterfuck, thanks all the same I believe the discussion which is specifically forbidden is the "what is/should be the legal definition of CSAM?" discussion. Presumably the moderation of such material is still on topic in the social media thread. The legal definition is important only insofar as it represents the legal obligations of social media networks, not what they ought to do. There's nothing saying you, as the operator of a social media platform, have to put up with creepy poo poo of questionable legality just because you don't believe it to be legally forbidden outright. Reddit already went through that with /r/jailbait, which was probably legal, but definitely disgusting. YouTube went through it too. You don't have to allow that sort of thing, and you shouldn't, even if the courts can't lock people up for it. To compare it to another relevant issue: there's no law against misogynist or racist speech in many/most countries. But that doesn't mean you have to wait until it reaches the point of criminal activity to do something about racist or misogynist speech. You can just kick people out! The Mastodon issue seems to be a question of: "well, what if instance operators don't get rid of that poo poo?" but that fails to recognize the advantages it offers, which is that, if you're operating an instance, you don't have to limit your moderation to what anyone else says, you can just get rid of racists and misogynists and creepy pedos because you don't like them!
|
# ? Jan 5, 2023 02:45 |
|
I think the issue being pitched is the interconnectivity of Mastodon meaning that you are on the same "network" as the posters of objectionable content, but that's a weak-rear end tenuous link made by an article that seems to have a fairly overt agenda against the network.
|
# ? Jan 5, 2023 04:49 |
PT6A posted:I believe the discussion which is specifically forbidden is the "what is/should be the legal definition of CSAM?" discussion. Presumably the moderation of such material is still on topic in the social media thread. Precisely. If you want to discuss “do social media networks abide by the existing CSAM laws appropriately” - this is the thread. However, if you want to discuss “what should count as CSAM”, I’d like it to be a separate thread, so that it’s simpler to gas it or issue the appropriate permabans if anyone goes really off the rails, since that’s a discussion that’s bound to at the very least attract someone too edgy for their own good. cinci zoo sniper fucked around with this message at 08:49 on Jan 5, 2023 |
|
# ? Jan 5, 2023 08:47 |
PT6A posted:To compare it to another relevant issue: there's no law against misogynist or racist speech in many/most countries. But that doesn't mean you have to wait until it reaches the point of criminal activity to do something about racist or misogynist speech. You can just kick people out! The Mastodon issue seems to be a question of: "well, what if instance operators don't get rid of that poo poo?" but that fails to recognize the advantages it offers, which is that, if you're operating an instance, you don't have to limit your moderation to what anyone else says, you can just get rid of racists and misogynists and creepy pedos because you don't like them! The collective owners of the arguably nebulous “western core” of Mastodon have done just that in fact, on several occasions. The server software has functionality for excommunicating specific instances, which severs the ties bidirectionally. Say, we have instances A, B, and C, with B being something polarising, like the server for rolling coal or whatever. If A blocks B, “home users” of neither can access the content of the other. However, if I’m not mistaken, if someone on C reposts something from B, a user from A could see that on C. And so I believe they’ve carried out this blocking against the instances featured in the aforementioned story, and then more recently against some Trump-associated instances - perhaps something else at some point too, since I’m relatively new to Mastodon. I believe there’s also a thing where some instance admins are going to block you if your instance’s blocklist does not contain at the very least all entries of their blocklist (this mechanism was a major part in the squeeze against overtly Trump-aligned instances, if I recall correctly).
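The block semantics described above (instances A, B, and C, with reposts leaking around a block) can be sketched in a few lines. This is a simplified illustrative model, not Mastodon's actual implementation: the `Instance` class, `can_see`, and `can_see_repost` are all invented here for the sake of the example.

```python
# Minimal sketch of the instance-blocking ("defederation") semantics
# described above. Hypothetical model, not real Mastodon code.

class Instance:
    def __init__(self, name):
        self.name = name
        self.blocked = set()  # names of instances this one has defederated

    def block(self, other):
        self.blocked.add(other.name)

def can_see(viewer, author):
    """A block severs ties bidirectionally: home users of neither
    instance can directly access the other's content."""
    return (author.name not in viewer.blocked
            and viewer.name not in author.blocked)

def can_see_repost(viewer, reposter, original_author):
    """The loophole noted above: a user on A can still see B's content
    if C (which neither side has blocked) reposts it."""
    return can_see(viewer, reposter) and can_see(reposter, original_author)

a, b, c = Instance("A"), Instance("B"), Instance("C")
a.block(b)  # A defederates from B

print(can_see(a, b))            # False: direct visibility severed
print(can_see(b, a))            # False: severed both ways
print(can_see_repost(a, c, b))  # True: B's content reaches A via C
```

The blocklist-sharing behaviour mentioned at the end of the post would amount to comparing `blocked` sets between instances before federating, under this same toy model.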
|
|
# ? Jan 5, 2023 08:59 |
|
https://twitter.com/EladNehorai/status/1610760782375170048?s=20&t=YUPrc5ff0iJn2viLBzbNDA
|
# ? Jan 5, 2023 21:02 |
|
I mean yeah, it seems like he asked people like Andy Ngo who he should ban, which says a lot in itself
|
# ? Jan 6, 2023 04:32 |
|
e: thought this was fake at second glance, but at third it's real. This was on Dec 24th; don't remember hearing about it til now though. https://twitter.com/ReutersTech/status/1611138057075318787 OgNar fucked around with this message at 05:02 on Jan 6, 2023 |
# ? Jan 6, 2023 04:58 |
|
OgNar posted:e: nm, it was a fake post I mean, haveibeenpwned found the email I use for Twitter in this breach.
|
# ? Jan 6, 2023 05:01 |
|
Absurd Alhazred posted:I mean, haveibeenpwned found the email I use for Twitter in this breach. Yeah, I saw the name ReutersTech and replies saying fake, but then realized those users were conservatives.
|
# ? Jan 6, 2023 05:07 |
|
At least there don't seem to be passwords there - I'm not sure if I trust password reset or change to still be working.
|
# ? Jan 6, 2023 05:08 |
|
I think this is the breach that happened a while back by exploiting an API. The threat is more about identities than it is about account stealing, although shared passwords do open it up as a vector for grabbing accounts.
|
# ? Jan 6, 2023 15:13 |
|
OgNar posted:e: thought this was fake at 2nd glance, but at 3rd its real It's half right. Twitter users' email addresses and phone numbers were leaked, but it wasn't due to a hack. You know how every social media phone app has a "tell me if anyone in my contacts list has accounts on this site" button? Turns out that works by just uploading a list of phone numbers and email addresses to Twitter, which returns a list of usernames associated with those phone numbers and/or email addresses. An API call like that is inherently an information disclosure risk, and the devs need to proactively add extra security measures to detect and reject abuse, or else someone can just generate a list of every possible phone number and send it to that API endpoint. Unfortunately, Twitter's protections against that are fairly rudimentary. And careless practices by Twitter's devs have repeatedly caused various bugs that stripped away even those rudimentary protections and led to the disclosure of millions of username-phone pairs. This is just more of the same.
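The enumeration attack described in that post can be sketched concretely. Everything here is hypothetical (`lookup_contacts`, `USER_DIRECTORY`, the sample numbers): it is a toy model of why an unthrottled contact-discovery endpoint is an information-disclosure risk, not Twitter's real API.

```python
# Illustrative sketch of contact-discovery abuse: the same endpoint that
# powers "find my friends" can be fed generated numbers instead of a
# genuine contact list. Hypothetical names throughout.

USER_DIRECTORY = {  # server-side mapping: phone number -> username
    "+15550001234": "alice",
    "+15550005678": "bob",
}

def lookup_contacts(phone_numbers):
    """The 'find my friends' endpoint: returns usernames for any
    submitted numbers that belong to registered accounts."""
    return {n: USER_DIRECTORY[n] for n in phone_numbers if n in USER_DIRECTORY}

# Legitimate use: upload your actual contacts.
print(lookup_contacts(["+15550001234"]))  # {'+15550001234': 'alice'}

# Abuse: enumerate an entire number range. Without rate limiting or
# anomaly detection, this bulk-harvests username/phone pairs.
candidates = [f"+1555000{i:04d}" for i in range(10000)]
harvested = lookup_contacts(candidates)
print(len(harvested))  # 2 -- every registered number in the range
```

The "extra security measures" the post mentions would sit in front of `lookup_contacts` in this model: per-account rate limits, caps on batch size, and rejection of suspiciously sequential number lists.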
|
# ? Jan 6, 2023 15:14 |
|
Teratrain posted:I think the issue being pitched is the interconnectivity of Mastodon meaning that you are on the same "network" as the posters of objectionable content, but that's a weak-rear end tenuous link made by an article that seems to have a fairly overt agenda against the network. It's also weak because: you don't like being on the same servers/social network as people that post objectionable content? Hope you don't have a Twitter or Facebook account, because holy poo poo
|
# ? Jan 12, 2023 04:32 |
|
but birdsite and zuckbook make me feel good? also my current network is already there, never mind that me/society in general have always gone through a cycle of moving to the newest platform, and the current era's giants just had an abnormally long reign. Also I'm going to text my contacts' phone numbers admonishing them and trying to guilt people into staying, because how else can I stay in contact with them?
|
# ? Jan 12, 2023 06:32 |
|
Musk is breaking records as usual. https://www.theguardian.com/technology/2023/jan/12/elon-musk-breaks-world-record-for-largest-loss-of-personal-fortune-in-history
|
# ? Jan 12, 2023 12:42 |
|
He did it for posting
|
# ? Jan 12, 2023 13:15 |
|
https://9to5google.com/2023/01/12/twitter-api-appears-to-be-down-breaking-tweetbot-and-third-party-clients/ twitter just turned off and/or broke their api, breaking basically everything but the official app, so lol, lmao. thanks elon
|
# ? Jan 13, 2023 06:23 |
|
Neo Rasa posted:It's also weak because: you don't like being on the same servers/social network as people that post objectionable content? Hope you don't have a Twitter or Facebook account because holy poo poo Yeah, that secjuice article has been posted in several relevant threads, so I'm going to cross-post my response here: Agents are GO! posted:Yeah, pawoo.net content and users are explicitly "moderated" (blocked) from the instance I'm on (peoplemaking.games). I honestly think that secjuice article seems pretty drat dumb, but I don't know enough about the site to call it intentional bullshit, but it definitely doesn't make a good first impression.
|
# ? Jan 13, 2023 06:52 |
|
It's the kind of argument you see coming up a lot when a market leader is threatened by a competitor, waving vaguely in the direction of bad actors who are on literally every platform and acting as if they're a unique case.
|
# ? Jan 13, 2023 07:25 |
|
Kaboobi posted:https://9to5google.com/2023/01/12/twitter-api-appears-to-be-down-breaking-tweetbot-and-third-party-clients/ https://twitter.com/erinkwoo/status/1614399391959502850 looks like they did it on purpose. i don't have the ability to read the article though so no idea what the details are.
|
# ? Jan 15, 2023 01:56 |
|
I've seen the article posted elsewhere. The gist of it is that it was intentional, but not even Twitter's own employees know why. They received the order to do it, and Comms was told to draft up language for it, but that second part either kept getting delayed or they just didn't know what to say.
|
# ? Jan 15, 2023 02:02 |