|
Kalit posted:I was trying to speak to the broader point of what projecthalaxy posted for combating these ideals. But focusing on the lawsuit itself, is this an actual issue? Like my previous comment, white supremacist/etc is more visible, but are YouTube/etc algorithms spreading those ideals at an increasing rate?

I was talking about the way that the algorithm identifies what someone might have interests in, and suggests more content that it thinks might appeal to them, which is the specific thing at issue in the Gonzalez case. The lawsuit isn't making any claims about Youtube magically radicalizing people out of nowhere. Instead, they're saying that Youtube detects people who are already sympathetic to radical ideologies, and feeds them more radicalizing content on purpose because it has correctly determined that they like to see that stuff.

That study does not evaluate that claim, nor does it establish a methodology that would allow it to evaluate that claim. The study focuses exclusively on what a logged-out user with no watch history would see, and makes no attempt at all to compensate for non-account-based fingerprinting.

I also have concerns about that study's categorization system. For example, they put CNN, MSNBC, and NBC in the "Partisan Left" category. Which, to be clear, is separate from their regular mainstream media category, which they named "Center/Left MSM" and which appears to be dominated by Joe Rogan and overseas news sources like the BBC, Sky News, and WION.

Despite all that, the handy viewer they set up shows that the radicalization pipeline is very much active on smaller channels and categories - even in this ideal case with no coherent watch history to inform recommendations. If you watch a Fox News vid, it'll recommend other large news organizations without considering ideology, and will be just as likely to send you to "Partisan Left" sites like CNN as anywhere else.
But when their bot clicked a video from a small MRA channel, the most likely recommendations for the next video were more MRA channels, then "anti-SJW" channels, then political channels they couldn't categorize (such as the Daily Mail), then "Partisan Right" (Fox News, The Sun, New York Post, etc). The alt-right talking heads onboarding sphere seems to be particularly tightly linked in their data. Ben Shapiro, Jordan Peterson, Joe Rogan, Steven Crowder, PragerU, and a few other names I don't recognize were very closely associated by the recommendation algorithm, such that watching a single video from one was highly likely to recommend you videos from the others in that group.
|
# ? Oct 4, 2022 04:10 |
|
Main Paineframe posted:I was talking about the way that the algorithm identifies what someone might have interests in, and suggests more content that it thinks might appeal to them, which is the specific thing at issue in the Gonzales case. The lawsuit isn't making any claims about Youtube magically radicalizing people out of nowhere. Instead, they're saying that Youtube detects people who are already sympathetic to radical ideologies, and feeds them more radicalizing content on purpose because it has correctly determined that they like to see that stuff. So.... maybe I'm missing your point, but are you saying this is uniquely more dangerous than how recommendations have worked in the past? What you're describing sounds to me like how recommendations within media have always worked. For example, when I buy/bought physical media (books/CDs/records/etc), quite often they would list recommendations for other media with similar, often "radical", ideals. This seems similar to how Youtube's algorithms work.
|
# ? Oct 4, 2022 04:29 |
|
Prediction: Walker is going to go up in the polls because of the abortion thing. There is nothing CHUDs love more than a hypocrisy elemental. Remember, fascists actively celebrate being able to flout the norms of society if it means they gain power. Many of their most devoted would prefer to have a candidate who's paid for abortions. They love doubling down, to show how free from consequences and consistency they are.
|
# ? Oct 4, 2022 04:37 |
|
Kalit posted:So.... maybe I'm missing your point, but are you saying this is uniquely more dangerous than how recommendations have worked in the past? What you're describing sounds to me like how recommendations within media has always worked. If my local bookstore had a list of recommended white supremacist literature posted anywhere on the premises, I would probably consider that a contribution to violent radicalism! As for whether it's uniquely more dangerous, I don't think anyone said it was! I certainly didn't, and I'm not sure where you got that claim from. You're arguing against points nobody made, while ignoring the points people are actually making. Lastly, there are extremely obvious differences between a physical list of recommendations posted on the wall and personalized auto-generated recommendation lists instantly created and updated from your exact media-viewing history.
|
# ? Oct 4, 2022 04:41 |
|
Main Paineframe posted:If my local bookstore had a list of recommended white supremacist literature posted anywhere on the premises, I would probably consider that a contribution to violent radicalism! I'm still not understanding your point. I'm sorry if you think I'm arguing against points nobody made. This is why I tried to ask that clarifying question as my opening sentence in my previous response. Also, to clarify, I'm not talking about a physical list of recommendations posted on a wall of a bookstore/etc. I'm using an analogy of them posted in a book/CD/etc, which implies someone is already dipping their toes into those ideals. This is the sense I got from that study I posted. So, can you help me understand what your point is? What are the extremely obvious differences between a static list of recommendations posted in, for example, a book and a dynamic algorithm generated by youtube? I'm genuinely confused. Kalit fucked around with this message at 04:51 on Oct 4, 2022 |
# ? Oct 4, 2022 04:49 |
|
Kalit posted:So, can you help me understand what your point is? What are the extremely obvious differences between a static list of recommendations posted in, for example, a book and a dynamic algorithm generated by youtube? I'm genuinely confused. One distinction with algorithmic feeds is that they present the user with the impression that the information is more broadly representative, and, within the platform, exclude other sources.
|
# ? Oct 4, 2022 05:19 |
|
FLIPADELPHIA posted:Prediction: Walker is going to go up in the polls because of the abortion thing. There is nothing CHUDs love more than a hypocrisy elemental. Remember, fascists actively celebrate being able to flout the norms of society if it means they gain power. Many of their most devoted would prefer to have a candidate who's paid for abortions. They love doubling down, to show how free from consequences and consistency they are. Yeah. The only consistent part of conservative ideology is hurting the people that they hate. Outlawing abortion hurts the people that they hate; it's not about any moral stance, regardless of what they say. Herschel Walker isn't someone that they hate, so it's ok that he paid for an abortion.
|
# ? Oct 4, 2022 07:32 |
|
Kalit posted:I'm still not understanding your point. I'm sorry if you think I'm arguing against points nobody made. This is why I tried to ask that clarifying question as my opening sentence in my previous response.

One of them is written by the author or content creator, while the other is created by the bookstore or platform. One is crafted by a human being, while the other is generated automatically by an algorithm. One of them gives the same recommendations to everyone who reads the book, while the other gives different recommendations to each person. One of them is based solely on the author's own tastes, while the other is targeted and personalized using large amounts of data (not only from the userbase in general, but also from the specific viewer) to determine the optimal suggestions for maximum engagement. One of them requires you to physically go find and buy each item on the list of suggestions, while the other shoves one-click links right in your face and (by default) will automatically start playing the top suggestion after ten seconds even if you don't lift a finger or spend a single cent.

And to bring this conversation back to the original point, only one of them currently enjoys legal protections shielding the entity creating the list of recommendations from any legal liability related to the list.
|
# ? Oct 4, 2022 07:43 |
|
I'm gonna say that algorithms are by and large a net negative on society and I'd be pleased if the Supreme Court banned them (well, made their use create liability, which is effectively the same thing for these companies). But I have no idea if that's the direction they're going to take.
|
# ? Oct 4, 2022 09:16 |
|
Clarste posted:I'm gonna say that algorithms are by and large a net negative on society and I'd be pleased if the Supreme Court banned them (well, made their use create liability, which is effectively the same thing for these companies). But I have no idea if that's the direction they're going to take. It's a supreme court that currently has a 6-3 fascist majority. It's technically possible that they might accidentally do something that benefits society, but I don't know if it's likely.
|
# ? Oct 4, 2022 10:59 |
|
Clarste posted:I'm gonna say that algorithms are by and large a net negative on society and I'd be pleased if the Supreme Court banned them (well, made their use create liability, which is effectively the same thing for these companies). But I have no idea if that's the direction they're going to take. “Banning algorithms” does not make sense. At those scales, the algorithm is the product. If you have ever found a car repair video or guitar instruction video on YouTube, you've used an algorithm. Imagine a YouTube that was just an unfiltered stream of every video. That's unusable. You can make the case that algorithms must have certain properties, or that they may not consider certain types of input. That's difficult, thoughtful work, but I think there's real room for improvement along those lines. But “banning algorithms” is just nonsense. All that will accomplish is killing the user-created internet.
|
# ? Oct 4, 2022 12:18 |
|
ColdPie posted:“Banning algorithms” does not make sense. At those scales, the algorithm is the product. If you have ever found a car repair video or guitar instruction video on YouTube, you've used an algorithm. Imagine a YouTube that was just an unfiltered stream of every video. That's unusable. There is a difference between an unprompted list of recommendations + autoplay, and using a search engine. Now, search engines these days also do this stuff where they filter for results they think you'll be interested in, but they were perfectly functional before they started doing that. Probably better. Edit: The legal issue at hand here is that Youtube isn't just objectively presenting the content without bias, they are telling you what you should watch. Which is extremely different from me asking for a big list of guitar instruction videos presented without editorial decision. Edit2: I suppose I should clarify that I meant "the algorithm" in a colloquial way, as in "the algorithm thinks I should watch more video game OST videos" and not in a general "math" way. Obviously literally everything we do involves some kind of algorithm, but we're only talking about the ones that recommend stuff to maximize engagement. Clarste fucked around with this message at 13:01 on Oct 4, 2022 |
# ? Oct 4, 2022 12:49 |
|
ColdPie posted:“Banning algorithms” does not make sense. At those scales, the algorithm is the product. If you have ever found a car repair video or guitar instruction video on YouTube, you've used an algorithm. Imagine a YouTube that was just an unfiltered stream of every video. That's unusable. If you want to be nerdly and literal about it, 'banning algorithms' would mean banning the written word and all of human language. That would certainly be one way to eradicate right wing propaganda, although maybe it would be a little bit of an over-reaction. In general, I don't tend to agree with the popular view here, which is a 'top-down' view of media, where shadowy figures in board rooms make decisions on what The People Will Believe. I tend to think of media as being more like a magic mirror which shows readers/viewers things that they already wanted to see. silence_kit fucked around with this message at 13:11 on Oct 4, 2022 |
# ? Oct 4, 2022 13:02 |
|
Clarste posted:I'm gonna say that algorithms are by and large a net negative on society and I'd be pleased if the Supreme Court banned them (well, made their use create liability, which is effectively the same thing for these companies). But I have no idea if that's the direction they're going to take. What do you think an algorithm is
|
# ? Oct 4, 2022 13:06 |
|
Riptor posted:What do you think an algorithm is It's a series of tubes
|
# ? Oct 4, 2022 13:08 |
|
Clarste posted:I'm gonna say that algorithms are by and large a net negative on society and I'd be pleased if the Supreme Court banned them Right wing memes like this are starting to get funny
|
# ? Oct 4, 2022 13:10 |
|
To me it seems like computer algorithms are just the natural extension of sales-driven targeted marketing. Like the way TV ads tailor their time slots to a specific demographic depending on the time of day or type of show being shown. Toy commercials and sugary cereal ads during saturday morning cartoons and after school shows. Cleaning, cooking and weight loss products on weekday afternoons during soap operas, game shows and talk shows. Ads targeting the elderly throughout the day. Sporting events show lots and lots of car and beer ads.

Magazines and newspapers do the same thing so, naturally, you get different ads in Rolling Stone (music), Cosmopolitan (make up, skin care, shampoo and fashion), Playboy (liquor and tobacco), Ebony ("black" products), Better Homes and Gardens or Sports Illustrated, etc. Consumer Reports is an interesting outlier since it eschews advertising, but it's still targeted and designed to sell articles.

This has been going on for ages, but the internet has drastically changed the delivery mechanism, and now the "products" are fascism and racism - generated not by board meetings with marketing executives and salespeople but rather by whatever they program the computer to think - and are designed to sell you an ideology rather than something you buy in a store.

And I don't entirely know how you regulate that or, as others have pointed out, come up with a good way to decide who gets to do it, which is the real tricky part. Print and TV ads have SOME regulation surrounding what they can and cannot do or who they can target (tobacco to kids, for instance) but it's always an uphill battle and in many ways is often unenforceable. The sheer volume of YT videos and internet content alone is enough to make this a really daunting task. Still, it'd be nice if YT and FB put as much effort into moderation of this toxic poo poo as they do going around looking for copyright violations and naughty words.
|
# ? Oct 4, 2022 13:21 |
|
BiggerBoat posted:Still, it'd be nice if YT and FB put as much effort into moderation of this toxic poo poo as they do going around looking for copyright violations and naughty words. Banning 'algorithms' though, is lol and the equivalent of saying 'ban grammar! If you can't string together verbs, nouns and adjectives in a coherent manner, no one can be radicalized by the ideas they're espousing!' Which, while correct, is very much 'decapitation cures acne' levels of effective. For the uninitiated: An algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions that conduct specified actions step by step in either hardware- or software-based routines. Algorithms are widely used throughout all areas of IT.
|
# ? Oct 4, 2022 13:33 |
|
Clarste posted:Now, search engines these days also do this stuff where they filter for results they think you'll be interested in, but they were perfectly functional before they started doing that. Probably better. A point of clarification here: reputable search engines absolutely do not tailor results. This is why when any 2 people search the same word/phrase in something like Google, they get the same results. Or you can compare your results against results from an incognito-mode browser.
|
# ? Oct 4, 2022 13:46 |
|
Oracle posted:I think this is what is going to sink them, ultimately. This and the fact they seem to have absolutely no problem filtering out Nazi-related content in countries which ban it and forbid anyone else from serving it up (hi Germany) in order to do business there. They absolutely have the capability, they are doing it elsewhere. Section 230 is in trouble regardless of this case though. The white house directly called for congress to "remove special legal protections for large tech platforms" so....my prediction is that it is on borrowed time.
|
# ? Oct 4, 2022 14:21 |
|
Discendo Vox posted:One distinction with algorithmic feeds is that they present the user with the impression that the information is more broadly representative, and, within the platform, exclude other sources. Putting aside how this can't possibly be true for "algorithmic feeds" across the board, I'm not sure how you can even come to this conclusion about YouTube specifically. Their recommendation algorithm is specifically providing recommendations specific to your interests and watch history; how is that interpreted as being "broadly representative"? It's the opposite. And yes it excludes other sources but that's the point - if you're watching video after video about volleyball, it should exclude recommending that your next video relate to woodworking, or flying kites, in favor of something more pertinent to your interests
|
# ? Oct 4, 2022 14:37 |
|
Kalit posted:A point of clarification here: reputable search engines absolutely do not tailor results. This is why when any 2 people search the same word/phrase in something like Google, they get the same results. Or if you compare your results with an incognito mode browser results https://www.theverge.com/2018/12/4/18124718/google-search-results-personalized-unique-duckduckgo-filter-bubble https://tinybullyagency.com/why-googles-search-results-vary-from-person-to-person/
|
# ? Oct 4, 2022 15:02 |
|
Riptor posted:Putting aside how this can't possibly be true for "algorithmic feeds" across the board, I'm not sure how you can even come to this conclusion about YouTube specifically. Their recommendation algorithm is specifically providing recommendations specific to your interests and watch history; how is that interpreted as being "broadly representative"? It's the opposite. The user neither perceives nor has explained to them the actual structure of the recommendation system, so the degree and form of exclusion isn't clear - and this has ramifications for the perception of the scope of information that isn't included, a fishbowl effect that's part of the radicalization pipeline. Note also that the algorithm classifies what counts as "interests", using consumer patterns but also design decisions by the people making it. The process isn't somehow pure or directly reflective of the user's preference. The ability of these platforms to have this homophilic effect is part of their commodification and appeal, even as it's obviously perverse - which is why these methods are both proprietary and obfuscated.
|
# ? Oct 4, 2022 15:10 |
|
Clarste posted:There is a difference between an unprompted list of recommendations + autoplay, and using a search engine. Now, search engines these days also do this stuff where they filter for results they think you'll be interested in, but they were perfectly functional before they started doing that. Probably better. No it isn't. There are ten hundred billion guitar instruction videos out there. 98% of them are trash. They use algorithms to sort out the good ones from the bad. Bad actors abuse that same algorithm to put crazy videos in front of people. Sorting out how to get the good (high quality content) from the bad (crazy poo poo) is a really hard problem. There's a further problem that both avenues are profitable for the platforms, so they're not really incentivized to fix it. Throwing it all in the trash and just returning all videos that have “guitar instruction” in the user submitted keywords isn't a solution, it's the death of that platform. There's a problem here, but “banning algorithms” isn't the solution.
|
# ? Oct 4, 2022 15:16 |
|
Riptor posted:Putting aside how this can't possibly be true for "algorithmic feeds" across the board, I'm not sure how you can even come to this conclusion about YouTube specifically. Their recommendation algorithm is specifically providing recommendations specific to your interests and watch history; how is that interpreted as being "broadly representative"? It's the opposite. ColdPie posted:No it isn't. There are ten hundred billion guitar institutional videos out there. 98% of them are trash. They use algorithms to sort out the good ones from the bad. Bad actors abuse that same algorithm to put crazy videos in front of people. Sorting out how to get the good (high quality content) from the bad (crazy poo poo) is a really hard problem. There's a further problem that both avenues are profitable for the platforms so they're not really incentived to fix it. Throwing it all in the trash and just returning all videos that have “guitar instruction” in the user submitted keywords isn't a solution, it's the death of that platform. There's a problem here, but “banning algorithms” isn't the solution. edit: People ITT are really not understanding how much human intervention and work goes into something like youtube recs. It's not some bleep-bloop algorithm they set and forget. It's more like a huge program with machine learning components, and it's constantly being modified. While machine learning models tend to absorb biases from their training data, that is not what is going on when half your sidebar is Jorpan Peenerson. BTW, I am literally a data scientist, so I know more than the average bear about this stuff. cat botherer fucked around with this message at 15:23 on Oct 4, 2022 |
# ? Oct 4, 2022 15:18 |
|
Riptor posted:Putting aside how this can't possibly be true for "algorithmic feeds" across the board, I'm not sure how you can even come to this conclusion about YouTube specifically. Their recommendation algorithm is specifically providing recommendations specific to your interests and watch history; how is that interpreted as being "broadly representative"? It's the opposite. It's pretty common from what I've seen, heard, and experienced for someone to be watching a mundane anime or gaming video and get recommended alt-right content because a bunch of people who viewed the same video happened to like and watch alt-right content as well.
|
# ? Oct 4, 2022 15:19 |
|
Yeah I get Ben Shapiro recommendations all the time when watching gaming stuff and I've never once watched anything even close to his garbage on there.
|
# ? Oct 4, 2022 15:26 |
|
Catgirl Al Capone posted:It's pretty common from what I've seen, heard, and experienced for someone to be watching a mundane anime or gaming video and get recommended alt-right content because a bunch of people who viewed the same video happened to like and watch alt-right content as well. again the root problem is not the algorithm; it's the fact they won't remove these videos from their site. and, as it relates to the scotus case, the algorithm acting in this way is not an editorial choice
|
# ? Oct 4, 2022 15:29 |
|
It may be productive to work more closely from the petition that this is all about. Here are a couple quotes from it relevant to recent claims: At p. 31: quote:The text of section 230 clearly distinguishes between a system that provides to a user information that the user is actually seeking (as does a search engine) and a system utilized by an internet company to direct at a user information (such as a recommendation) that the company wants the user to have. Section 230(b) states that “It is the policy of the United States ... to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the internet and other computer services.” 47 U.S.C. § 230(b) (emphasis added). Congress found that “[t]he developing array of Internet and other interactive computer services ... offers users a greater degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.” 47 U.S.C. § 230(a) (emphasis added). The core function of a search engine advances that policy, because it enables a user to select what information he or she will receive; on the other hand, when an interactive computer service makes a recommendation to a user, it is the service not the user that determines that the user will receive that recommendation. At p. 35: quote:The decision below insisted it was holding only that recommendations by an interactive computer service are protected by section 230 if those recommendations are made in a “neutral” manner. “We only reiterate that a website’s use of content-neutral algorithms, without more, does not expose it to liability....” [41a]. “[The complaint does not] allege that Google’s algorithms treated ISIS-created content differently than any other third-party created content.” Id. The Second
|
# ? Oct 4, 2022 15:34 |
|
Riptor posted:again the root problem is not the algorithm; it's the fact they won't remove these videos from their site. It makes far more sense to talk about the people who build the rec engine and their motivations. Facebook has oodles of data. They could very, very easily stop recommending far-right videos in unrelated contexts if they wanted to, but they don't. It makes them money. cat botherer fucked around with this message at 15:39 on Oct 4, 2022 |
# ? Oct 4, 2022 15:37 |
|
cat botherer posted:You keep talking about "algorithm" as if its some independent actor. It isn't. It receives constant manual intervention and tuning. Also, can we please stop calling it "algorithm"? It's not even the correct word. Sure. What do you suggest? Also can you provide some further reading on the manual intervention on YouTube specifically? Not doubting you, I'm genuinely interested to learn more.
|
# ? Oct 4, 2022 15:40 |
|
socialsecurity posted:Yeah I get Ben Shapiro recommendations all the time when watching gaming stuff and I've never once watched anything even close to his garbage on there. Try digging through your watch history (Library -> History). When I get weird stuff, it's usually fairly obvious what in my watch history is causing it (usually some video I clicked on SA or some news article), and you can remove it from the watch history to clean up your feed.
|
# ? Oct 4, 2022 15:43 |
|
ColdPie posted:Sure. What do you suggest? People have this idea that it's some kind of set-and-forget "AI" thing. Real ML is just statistics on computers, and requires massive amounts of tuning and intervention. I don't know how Youtube recs work, but usually those things are one of many variants on the idea of low-rank (non-negative) matrix factorization with some ranking thrown in there, sort of like approximating users with a smaller set of distinct archetype users. There's definitely vastly more to it than something like that though. We're talking about something that easily hundreds of people work on full-time. cat botherer fucked around with this message at 15:49 on Oct 4, 2022 |
# ? Oct 4, 2022 15:46 |
|
cat botherer posted:You keep talking about "algorithm" as if its some independent actor. It isn't. It receives constant manual intervention and tuning. I appreciate and understand that. But when a specific video is suggested to a user, that is not a manual intervention. No one at YouTube is saying "ah, this one logged in person should watch this specific video". Which is why arguing that their recommendation algorithm (which, yes, is a very complicated system which involves many people) is something that should be exempt from section 230 protections is absurd cat botherer posted:Also, can we please stop calling it "algorithm"? It's not even the correct word. it is the term being used in Gonzalez v Google which prompted this discussion: https://www.scotusblog.com/2022/10/court-agrees-to-hear-nine-new-cases-including-challenge-to-tech-companies-immunity-under-section-230/ quote:The question now before the court is whether Section 230 protects internet platforms when their algorithms target users and recommend someone else’s content. The case was filed by the family of an American woman killed in a Paris bistro in an ISIS attack in 2015. They brought their lawsuit under the Antiterrorism Act, arguing that Google (which owns YouTube) aided ISIS’s recruitment through YouTube videos – specifically, recommending ISIS videos to users through its algorithms. Riptor fucked around with this message at 16:05 on Oct 4, 2022 |
# ? Oct 4, 2022 15:58 |
|
Riptor posted:it is the term being used in Gonzalez v Google which prompted this discussion: "Algorithm" became a buzzword associated with AI stuff for some reason, but it's incorrect to use it that way. It doesn't even have anything to do with AI in particular. Spreadsheets use implementations of sorting algorithms, operating systems use scheduling algorithms, etc. There can be a shorthand for saying "X program uses the quicksort algorithm" when it would be more correct to say "X program uses an implementation of the quicksort algorithm" but that's not really what's going on here. The rec system assuredly is a big, distributed system using many different algorithms in some proprietary way.
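To make the terminology point concrete: quicksort, mentioned above, is an "algorithm" in the plain textbook sense - a finite recipe for a computation, with no recommendations or machine learning anywhere in sight:

```python
def quicksort(xs):
    """Plain quicksort: an 'algorithm' in the textbook sense -
    a step-by-step recipe, nothing to do with recommendation systems."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```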
|
# ? Oct 4, 2022 16:06 |
|
That seems like an implementation detail. Whatever method they're using to filter and present content, hereinafter the “hoobaz”, the result is exploitable by people in and out of the company to promote conspiracy theories and other nonsense. We agree on that, I think. What I'm saying is banning the entire concept of hoobaz will result in the platform’s death, and likely that of all UGC at any significant scale. That's bad, imo. I'd argue the correct solution is antitrust enforcement. These companies are too powerful and have too little competition. Break them up, maybe mandate interoperability standards, and let the market sort it out. If the platform you're on has too many or too few Nazis for your tastes, you can switch to another. Interoperability would let users choose what other sites to pull content from, giving even more options for users and advertisers. This avoids all the sticky issues with government compelling speech from private companies. It doesn't even require any new laws, just enforcing the ones we already have.
|
# ? Oct 4, 2022 16:06 |
|
ColdPie posted:That seems like an implementation detail. Whatever method they're using to filter and present content, hereinafter the “hoobaz”, the result is exploitable by people in and out of the company to promote conspiracy theories and other nonsense. We agree on that, I think. What I'm saying is banning the entire concept of hoobaz will result in the platform’s death and likely all UGC at any significant scale. That's bad, imo. I don't think this is necessarily true. They will probably generate less money from their current tactics but there's little reason to think their platforms would instantly become unprofitable because they remove the recommendation engine.
|
# ? Oct 4, 2022 16:09 |
|
ColdPie posted:That seems like an implementation detail. Whatever method they're using to filter and present content, hereinafter the “hoobaz”, the result is exploitable by people in and out of the company to promote conspiracy theories and other nonsense. We agree on that, I think. What I'm saying is banning the entire concept of hoobaz will result in the platform’s death and likely all UGC at any significant scale. That's bad, imo. vvv A better term than "hoobaz" is just "program" or probably more appropriately, "system." Any platform is unusable without such a system, sure, but that does not imply it must recommend nazi videos. That is a human choice, not some inherent thing with the underlying technology. cat botherer fucked around with this message at 16:29 on Oct 4, 2022 |
# ? Oct 4, 2022 16:10 |
|
koolkal posted:I don't think this is necessarily true. They will probably generate less money from their current tactics but there's little reason to think their platforms would instantly become unprofitable because they remove the recommendation engine. Go to Youtube and type in any five character sequence of letters. Put it in quotes (or click the link) so it doesn't try to typo fix it. You will almost certainly get at least a few results, from all over the world, just on pure nonsense letters. Now imagine trying to sort through a list of very common words. At this scale you have to filter and sort the content. You can't not do it. An algorithm (sorry, “hoobaz”) is required or the platform is unusable.
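A toy contrast between the two approaches, using an invented four-video catalog (titles and view counts are made up): naive keyword matching returns every hit in arbitrary order, while even one crude ranking signal puts the useful result first. A real system blends hundreds of signals, but the structural point is the same:

```python
# Hypothetical toy catalog: (title, view count). All data invented.
videos = [
    ("guitar instruction for beginners", 120_000),
    ("GUITAR INSTRUCTION scam!!! click here", 40),
    ("my cat ignores guitar instruction", 15),
    ("advanced guitar instruction: modes", 80_000),
]

query = "guitar instruction"

# Naive filter: every match, in arbitrary order - unusable at YouTube scale.
naive = [title for title, _ in videos if query in title.lower()]

# Trivial ranking: the same matches, ordered by a quality signal (views here).
ranked = sorted(
    (v for v in videos if query in v[0].lower()),
    key=lambda v: v[1],
    reverse=True,
)

print(naive)
print([title for title, _ in ranked])
```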
|
# ? Oct 4, 2022 16:20 |
|
ColdPie posted:Go to Youtube and type in any five character sequence of letters. Put it in quotes (or click the link) so it doesn't try to typo fix it. You will almost certainly get at least a few results, from all over the world, just on pure nonsense letters. Now imagine trying to sort through a list of very common words. At this scale you have to filter and sort the content. You can't not do it. An algorithm (sorry, “hoobaz”) is required or the platform is unusable. quote:The text of section 230 clearly distinguishes between a system that provides to a user information that the user is actually seeking (as does a search engine) and a system utilized by an internet company to direct at a user information (such as a recommendation) that the company wants the user to have. Section 230(b) states that “[i]t is the policy of the United States ... to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the internet and other computer services.” 47 U.S.C. § 230(b) (emphasis added). Congress found that “[t]he developing array of Internet and other interactive computer services ... offers users a greater degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.” 47 U.S.C. § 230(a) (emphasis added). The core function of a search engine advances that policy, because it enables a user to select what information he or she will receive; on the other hand, when an interactive computer service makes a recommendation to a user, it is the service not the user that determines that the user will receive that recommendation. The case tries to make a distinction between search engines and recommendations in that the former requires the user to indicate they are seeking said information. 
So if a user searches for ISIS videos that's fine but Google should not just be placing them randomly on the side of your screen because you watched a gaming video.
|
# ? Oct 4, 2022 16:48 |