World Famous W
May 25, 2007

BAAAAAAAAAAAA

BiggerBoat posted:

I'm 100% totally opposed to the death penalty, mind, but were I ever to be on death row and face my maker, I'll take "heroin overdose, please" and feel like an opioid overdose might be some sort of happy medium.
there is nothing pleasant about oding


Captain_Maclaine
Sep 30, 2001

Every moment that I'm alive, I pray for death!
Regarding undecideds,



I AM GRANDO posted:

I think most people who advocate for the death penalty really like the ritual aspects of it and would feel that it wasn’t really the death penalty if terror and pain weren’t properly measured aspects of it.

I seem to recall hearing that witnesses to Timothy McVeigh's execution did complain that it was anticlimactic and he didn't appear to be suffering overmuch.

Rappaport
Oct 2, 2013

This keeps getting brought up over and over, but anyway. If you absolutely, positively have to kill some people and don't want them to suffer, you could suffocate them with nitrogen. It does not trigger the physiological response a human being has to suffocating, since it doesn't raise the carbon dioxide level of the bloodstream. It probably isn't pleasant to pass out that way, but it's painless.

But the point of the death penalty is to terrify people, and to satisfy the torture fantasies of others. This Venn diagram probably has an area of overlap, too.

g0del
Jan 9, 2001



Fun Shoe

Rappaport posted:

This keeps getting brought up over and over, but anyway. If you absolutely, positively have to kill some people and don't want them to suffer, you could suffocate them with nitrogen. It does not trigger the physiological response a human being has to suffocating, since it doesn't raise the carbon dioxide level of the bloodstream. It probably isn't pleasant to pass out that way, but it's painless.

But the point of the death penalty is to terrify people, and to satisfy the torture fantasies of others. This Venn diagram probably has an area of overlap, too.
When I was an idiot teenager, I managed to make myself pass out this way by breathing way too deeply from a helium balloon for the squeaky voice. I can verify that the passing out part is painless and not unpleasant at all - barely even noticeable. Subjectively I was making funny voices, noticed that my vision was going black, then I was out. The whole thing felt like it was less than a second, and I never felt like I needed to breathe. The only pain I felt was after waking up from the bruises and scrapes I got falling down.

According to my cousins I was twitching and jerking around like I was having a seizure, so it might even satisfy the torture fantasies of sadists while still being painless for the victim.

James Garfield
May 5, 2012
Am I a manipulative abuser in real life, or do I just roleplay one on the Internet for fun? You decide!
One thing to remember is that most people aren't following politics closely. Lots of things sound weird when you imagine someone who knows what every candidate says about every issue and thinks all of the national issues are important, but make more sense if the undecided voters (or people who switched parties, or whatever else) just don't know very much about politics.

Fart Amplifier
Apr 12, 2003

Rappaport posted:

This keeps getting brought up over and over, but anyway. If you absolutely, positively have to kill some people and don't want them to suffer, you could suffocate them with nitrogen. It does not trigger the physiological response a human being has to suffocating, since it doesn't raise the carbon dioxide level of the bloodstream. It probably isn't pleasant to pass out that way, but it's painless.

But the point of the death penalty is to terrify people, and to satisfy the torture fantasies of others. This Venn diagram probably has an area of overlap, too.

Inert gas suffocation is how I'd choose to go if I ever need assisted suicide.

It's entirely suffering free.

BiggerBoat
Sep 26, 2007

Don't you tell me my business again.

World Famous W posted:

there is nothing pleasant about oding

Right, and I mean... I'll take your word for it, and I wouldn't know, having never done heroin, let alone overdosed. But from what I can tell it's probably a better way to go out than the electric chair or whatever chemical bleach and weed killer cocktail they currently use for executions in Arkansas. As I said, I'd prefer that the state never have the power to legally murder anyone, but as long as we're doing that, an opioid overdose seems about as peaceful and painless as anything my non-sadistic mind can think of.

I suppose I'm imagining something closer to hospice than the barbaric lava-in-the-veins stuff we hem and haw over at the moment, arguing about what's humane in a country where half of society says it wants smaller government but salivates at the idea of locking people up for life and fast-tracking death sentences.

BiggerBoat fucked around with this message at 22:09 on Oct 8, 2022

Cranappleberry
Jan 27, 2009

World Famous W posted:

there is nothing pleasant about oding

Yeah. It does eventually cause people to pass out but before that it's not fun. It also works by making the person stop breathing, which is horrific, even if they are unconscious.

Even with drugs designed to anesthetize people, it's bad. Also, companies specifically stopped supplying certain drugs to states that executed prisoners, which led to a cottage industry of smaller compounding pharmacies illegally synthesizing fast-acting barbiturates at the behest of the states, paid for out of secret, illicit funds. States turning to other methods is leading to more cruelty - ODs. But eventually, if they can't get their hands on legal fentanyl...

There are "painless" ways to do it (certain gases) but 1. It's still cruel and 2. Why?!

Cranappleberry fucked around with this message at 00:09 on Oct 9, 2022

BIG-DICK-BUTT-FUCK
Jan 26, 2016

by Fluffdaddy

World Famous W posted:

there is nothing pleasant about oding

maybe not for bystanders but i'll say from personal experience that it's indistinguishable from passing out ... you just dont wake back up (without intervention)

Twincityhacker
Feb 18, 2011

Zwabu posted:

I wonder what could possibly ever change someone’s alignment these days. Can the GOP screw up badly enough for people to decide they aren’t a Republican anymore?

My Dad was a Republican for decades. I'm not sure when he jumped ship and started voting Democrat, but the absolute latest had to be 2008.

Zwabu
Aug 7, 2006

Cranappleberry posted:

Yeah. It does eventually cause people to pass out but before that it's not fun. It also works by making the person stop breathing, which is horrific, even if they are unconscious.

It's not bad in the way you're implying, though. It's true that people who overdose on drugs like opioids or propofol stop breathing, but that's because the drive to breathe is removed: the person doesn't feel like they need to take a breath at all. They don't have the feeling of suffocating they'd have if they still had the drive to breathe but were unable to, like with a pillow over their face or something.

I'd rather go out with an OD than with something like filling a chamber with nitrogen. It's true that, depending on the specific method, nitrogen might work quickly, since oxygen deprivation might render you unconscious before you could register any distress or suffocation. But none of your respiratory drive is removed the way it would be by a whopping dose of fentanyl or heroin, and I wouldn't personally want to bet on the oxygen deprivation working before feeling the rest.

The main hardships associated with lethal injection would be difficulty establishing intravenous access, requiring multiple pokes, and, especially if propofol is the agent, pain on injection: depending on where the IV is located, the medicine can be quite caustic going into the veins before taking effect.

Source - I'm someone who administers these medicines to others on the daily.

Edit: on reflection the nitrogen method might not be too bad because the main feeling of not breathing enough is due to not eliminating CO2 rather than being short of oxygen which mainly makes you feel faint. Since your breathing is unimpeded and you still eliminate CO2 freely, there might not be too much distress from the decline in oxygen, especially if it's rapid. But I'd still go for the IV method assuming not too much difficulty getting an IV in place.

Zwabu fucked around with this message at 01:08 on Oct 9, 2022

Foxfire_
Nov 8, 2010

Zwabu posted:

Edit: on reflection the nitrogen method might not be too bad because the main feeling of not breathing enough is due to not eliminating CO2 rather than being short of oxygen which mainly makes you feel faint. Since your breathing is unimpeded and you still eliminate CO2 freely, there might not be too much distress from the decline in oxygen, especially if it's rapid. But I'd still go for the IV method assuming not too much difficulty getting an IV in place.
Inert gas asphyxiation is painless; we know that from industrial accidents. One of the hazards of working around nitrogen tanks for welding or filling food packages is that leaks don't cause any distress: workers can pass out and suffocate without ever realizing that anything is wrong. It kills a handful of people every year.

Medullah
Aug 14, 2003

FEAR MY SHARK ROCKET IT REALLY SUCKS AND BLOWS

Twincityhacker posted:

My Dad was a Republican for decades. I'm not sure when he jumped ship and started voting Democrat, but the absolute latest had to be 2008.

I was a "republican" from about 1996-2002. Something really bothered me about Bill Clinton and, by proxy, Al Gore. And yeah, I wasn't crazy about Hillary either. So I voted Bush in 2000 and then either didn't vote or voted Libertarian until 2018. Politics didn't do anything for me, and the dream of true Libertarianism, fiscally conservative but socially liberal, appealed to me.

Of course then 2016 hit, and holy poo poo did I reevaluate my political compass. Trump was so offensively bad that I had to start paying attention. I voted Democrat for the first time in 2018 and again in 2020, though I still wish we could get a candidate under 60 who's actually popular and not just "Not Trump".

Phlag
Nov 2, 2000

We make a special trip just for you, same low price.


(Warning: discussion on suicide and specific suicide methods incoming)

On the topic from a few days ago of holding big tech responsible for their recommendation algorithms, here is a much more tangible example of how loosely monitored systems like this can go wrong, and why companies need to be liable. Amazon is being sued by the parents of children who died by suicide using chemicals bought on Amazon. Amazon was recommending what the plaintiffs call "suicide kits" through their "Buy it with" groupings (encouraging customers to buy harmful chemicals, a digital scale, medication to prevent vomiting, and a book on suicide methods all at once). Their auto-complete also recommended adding the search term "suicide" to people searching for these chemicals.
https://twitter.com/arstechnica/status/1579056836212273152

Young Freud
Nov 26, 2006

Phlag posted:

(Warning: discussion on suicide and specific suicide methods incoming)

On the topic from a few days ago of holding big tech responsible for their recommendation algorithms, here is a much more tangible example of how loosely monitored systems like this can go wrong, and why companies need to be liable. Amazon is being sued by the parents of children who died by suicide using chemicals bought on Amazon. Amazon was recommending what the plaintiffs call "suicide kits" through their "Buy it with" groupings (encouraging customers to buy harmful chemicals, a digital scale, medication to prevent vomiting, and a book on suicide methods all at once). Their auto-complete also recommended adding the search term "suicide" to people searching for these chemicals.
https://twitter.com/arstechnica/status/1579056836212273152

I think this really comes down to regulating algorithms, or at least putting a human in the loop to hit a button whenever these issues arise. The same thing happens with radicalization via YouTube videos or Facebook groups, where that poo poo is being gamed to indoctrinate more adherents by abusing the traffic algorithms toward those views. Like, after a few hits, maybe have a manager come in and flag recommendations before they hurt more people.

haveblue
Aug 15, 2005



Toilet Rascal

Young Freud posted:

I think this really comes down to regulating algorithms, or at least putting a human in the loop to hit a button whenever these issues arise. The same thing happens with radicalization via YouTube videos or Facebook groups, where that poo poo is being gamed to indoctrinate more adherents by abusing the traffic algorithms toward those views. Like, after a few hits, maybe have a manager come in and flag recommendations before they hurt more people.

Amazon is going to run into the same problem Youtube runs into- there are not enough managers in the world to keep up with how often the algorithm fucks up

Young Freud
Nov 26, 2006

haveblue posted:

Amazon is going to run into the same problem Youtube runs into- there are not enough managers in the world to keep up with how often the algorithm fucks up

I guess at some point Amazon is going to have to do a cost analysis and see whether it costs more to hire content managers or to pay out lawsuits.

Main Paineframe
Oct 27, 2010

Young Freud posted:

I guess at some point Amazon is going to have to do a cost analysis and see whether it costs more to hire content managers or to pay out lawsuits.

There's always a third option: stop using that algorithm altogether.

It's not like it's some super smart demonstration of incredible machine intelligence. It's just querying the database for things that are commonly bought in the same transaction as the item the user is looking at. It's not an essential part of the shopping experience; it's just Amazon trying to nudge people into buying more poo poo.

The problems here aren't unique to automated algorithms. The well-known anecdote of Target sending maternity advertisements to an expecting mother before her own family knew about her pregnancy came as a result of a few marketing folks asking the company's statistics department whether there was a way to tell if a woman was pregnant. By the time anyone realized the potential PR risk, the stats department had already worked out product cues that could be used to track pregnancy and created a database query for it. The resulting backlash was so bad that the marketing department actually went back and made the pregnancy advertisements less targeted on purpose so that people wouldn't feel like Target knew so much about them.

That was mostly human effort, but it still points to the big problem with automating this stuff: when it was under human control, it was done for a few products and/or demographics that were considered especially important, and any Bad Idea decisions were signed off by a human who had specifically asked for that Bad Idea. With an automated algorithm, it's done for everything by default, ensuring it will find every possible bad idea through sheer brute-force. No marketer is ever going to go and ask the stats department how to target suicidal customers, but the algorithm doesn't have any idea what "suicide" is - it just knows that these products are often bought together.
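For the curious, the kind of co-purchase query described above can be sketched in a few lines: count how often pairs of items land in the same transaction, then surface the top co-occurring items. This is a minimal illustration, not Amazon's actual system, and the item names are hypothetical:

```python
from collections import Counter
from itertools import combinations

def count_pairs(transactions):
    """Tally how often each pair of items shows up in the same transaction."""
    pair_counts = Counter()
    for basket in transactions:
        # sorted() gives each unordered pair a single canonical key
        for pair in combinations(sorted(set(basket)), 2):
            pair_counts[pair] += 1
    return pair_counts

def buy_it_with(pair_counts, item, top_n=3):
    """Items most often co-purchased with `item`, ranked by raw count.
    Note: the query knows nothing about what the items are or mean."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] = n
        elif b == item:
            scores[a] = n
    return [other for other, _ in scores.most_common(top_n)]

# Hypothetical baskets: the recommender surfaces whatever co-occurs,
# with no notion of whether a grouping is harmless or harmful.
baskets = [
    ["chemical", "scale", "anti-emetic"],
    ["chemical", "scale"],
    ["chemical", "anti-emetic", "book"],
    ["scale", "flour"],
]
counts = count_pairs(baskets)
print(buy_it_with(counts, "chemical"))  # anti-emetic and scale rank first
```

The sketch makes the point concrete: nothing in the ranking logic understands the items, so a harmful grouping surfaces exactly as readily as a harmless one.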

Oracle
Oct 9, 2004

Main Paineframe posted:

There's always a third option: stop using that algorithm altogether.

It's not like it's some super smart demonstration of incredible machine intelligence. It's just querying the database for things that are commonly bought in the same transaction as the item the user is looking at. It's not an essential part of the shopping experience; it's just Amazon trying to nudge people into buying more poo poo.

The problems here aren't unique to automated algorithms. The well-known anecdote of Target sending maternity advertisements to an expecting mother before her own family knew about her pregnancy came as a result of a few marketing folks asking the company's statistics department whether there was a way to tell if a woman was pregnant. By the time anyone realized the potential PR risk, the stats department had already worked out product cues that could be used to track pregnancy and created a database query for it. The resulting backlash was so bad that the marketing department actually went back and made the pregnancy advertisements less targeted on purpose so that people wouldn't feel like Target knew so much about them.

That was mostly human effort, but it still points to the big problem with automating this stuff: when it was under human control, it was done for a few products and/or demographics that were considered especially important, and any Bad Idea decisions were signed off by a human who had specifically asked for that Bad Idea. With an automated algorithm, it's done for everything by default, ensuring it will find every possible bad idea through sheer brute-force. No marketer is ever going to go and ask the stats department how to target suicidal customers, but the algorithm doesn't have any idea what "suicide" is - it just knows that these products are often bought together.

On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult.

pseudorandom name
May 6, 2007

The actual problem is that Amazon is selling how-to guides for committing suicide, not that the algorithm is correctly identifying people with similar interests and recommending products based on that similarity. This is the same problem with YouTube -- not that YouTube is correctly categorizing neo-nazis as having similar interests to other neo-nazis, but that YouTube is even hosting neo-nazi videos in the first place.

Blue Footed Booby
Oct 4, 2006

got those happy feet

Oracle posted:

On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult.

My first thought was "what about kids where the parents are the problem." :(

SidneyIsTheKiller
Jul 16, 2019

I did fall asleep reading a particularly erotic chapter
in my grandmother's journal.

She wrote very detailed descriptions of her experiences...

Oracle posted:

On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult.

That's an egregious violation of privacy, good lord. And that's before we even consider whether the algorithm is wrong.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.

Oracle posted:

On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult.

How the gently caress would they know to do this when the recommendation algorithm is making these decisions with no human input? That's kind of the entire point here. Focusing on how to solve this exact problem after we know it's a problem is to completely miss the point. Any post hoc solution is totally meaningless when the algorithm will simply make the exact same mistake again with some other combination of purchases we can't predict.

pseudorandom name posted:

The actual problem is that Amazon is selling how-to guides for committing suicide, not that the algorithm is correctly identifying people with similar interests and recommending products based on that similarity. This is the same problem with YouTube -- not that YouTube is correctly categorizing neo-nazis as having similar interests to other neo-nazis, but that YouTube is even hosting neo-nazi videos in the first place.

Ah, the solution to the problem of blind statistics engines guiding our entire lives is to remove everything that could possibly hurt us, one by one. With an infinite army of moderators. What if it wasn't an explicit "how-to guide" but rather a short work of fiction that portrayed the steps in detail? Should we also remove anything that talks about suicide at all?

Clarste fucked around with this message at 07:36 on Oct 10, 2022

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.
As ever a key part is to identify what the problem actually is. YouTube continuing to host Nazi indoctrination material to the point where the algorithm actually promotes it to create more Nazis is the problem.

pseudorandom name
May 6, 2007

Clarste posted:

Ah, the solution to the problem of blind statistics engines guiding our entire lives is to remove everything that could possibly hurt us, one by one. With an infinite army of moderators. What if it wasn't an explicit "how-to guide" but rather a short work of fiction that portrayed the steps in detail? Should we also remove anything that talks about suicide at all?

They're called "retail buyers", not moderators, and they're how all retail businesses worked up until about twenty years ago and how the good retail businesses still work today. Good retail businesses also employ lots of other people in many other important roles, for example they have people whose job it is to ensure that their supply chain isn't filled with counterfeits, something that Amazon also doesn't bother to do.

H.R. Hufflepuff
Aug 5, 2005
The worst of all worlds

Ghost Leviathan posted:

As ever a key part is to identify what the problem actually is. YouTube continuing to host Nazi indoctrination material to the point where the algorithm actually promotes it to create more Nazis is the problem.

Yeah, this sums up the actual problem pretty well.

small butter
Oct 8, 2011

Amazon is generally very bad about intervening in automated and algorithmic stuff. A few months ago, I bought the same expired hand sanitizer three times. The first time, I told them about it. The second time, I told them about it and asked them to make sure it wouldn't happen again when I placed an order for the same items, but it happened again. I told them to take it off their shelves, and it was still there weeks later. I have many examples of this.

evilweasel
Aug 24, 2002

Ghost Leviathan posted:

As ever a key part is to identify what the problem actually is. YouTube continuing to host Nazi indoctrination material to the point where the algorithm actually promotes it to create more Nazis is the problem.

that's not correct, and this sort of blurring of the difference between liability for hosting user-provided content and liability for recommended content is a core problem in any of these discussions

section 230 represents a reasonable policy judgment - that it is not possible for any internet service hosting user-provided content to effectively screen it all, and so the choices are basically (a) no user-provided content sites; or (b) no liability for the mere hosting of such user-provided content sites. you can tweak around the edges of exactly how you do (b) but ultimately the volume of any user-provided content site means you have to pick one of those. you can have something DMCA-like where there becomes an obligation to take stuff down once you're notified, perhaps.

here's the thing: that is not what we are discussing here. amazon lumping suicide packs together is not mere liability for hosting user-supplied content (it is an open question if anything amazon does w/r/t its store should be considered section 230-ish). youtube's algorithm deliberately serving you extremist videos of one flavor or another because it recognizes those as driving "engagement" is not merely hosting user-supplied content.

there's a good policy argument that user-published content is a social good overall, even if there's quite a lot of bad content. but importantly - and people deliberately blur this - those arguments do not apply to algorithmic recommendation.

you can make similar arguments that if algorithmic recommendations expose the creator of the product to ordinary products liability claims, then you will get no algorithmic recommendations. there's two problems with that. first: it is not at all apparent that algorithmic recommendations are the same social good as self-published content. second, algorithmic recommendations are profit-seeking activities and as such have (as we have seen) a potential to tend towards creating negative externalities in search of profit. it is in youtube's financial interest for you to get hooked on extremist content, because assholes watching nazi videos watch a lot of them while people watching cooking videos watch a couple.

youtube wants you to think there is no option between forcing them to pre-approve each and every video uploaded to the site (in practice, largely ending the product) or allowing their algorithm to keep trying to find which flavor of extremist videos will get you hooked. that's not the case: youtube can have the right to not get sued every time someone uploads a nazi video they didn't know about, but be liable for what their algorithms decide to serve you up. we would be impoverished if some rando couldn't upload a video of, say, how to do some esoteric electronics project they were interested in that maybe a thousand people will ever watch (something that would not be cost-effective to allow if all videos had to be manually screened). we would not be impoverished if that video didn't have "auto-play" after it where the algorithm tried to figure out what would hook you best if that algorithm couldn't be forced to stay away from extremist content.

it is worth noting as well that it is precisely that recommendation algorithm that generates so much nazi youtubes. because it recommends them, making them profitable to make, and creating a whole ecosystem of them!

FishBulbia
Dec 22, 2021

Oracle posted:

On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult.

Yeah, they can also set it up so they can tell if you're reading too much radical content, or just things about gay people or something, and then send that right to the local authorities, thereby keeping everyone safe :)

FishBulbia
Dec 22, 2021

They should just get rid of it. If you want to find a new book ask a librarian or bookseller. Watch youtube videos by word of mouth or by being shown them by friends.

Paracaidas
Sep 24, 2016
Consistently Tedious!
https://twitter.com/ddayen/status/1579472512735645697

For all the many ills of the Biden administration (including some terrible nominees), it has also hit on some very, very good ones. Here's Alvaro Bedoya, who replaced Chopra on the FTC after Chopra was named director of the CFPB, in a speech to the Midwest Forum on Fair Markets on September 22.

I strongly recommend clicking through if you find this topic interesting, as there are embedded links to more detail and citations throughout. Phoneposting makes those particularly challenging to embed in my post. It also has an audio version, if you'd prefer to listen rather than read.

quote:

A Call for Fairness, a Rule for Efficiency
The President nominated me to this position roughly a year ago. I spent a good bit of that time reading antitrust treatises cover to cover.

Doing that, I quickly read that the purpose of antitrust is to maximize efficiency. I read that the Supreme Court declared it “axiomatic” that the antitrust laws were passed to protect competition, not competitors, which is a way of saying that antitrust laws are not intended to protect the small and allegedly inefficient.

But I also used that time to read a lot of history, which told a very different story. I learned that small farmers pressed Iowa to pass the nation’s first antitrust law in 1888. I learned that when Congress convened in 1890 to debate the Sherman Act, they did not talk about efficiency. No, the most common complaint in the Sherman Act debates was that a cartel of meatpackers was cheating cattlemen out of a fair price for their livestock. In 1936, Congress spent months debating a bill to protect small-town grocers being driven out of business by powerful chain stores who got secret payoffs from their suppliers.

“What are we trying to get away from these chains?” asked one of the bill’s supporters. “What we are trying to take away from them is secret discounts, secret rebates, and secret advertising allowances. We are trying to take away from them those practices that are unfair.”

It wasn’t just 1890 or 1936. Five times in 60 years, Congress passed antitrust laws that in letter or spirit demanded fairness for small business, often rural small business. Yet today, it is “axiomatic” that antitrust does not protect small business. And that the lodestar of antitrust is not fairness, but efficiency.

How did this happen? What has this focus on efficiency meant for rural America? And what would it look like to return to fairness?

A Child in West Virginia
In many parts of rural America, independent pharmacies are the one place where you can fill your prescriptions, get your shots, and get answers to medical questions. Here’s a story I read on the website of the West Virginia state insurance commissioner about something that happened at one of those pharmacies.

A family walks into a pharmacy. Their child has cancer. The pharmacist has the child’s medicine behind the counter, ready to dispense. But when that pharmacist calls the pharmacy benefit manager, or PBM, for the family’s insurance company, they are denied authorization to give the family that medicine. Instead, they are told that the medicine can only be dispensed by the PBM’s own mail order specialty pharmacy. The family was to go home and wait up to two weeks to receive the medicine for their child in the mail.

How did this happen?

Picture a set of 39 companies. Some pharmacies, some PBMs, some insurers. Twenty years ago, these were all separate. Today, those 39 companies have merged into just three vertically integrated entities. And so today, when most people fill a prescription, just one of three entities mediates what medicine they get, what they pay for it, and how they will get it—and that corporate entity makes money by making sure that prescription is filled by its own pharmacy. Even, apparently, when it is cancer medicine. And even, apparently, when doing that will force a child to wait for two weeks.

How did this happen? This change from 39 companies to just three?

Merging companies usually predict that the merger is going to save them money. They then predict that they will pass those predicted savings on to consumers via lower prices. For many years, however, it was not a mainstream idea that those predicted price reductions could offset the harm of a merger that increases market power.

That started to change in the 1970s and ’80s. The idea took hold within enforcement agencies that mergers, particularly vertical ones, were presumptively good for the economy and good for consumers. This idea was given the greatest weight for vertical mergers, the kind of mergers that help make it so that a pharmacy middleman has an interest in steering a patient to their own pharmacy.

There are certainly many factors in merger analysis. But it is inescapable that this presumption of efficiency significantly contributed to making 39 separate companies into the three vertically integrated firms that exist today.

Today, rural independent pharmacies are closing one after another after another. In Minnesota, from 2003 to 2018, 30 rural zip codes lost their only pharmacy.

Cattlemen in Iowa
I was in Des Moines for a conference; I asked our team to set up a listening session with some cattlemen and corn growers. It was about nine or ten people. Every one was in crisis.

The prices of seeds, feed, fertilizer, and farm equipment were going up. The prices of their products were going down. Farmers used to make 40 cents on every dollar spent at the grocery; they make 16 now. They are going out of business by the thousands. “We have a noose around our necks and we’re standing on an ice cube,” said one. “It’s like being picked apart by a chicken,” said another.

The group talked about a lot of factors behind these changes, but they kept returning to consolidation. Fertilizer, seeds, grain buying, meatpacking: There used to be dozens of firms, sometimes over a hundred, in each of these sectors. Now each is dominated by just four; depending on the region, there may now be just one supplier of a key input, or just one meatpacking plant.

What is it like to be down to just one place to sell your livestock? We’ve known since 1890 that it can depress farmers’ prices. But it’s more than that. One of the cattlemen described through tears how he had to gas a warehouse full of cattle when the one processing plant accessible to him was shut down because of COVID.

Another described animal abuse on the lot that he said was unheard of in competitive markets. A cow that he raised was bolted in the head, killed, dragged out of a trailer with a log chain, and dumped in the garbage because she had slipped in the trailer on the drive to the processing plant. The producer pleaded with the lot worker to take the cow home instead so she would have time to recover and heal, but the worker informed the producer that no livestock leaves the premises. The cow was given a couple of minutes to try and get up and was then shot.

But maybe the most shocking thing was how scared they were that something they said would somehow get back to their suppliers or their purchasers and that they would pay for it.

How did this happen?

The merger wave began in the 1980s. Tellingly, when farmers have raised alarms about the consolidation of input and product markets, economists have answered that the consolidation “unquestionably enhance[s] efficiency.”

When antitrust was guided by fairness, these farmers’ families were part of a thriving middle class across rural America. After the shift to efficiency, their livelihoods began to disappear.

A Grocer in South Dakota
That shift didn’t just affect farmers. It also affected the communities that depend on them and their products.

Like independent pharmacies, independent groceries serve places that bigger companies do not. The lower the income, the lower the population, the more likely it is to be served by an independent.

I recently watched video testimony of an independent grocer named R.F. Buche, who owns 21 stores in South Dakota Indian country. Mr. Buche’s family has been serving Indian country for 117 years. Many of his stores are the only place where locals can easily get fresh milk and produce. Many of them are over an hour’s drive from the nearest big box store.

Yet Mr. Buche faces challenges that those big box stores do not. Manufacturers sell products to the big box stores in sizes and packages that they don’t offer to him. When he is offered the same products, he cannot get the same prices for them. And that’s not because of quantity.

Like most independent grocers, Mr. Buche works with a wholesaler. By bundling the orders of multiple independent grocers, that wholesaler can often meet the order sizes of the big box stores. But even then, his wholesaler is not given the same price. That price is kept secret.

When the pandemic hit, manufacturers cut supplies to Mr. Buche and his wholesaler. “Picture this, please,” he told Congress. “Pine Ridge, one of the poorest counties in the nation, not having WIC items like formula for babies on their grocery store shelf.” Note that this was months before the baby formula shortages caused by the Sturgis plant in Michigan.

The only way Mr. Buche could keep products like baby formula, ground beef, or Pedialyte on his shelves was by driving over a thousand miles each week to move essential products between his low-volume and high-volume stores. Yet when Mr. Buche would walk into a big box store 50 or 100 miles from his own, those shelves would be full of those products.

What is happening to Mr. Buche is happening to independent groceries around the country. They are closing, by the thousand, creating food deserts across rural America.

How did this happen?

Efficiency happened. In 1936, Congress passed the Robinson-Patman Act, the law I talked about earlier that bans “unfair practices” like “secret discounts” and “secret rebates,” available only to the large and powerful. When it passed that law, Congress went out of its way to “keep open the door of opportunity for the small-business man as well as large.” For decades, Robinson-Patman was a mainstay of FTC enforcement. It arguably prohibits many of the practices Mr. Buche is experiencing.

Then, as efficiency gained ground in the mid-1980s, a view took hold among enforcers and then courts: First, that Robinson-Patman was an outlier among antitrust statutes because the Congress that passed it focused on harms to supposedly inefficient small businesses. Second, that the law raised consumer prices. Enforcement slowed to a trickle, and then stopped completely.

Those claims are unproven or incorrect. To my knowledge, some 86 years after its passage, there is not one empirical analysis showing that Robinson-Patman actually raised consumer prices. And none other than Professor Herbert Hovenkamp has explained that Robinson-Patman was not an outlier. According to him, the congressional debates around each of the other major antitrust laws were also “fairly dominated … by a strong desire to protect small business.”

A Return to Fairness
We need to step back and question the role of efficiency in antitrust enforcement.

If efficiency is so important in antitrust, then why doesn’t that word, “efficiency,” appear anywhere in the antitrust statutes that Congress actually wrote and passed?

If efficiency is the goal of antitrust, then why am I charged by statute with stopping unfair methods of competition, and not “inefficient” ones?

We cannot let a principle that Congress never wrote into law trump a principle that Congress made a core feature of that law. I think it is time to return to fairness.

People may not know what is efficient—but they know what’s fair. It may be efficient to send a child home to wait two weeks for their cancer medicine. We all know it isn’t fair. It may be efficient to force cattlemen to sell their livestock to just one meatpacker. It may be efficient for Pine Ridge to go without baby formula. We all know that that’s not what fair markets look like.

That visceral understanding of fairness has often been dismissed as ambiguous and “impressionistic.” I disagree. Because Congress and the courts have told us, directly and repeatedly, how to implement protections against unfairness.

Certain laws that were clearly passed under what you could call a fairness mandate—laws like Robinson-Patman—directly spell out specific legal prohibitions. Congress’s intent in those laws is clear. We should enforce them.

But Congress did more than that. As FTC Chair Lina Khan has explained, Congress deliberately charged the FTC to go beyond the limits of the Sherman Act. And then, the Supreme Court came in and repeatedly reaffirmed the idea that our Section 5 authority goes beyond Sherman. So I support Chair Khan’s goal to reactivate enforcement under our unfairness authority, and to issue a policy statement setting out the scope of that authority.

My own focus is on people living paycheck to paycheck. For me, that’s what antitrust is about: your groceries, your prescriptions, your paycheck. I want to make sure the Commission is helping the people who need it the most. And I want to make sure we don’t leave behind rural America.

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.

You miss my point of identifying what the problem is. The Nazi youtubes should all be deleted and their creators identified, investigated and prosecuted as terrorists. That is entirely a different problem from the Amazon algorithm identifying and recommending item combinations used for suicide.

evilweasel
Aug 24, 2002

Ghost Leviathan posted:

You miss my point of identifying what the problem is. The Nazi youtubes should all be deleted and their creators identified, investigated and prosecuted as terrorists. That is entirely a different problem from the Amazon algorithm identifying and recommending item combinations used for suicide.

I disagree. The way you're framing "what the problem is" plays into the goal of immunizing the algorithms - and the youtube recommendation algorithm is specifically known to promote extremist right-wing content, and, thus, monetize it and encourage the creation of more of it.

Sure, you can try to go after Nazi youtubers except (a) you'll quickly find out that they shift to non-extradition countries (like, say, notable proponent of right-wing and nazi ideology Russia); and (b) the first amendment generally prohibits the criminalization of mere speech, so it is highly unlikely that prosecuting the video uploaders is going to do much. You'll just wind up with people staying on the right side of the "incitement to imminent lawless action" test that's the standard for criminalizing hate speech under US law, and Youtube gleefully recommending those videos and asserting that it cannot be sued for products liability under Section 230 for those recommendations.

Amazon recommending suicide kits and Youtube's recommendations promoting extremist content (because it is more "engaging") are both cases where a for-profit recommendation algorithm is being asserted to have Section 230 immunity.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
Why the gently caress is anyone defending the recommendation algorithm anyway?

Tibalt
May 14, 2017

What, drawn, and talk of peace! I hate the word, As I hate hell, all Montagues, and thee

They're cool and interesting and fun to play with if you're into data science. But the number of times I've heard a variation of "So it turns out it was picking up whether someone with bipolar disorder was entering a manic phase..." makes me pretty leery of them, ethically.

Paracaidas
Sep 24, 2016
Consistently Tedious!

Clarste posted:

Why the gently caress is anyone defending the recommendation algorithm anyway?

It's generally less "all hail the algorithm, bringer of good content and unassailable vessel of perfection" and more Section 230 absolutism - the belief that any liability for platforms (direct or indirect) renders 230 a dead letter, consolidating power under the current worst actors while killing most of the rest of the interactive internet within a year.

I don't have an issue (in a spherical elephant/frictionless room sense) with legislation or rulings that split out for liability unsolicited recommendations based on aggregated and personal user history, but protect the recommendations of, for instance, search engines (to say nothing of moderation and the other 230 protections). I still think the harm done by this hypothetical legal framework is understated by its proponents (Great Replacement will be fine, BLM and (depending on if the terfs win) LGBT/just T content will be yanked), but haven't put enough thought into whether it'll be a net positive.

I look at the congressional, judicial, and other political voices backing 230 reform, on the other hand, and have very little faith in their ability to make tailored, beneficial changes that don't make every one of these problems worse.

But, towards evilweasel's points, that ends up functionally indistinguishable from:

evilweasel posted:

youtube wants you to think there is no option between forcing them to pre-approve each and every video uploaded to the site (in practice, largely ending the product) or allowing their algorithm to keep trying to find which flavor of extremist videos will get you hooked.

Twincityhacker
Feb 18, 2011

Paracaidas posted:

https://twitter.com/ddayen/status/1579472512735645697

For the many ills of the Biden administration (including some terrible nominees), he's also hit on some very, very good ones. Here's Alvaro Bedoya, who replaced Chopra on the FTC after he was named director of the CFPB, in a speech to the Midwest Forum on Fair Markets on September 22.

I strongly recommend clicking through if you find this topic interesting, as there are embedded links to more detail and citations throughout. Phoneposting makes those particularly challenging to embed in my post. It also has an audio version, if you'd prefer to listen rather than read.

While not a rural dweller myself, I do know several people who live in rural areas. ( All hail fandom, the great connector of people. ) One of them in particular is a cattle rancher, and for all the work that the entire three-generation family does, they pull in a tiny amount of money. And everything besides beef is more expensive. Their satellite internet costs twice as much for half the service that someone on the same fandom message board in a more populated area gets.

I really hope that this government dude walks the walk and does something about this.

Crain
Jun 27, 2007

I had a beer once with Stephen Miller and now I like him.

I also tried to ban someone from a Discord for pointing out what an unrelenting shithead I am! I'm even dumb enough to think it worked!
I'm wondering when machine learning "AI" algorithms, in the sense of what people are complaining about, became the de facto way things have to be done.

It's a problem of scale, sure, but is it really a problem of the scale of upload or just a problem of the scale of profit? NewGrounds, Flashplayer.com, AlbinoBlackSheep, EbaumsWorld, etc were all pretty functional with pretty simple recommendation systems. At best they were operating off of "new" and "popular" with some really basic means to track that, but they were also absolutely not using any sort of manual verification of content. Hell most of those were probably single person operations at first and barely expanded to more than a handful before either closing because you couldn't compete with youtube/online AAA games or being bought out by media companies.

And even in a world where you had 56k as the standard, DSL at best, and barely 16 million people online (later up to 700 million by the end of 2003), those sites were chock full of garbage content flooding in day in, day out, and yet people were still able to easily find decent stuff without hyper-invasive systems to derive exactly what will keep you glued to the screen. Do we really need a system that can guess that me watching a video on how to solve some metal puzzle means I'm X% likely to watch a video on card tricks, and if I watch that one I'm XX% likely to be interested in stained-glass window making?

I answered my own question. Google/Youtube/FB/Amazon et al. do need it because the revenue will fall off a cliff if they aren't abusing the gently caress out of operant conditioning and addiction cycles crossed with invasive meta analysis of behavior.
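
To be concrete about what "pretty simple recommendation systems" means above: a hedged sketch, not any of those sites' actual code, but the whole "New"/"Popular" front page those early sites ran amounted to roughly this, with no per-user tracking at all:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of an early-2000s video site front page.
# Every visitor sees the same two lists; nothing is personalized.

@dataclass
class Video:
    title: str
    uploaded: datetime
    views: int = 0

def newest(videos, n=10):
    """'New' list: just sort by upload time, most recent first."""
    return sorted(videos, key=lambda v: v.uploaded, reverse=True)[:n]

def popular(videos, n=10, window_days=7):
    """'Popular' list: most-viewed uploads within a recent window."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [v for v in videos if v.uploaded >= cutoff]
    return sorted(recent, key=lambda v: v.views, reverse=True)[:n]
```

That's the whole contrast: view counts and timestamps versus a behavioral profile of each individual user.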

BlueBlazer
Apr 1, 2010

Crain posted:

I answered my own question. Google/Youtube/FB/Amazon et al. do need it because the revenue will fall off a cliff if they aren't abusing the gently caress out of operant conditioning and addiction cycles crossed with invasive meta analysis of behavior.

Algorithm = Editorialization = Liable for content recommended

Even a promoted top 10 list would be considered a level of editorialization. Just because it's numbers nerds deciding it and not some influencer/media mogul doesn't make it less so.

Fart Amplifier
Apr 12, 2003

Crain posted:

I answered my own question. Google/Youtube/FB/Amazon et al. do need it because the revenue will fall off a cliff if they aren't abusing the gently caress out of operant conditioning and addiction cycles crossed with invasive meta analysis of behavior.

Google/Youtube/FB/Amazon need it because everyone expects these services to be free and this is how free services are paid for.
