|
BiggerBoat posted:I'm 100% totally opposed to the death penalty, mind, but were I ever to be on death row and face my maker, I'll take "heroin overdose, please" and feel like an opioid overdose might be some sort of happy medium.
|
# ? Oct 8, 2022 21:01 |
|
Regarding undecideds, I AM GRANDO posted:I think most people who advocate for the death penalty really like the ritual aspects of it and would feel that it wasn’t really the death penalty if terror and pain weren’t properly measured aspects of it. I seem to recall hearing that witnesses to Timothy McVeigh's execution did complain that it was anticlimactic and he didn't appear to be suffering overmuch.
|
# ? Oct 8, 2022 21:02 |
|
This keeps getting brought up over and over, but anyway. If you absolutely, positively have to kill some people and don't want them to suffer, you could suffocate them using nitrogen. It does not trigger the physiological response a human being has to suffocating, since it does not involve raising the carbon dioxide content of the bloodstream. It probably isn't pleasant to pass out that way, but it's painless. But the point of the death penalty is to terrify people, and satisfy the torture fantasies of others. This Venn diagram probably has an area of overlap, too.
|
# ? Oct 8, 2022 21:31 |
|
Rappaport posted:This keeps getting brought up over and over, but anyway. If you absolutely, positively have to kill some people and don't want them to suffer, you could suffocate them using nitrogen. It does not trigger the physiological response a human being has to suffocating, since it does not involve raising the carbon dioxide content of the bloodstream. It probably isn't pleasant to pass out that way, but it's painless. According to my cousins I was twitching and jerking around like I was having a seizure, so it might even satisfy the torture fantasies of sadists while still being painless for the victim.
|
# ? Oct 8, 2022 21:46 |
|
One thing to remember is that most people aren't following politics closely. Lots of things sound weird when you imagine someone who knows what every candidate says about every issue and thinks all of the national issues are important, but make more sense if the undecided voters (or people who switched parties, or whatever else) just don't know very much about politics.
|
# ? Oct 8, 2022 21:47 |
|
Rappaport posted:This keeps getting brought up over and over, but anyway. If you absolutely, positively have to kill some people and don't want them to suffer, you could suffocate them using nitrogen. It does not trigger the physiological response a human being has to suffocating, since it does not involve raising the carbon dioxide content of the bloodstream. It probably isn't pleasant to pass out that way, but it's painless. Inert gas suffocation is how I'd choose to go if I ever need assisted suicide. It's entirely suffering-free.
|
# ? Oct 8, 2022 22:03 |
|
World Famous W posted:there is nothing pleasant about oding Right, and I mean... I'll take your word for it, and I wouldn't know, having never done heroin - let alone overdosed - but from what I can tell it's probably a better way to go out than the electric chair or whatever chemical bleach and weed killer cocktail they currently use for an execution in Arkansas. As I said, I'd prefer that the state never have the power to legally murder anyone, but as long as we're doing that, an opioid overdose seems about as peaceful and painless as anything my non-sadistic mind can think of. I suppose I'm imagining something more like hospice than whatever barbaric lava-in-the-veins procedure we hem and haw about at the moment, arguing over what's humane or not in a country where half of our society thinks we should have smaller government but salivates at the idea of locking people up for life and wants to fast-track death sentences. BiggerBoat fucked around with this message at 22:09 on Oct 8, 2022 |
# ? Oct 8, 2022 22:05 |
|
World Famous W posted:there is nothing pleasant about oding Yeah. It does eventually cause people to pass out, but before that it's not fun. It also works by making the person stop breathing, which is horrific, even if they are unconscious. Even with stuff designed to anesthetize people, it's bad. Also, companies specifically stopped supplying certain drugs to states that executed prisoners, leading to a cottage industry of smaller compounding pharmacies doing illegal synthesis of fast-acting barbiturates at the behest of the states, along with secret, illicit funds to pay for the drugs. States turning to other methods is leading to more cruelty: ODs. But eventually, if they can't get their hands on legal fentanyl... There are "painless" ways to do it (certain gases) but 1. It's still cruel and 2. Why?! Cranappleberry fucked around with this message at 00:09 on Oct 9, 2022 |
# ? Oct 8, 2022 22:12 |
|
World Famous W posted:there is nothing pleasant about oding maybe not for bystanders but i'll say from personal experience that it's indistinguishable from passing out ... you just dont wake back up (without intervention)
|
# ? Oct 8, 2022 23:34 |
|
Zwabu posted:I wonder what could possibly ever change someone’s alignment these days. Can the GOP screw up badly enough for people to decide they aren’t a Republican anymore? My Dad was a Republican for decades. I'm not sure when he jumped ship and started voting Democrat, but the absolute latest had to be 2008.
|
# ? Oct 9, 2022 00:12 |
|
Cranappleberry posted:Yeah. It does eventually cause people to pass out but before that it's not fun. It also works by making the person stop breathing, which is horrific, even if they are unconscious. It's not bad in the way you are implying, though. It is true that people who overdose on drugs like opioids or propofol stop breathing, but it's because the drive to breathe is removed, so the person doesn't feel like they need to take a breath at all. They don't have the feeling of suffocating they would have if they still had the drive to breathe but were unable to do so because someone had a pillow over their face or something.

I'd rather go out with an OD than something like filling a chamber with nitrogen. While it's true that, depending on the specific method, that might work quickly because the deprivation of oxygen might render you unconscious before you could register the feeling of distress or suffocation, you don't have any of the respiratory drive removed the way a whopping dose of fentanyl or heroin would do. I wouldn't personally want to bet on the oxygen deprivation working before feeling the rest.

The main hardships associated with lethal injection would be difficulty establishing intravenous access, requiring multiple pokes, and, especially if propofol is the agent and depending on where the IV is located, the medicine can be quite caustic and cause pain going into the veins before taking effect. Source: I'm someone who administers these medicines to others on the daily.

Edit: on reflection the nitrogen method might not be too bad, because the main feeling of not breathing enough is due to not eliminating CO2 rather than being short of oxygen, which mainly makes you feel faint. Since your breathing is unimpeded and you still eliminate CO2 freely, there might not be too much distress from the decline in oxygen, especially if it's rapid. But I'd still go for the IV method, assuming not too much difficulty getting an IV in place.
Zwabu fucked around with this message at 01:08 on Oct 9, 2022 |
# ? Oct 9, 2022 01:04 |
|
Zwabu posted:Edit: on reflection the nitrogen method might not be too bad because the main feeling of not breathing enough is due to not eliminating CO2 rather than being short of oxygen which mainly makes you feel faint. Since your breathing is unimpeded and you still eliminate CO2 freely, there might not be too much distress from the decline in oxygen, especially if it's rapid. But I'd still go for the IV method assuming not too much difficulty getting an IV in place.
|
# ? Oct 9, 2022 01:44 |
|
Twincityhacker posted:My Dad was a Republican for decades. I'm not sure when he jumped ship and started voting Democrat, but the absolute latest had to be 2008. I was a "republican" from about 1996-2002. Something really bothered me about Bill Clinton and, by proxy, Al Gore. And yeah, I wasn't crazy about Hillary either. So I voted Bush in 2000 and then either didn't vote or voted Libertarian until 2018. Politics didn't do anything for me, and the dream of true Libertarianism, where we're fiscally conservative but socially liberal, appealed to me. Of course then 2016 hit and holy poo poo did I reevaluate my political compass. Trump was so offensively bad that I had to start paying attention. Voted Democrat for the first time in 2018 and in 2020, though I still wish we could get a candidate under 60 who's actually popular and not just "Not Trump"
|
# ? Oct 9, 2022 03:29 |
|
(Warning: discussion on suicide and specific suicide methods incoming) On the topic from a few days ago of holding big tech responsible for their recommendation algorithms, here is a much more tangible example of how loosely monitored systems like this can go wrong, and why companies need to be liable. Amazon is being sued by the parents of children who died by suicide using chemicals bought on Amazon. Amazon was recommending what the plaintiffs call "suicide kits" through their "Buy it with" groupings (encouraging customers to buy harmful chemicals, a digital scale, medication to prevent vomiting, and a book on suicide methods all at once). Their auto-complete also recommended adding the search term "suicide" to people searching for these chemicals. https://twitter.com/arstechnica/status/1579056836212273152
|
# ? Oct 9, 2022 18:22 |
|
Phlag posted:(Warning: discussion on suicide and specific suicide methods incoming) I think this really comes down to regulating algorithms, or at least putting a human in the loop to hit a button whenever these issues arise. The same thing happens with radicalization via YouTube videos or Facebook groups, and how that poo poo is being gamed to indoctrinate more adherents by abusing the traffic algorithms that drive people to those views. Like, after a few hits, maybe have a manager come in and flag recommendations before they hurt more people.
|
# ? Oct 9, 2022 18:55 |
|
Young Freud posted:I think this really comes down to regulating algorithms, or at least putting a human in the loop to hit a button whenever these issues arise. The same thing happens with radicalization via YouTube videos or Facebook groups, and how that poo poo is being gamed to indoctrinate more adherents by abusing the traffic algorithms that drive people to those views. Like, after a few hits, maybe have a manager come in and flag recommendations before they hurt more people. Amazon is going to run into the same problem Youtube runs into: there are not enough managers in the world to keep up with how often the algorithm fucks up
|
# ? Oct 9, 2022 19:14 |
|
haveblue posted:Amazon is going to run into the same problem Youtube runs into- there are not enough managers in the world to keep up with how often the algorithm fucks up I guess at some point Amazon is going to have to do a cost analysis and see if it costs more to hire content managers or to pay out lawsuits.
|
# ? Oct 9, 2022 19:16 |
|
Young Freud posted:I guess at some point Amazon is going to have to do a cost analysis and see if it costs more to hire content managers or to pay out lawsuits. There's always a third option: stop using that algorithm altogether. It's not like it's some super smart demonstration of incredible machine intelligence. It's just querying the database for things that are commonly bought in the same transaction as the item the user is looking at. It's not an essential part of the shopping experience, it's just Amazon trying to nudge people into buying more poo poo.

The problems here aren't unique to automated algorithms. The well-known anecdote of Target sending maternity advertisements to an expecting mother before her own family knew about her pregnancy came as a result of a few marketing folks asking the company's statistics department whether there was a way to tell if a woman was pregnant. By the time anyone realized the potential PR risk, the stats department had already worked out product cues that could be used to track pregnancy and created a database query for it. The resulting backlash was so bad that the marketing department actually went back and made the pregnancy advertisements less targeted on purpose so that people wouldn't feel like Target knew so much about them.

That was mostly human effort, but it still points to the big problem with automating this stuff: when it was under human control, it was done for a few products and/or demographics that were considered especially important, and any Bad Idea decisions were signed off by a human who had specifically asked for that Bad Idea. With an automated algorithm, it's done for everything by default, ensuring it will find every possible bad idea through sheer brute force. No marketer is ever going to go and ask the stats department how to target suicidal customers, but the algorithm doesn't have any idea what "suicide" is - it just knows that these products are often bought together.
|
# ? Oct 9, 2022 20:03 |
|
Main Paineframe posted:There's always a third option: stop using that algorithm altogether. On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult.
|
# ? Oct 10, 2022 02:29 |
|
The actual problem is that Amazon is selling how-to guides for committing suicide, not that the algorithm is correctly identifying people with similar interests and recommending products based on that similarity. This is the same problem with YouTube -- not that YouTube is correctly categorizing neo-nazis as having similar interests to other neo-nazis, but that YouTube is even hosting neo-nazi videos in the first place.
|
# ? Oct 10, 2022 02:38 |
|
Oracle posted:On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult. My first thought was "what about kids where the parents are the problem."
|
# ? Oct 10, 2022 06:47 |
|
Oracle posted:On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult. That's an egregious violation of privacy, good lord. And that's before we even consider whether the algorithm is wrong.
|
# ? Oct 10, 2022 07:10 |
|
Oracle posted:On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult. How the gently caress would they know to do this when the recommendation algorithm is making these decisions with no human input? That's kind of the entire point here. To focus on how to solve this exact problem after we know it's a problem is to completely miss the point. Any post hoc solution is totally meaningless when the algorithm will simply make the exact same mistake again with some other combination of purchases we can't predict. pseudorandom name posted:The actual problem is that Amazon is selling how-to guides for committing suicide, not that the algorithm is correctly identifying people with similar interests and recommending products based on that similarity. This is the same problem with YouTube -- not that YouTube is correctly categorizing neo-nazis as having similar interests to other neo-nazis, but that YouTube is even hosting neo-nazi videos in the first place. Ah, the solution to the problem of blind statistics engines guiding our entire lives is to remove everything that could possibly hurt us, one by one. With an infinite army of moderators. What if it wasn't an explicit "how-to guide" but rather a short work of fiction that portrayed the steps in detail? Should we also remove anything that talks about suicide at all? Clarste fucked around with this message at 07:36 on Oct 10, 2022 |
# ? Oct 10, 2022 07:27 |
|
As ever a key part is to identify what the problem actually is. YouTube continuing to host Nazi indoctrination material to the point where the algorithm actually promotes it to create more Nazis is the problem.
|
# ? Oct 10, 2022 07:46 |
|
Clarste posted:Ah, the solution to the problem of blind statistics engines guiding our entire lives is to remove everything that could possibly hurt us, one by one. With an infinite army of moderators. What if it wasn't an explicit "how-to guide" but rather a short work of fiction that portrayed the steps in detail? Should we also remove anything that talks about suicide at all? They're called "retail buyers", not moderators, and they're how all retail businesses worked up until about twenty years ago and how the good retail businesses still work today. Good retail businesses also employ lots of other people in many other important roles: for example, they have people whose job it is to ensure that their supply chain isn't filled with counterfeits, something that Amazon also doesn't bother to do.
|
# ? Oct 10, 2022 08:07 |
|
Ghost Leviathan posted:As ever a key part is to identify what the problem actually is. YouTube continuing to host Nazi indoctrination material to the point where the algorithm actually promotes it to create more Nazis is the problem. Yeah, this sums up the actual problem pretty well.
|
# ? Oct 10, 2022 08:41 |
|
Amazon is generally very bad about intervening with automated and algorithmic stuff. A few months ago, I bought expired hand sanitizer of the same type three times. The first time, I told them about it. The second time, I told them about it and asked them to ensure that it wouldn't happen again when I placed another order for the same items, but it happened again. I told them to take it off their shelves and it was still there weeks later. I have many examples of this.
|
# ? Oct 10, 2022 15:14 |
|
Ghost Leviathan posted:As ever a key part is to identify what the problem actually is. YouTube continuing to host Nazi indoctrination material to the point where the algorithm actually promotes it to create more Nazis is the problem. that's not correct, and this sort of blurring of the difference between liability for hosting user-provided content and liability for recommended content is a core problem in any of these discussions. section 230 represents a reasonable policy judgment - that it is not possible for any internet service hosting user-provided content to effectively screen it all, and so the choices are basically (a) no user-provided content sites; or (b) no liability for the mere hosting of such content. you can tweak around the edges of exactly how you do (b) but ultimately the volume of any user-provided content site means you have to pick one of those. you can have something DMCA-like where there becomes an obligation to take stuff down once you're notified, perhaps. here's the thing: that is not what we are discussing here. amazon lumping suicide packs together is not mere liability for hosting user-supplied content (it is an open question if anything amazon does w/r/t its store should be considered section 230-ish). youtube's algorithm deliberately serving you extremist videos of one flavor or another because it recognizes those as driving "engagement" is not merely hosting user-supplied content. there's a good policy argument that user-published content is a social good overall, even if there's quite a lot of bad content. but importantly - and people deliberately blur this - those arguments do not apply to algorithmic recommendation. you can make similar arguments that if algorithmic recommendations expose the creator of the product to ordinary products liability claims, then you will get no algorithmic recommendations. there's two problems with that.
first: it is not at all apparent that algorithmic recommendations are the same social good as self-published content. second, algorithmic recommendations are profit-seeking activities and as such have (as we have seen) a potential to tend towards creating negative externalities in search of profit. it is in youtube's financial interest for you to get hooked on extremist content, because assholes watching nazi videos watch a lot of them while people watching cooking videos watch a couple. youtube wants you to think there is no option between forcing them to pre-approve each and every video uploaded to the site (in practice, largely ending the product) or allowing their algorithm to keep trying to find which flavor of extremist videos will get you hooked. that's not the case: youtube can have the right to not get sued every time someone uploads a nazi video they didn't know about, but be liable for what their algorithms decide to serve you up. we would be impoverished if some rando couldn't upload a video of, say, how to do some esoteric electronics project they were interested in that maybe a thousand people will ever watch (something that would not be cost-effective to allow if all videos had to be manually screened). we would not be impoverished if that video lost the "auto-play" after it, where the algorithm tries to figure out what would hook you best, if that algorithm couldn't be forced to stay away from extremist content. it is worth noting as well that it is precisely that recommendation algorithm that generates so many nazi youtubes. because it recommends them, making them profitable to make, and creating a whole ecosystem of them!
|
# ? Oct 10, 2022 15:17 |
|
Oracle posted:On the other hand, they could keep this algorithm, not make it public, and instead include links to books about how to deal with depression for those making those purchases together and maybe a warning email to the parent-linked account stating their child was exhibiting danger signs of suicidality and to schedule a psych consult. Yeah, they can also set it up so they can tell if you're reading too much radical content, or just things about gay people or something, and then send that right to the local authorities, thereby keeping everyone safe
|
# ? Oct 10, 2022 15:36 |
|
They should just get rid of it. If you want to find a new book ask a librarian or bookseller. Watch youtube videos by word of mouth or by being shown them by friends.
|
# ? Oct 10, 2022 15:37 |
|
https://twitter.com/ddayen/status/1579472512735645697 For the many ills of the Biden administration (including some terrible nominees), he's also hit on some very, very good ones. Here's Alvaro Bedoya, who replaced Chopra on the FTC after he was named director of the CFPB, in a speech to the Midwest Forum on Fair Markets on September 22. I strongly recommend clicking through if you find this topic interesting, as there are embedded links to more detail and citations throughout. Phoneposting makes those particularly challenging to embed in my post. It also has an audio version, if you'd prefer to listen rather than read. quote:A Call for Fairness, a Rule for Efficiency
|
# ? Oct 10, 2022 15:40 |
|
You miss my point of identifying what the problem is. The Nazi youtubes should all be deleted and their creators identified, investigated and prosecuted as terrorists. That is entirely a different problem from the Amazon algorithm identifying and recommending item combinations used for suicide.
|
# ? Oct 10, 2022 15:42 |
|
Ghost Leviathan posted:You miss my point of identifying what the problem is. The Nazi youtubes should all be deleted and their creators identified, investigated and prosecuted as terrorists. That is entirely a different problem from the Amazon algorithm identifying and recommending item combinations used for suicide. I disagree. The way you're framing "what the problem is" plays into the goal of immunizing the algorithms - and the youtube recommendation algorithm is specifically known to promote extremist right-wing content, and, thus, monetize it and encourage the creation of more of it. Sure, you can try to go after Nazi youtubers except (a) you'll quickly find out that they shift to non-extradition countries (like, say, notable proponent of right-wing and nazi ideology Russia); and (b) the first amendment generally prohibits the criminalization of mere speech and as a result it is highly unlikely that prosecuting the video uploaders is going to do much. You'll just wind up with people being on the right side of the "incitement to imminent lawless action" that's the standard for criminalizing hate speech under US law, and Youtube gleefully recommending those videos and asserting that it cannot be sued for products liability under Section 230 for those recommendations. The Amazon recommending suicide kits and Youtube's recommendation promoting extremist content (because it is more "engaging") are both issues where a for-profit recommendation algorithm is being asserted to have Section 230 immunity.
|
# ? Oct 10, 2022 15:49 |
|
Why the gently caress is anyone defending the recommendation algorithm anyway?
|
# ? Oct 10, 2022 16:28 |
|
They're cool and interesting and fun to play with if you're into data science. But the number of times I've heard a variation of "So it turns out it was picking up whether someone with bipolar disorder was entering a manic phase..." makes me pretty leery of them, ethically.
|
# ? Oct 10, 2022 16:40 |
|
Clarste posted:Why the gently caress is anyone defending the recommendation algorithm anyway? It's generally less "all hail the algorithm, bringer of good content and unassailable vessel of perfection" and more Section 230 absolutism - the belief that any liability for platforms (directly or indirectly) renders 230 a dead letter and consolidates power under the current worst actors while killing most of the rest of the interactive internet within a year. I don't have an issue (in a spherical elephant/frictionless room sense) with legislation or rulings that split unsolicited recommendations based on both aggregated and personal user history out for liability but protect the recommendations of, for instance, search engines (to say nothing of moderation and the other 230 protections). I still think the harm done by this hypothetical lawmaking is understated by its proponents (Great Replacement will be fine, BLM and (depending on if the terfs win) LGBT/just T content will be yanked), but haven't put enough thought into whether it'll be a net positive. I look at the congressional, judicial, and other political voices backing 230 reform, on the other hand, and have very little faith in their ability to make tailored, beneficial changes that don't make every one of these problems worse. But, towards evilweasel's points, that ends up functionally indistinguishable from: evilweasel posted:youtube wants you to think there is no option between forcing them to pre-approve each and every video uploaded to the site (in practice, largely ending the product) or allowing their algorithm to keep trying to find which flavor of extremist videos will get you hooked.
|
# ? Oct 10, 2022 16:52 |
|
Paracaidas posted:https://twitter.com/ddayen/status/1579472512735645697 While not a rural dweller myself, I do know several people who live in rural areas. (All hail fandom, the great connector of people.) One of them in particular is a cattle rancher, and for all the work that the entire three-generation family does, they pull in a tiny amount of money. And everything besides beef is more expensive. Their satellite internet is twice as expensive for half the service of what someone on the same fandom message board who lives in a more populated area gets. I really hope that this government dude walks the walk and does something about this.
|
# ? Oct 10, 2022 17:11 |
|
I'm wondering when machine learning "AI" algorithms, in the sense of what people are complaining about, became the de facto way things have to be done. It's a problem of scale, sure, but is it really a problem of the scale of uploads, or just a problem of the scale of profit? NewGrounds, Flashplayer.com, AlbinoBlackSheep, EbaumsWorld, etc. were all pretty functional with pretty simple recommendation systems. At best they were operating off of "new" and "popular" with some really basic means to track that, but they were also absolutely not using any sort of manual verification of content. Hell, most of those were probably single-person operations at first and barely expanded to more than a handful before either closing because you couldn't compete with youtube/online AAA games or being bought out by media companies.

And even in a world where you had 56k as the standard, DSL at best, and barely 16 million people online (later up to 700 million by the end of 2003), those sites were chock full of garbage content flooding in day after day, and yet people were still able to easily find decent stuff without hyper-invasive systems to derive exactly what will keep you glued to the screen. Do we really need a system that can guess that me watching a video on how to solve some metal puzzle means I'm X% likely to watch a video on card tricks, and if I watch that one I'm XX% likely to be interested in stained-glass window making?

I answered my own question. Google/Youtube/FB/Amazon et al. do need it, because the revenue will fall off a cliff if they aren't abusing the gently caress out of operant conditioning and addiction cycles crossed with invasive meta-analysis of behavior.
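For contrast, the kind of "new" and "popular" ranking those older sites ran on fits in a dozen lines. A minimal sketch over invented data, not any particular site's code: raw view counts with a recency decay, so fresh uploads can surface next to all-time hits, and nothing personal about the viewer is consulted at all.

```python
import time

def rank_popular(items, half_life_hours=24.0, now=None):
    """Rank items by view count with an exponential recency decay.

    An item's score halves every `half_life_hours` of age, so new
    uploads compete with older, higher-viewed ones."""
    now = time.time() if now is None else now
    def score(item):
        age_hours = (now - item["uploaded_at"]) / 3600.0
        return item["views"] * 0.5 ** (age_hours / half_life_hours)
    return sorted(items, key=score, reverse=True)

# Invented catalog: a month-old hit vs. an hour-old upload
now = 1_000_000.0
items = [
    {"title": "old hit", "views": 10_000, "uploaded_at": now - 30 * 24 * 3600},
    {"title": "fresh upload", "views": 500, "uploaded_at": now - 3600},
]
ranked = rank_popular(items, now=now)
print([i["title"] for i in ranked])  # → ['fresh upload', 'old hit']
```

The only inputs are per-item aggregates, which is the contrast being drawn above: a "popular" page needs no per-user behavioral profile to work.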
|
# ? Oct 10, 2022 17:34 |
|
Crain posted:I answered my own question. Google/Youtube/FB/Amazon et al. do need it because the revenue will fall off a cliff if they aren't abusing the gently caress out of operant conditioning and addiction cycles crossed with invasive meta analysis of behavior. Algorithm = editorialization = liable for the content recommended. Even a promoted top 10 list would be considered a level of editorialization. Just because it's numbers nerds deciding it and not some influencer/media mogul doesn't make it less so.
|
# ? Oct 10, 2022 18:49 |
|
|
Crain posted:I answered my own question. Google/Youtube/FB/Amazon et al. do need it because the revenue will fall off a cliff if they aren't abusing the gently caress out of operant conditioning and addiction cycles crossed with invasive meta analysis of behavior. Google/Youtube/FB/Amazon need it because everyone expects these services to be free and this is how free services are paid for.
|
# ? Oct 10, 2022 18:55 |