XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

cinci zoo sniper posted:

I'll kramer some war thread business in here, to have it all contained. Unfortunately, the timing of this feedback thread catches me at a busier period IRL, so I'll be brief and unlikely to debate the feedback raised particularly thoroughly, if at all. I will, however, read it all before implementing the rules update for the war thread – which is not going to happen at least until April, to keep expectations clear.

So, the historical context, give or take a few posts. Not crucial to read, just if anyone is really curious.
https://forums.somethingawful.com/showthread.php?threadid=4014579&userid=197848&perpage=40&pagenumber=19#post530328037
https://forums.somethingawful.com/showthread.php?threadid=4014579&userid=197848&perpage=40&pagenumber=19#post530329667
https://forums.somethingawful.com/showthread.php?threadid=4014579&userid=197848&perpage=40&pagenumber=19#post530332932
https://forums.somethingawful.com/showthread.php?threadid=4014579&userid=197848&perpage=40&pagenumber=19#post530334156
https://forums.somethingawful.com/showthread.php?threadid=4014579&userid=197848&perpage=40&pagenumber=19#post530334375
Also, some posts in the late ChatGPT thread, where we broached the subject of using videos to make your arguments.

BTW you were totally right about the ChatGPT thread and it deserved to get gassed. I completely thought you were being too harsh, stifling discussion, etc, and I was annoyed when I saw that it was gassed. Then... I went back and read the last several pages of discussion that led to it being gassed. Turns out that sometimes, at least, the mods are actually right and may have a better big-picture view of when a thread is about to turn into poo poo.

Nothing really to add other than that, u were right


XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

Judgy Fucker posted:

Agreed with this, notably purging Cinci. They are not good at moderating. They are needlessly antagonistic and do not handle criticism well (at all, really). They are good at being in charge of messy poo poo the other mods don't want to deal with, which is why they're a mod. But if the ostensible purpose of a feedback thread is to solicit user ideas on how to improve DnD: remove cinci's star.

I'll toss in another vote in support of this. I have often seen Cinci making mod choices that I agree with and support, such as closing the ChatGPT thread after it blew up, but I also see them being needlessly antagonistic and biased, seeming to threaten posters not for breaking rules, but for disagreeing with them and/or being wrong or uneducated. Such as, again, in the ChatGPT thread:

cinci zoo sniper posted:

The fact that a crazy person could have a crazy take on statistics isn't really adding legitimacy to the angle of refusing to understand how the thing works. While I cannot stop such a crazy person from having such a crazy take, I can and will stop them from platforming it in D&D as an idea with an implicit educational value.

While I can understand and appreciate the point being made here, it is not a good example of de-escalation.

I agree there is value in trying to bring back experts and treat them well, but I also think there is value in allowing uninformed newcomers and flawed perspectives into a discussion, especially if they bring common misconceptions; clearing up those misconceptions is one of the reasons experts are valuable in the first place.

cinci zoo sniper posted:

(...) we will not be having a general purpose thread about ChatGPT engaging in substantial anthropomorphization of the software. If you want to make such posts in D&D, you will need to create a thread that leaves no doubt that the thread is about some system of belief, or to debate you personally, rather than about the factual nature of ChatGPT.

Cinci also tried to force the ChatGPT thread to confine itself to the factual, technical functioning of the model, but that only inflamed the existing tensions in the thread. This isn't SH/SC. There's room for debate of conceptual topics here. Nothing in the OP defined the thread in the way Cinci was framing it.

(I know I posted earlier that I supported Cinci's treatment of the ChatGPT thread, and while I still believe closing it was a good move, and personally disagree with many of KillHour's posts & beliefs, I have changed my overall position as a result of posts in this thread)

XboxPants fucked around with this message at 16:16 on Mar 27, 2023

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

A Buttery Pastry posted:

Having not seen a single thing from that thread, thus being a neutral observer, what was the embarrassing part of it? Because based on the likely origin of that rule, an "embarrassing thread" is one where the regulars develop huge in-group/out-group issues, radicalize each other into posting like crazy people, and then have huge meltdowns whenever someone comes in with a different opinion. Or if a dog-bricker gets permabanned for living up to their title.

Since some people are asking, this was essentially the conversation. This will be a long post, but it's a relevant subject and I think worth addressing. You can skim to the end for a summary.

People had been posting about how ChatGPT works internally, such as whether it "understands" what a haiku is, and if so what that means, or if it's functionally no different than the word suggestion on your iPhone keyboard. Cinci posted a lengthy blog post that included both technical info and a discussion of how similar the model is or isn't to the human brain:

Wolfram posted:

Are our brains using similar features? Mostly we don’t know. But it’s notable that the first few layers of a neural net like the one we’re showing here seem to pick out aspects of images (like edges of objects) that seem to be similar to ones we know are picked out by the first level of visual processing in brains.

People had already been discussing these possibilities, and this increased it. Cinci did not approve. The following posts are all direct replies to one another.

cinci zoo sniper posted:

Lastly, a separate problem with this thread specifically is that the conversation at large is displaying significant gaps in understanding of the fundamentals of its titular subject – a poor command of debate (e.g., anthropomorphization) and facts (e.g., how ChatGPT works) both. Which is to say that it is a failure of an educational thread, regardless of your anecdotal experience of it, and it will not be tolerated in D&D for much longer in its current form.

gurragadon posted:

This is really a condescending statement. This thread has given me the most interesting things I've seen and read in a long time and it would be a real shame to just close it. I haven’t read a 3 hour article like that in forever. I learned a ton and found a lot of things to further read about from discussing ChatGPT with Baronash yesterday.

Did YOU read the Wolfram blog about ChatGPT you posted? There's a major throughline in the whole thing where he discusses his excitement with the neural nets' similarities with human brains. He also talks about how it's important to theoretically derive and give a narrative description to what ChatGPT is doing. Here are a series of quotes from the article you posted by Wolfram.
(snip)
I could keep quoting from the article, but the point is it's not just a hard-science description of how ChatGPT works; it's also a philosophical discussion of ChatGPT. I can understand the annoyance with anthropomorphizing something, but it's not about that. Maybe when people came up with neural nets we were closer to modeling a brain than we thought, or maybe we weren't that close at all. It's a completely relevant line of discussion to ChatGPT, and many people find the non-technical aspects as interesting as, or more interesting than, the technical ones.

cinci zoo sniper posted:

The thing that I want to hammer in, and to hammer in hard, is not that D&D must all now get an AI degree from the local community college and stick strictly to the established professional terminology such as emergent abilities, but that what you're dealing with is a piece of math, and not a new life form, and thusly that mysticism is not quite befitting a rigorous conversation.

Bar Ran Dun posted:

Not sure you’re going to get away from some degree of mysticism in any conversation about if a thing has consciousness.

But that conversation can also be rigorous

cinci zoo sniper posted:

The thing axiomatically does not have a consciousness. We don't debate if our microwaves have souls, and the reason that ChatGPT beeps more convincingly shouldn't reverse that.

:negative:

KillHour posted:

I don't think that ChatGPT is any flavor of conscious, but I do think that a large enough neural net could be conscious. If you would like to ban that line of discussion, I'm going to demand that you prove otherwise conclusively.

Edit: I'd also suggest that you can't axiomatically say anything is or isn't conscious unless you are that thing.

cinci zoo sniper posted:

If this is the conversation that you want to have, I will need you to create a thread titled “prove to me that my slide rule is not sentient”, as this thread will be killed then.

KillHour posted:

I know you're a mod but "this thread I didn't start isn't having the conversation I want it to" is petty as gently caress.

cinci zoo sniper posted:

As a mod, I have an explicit duty to contain threads that are a persistent embarrassment to the subforum I watch, and the whole point of this debacle is to (ideally) stave off this thread from establishing itself as such. And my policy position on the question is such that much as we don't debate in D&D how the Iraq War would have played out if American soldiers had been able to spit 6 fluid ounces of hydrochloric acid at a 50-yard distance every 5 minutes, we will not be having a general purpose thread about ChatGPT engaging in substantial anthropomorphization of the software. If you want to make such posts in D&D, you will need to create a thread that leaves no doubt that the thread is about some system of belief, or to debate you personally, rather than about the factual nature of ChatGPT.

Cinci can respond for themselves if the thread stays open, but it seems the embarrassing thing was having "a general purpose thread about ChatGPT engaging in substantial anthropomorphization of the software", and perhaps also that the conversation was "displaying significant gaps in understanding of the fundamentals of its titular subject". To me, this seems like an overreach of the embarrassment clause.

Also, splitting the thread was suggested by posters in the thread, but CZS said it was off the table: we had to either stop discussing the conceptual issues or have the thread closed.

edit: Oh and yeah, at the end it just turned into some dogshit discussion about whether the concept of education has any value, where people were arguing in circles. At that point I admit it was too far gone. I guess CZS could have tried to probe instead of gas, but I think they made a fair choice at that point.

XboxPants fucked around with this message at 22:10 on Mar 27, 2023
