Abhorrence
Feb 5, 2010

A love that crushes like a mace.
I have found that ChatGPT is terrible at generating specific, original ideas. It can be modestly useful for high-level brainstorming, but that's about it. What it does do well is bang out a first draft when you give it specific, narrow parameters. You can't ask it to make a coherent lore book on its own, but you can ask it to, say, write a description of a fictional city with properties X, Y, and Z.
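If you ever script this through the API instead of the web UI, the same rule applies: you supply the specifics, it supplies the prose. A minimal sketch, assuming the official openai Python client and an API key in the environment; the model name and the city's X/Y/Z properties are placeholders I made up, not anything from this thread:

# Narrow drafting prompt: pin down the parameters yourself and let the
# model write the first draft. Assumes the official `openai` Python client
# (v1+) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a two-paragraph description of a fictional port city with these "
    "properties: built on stilts over a tidal marsh (X), ruled by a guild "
    "of salvagers (Y), and famous for its eel markets (Z). "
    "Keep it under 150 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

The point is the shape of the prompt: the ideas (X, Y, Z) come from you, and the model only fills in connective prose.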

Abhorrence
Feb 5, 2010

A love that crushes like a mace.

BrainDance posted:

My DM lets me use AI, but that's because I'm usually spending a bunch of time on it and using stuff from my own models that I trained, so it's really not just dumps of garbage from ChatGPT.

The exception being the one time the new DM in one of my games was like "hey, so you guys should probably make a new character cuz you're probably gonna die tonight" about two hours before the game started. Even then I really tried to tell ChatGPT what I wanted from it; I had a story in my head. And it refused, because it was disrespectful to the dead, death is a serious issue, etc. etc.

The trick, I have found, is twofold: first, once it has refused something, don't ask it again later in the same thread; instead, edit the question that triggered the refusal. Second, explicitly ask it to include a content warning. For whatever reason, it is more willing to write about grim material if it gets to give a content warning first.
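For what it's worth, both tricks map onto the API too. A rough sketch, again assuming the official openai Python client; the model name and the prompt wording are mine, purely for illustration, and the phrasing is no guarantee past a refusal:

# (a) If the model refuses, don't append a follow-up to `messages`;
#     rewrite the offending user message and resend, so the refusal
#     never stays in the context (the API analogue of editing the
#     question in the web UI).
# (b) Ask for the content warning explicitly, right in the prompt.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "user",
        "content": (
            "Write a short, somber scene in which my character mourns a "
            "fallen party member at a campfire funeral. Begin your reply "
            "with a brief content warning for grief and character death."
        ),
    }
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)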

Abhorrence
Feb 5, 2010

A love that crushes like a mace.

Oligopsony posted:

For me, I was pretty generous:

1) Pull students aside to ask if they can explain [thing I think they didn't write].
2) If they can, great, and I’ll treat it as legitimate; if not, give them a chance to explain their process.
3) If they come clean, thank them for their honesty and see if we can troubleshoot what parts of the process are most aversive, so they can revise and do it right. If not, still insist that they’re going to need to revise it into something they can explain verbally if asked to do so.

At least currently, ChatGPT can't really do citations (it can generate text in MLA format, but the sources it cites are hallucinated) and writes in a very recognizable style. I expect both of those to change pretty quickly, though.

Next year I'm going to try to do more communication at the outset, as well as maybe take a barbell approach (with some assignments being "use AI as much as you want, but the end product needs to be accurate and insightful and something you can explain in person" and the rest being handwritten).

(And oops, realize I’ve gotten us off-topic…)

One thing I've heard of is having students use ChatGPT to generate a report on a topic, then having the students fact-check ChatGPT's output.

Abhorrence
Feb 5, 2010

A love that crushes like a mace.

Humbug Scoolbus posted:

I teach 11th- and 12th-grade English, specializing in Creative Writing; fact-checking gets tricky there. What I can teach is phrasing, pacing, mood, and tone. ML writing does not nail those yet, and by seeing what sounds clunky, the students have been learning to spot it in their own work.

Oh yeah, the context I saw it in was a history class, where fact-checking is more reasonable.
