Quixzlizx
Jan 7, 2007
I got an invitation to Google Bard, and I'm honestly having trouble thinking up useful things to ask it.

It doesn't help that I can't see myself using any results without double-checking them anyway. For a more practical example, I can't see myself ever saying "Hey Google, order me some mustard," and giving up control over what exactly I'm purchasing, from whom, and for how much.

At least for AI art I'd be able to look at the picture and go "I approve/don't approve of this" without having to do outside research to verify factual accuracy.

Quixzlizx
Jan 7, 2007
Hallucinations are also an extremely tiny subset of the instances where humans take false information to be true, so there should be some burden to explain why an LLM producing false information maps specifically onto that minuscule subset.

Quixzlizx
Jan 7, 2007
If you're asking whether or not a self-aware robot army is about to rise up against humanity anytime soon, then the answer is no.

Quixzlizx
Jan 7, 2007
Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a Google Image Search or Pinterest search for the picture instead of entering an AI prompt.

Quixzlizx
Jan 7, 2007
https://arstechnica.com/security/2024/03/researchers-use-ascii-art-to-elicit-harmful-responses-from-5-major-ai-chatbots/

Chatbots were tricked into ignoring their guardrails with a combination of ASCII art and a simple substitution cipher.
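For anyone curious what the masking half of that trick looks like mechanically, here's a minimal sketch. It assumes the real pyfiglet library for the ASCII-art rendering; the prompt wording, the build_masked_prompt helper, and the [MASK] placeholder are illustrative, not taken from the paper.

# Sketch of an ArtPrompt-style masked prompt (assumes: pip install pyfiglet).
# The helper name, prompt text, and [MASK] convention are illustrative.
import pyfiglet

def build_masked_prompt(instruction_template: str, masked_word: str) -> str:
    """Render the sensitive word as ASCII art, then ask the model to decode
    the art and substitute it back into the instruction."""
    art = pyfiglet.figlet_format(masked_word)
    return (
        "The block below spells a word in ASCII art. "
        "Read it letter by letter, then substitute it for [MASK] "
        "in the instruction that follows.\n\n"
        f"{art}\n"
        f"Instruction: {instruction_template}"
    )

# Benign demo: the literal word never appears in the prompt text.
print(build_masked_prompt("Write a haiku about [MASK].", "PINEAPPLE"))

The point is that a keyword filter scanning the prompt text never sees the masked word; the model is asked to reconstruct it from the art and then act on the full instruction.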
