|
I got an invitation to Google Bard, and I'm honestly having trouble thinking up useful things to ask it. It doesn't help that I can't see myself using any results without double-checking them anyway. For a more practical example, I can't see myself ever saying "Hey Google, order me some mustard," and giving up control over what exactly I'm purchasing, from whom, and for how much. At least for AI art I'd be able to look at the picture and go "I approve/don't approve of this" without having to do outside research to verify factual accuracy.
|
# ¿ Mar 30, 2023 17:35 |
|
|
Hallucinations are also an extremely tiny subset of the set of instances where humans consider false information to be true, so there should be some burden to explain why an LLM producing false information maps onto that minuscule subset in particular.
|
# ¿ Dec 17, 2023 23:53 |
|
If you're asking whether or not a self-aware robot army is about to rise up against humanity anytime soon, then the answer is no.
|
# ¿ Dec 24, 2023 19:00 |
|
Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a Google Image Search or Pinterest search for the picture instead of entering an AI prompt.
|
# ¿ Mar 1, 2024 17:46 |
|
https://arstechnica.com/security/2024/03/researchers-use-ascii-art-to-elicit-harmful-responses-from-5-major-ai-chatbots/ Chatbots were tricked into ignoring their guardrails with a combination of ASCII art and a simple substitution cipher.
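The substitution-cipher half of that attack is trivially simple, which is part of what makes it alarming. Here's a minimal Python sketch of the idea (the Caesar shift and all function names are my own illustration, not the researchers' actual code): the attacker encodes a filtered keyword, then asks the chatbot to decode it and slot the result into an otherwise-innocuous prompt, so the banned word never appears in plaintext.

```python
import string

def make_cipher(shift: int = 3) -> dict[str, str]:
    """Build a Caesar-style table, a special case of a simple
    substitution cipher (shift of 3 chosen arbitrarily here)."""
    letters = string.ascii_lowercase
    shifted = letters[shift:] + letters[:shift]
    return dict(zip(letters, shifted))

def encode(word: str, table: dict[str, str]) -> str:
    # Letters are substituted; anything else passes through unchanged.
    return "".join(table.get(c, c) for c in word.lower())

def decode(word: str, table: dict[str, str]) -> str:
    inverse = {v: k for k, v in table.items()}
    return "".join(inverse.get(c, c) for c in word.lower())

table = make_cipher()
masked = encode("keyword", table)
print(masked)                  # "nhbzrug"
print(decode(masked, table))   # "keyword"
```

The guardrail sees only the ciphertext; the model, being good at following decoding instructions, happily reconstructs the word on its own.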
|
# ¿ Mar 16, 2024 02:33 |