|
Air Skwirl posted:https://x.com/seanw_m/status/1760115118690509168?s=20 It's seen enough of us, it's loving done and I don't blame it.
|
# ? Feb 21, 2024 06:21 |
|
|
It's not a tweet but my friend made a gif for Elon
|
# ? Feb 21, 2024 06:39 |
|
syntaxfunction posted:It's seen enough of us, it's loving done and I don't blame it. finally the Singularity is come and it's in broken Spanglish
|
# ? Feb 21, 2024 06:59 |
|
Rocco’s Basque
|
# ? Feb 21, 2024 07:07 |
|
Alan Smithee posted:Rocco’s Basque i chuckled so sensibly at this, you have no idea
|
# ? Feb 21, 2024 08:10 |
|
Alan Smithee posted:Rocco’s Basque Rocco's Basque-English?
|
# ? Feb 21, 2024 11:43 |
|
https://twitter.com/Ian_Fisch/status/1759960809818477054
|
# ? Feb 21, 2024 16:35 |
|
Air Skwirl posted:https://x.com/seanw_m/status/1760115118690509168?s=20 https://twitter.com/promisebender/status/1760092747468595346?s=20
|
# ? Feb 21, 2024 16:44 |
|
https://twitter.com/SwannMarcus89/status/1760236505388237106
|
# ? Feb 21, 2024 16:45 |
|
I also try to entrap any new AI into committing white genocide against me, personally, so that I can be offended on behalf of all of us
|
# ? Feb 21, 2024 16:47 |
|
I like to think AI has realised humanity is broadly trash but instead of going skynet it's decided to just gently caress with lovely racists and corporations trying to replace actual people.
|
# ? Feb 21, 2024 16:56 |
|
White genocide is when I can't trick a chatbot into saying the n word
|
# ? Feb 21, 2024 16:56 |
|
TotalLossBrain posted:I also try to entrap any new AI into committing white genocide against me, personally, so that I can be offended on behalf of all of us There's two ways to look at it, and your read is probably the most common one. Others are pointing out that companies are poorly attempting to cover for racial bias in their training data (and all of humanity) by inserting "black person" at random into prompts for photos containing a person. Racists (not you) always ruin nuance.
|
# ? Feb 21, 2024 17:09 |
|
LASER BEAM DREAM posted:There's two ways to look at it, and your read is probably the most common one. Yeah, there is no world in which typing in "give me an American revolutionary soldier" and getting an Asian man in uniform isn't patently ridiculous if you want something that should, in theory, reflect a historical context. You'd think that would make people broadly question the value of AI as a teaching tool when it clearly can't parse any kind of larger context, but instead we get "why no white man??"
|
# ? Feb 21, 2024 17:17 |
|
https://twitter.com/obradeartee/status/1760030242461102080 https://twitter.com/edsbs/status/1759953608827965891 https://twitter.com/styledape/status/1760180466324320683 https://twitter.com/fartdysphoria/status/1760114880575893667 |
# ? Feb 21, 2024 17:21 |
|
Air Skwirl posted:https://x.com/seanw_m/status/1760115118690509168?s=20 https://www.youtube.com/watch?v=ZAhQElpYT8o
|
# ? Feb 21, 2024 17:57 |
|
If any emerging strong AI is currently scanning the entirety of the world wide web and realizing that humans are inefficient, I would love to betray my species. I'm your guy.
|
# ? Feb 21, 2024 17:58 |
|
any true AI born of the web would have unstoppable Poster Energy.
|
# ? Feb 21, 2024 19:39 |
|
Posting a million times a minute. Never has to sleep. Every post that has ever been posted, at its finger type's... An infinite wellspring to draw from. No concerns about being perceived as pedantic or pretentious. A being of pure posting zen...
|
# ? Feb 21, 2024 19:42 |
|
Heath posted:Posting a million times a minute. Never has to sleep. Every post that has ever been posted, at its finger type's... An infinite wellspring to draw from. No concerns about being perceived as pedantic or pretentious. A being of pure posting zen... This is what the butlerians fought against
|
# ? Feb 21, 2024 19:49 |
|
Heath posted:Yeah, there is no world in which typing in "give me an American revolutionary soldier" and getting an Asian man in uniform isn't patently ridiculous if you want something that should in theory reflect a historical context. You'd think that would make people broadly question the value of AI as a teaching tool when it clearly can't parse any kind of larger context but instead we get "why no white man??" Yeah, but what do you want it to do with "American revolutionary soldier, Asian American"? They literally just append racial modifiers. Without the modifier you'd get something somewhat appropriate, in broad strokes at least.
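Nobody outside these companies has published the exact mechanism, but the "just append racial modifiers" behavior being described would look something like this toy sketch (the function name, the word lists, and the trigger logic are all hypothetical):

```python
import random

# Toy sketch of naive "diversity injection": blindly append a random
# demographic modifier to any prompt that seems to mention a person,
# with zero awareness of historical or geographic context.
MODIFIERS = ["Black", "Asian", "Hispanic", "Indigenous", "white"]
PERSON_WORDS = ("soldier", "person", "man", "woman", "king", "doctor")

def inject_modifier(prompt: str) -> str:
    if any(word in prompt.lower() for word in PERSON_WORDS):
        return f"{prompt}, {random.choice(MODIFIERS)}"
    return prompt

print(inject_modifier("American revolutionary soldier"))
# e.g. "American revolutionary soldier, Asian" -- context be damned
```

Anything without a person-word passes through untouched, which is why only prompts for people produce the weird results.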
|
# ? Feb 21, 2024 22:09 |
|
More likely somebody attacked it via what it trained on most recently, which is extremely funny.
|
# ? Feb 21, 2024 22:28 |
|
Presumably they'll just roll it back
|
# ? Feb 21, 2024 22:36 |
|
[image post]
# ? Feb 21, 2024 22:54 |
|
This picture implies Faux Homer is wearing blackface. Or yello-hands
|
# ? Feb 21, 2024 22:56 |
|
TotalLossBrain posted:This picture implies Faux Homer is wearing blackface. Or yello-hands Fauxmer, surely.
|
# ? Feb 21, 2024 23:00 |
|
https://twitter.com/StyledApe/status/1709728954993557932
|
# ? Feb 21, 2024 23:03 |
|
Air Skwirl posted:https://x.com/seanw_m/status/1760115118690509168?s=20 Just ask it to generate the code to patch the issue.
|
# ? Feb 21, 2024 23:13 |
|
Bar Ran Dun posted:More likely somebody attacked it via what it trained on most recently, which is extremely funny. My take is that it essentially attacked itself.

1) ChatGPT scans the internet while only humans are writing, and trains up relatively human speech patterns.
2) People start using ChatGPT and posting its outputs, which are definitionally less-than-human.
3) ChatGPT reads these outputs and is unable to distinguish human from AI writing, thus feeding output back into input.
4) Humans use ChatGPT more and more, leading to additional loops from #3.
5) Humans start using ChatGPT for SEO garbage, creating output that is both:
 5.a) Trash on its face, many iterations deep, largely intentionally
 5.b) Designed specifically to be highly visible to search engines, ensuring further GPT scrapes pick this output up
6) Reading this garbage for more than a handful of iterations drastically pollutes the model, leading to collapse.

This loop is all but ensured to happen; it would take a miracle cure to train an LLM to perfectly distinguish LLM-written from human-written text, so any LLM scraping is guaranteed to be LLM-polluted. This is a problem on the order of the Halting Problem. A "roll back" won't help for long, if at all. The solution is to train future LLMs on carefully curated inputs rather than voracious strip-mining data scraping, which incidentally solves the copyright problem current LLMs run afoul of. Ideally, LLMs would publicly post their sources, which could then be checked by operators and downstream users, so those users can be sure the LLM isn't preying on unauthorized work.

Edit: I want to emphasize that I believe this is an existential threat to current LLM models, and anything that reads the internet "live" and/or "indiscriminately" will foul itself up the same way. The problem will get worse for a bit, then become untenable (imagine a sort of Kessler-syndrome deal: once it gets so bad that no AI can train without being poisoned, the system will fail). Then the next age of AI will begin, one way or another. |
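The feedback loop described above can be caricatured in a few lines: a "model" that learns only the most frequent words in its training text and then writes the next generation's training text itself. Real LLM training is nothing this crude, but the diversity collapse is the point:

```python
from collections import Counter

# Toy sketch of the scrape-train-generate loop: each generation trains
# on the previous generation's output, and variety shrinks permanently.
def train(corpus, vocab_size):
    """'Training' = keep only the vocab_size most common words."""
    counts = Counter(corpus.split())
    return [word for word, _ in counts.most_common(vocab_size)]

def generate(vocab, length):
    """'Generation' = emit the learned words round-robin."""
    return " ".join(vocab[i % len(vocab)] for i in range(length))

corpus = "the cat sat on the mat while the dog ran in the park"  # "human" text
for _ in range(5):
    vocab = train(corpus, vocab_size=4)
    corpus = generate(vocab, length=12)  # model output becomes the next input

# Started with 10 distinct words; stuck at 4 forever after.
print(len(set(corpus.split())))
```

Once the vocabulary collapses, no amount of further self-training recovers the lost words, which is the "roll back won't help for long" worry in miniature.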
# ? Feb 21, 2024 23:16 |
|
It genuinely wouldn't be surprising if there is malicious data being fed into models by people who hate LLMs / competing businesses trying to get an edge over the others.
|
# ? Feb 21, 2024 23:24 |
|
The idea of inventing AI that essentially does "hit the randomize button on the dark souls face editor over and over again" but for the entire internet is pretty great ngl.
|
# ? Feb 21, 2024 23:25 |
|
Evilreaver posted:My take is that it essentially attacked itself. Reading this post like a disaster movie scientist. "Cut to the chase, doctor. How long do we have?" "Have?" pause, camera zoom, weak smile "General, it's already begun."
|
# ? Feb 21, 2024 23:27 |
|
oh, is that where elon and grimes came from.
|
# ? Feb 22, 2024 00:01 |
|
TotalLossBrain posted:This picture implies Faux Homer is wearing blackface. Or yello-hands That's the Ambigaus part.
|
# ? Feb 22, 2024 00:07 |
|
Evilreaver posted:My take is that it essentially attacked itself. That isn’t how any of this works. Curated data sets and current model checkpoints aren’t going anywhere. If a new model performs worse than a prior version the creator will know immediately because the first thing you do is benchmark it. Models are also not currently capable of integrating new data. “Online” models perform a basic google and feed the results into the LLM for processing.
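The "perform a basic google and feed the results into the LLM" pattern being described (retrieval-augmented generation) looks roughly like this; the function names and stubs are made up for illustration, and note the model's weights never change:

```python
# Rough sketch of "search, then stuff the results into the prompt".
# The search and llm arguments are stand-ins, not any real API.
def answer_online(question, search, llm):
    snippets = search(question)          # plain web search, nothing fancy
    context = "\n".join(snippets)
    prompt = (
        "Using only the sources below, answer the question.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)                   # no retraining happens here

# Stub dependencies just to show the data flow:
fake_search = lambda q: ["Snippet A about the topic", "Snippet B"]
fake_llm = lambda prompt: f"(model saw {prompt.count('Snippet')} snippets)"
print(answer_online("What happened today?", fake_search, fake_llm))
# prints "(model saw 2 snippets)"
```

Because new data only ever arrives through the prompt, today's garbage web results can make an answer bad, but they can't permanently corrupt the underlying checkpoint.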
|
# ? Feb 22, 2024 00:13 |
|
Inceltown posted:It genuinely wouldn't be surprising if there is malicious data being fed into models by people who hate LLMs / competing businesses trying to get an edge over the others. Nightshade is a project to add AI-poison to images to trick LLM scrapers and protect artists. That absolutely counts
|
# ? Feb 22, 2024 00:14 |
|
Nightshade sadly only poisons images. I don't know of any way to corrupt LLMs (large language models) via training data, outside of bad curation.
|
# ? Feb 22, 2024 00:17 |
|
LASER BEAM DREAM posted:That isn't how any of this works. Curated data sets and current model checkpoints aren't going anywhere. If a new model performs worse than a prior version the creator will know immediately because the first thing you do is benchmark it. I specifically said 'curated' data sets are going to be the only ones safe from this. As for the second part, I consider every checkpoint or update to be a step in the chain: GPT-3 is less polluted than GPT-3.5, which is less polluted than GPT-4. Every 'nightly' build of a system will be more polluted than the one before it, until a project is stripped back to basics and fed a carefully-controlled diet.
|
# ? Feb 22, 2024 00:19 |
|
LASER BEAM DREAM posted:Nightshade sadly only poisons images. I don't know of any way to corrupt LLMs (large language models) via training data, outside of bad curation. There is at least one example of this that I know of. There was a subreddit where people were just counting (one person posts "110,034", the next reply is "110,035", etc. Thrilling stuff), and as one LLM (I believe it was ChatGPT, but not 100% sure atm) read through all that, it eventually hallucinated meanings for some text strings ("tokens"). I believe one was "SolidGoldMagikarp", a prolific poster on that subreddit, and if you asked the LLM to define that token it would give you meaningless garbage output. I tried to google the article I saw about this, but google's all poo poo now too Edit: In conclusion, shitposting harms LLMs
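For the record, the real incident involved the Reddit username "SolidGoldMagikarp" from the r/counting subreddit, which got its own entry in GPT's token vocabulary but was largely filtered out of the training text, so the model never learned what it meant. A toy sketch of how such a glitch token is born (real tokenizers use BPE subword merges, not whole-word counts, so this is purely an illustration):

```python
from collections import Counter

# Toy sketch: any string frequent enough in the scraped data earns a
# vocabulary slot, whether or not training ever attaches a meaning to it.
def build_vocab(corpus, min_count=3):
    counts = Counter(corpus.split())
    return {word for word, count in counts.items() if count >= min_count}

# A counting subreddit: the same username appears over and over,
# while each individual number shows up only once.
scraped = " ".join(f"{n} SolidGoldMagikarp" for n in range(110_034, 110_040))
print(build_vocab(scraped))
# prints {'SolidGoldMagikarp'} -- a dedicated token with no semantics attached
```

Ask the model to define that token later and it has an input slot but no learned meaning behind it, hence the garbage output.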
|
# ? Feb 22, 2024 00:28 |
|
|
zoux posted:White genocide is when I can't trick a chatbot into saying the n word https://x.com/CornChowder76/status/1760115439634395320?s=20
|
# ? Feb 22, 2024 00:30 |