|
Doctor Malaver posted:We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in the network, code, etc., and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt.

You've just described an LLM with the temperature set too high.
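To make the temperature point concrete, here's a minimal sketch of temperature scaling in next-token sampling (hypothetical logit values; real models have tens of thousands of tokens, but the math is the same):

```python
import math

def sample_with_temperature(logits, temperature):
    # Divide logits by T, then softmax. Low T sharpens the distribution
    # toward the top token; high T flattens it toward uniform noise.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [x / total for x in exps]

logits = [5.0, 2.0, 0.5]  # made-up scores for three candidate tokens
low = sample_with_temperature(logits, 0.5)    # top token dominates (>99%)
high = sample_with_temperature(logits, 50.0)  # near-uniform: ~1/3 each
```

At very high temperature every token is about equally likely, so the output degenerates into text unrelated to the prompt: exactly the "unrelated response" described above, with no bug in the network or code to find.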
|
# ¿ Jul 12, 2023 21:53 |
|
Monglo posted:Apparently The Beatles have one of their songs released, cleaned up by AI. I'm seeing people burst their vessels calling this cultural necrophilia, disgusting, etc., saying they hate it and it doesn't sound anything like the Beatles. They think the song is fully written by AI.

It sounds like the most boring parts of the Beatles' work, which is why it was never used or rerecorded before now. They just ran out of any other old material.
|
# ¿ Nov 9, 2023 04:48 |
|
Lucid Dream posted:One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but that will be completely automated within the next 5.

It's not that the machines are sentient yet; it's that a lot of human activity turns out to require less sentience than we thought.
|
# ¿ Dec 17, 2023 21:08 |
|
BabyFur Denny posted:That's not how encryption/digital signatures work. The entire algorithm can be (and usually is) public knowledge, but that still does not allow anyone else to fake your signature or crack the encryption. They would need your private key for that.

The private key has to be in the camera for this to work, so somebody will have it cracked and posted on the internet within about a month of release (maybe sooner).
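The "public algorithm, secret key" point can be sketched with a toy RSA signature. The primes here are tiny textbook values purely for illustration (real keys are thousands of bits); the shape of the scheme is the point: verification needs only the public pair (e, n), while signing needs the private exponent d:

```python
# Toy RSA signing/verification -- NOT secure, illustration only.
p, q = 61, 53
n = p * q                  # 3233: public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # 2753: private exponent (the secret in the camera)

def sign(message_hash, d, n):
    # Only the holder of d can produce this value.
    return pow(message_hash, d, n)

def verify(message_hash, signature, e, n):
    # Anyone can check with the public (e, n) pair.
    return pow(signature, e, n) == message_hash

h = 1234                       # stand-in for a message hash
sig = sign(h, d, n)
assert verify(h, sig, e, n)        # genuine signature verifies
assert not verify(h + 1, sig, e, n)  # any tampering breaks it
```

Publishing e and n lets everyone verify but forge nothing; the whole scheme collapses only if d leaks, which is exactly the risk of shipping d inside every camera.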
|
# ¿ Jan 10, 2024 06:43 |
|
It's a relatively intuitive result once you consider that "safety" reinforcement training on all the corpo LLMs is a pretty thin layer of paint on top of a huge mess of ingested content of all kinds: text porn, reddit posts about blowing stuff up, dril tweets, etc. In theory one could produce a "safe" LLM from the ground up by using a carefully curated dataset, but that would, you know, actually cost a lot of money compared to just scraping the internet and stealing the creative work of a ton of people. For comparison, Google has a model trained entirely on weather data, so the only results it will ever give are, obviously, weather data. It can't be 'unsafe' because it doesn't know how to be.
# ¿ Mar 16, 2024 06:30 |