Roadie
Jun 30, 2013

Doctor Malaver posted:

We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in the network, the code, etc., and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt.

You've just described an LLM with the temperature set too high.
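For anyone wondering what "temperature set too high" actually means mechanically, here's a minimal sketch (plain softmax sampling over made-up logits, no real LLM or API involved): as temperature climbs, the probability distribution over next tokens flattens toward uniform, so the model starts emitting tokens that have basically nothing to do with the prompt.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.random):
    """Sample an index from softmax(logits / temperature); also return the probs."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [x / total for x in exps]
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(probs) - 1, probs

logits = [5.0, 1.0, 0.5, 0.1]                # one token strongly preferred
_, low_t = sample_with_temperature(logits, 0.5)
_, high_t = sample_with_temperature(logits, 50.0)
# At T=0.5 the preferred token gets nearly all the mass;
# at T=50 all four tokens are close to equally likely.
print(low_t, high_t)
```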


Roadie
Jun 30, 2013

Monglo posted:

Apparently The Beatles have had one of their songs released, cleaned up by AI. I'm seeing people burst their vessels calling this cultural necrophilia, disgusting, etc., saying how they hate it and it doesn't sound anything like the Beatles. They think the song was fully written by AI.

lol

It sounds like the most boring parts of the Beatles' work, which is why it was never used or rerecorded before now. They just ran out of any other old material.

Roadie
Jun 30, 2013

Lucid Dream posted:

One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but will be completely automated within the next 5. It's not that the machines are sentient yet, it's that it turns out a lot of human activity requires less sentience than we thought.

:yeah:

Roadie
Jun 30, 2013

BabyFur Denny posted:

That's not how encryption/digital signatures work. The entire algorithm can (and usually is) public knowledge but that still does not allow anyone else to fake your signature or crack the encryption. They would need your private key for that.

The private key has to be in the camera for this to work, so somebody will have it cracked and on the internet about a month after release (maybe sooner).
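Both posts are right, and a toy textbook-RSA sketch shows why (classic small textbook numbers, no padding, absolutely not secure; real schemes use 2048+ bit keys and proper padding): the algorithm and public key can be fully known, but producing a valid signature requires the private exponent. Which is exactly why baking that private key into every camera is the weak point.

```python
# Textbook RSA with toy numbers, for illustration only.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (published with the camera's cert)
d = 2753                  # PRIVATE exponent: (e * d) % phi == 1

def sign(message_digest, d, n):
    # "Signing" is modular exponentiation with the PRIVATE key.
    return pow(message_digest, d, n)

def verify(message_digest, signature, e, n):
    # Anyone can check the signature using only PUBLIC values.
    return pow(signature, e, n) == message_digest

digest = 1234                      # stand-in for a hash of the image
sig = sign(digest, d, n)
print(verify(digest, sig, e, n))   # valid signature checks out
print(verify(999, sig, e, n))      # signature doesn't transfer to other data
```

Knowing e and n doesn't let you forge sig for a new digest; extracting d from the camera's hardware does.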

Roadie
Jun 30, 2013
It's a relatively intuitive result once you consider that "safety" reinforcement training on all the corpo LLMs is a pretty thin layer of paint on top of a huge mess of ingested content of all kinds, including text porn, reddit posts about blowing stuff up, dril tweets, etc. In theory one could produce a "safe" LLM from the ground up by using a carefully curated set of data, but that would, you know, actually cost a lot of money compared to just scraping the internet and stealing the creative work of a ton of people.

For a comparison here, Google has a model trained entirely on weather data, and so the only results it will ever give are... obviously, weather data. It can't be 'unsafe' because it doesn't know how to be.

Roadie fucked around with this message at 06:34 on Mar 16, 2024
