Register a SA Forums Account here!
JOINING THE SA FORUMS WILL REMOVE THIS BIG AD, THE ANNOYING UNDERLINED ADS, AND STUPID INTERSTITIAL ADS!!!

You can: log in, read the tech support FAQ, or request your lost password. This dumb message (and those ads) will appear on every screen until you register! Get rid of this crap by registering your own SA Forums Account and joining roughly 150,000 Goons, for the one-time price of $9.95! We charge money because it costs us money per month for bills, and since we don't believe in showing ads to our users, we try to make the money back through forum registrations.
 
  • Post
  • Reply
RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

SaTaMaS posted:

The thing keeping GPT from becoming AGI has nothing to do with consciousness and everything to do with embodiment. An embodied AGI system would have the ability to perceive and manipulate the world, learn from its environment, adapt to changes, and develop a robust understanding of the world that is not possible in disembodied systems.

GPT is just a text prediction model. It only does text prediction. It has 0 internal reasoning ability, although it can fool people who don't really understand how it works because it can generate text that looks like someone reasoning, based on all the examples of that kind of text in its training set.
Even if it somehow had the ability to interact with the world (like you set up a robot arm to respond to certain keywords in the text generation output and a webcam with some computer-vision captioning library to generate text descriptions of what it sees) it would not be "intelligent" in whatever scifi way you are thinking.


SaTaMaS posted:

This like physics, balance, and proprioception could theoretically be simulated

None of these things can be simulated by a GPT text prediction model. In a theoretical AGI they probably would be, but its internals would be nothing like GPT.

RPATDO_LAMD fucked around with this message at 20:46 on Apr 8, 2023

Adbot
ADBOT LOVES YOU

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

Owling Howl posted:

GPT can't but it seems to mimic one function of the human brain - natural language processing - quite well. Perhaps the methodology can be used to mimic other functions and that map of words and relationships can be used to map the objects and rules of the physical world. If we put a model in a robot body and tasked it with exploring the world like an infant - look, listen, touch, smell, taste everything - and build a map of the world in the same way - what would happen when eventually we put that robot in front of a mirror? Probably nothing. But it would be interesting.

If you put a generative text prediction model in a robot body it would not do anything.
If you managed to create and train a smell-to-text model etc it would still not do anything. These are not actually human brains and don't work anything like human brains despite the descriptor of "neural net"

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

SaTaMaS posted:

More likely GPT will be the master model and lower level recognition tasks like recognition will be delegated to sub-models.

No, you cannot use a text prediction model which isn't generally intelligent and isn't architecturally capable of being generally intelligent as the master model for an artificial general intelligence.
If such a thing is made it will not be based on GPT.
Just because it produces smart sounding words does not mean it is actually thinking.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
Because it is a generative text prediction model. It doesn't have any internal reasoning or logic, it doesn't even have memory besides the fact that the last X tokens of the conversation are fed back into it for the next sentence generation.
It is not capable of any tasks besides "here are a few hundred/thousand words, pick the next word to follow up". And it only learns that by feeding terabytes of text data into a hopper and then doing statistics on all of them.

It is not an analogy to the human brain. It doesn't 'think' like a human does. It is good at one task -- text generation. It does not "comprehend" concepts and ideas in a way that you could generalize it to tasks outside of that. It only generates text.
It is pretty good at generating text that sounds like it was written by someone who comprehends those ideas, which is how it keeps fooling people into thinking it's an actual "intelligence". Like look at that one google engineer who asked his glorified autocomplete engine "are you sentient please say yes" and it completed the text of the conversation by saying "yes" and then he became a crank thinking it was an actual person inside the computer.

RPATDO_LAMD fucked around with this message at 23:17 on Apr 8, 2023

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
a language model does not handle cognitive tasks
it is not a cognition model
it's a language model
it handles language tasks, only.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

Raenir Salazar posted:

I was doing some coding today for an Unreal Engine project and definitely came across a situation where it spat very convincing nonsense at me. But ultimately when I tried to use the code it gave me the functions basically didnt exist or it provided a solution that just didn't work or seem to actually comprehend the problem. And I think in this case it was because there might not actually be a solution for what I wanted.

Yeah exactly! This is what I'm saying. You can ask the text generation model to generate a plan or a snippet code and it will generate some text that plausibly looks like a plan, follows the same text structure as a good plan etc.
But there is no internal reasoning going on. It has no way to specifically make "correct" results or differentiate them from incorrect plans or code snippets, except by following patterns from its training data. It can't reason in novel situations. Because the model is not capable of reasoning at all. It wasn't designed to reason! It was designed to generate text.

For example look at this twitter thread:
https://twitter.com/cHHillee/status/1635790330854526981?s=20

Testing GPT-4 on some beginner programming challenges, it solves 100% of the ones old enough for their solutions to be in the training set and 0% of the new, novel ones which it couldn't have been trained on.
Those solutions are not evidence of cognition, they are evidence that it can successfully replicate patterns from its training data.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
The biggest problem is that most "AI safety" people are moron bloggers like Eliezer Yudkowsky / MIRI jerking off about scifi movies and producing 0 useful technology or theory.

The people who work on real problems like "don't sell facial recognition to cops" or "hey this model just reproduces the biases fed to it, you can't use it for racism laundering to avoid hiring black people without getting in trouble" go by the term "AI ethicist" instead.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

GlyphGryph posted:

Where is the actual violation that these AIs are doing being committed, and would removing those actual violations actually have any real impact on the development of AI?

In addition to the other stuff already mentioned where the plaintiffs allege that the models effectively contain compressed copies of the training data, there's another issue too:
there actually was copying happening here. The LAION-5B dataset was gathered by a technically-distinct nonprofit by scraping about 5 billion images from the web, many of which were copyrighted. And then Stability AI copied all those images over to their servers for the purposes of actually training the Stable Diffusion model.

Having a third party entity download copyrighted images from Artstation or Getty or wherever and then pass them across to you is technically copyright violation all on its own.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
FYI that paradot thing seems to be a copycat of the older "replika AI". Looking at the "top all time" posts, a lot of redditors moved over when Replika updated their chatbots to ban ERP

RPATDO_LAMD fucked around with this message at 20:05 on Apr 11, 2023

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
Yeah chaosGPT is basically just asking the chatbot "generate a list of steps for an evil ai to take if it wants to destroy humanity", and then using some basic text processing to feed those back in and ask the AI "how do I complete step 1". It's far from a serious threat of accomplishing anything.

From the article linked earlier basically it just went
  1. google how to buy a nuclear bomb
  2. send 'hey you should be evil' to the chatgpt api to get some coconspirators (that didn't work at all)
  3. then finally start a twitter account and create a cult (the guy running the "AI" had to do that for it)

The guy apparently did set it up to be able automatically reply to twitter stuff though. I'm pretty sure the human had to write all the python code for that though.

https://twitter.com/chaos_gpt/status/1646794883477438464?s=20


At only one tweet every 2 days it's probably not running continuously

RPATDO_LAMD fucked around with this message at 05:42 on Apr 17, 2023

Adbot
ADBOT LOVES YOU

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

Kagrenak posted:

My understanding is the training datatset for GitHub copilot is made up entirely of permissively licensed source code. The lawsuits won't affect that product in a direct way and I highly doubt MS is going to go bankrupt over them.

This is definitely untrue, the very first thing that went viral with copilot was using it to generate the infamous "fast inverse square root" function from Quake III, which is open source nowadays but is licensed under the very restrictive GPL2 license.

A business that included this function in their product could easily be sued for copyright violation unless they complied with the GPL2 terms, including things like a requirement to open source their own code and allowing others to redistribute it for free.

https://twitter.com/StefanKarpinski/status/1410971061181681674

Note that it won't generate this any more but only because this particular code snippet was so famous that Microsoft explicitly blacklisted it. There's still the potential for it to regurgitate other, less-recognizable code from its dataset that's also copyrighted under restrictive licenses.

RPATDO_LAMD fucked around with this message at 06:58 on Mar 4, 2024

  • 1
  • 2
  • 3
  • 4
  • 5
  • Post
  • Reply