Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

You're looking at this from a perspective of how we use these systems today - as something to ask questions to. The concern around AI safety is that there will be a day that these things are hooked up to the internet and allowed to do things without supervision.

AutoGPT and similar projects are already there. They can take a high-level goal, break it down into tasks, and then create commands that are parsed by external agents to do... whatever (google, execute/verify code, etc.), which then return results to GPT so that it can use that information to plan and execute more steps. Things are moving very fast.
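
AutoGPT sounds exotic, but the core of it is a dumb little loop. Here's a toy sketch of the pattern (my own illustration, not AutoGPT's actual code; assumes the openai Python package and an OPENAI_API_KEY in the environment):

code:

# Toy sketch of an AutoGPT-style loop: the model proposes the next action as
# JSON, an external "agent" executes it, and the result is fed back in.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_tool(action):
    # Stand-ins for real external agents (web search, code execution, etc.)
    if action["tool"] == "search":
        return "top results for: " + action["input"]  # placeholder result
    return "unknown tool"

goal = "Research competitors for a coffee subscription startup."
history = []
for step in range(5):  # hard cap so the loop can't run forever
    prompt = (
        f"Goal: {goal}\n"
        f"Results so far: {history}\n"
        'Reply with JSON only: {"tool": "search", "input": "..."} or {"tool": "done"}'
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    action = json.loads(reply)  # a real agent needs error handling here
    if action["tool"] == "done":
        break
    history.append(run_tool(action))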

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

Even better is ChaosGPT, which is AutoGPT with the explicit goal to end humanity.

But yeah, it would be laughably naïve to think that we will or even can keep these things constrained such that they can't do anything without human intervention.

It's this generation's splitting of the atom, except everyone can tinker around with the raw materials in their basement (with an OpenAI key). The gap between the best case scenario and the worst case scenario is so wide that I no longer have any clue what comes next, but I know that the existing status quo is hosed in a lot of dimensions already.

Lucid Dream fucked around with this message at 23:34 on Apr 15, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

To be clear, the definition of "AGI" is just a thing that can solve general tasks without being specifically trained to do them.

Well, by that definition doesn't ChatGPT already pass that threshold?

Lucid Dream
Feb 4, 2003

That boy ain't right.

RPATDO_LAMD posted:

Yeah chaosGPT is basically just asking the chatbot "generate a list of steps for an evil ai to take if it wants to destroy humanity", and then using some basic text processing to feed those back in and ask the AI "how do I complete step 1". It's far from a serious threat of accomplishing anything.

The thing to keep in mind with the current generation of LLMs is that they're sort of like glue that can hold together processes that would traditionally have a human bottleneck, and the capabilities of these systems are basically only constrained by the ad-hoc APIs that people build on top of them. When you combine that with GPT's ability to recursively hallucinate tasks and plans all the way from high-level goals down to specific implementations... it's not hard to imagine where this goes as the stuff matures. Right now these auto-GPT systems are mostly just hallucinating their plans, but we're already several steps down the path of them touching the real world by using external agents to retrieve, act on, and incorporate information for future recursive steps. Any one prompt execution isn't some kind of super-intelligent output, but these calls can be strung together so that they check each other's work and analyze problems.

We're now at a point that reading a news story about a rogue AI system causing problems is a realistic possibility. It's still pretty far from a real existential threat... but nobody knows how far it takes to get from here to there considering this stuff is improving in every dimension.
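
To make the "check each other's work" bit concrete, this is the kind of chaining I mean (toy sketch of my own, same assumption of an OpenAI key as above):

code:

# One call drafts, a second call critiques, a third call revises.
# No single call is "smart", but the chain catches some of its own mistakes.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

draft = ask("Write a step-by-step plan to migrate a SQLite database to Postgres.")
critique = ask(f"List any missing or incorrect steps in this plan:\n{draft}")
final = ask(f"Rewrite the plan to address these issues:\n{critique}\n\nPlan:\n{draft}")
print(final)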

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

The main thing these systems can't do is improve themselves - learn from doing.

It's a little more complicated than that. If I ask GPT4 for help with a programming issue and it gives me incorrect output for the API I'm using, I can paste in the documentation and it will "learn" and apply that knowledge to the task. It doesn't natively store that information long term, but the LLM itself doesn't necessarily have to store it; it could be stored and retrieved by some higher-level system. The long term memory problem is far from solved, but I think we're already at the point where we have to split hairs about what counts as "learning".
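
By a "higher-level system" I mean something as dumb as a wrapper that keeps the pasted documentation around and prepends it to later prompts. A hypothetical sketch (the file name and helper are made up):

code:

# The model's weights never change; the wrapper just re-feeds the saved
# documentation as context on every subsequent call.
from openai import OpenAI

client = OpenAI()
saved_docs = []

def ask_with_memory(question):
    context = "\n\n".join(saved_docs)
    return client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Reference material:\n" + context},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

saved_docs.append(open("api_docs.txt").read())  # "teach" it the API once
print(ask_with_memory("How do I paginate results with this API?"))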

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

Granted, but I'm being more precise about the meaning in that the model doesn't change. Giving me a textbook I can look things up in will make me give you the answer more often, but if I have to look it up every time, I'm not going to be able to use it to solve different but related problems. As you learn new things, you get more of an intuition from them. Just having the reference available probably isn't as good as if it was trained to give the answer correctly the first time. I'm not sure how you'd objectively test that with GPT though - would be interesting if you could.

There is no question that a system that could continue training the model in real time would be more powerful, but I'm not convinced that's necessary for significant learning capabilities. Humans are really good at building higher-level abstractions with tools, and I think a lot of folks are under-appreciating what you can do with access to a suite of cognitive building blocks like text summarization and vector embedding. The LLMs themselves don't necessarily have to be super-intelligent to power a super-intelligent system, in much the same way that any individual neuron isn't all that smart.
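
To give an idea of what I mean by a building block, a summarization block is roughly this, and you can compose it over inputs far bigger than one prompt (sketch only, helper names are mine):

code:

# A summarization "block": dumb on its own, but composable - summarize chunks
# of a long document, then summarize the summaries.
from openai import OpenAI

client = OpenAI()

def summarize(text, max_words=100):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Summarize in under {max_words} words:\n{text}"}],
    ).choices[0].message.content

def summarize_document(chunks):
    partial_summaries = [summarize(chunk) for chunk in chunks]
    return summarize("\n".join(partial_summaries))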

Lucid Dream
Feb 4, 2003

That boy ain't right.

SaTaMaS posted:

For LLMs their "goals" are determined by their creators and are essentially programmed tasks that the AI system is designed to perform so attributing intentionality isn't necessary.

The LLMs don't have goals, but they do predict pretty darn well what a human would say if you asked them to come up with goals about different things.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Tei posted:

LLMs might have goals built into the design, even if they are not intended by the creators. At the very least, to produce an interesting output, or any output at all.

Ok sure, but if I describe a situation I can still ask an LLM what a person’s goals might be given the context and it will respond with something plausible. I’m not saying anything about the subjective experience of the LLM or how interesting the output is, but rather that the system has the capability to predict what a human might say if given a situation and asked to define goals to solve the problem. The semantics don’t matter as much as the actual capability.

Lucid Dream
Feb 4, 2003

That boy ain't right.
The wonderful thing about AI is that it lowers the barrier to entry for everything. The existentially terrifying thing about AI is that it lowers the barrier to entry for everything.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Liquid Communism posted:

A better and more common example is troubleshooting programming code. ChatGPT is absolutely terrible at this, because it is both confidently wrong and incapable of making inferences of intent. A human coder can look at a piece of code, and the use it was meant for, and evaluate what the intent of the writer was and where they hosed up, even if the syntax is okay. This is such a basic thing that it's elementary-level problem solving, and large language models are utterly poo poo at it because all they can do is compare it to other examples of syntax they were trained on and try to vomit up something linguistically similar.

GPT4 is great at this.

Lucid Dream
Feb 4, 2003

That boy ain't right.
Seems like a lot of folks in here think that this AI stuff is all or nothing, but you're missing the forest for the trees. This stuff isn't useful because it can spit out an entire screenplay in one shot; it's useful because a mediocre writer with a good concept can use GPT4 to produce something that sure seems well written, and it does it quickly and iteratively. These things are like a multiplier, but you still have to put in some effort to get a good result.

For example, I have been tinkering around with using GPT to organize various things about an indie game I'm working on, and after getting it to write a summary of the lore/backstory I asked it to write a bunch of book titles and summaries for historical stories that the aliens in the game might tell.

It wrote this.
It's decent, although plain. It has no real style and just sort of flatly states the events. What I wanted was something mysterious and ambiguous.

So I told it:

quote:

Ok, now write it all obfuscated and weird like the author Gene Wolfe's "Book of the New Sun" trilogy.

and it spit out this.
Better, it's definitely more ambiguous and better written, but it was still a bit too on the nose and included too many explicit details. It has some great lines in there though, like

quote:

But then came the Calamity, a catastrophe of such magnitude that it rent the fabric of their existence, stripping the Luminary from the Xenoid constellation.
and

quote:

The lament of the Luminary is a dirge sung for the harmony that once was.

So I went in and edited it a bit manually, and ended up with this.
Which is almost certainly better than something I would have written without such a tool, and the whole process took maybe 10 minutes.

Lucid Dream fucked around with this message at 07:44 on May 11, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.

Clarste posted:

People tend to overestimate the value of a "good concept." All those people who can write well too? They also have good concepts. Everyone thinks they have a good concept. But that's exactly why they are worth nothing unless you actually have the skill to pull it off.

It's not even about the quality of the concept though, it's that these things just lower the barrier to entry. It feels like that is what AI is going to do across the board, at least in the short-to-medium term - simply lower the barrier to entry in various mediums. ChatGPT makes it easier for me to learn and immediately apply new programming concepts, and even how to set up and use specific APIs. I made a generative AI twitch channel in like a month and a half that strings together a half dozen cloud services, and I'd never touched any of that stuff before I started the project. It reduces the friction in ways that are unambiguous to me at this point. Films used to cost a lot more to make, but now folks can film a movie on their phone. Games used to be a lot harder to make, but now there is a robust ecosystem of tools and tutorials. It means there are a lot more stinkers produced, but it also means more good things get produced when folks who care actually put in the effort and effectively leverage the tools available. We'll see the same with AI.

Lucid Dream fucked around with this message at 08:05 on May 11, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.
We're in uncharted territory in so many ways these days, and I don't think we can really look at precedent too much. If AI turns out to be as disruptive as it sure seems like it's going to be, then it's pretty reasonable to expect there to be new laws and novel interpretations of existing laws in light of the impact of AI.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Tei posted:

The current fast advancement of AI is an illusion.

Almost every advancement we have seen lately is based on the same ML algorithm; probably a lot of the stuff we see just runs TensorFlow.

So... it is very possible that we will soon reach the cap of the utility of that algorithm/approach.

There are many different AI ideas and strategies: expert systems, rule-based systems, machine learning, genetic algorithms. AI is not only machine learning systems.

Once we reach the cap of ML systems, we will go back to slow progress towards AGI.

I feel like you're under-selling the amount of disruption that is going to come from only what is available right now. These latest generation LLMs can already perform many cognitive tasks that used to require a human, and it's pretty hard to imagine that the art stuff isn't incredibly disruptive in its own way. I'm not even invoking future progress in the underlying technologies when I say it seems like there will be a lot of disruption.

Lucid Dream fucked around with this message at 11:27 on May 13, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.
All I know is that a lot of people will believe machines are conscious before they are, and a lot of people will believe they aren't conscious after they are. The former is less than ideal, and the latter will be a tragedy.

Lucid Dream
Feb 4, 2003

That boy ain't right.
So far the most profound thing to me about using ChatGPT a lot for indie game development is that it dramatically reduces the amount of time I have to spend reinventing the wheel. GPT makes a lot fewer off-by-one errors than I do, too.

Lucid Dream
Feb 4, 2003

That boy ain't right.
Vector databases are cool but at the end of the day they're just about finding something from a sea of information that is similar to some other information you're searching for. It's really just an AI assisted similarity search. Still useful, but it isn't some kind of magic bullet that solves the hard problem of consciousness.
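
For anyone who hasn't poked at one, the core operation really is just cosine similarity between embedding vectors, something like this (sketch using the OpenAI embeddings endpoint, but any embedding model works; the documents are made up):

code:

# "AI assisted similarity search": embed the documents once, embed the query,
# and return whichever stored vectors point in the most similar direction.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

docs = ["refund policy", "shipping times", "password reset instructions"]
doc_vecs = [embed(d) for d in docs]

def search(query, top_k=1):
    q = embed(query)
    sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vecs]
    return [docs[i] for i in np.argsort(sims)[::-1][:top_k]]

print(search("I forgot my login"))  # likely ['password reset instructions']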

Lucid Dream
Feb 4, 2003

That boy ain't right.
I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then Sam started demoing it to potential investors without board approval. Ilya saw Sam hawking a proto-AGI without even consulting the board, got spooked, and pulled the ripcord.

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

Do you have evidence for any of this or is it just speculation?

Nah, pure speculation. I used the GPTs thing enough to see that it can re-write its own system prompt, and that it can retrieve arbitrary data from uploaded files for use in the output. Right after the GPTs thing was announced I remember using an early UI of it, or something like it, that had a log on the side showing it doing agent stuff, so I'm just sorta putting the pieces together and extrapolating. I'm guessing they figured out a way for it to write its own "memories" and then retrieve them in a way that's scalable enough to tackle more complex problems.
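
To spell out the speculation, I'm imagining something like this under the hood. Pure conjecture on my part, not anything OpenAI has actually described:

code:

# Speculative sketch: the wrapper stores notes the model decides to keep and
# stuffs relevant ones back into the context on later turns.
from openai import OpenAI

client = OpenAI()
memories = []

def recall(query, limit=3):
    # dumb keyword match standing in for proper embedding retrieval
    words = query.lower().split()
    hits = [m for m in memories if any(w in m.lower() for w in words)]
    return hits[:limit]

def respond(user_msg):
    context = "\n".join(recall(user_msg))
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Things you remember:\n" + context},
            {"role": "user", "content": user_msg},
        ],
    ).choices[0].message.content
    memories.append("user said: " + user_msg)  # write a new "memory"
    return reply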

Lucid Dream fucked around with this message at 03:40 on Nov 21, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.

Heran Bago posted:

Someone please explain to me how Q* solving math for 10-year-olds is dangerous, when Wolfram Alpha acing "what is six over four?" isn't.

OpenAI has an apparently incredibly accurate method of predicting the capabilities of their models before they are fully trained. Better math on a smaller model using a new architecture might suggest that a fully trained model would have dramatically more capabilities than GPT4.
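
The GPT4 technical report's version of this was fitting scaling laws to much smaller training runs and extrapolating, something in the spirit of the following (the numbers are made up for illustration, not real measurements):

code:

# Fit a power law loss(C) = a * C**(-b) + c to small-run results, then
# extrapolate to the full training compute budget. Numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

compute = np.array([1e3, 1e4, 1e5, 1e6])   # relative compute of the small runs
loss = np.array([3.9, 3.1, 2.6, 2.3])      # observed final losses (made up)

def power_law(c, a, b, irreducible):
    return a * c ** (-b) + irreducible

params, _ = curve_fit(power_law, compute, loss, p0=[10.0, 0.25, 2.0])
print(power_law(1e9, *params))  # predicted loss for the full-scale run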

Lucid Dream
Feb 4, 2003

That boy ain't right.

SCheeseman posted:

Self serving example generated by me:
https://voca.ro/1hZS3KPqg6k3

Ahaha that's fantastic. Still has some minor acoustic artifacts, but goddrat all the same.

Lucid Dream
Feb 4, 2003

That boy ain't right.
One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but that will be completely automated within the next 5. It's not that the machines are sentient yet; it's that a lot of human activity turns out to require less sentience than we thought.

Lucid Dream
Feb 4, 2003

That boy ain't right.
I set up a makeshift version of chatgpt using the gemini api, and gemini pro seems kind of lovely. It's basically free though, so that's nice I guess. I haven't really experimented with the multimodal capabilities, but I haven't been particularly impressed by chatgpt's ability to "see" images so I'm curious to compare at least.
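
For the curious, "makeshift version of chatgpt" means little more than this (assumes the google-generativeai Python package and an API key):

code:

# Bare-bones chat loop against the Gemini API - roughly what I threw together.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

while True:
    user = input("> ")
    if user.lower() in ("quit", "exit"):
        break
    print(chat.send_message(user).text)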

Lucid Dream fucked around with this message at 23:21 on Dec 19, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.

repiv posted:

well they can be if the model is over-fitted, as demonstrated by the new midjourney V6 model which seems to be especially prone to regurgitating near-perfect replicas of images from the training set for some reason



in that last case the name and creator of the original piece aren't even named in the prompt, and MJv6 still copied it almost to a tee

This is a weird one to me. Like, if you tell the AI to draw a picture of the Mona Lisa and it draws a good Mona Lisa, that isn't a bug, except insofar as it might be legally problematic. If you had a perfect AGI and asked it to output "Mona Lisa", it would be a bug if it *wasn't* a perfect representation.

Lucid Dream
Feb 4, 2003

That boy ain't right.

repiv posted:

in the case of the mona lisa i'd agree, since that's a specific still image you would expect a sufficiently large model to reproduce it exactly if asked for the mona lisa. it's only notable as an example that models can store and reproduce the images they were trained on, contrary to some claims that the training data is always atomized beyond recognition so it doesn't count as reproducing it.

the other two examples though - the joker prompt doesn't ask for a specific known image but the model decided to regurgitate a specific frame from the movie rather than interpolating over the broad space of "joker movie images", and in the last example the piece being copied isn't referenced in the prompt whatsoever. that to me indicates over-fitting, biasing the model towards being less creative and more plagiarizey in an effort to improve quality.

Hmm, I still think the Joker one is similar to the Mona Lisa example in that it was asking for a screenshot from the film and we don't actually know how many attempts it took. The third one is pretty damning though I suppose.

Lucid Dream
Feb 4, 2003

That boy ain't right.
ChatGPT isn't as reliable as a textbook or a good human tutor, but it allows for self directed learning 24/7 with very little friction at a fraction of the cost. I presume that before long we'll have LLMs fine tuned for specific subjects, or even a specific grade-level curriculum. The ability to just... ask about a subject, and drill down on the parts that you don't understand is incredibly powerful, and it's the very reason 1-on-1 tutoring is so effective. It can't replace a teacher or school, but if you don't have access to a private 1-on-1 tutor, it's nice to have.

Lucid Dream fucked around with this message at 18:18 on Jan 22, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Abhorrence posted:

The problem is you don't know if what you're learning is accurate at all, or a ChatGPT hallucination.

Well, this is partially why I brought up the fine tuned models for education, because they would drastically reduce hallucinations, but honestly GPT4 doesn't really hallucinate that much these days. As long as you're not asking it something that can be interpreted as a request for creative output (stories, poems, etc.), it does a pretty good job of just saying it doesn't know or can't say. I'm not saying there aren't downsides, but I also think you'd be hard pressed to come up with a reasonable education-related question to ask ChatGPT4 that it would get wrong. It's getting harder and harder to come up with even a contrived question that fails with chatgpt, let alone good-faith questions about established historical events, mathematical concepts, etc.

Again, not suggesting that you can replace teachers with AI (we should tax the hell out of AI and use the money for all sorts of stuff, including the real human-run education system), but I'm also a strong believer in reducing the friction required to let people engage in self directed learning. The internet was a big step in that direction, with similar and familiar pitfalls, but I sure wouldn't want to try and learn things the old way even though sometimes websites are wrong.

Lucid Dream fucked around with this message at 20:11 on Jan 22, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Seph posted:

I'm not saying genAI will necessarily follow the same trajectory, but it made me think if it's possible to know where in the development curve we are with it? Are all the recent advancements just the tip of the iceberg, or is it possible we're getting to the point where we hit diminishing returns (whether that's on compute time, development time, or training data)?

Can't say yet, still too early. GPT5 will be a bellwether. Over the last few weeks I've seen several significant "hard" problems get solved in the AI space, and the improvements are still happening too rapidly to say where things might land.

Lucid Dream
Feb 4, 2003

That boy ain't right.

BrainDance posted:

Like, I think we can take the whole mixture of experts thing a lot further than we have.

My baseless speculation is that the rumored Q* breakthrough at OpenAI is a way to navigate a higher dimensional mixture of experts or something.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Staluigi posted:

the humans in the matrix weren't there as a power source, their input was the only means by which ai could harvest original nonhallucinatory data

Humans basically hallucinate their own reality as it is, just with more internal consistency (usually).

Lucid Dream
Feb 4, 2003

That boy ain't right.
If the LLM enables the problem to be solved then it kinda solved it. If I use a calculator to help me do a complicated math problem, I still solved it even if the calculator helped. If the LLM is smart enough to choose to use the calculator then I think it counts.

Lucid Dream fucked around with this message at 21:25 on Mar 15, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Boris Galerkin posted:

I get what you're trying to say here but it's also wrong. Every day I use a fancy calculator to solve an equation similar to this:
...
There is a difference between saying I solved it and that I used a computer to solve it.

I really think it's a semantic issue more than anything. Modern LLMs aren't inherently reliable for math, but an LLM can act as a sort of delegator and make a high level plan and send sub-tasks to specialized systems (such as a calculator, a database, whatever) and then incorporate that data into its response. Folks can argue whether that counts as the LLM "learning" math, but what really matters to me is the capability of the system as a whole.

Lucid Dream fucked around with this message at 23:05 on Mar 16, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Mega Comrade posted:

But that's not how we describe systems. I'm completely baffled by your argument.

It's really not very complicated. The LLM itself is not very good at math, but it can be trained to utilize external tools such as a calculator. Whether you think that counts as the LLM "learning to do math" or whether it counts as the LLM "solving the problem" is obviously subjective. I think it counts, but we can agree to disagree.

Mega Comrade posted:

The LLM doesn't calculate, attempt to calculate or even see the answer, it simply provides another strut when the main AI cannot resolve the question.

Well, they could attempt to calculate the answer, and they'd probably do a hell of a lot better than I would on the same problems, but if you can train it to use an external tool then you don't need to train it to answer any arbitrary math problem via inference. The LLM would actually see the answer, since the external tool returns the answer to the LLM, which then uses it as part of the context for the response to the user or other system or whatever.

Edit: To clarify a bit, I'm talking about using multiple LLM calls as part of a response to the user. The first call determines the need for the tool and requests it, the second call formats the response after the tool returns the answer. Neither call is individually "solving the problem" but the problem is solved by the system enabled by the LLM.
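
Something like this, to be concrete (toy sketch with hypothetical helper names, assuming the openai package):

code:

# Call 1: the model decides whether it needs the calculator and emits JSON.
# Call 2: the model turns the tool's result into a reply for the user.
import json
from openai import OpenAI

client = OpenAI()

def llm(prompt):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

def answer(question):
    decision = llm(
        f"Question: {question}\n"
        'If arithmetic is needed, reply {"tool": "calc", "expr": "<python expression>"}. '
        'Otherwise reply {"tool": "none"}.'
    )
    action = json.loads(decision)
    if action["tool"] == "calc":
        # demo only; never eval untrusted input in real code
        result = eval(action["expr"], {"__builtins__": {}})
        return llm(f"Question: {question}\nCalculator result: {result}\nAnswer the user.")
    return llm(question)

print(answer("What is 1234 * 5678?"))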

Lucid Dream fucked around with this message at 08:34 on Mar 17, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Boris Galerkin posted:

You made a comment on semantics and it's obviously clear that they matter because it seems like you're conflating LLM with "computer" or "software" in general.

I feel like I've been pretty clear that I'm specifically talking about systems enabled by LLMs. I clarified it and explicitly explained the kind of "systems" I'm referring to - multi-stage prompts that use external tools. I'm talking about a system that can take arbitrary input from a user and, based on that input, the system determines which tool to use (if any), uses the tool, and then uses the output of the tool in the response back to the user. The entire end-to-end system that is powered by the LLM's ability to interpret prompts.

When folks type a question into ChatGPT, the system that responds is what I’m talking about.

Boris Galerkin posted:

Control logic in programming has been a thing since the first programming languages were invented. Fortran programmed with punch cards were capable of "knowing" whether to do multiply two vectors or multiply a vector with a matrix or whatever. The LLM that takes your natural language input and delegates smaller bite sized commands to other systems does the same thing but differently.

That's cool, I still think that teaching an LLM to use a calculator means you've taught it to do math. It's ok if you disagree, I'm not stating some kind of objective fact lol.

Boris Galerkin posted:

Also you keep talking about "it" doing things and it's sounding awfully close to you thinking that the system is acting like a person with agency instead of just following code.

If referring to a system as "it" is a problem, then we are truly through the looking glass of semantic nonsense.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Main Paineframe posted:

It obviously doesn't. If someone said "this 5-year-old human knows how to do division", you would expect that human to be able to solve "20 / 5" without a calculator. If their math demonstration consisted solely of just entering the problem into a calculator and repeating the result, and if they were completely unable to solve the problem without the help of a calculator, then I think most people would get annoyed at the parent for wildly exaggerating their child's abilities.

"The LLM can do math" and "a system including both a LLM and a math-solver can do math" are very different statements. I don't see why so many people are so incredibly eager to treat them as synonymous.

See, this is what I mean with people losing track of the limitations of LLMs. LLMs are fundamentally incapable of doing this sort of mathematical calculation. They can't attempt to calculate arithmetic, because they're highly specialized tools that are capable of doing exactly one thing: statistically calculating what word a human would use next. That's basically the only thing they can do.

LLMs can use inference to solve math problems, they just aren't reliable. They are more reliable if you ask them to output their work in steps, but still not as reliable as a calculator, especially for complex problems. GPT4 can divide 20 by 5 just fine in one API call.

Lucid Dream fucked around with this message at 17:36 on Mar 17, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Boris Galerkin posted:



Ok it can do math.

I said one API call; ChatGPT is using code analysis and multi-stage prompts. Retry it and tell it to only use inference. When I did that, it skipped the code analysis, broke the problem down into words, and gave me the answer with inference alone.

Edit: I tried it a few more times, and telling it *not* to use code analysis makes it more likely to skip that part. You can tell when it's not doing it because it doesn't give you a link to expand the Python part.
Edit2 (on the playground, so just bare model here):

Lucid Dream fucked around with this message at 18:10 on Mar 17, 2024
