SaTaMaS
Apr 18, 2003

SCheeseman posted:

idk, if AI eventually heralds true post scarcity by inventing star trek poo poo, cool beans, but at the moment it does essays and spits out pretty pictures. Not to say I don't think people will find ways to use the technology in broader ways that have greater impact, but at the moment, post scarcity is still sci-fi.

GPT-4 can use external tools like a calculator, so a Star Trek computer is theoretically possible, but people are still working on the real-world implementations.
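
Concretely, the pattern is: the model emits a structured tool request, outside code runs it, and the result gets fed back into the model for its final answer. A rough sketch of the dispatch half in Python follows; the tool name and the fake model output are made up for illustration, this isn't OpenAI's actual plugin interface.

code:

import ast
import operator

# Hypothetical structured output a tool-using model might emit for "what is 17 * 23?"
model_output = {"tool": "calculator", "input": "17 * 23"}

ALLOWED_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED_OPS:
            return ALLOWED_OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def run_tool(request: dict) -> str:
    # Dispatch the model's tool request to real code; the string returned here
    # would be handed back to the model as context for its final reply.
    if request["tool"] == "calculator":
        return str(safe_eval(request["input"]))
    raise ValueError(f"unknown tool: {request['tool']}")

print(run_tool(model_output))  # -> 391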

SaTaMaS
Apr 18, 2003

SCheeseman posted:

I meant more in the sense of replicators rather than API integrations. Post scarcity isn't post scarcity until all people can eat and live without dependence on labour or privilege.

Is this thread supposed to be about what's happening currently or speculations about things that probably won't ever happen

SaTaMaS
Apr 18, 2003

Noam Chomsky posted:

So how long before it puts web developers and programmers out of work? I’m asking for a friend.

It’s me. I’m the friend.

Most large projects I've worked on have had a small collection of senior programmers, a small number of interns/junior programmers that the senior programmers are supposed to train in case they get hit by a bus, and a bunch of in-between programmers who are usually contract workers. ChatGPT can make the junior and senior programmers so much more productive that the in-between programmers become mostly unnecessary, but it can't replace the senior programmers or their hit-by-a-bus replacements in the foreseeable future.

SaTaMaS
Apr 18, 2003
Another promising area is the possibility that ChatGPT can look at code in really old languages like COBOL and Fortran and not just improve the documentation but translate it into modern languages with cleaner code.
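
A hand-wavy sketch of what that workflow could look like with the openai Python client that's current as of this writing; the COBOL fragment and the prompt wording are placeholders, and you'd obviously want a human reviewing whatever comes back.

code:

import openai  # 2023-era client; assumes OPENAI_API_KEY is set in the environment

legacy_cobol = """
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-TOTALS.
       PROCEDURE DIVISION.
           ADD AMOUNT TO RUNNING-TOTAL.
"""

prompt = (
    "Translate this COBOL fragment into idiomatic, documented Python and "
    "explain any business logic you infer:\n" + legacy_cobol
)

# ChatCompletion is the chat endpoint in the current version of the library.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])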

SaTaMaS
Apr 18, 2003

Drakyn posted:

I want to be very very clear about this without being any more insulting than possible: this seems like a worry based entirely on conflating a hypothetical computer program that's very good at playing Scrabble and Apples to Apples using the internet with a mind. Why do ChatGPT or its hypothetical descendants deserve any more special consideration than the AI bots in an FPS or anything else besides the fact that what it's simulating is 'pretending to talk' rather than 'pretending to play a video game badly'?
edit: It seems like the only reason it does is because it's capable of pretending to look sort of like a completely fictional entity's ancestor, and if we're going to treat vague approximations of fictional technologies as if they're real I'd be more worried that Star Trek shows have been mimicking teleporting technology for decades now and although that isn't true teleportation we don't know if there's a certain complexity that leads to it and where that line is.

Language use is what is supposed to make humans uniquely intelligent.

SaTaMaS
Apr 18, 2003

gurragadon posted:

I mean I thought I was pretty clear but no, I don't think ChatGPT itself needs to be treated differently than other chatbots or anything else pretending to talk. It doesn't possess AGI or any form of intelligence, but it is really good at mimicking it like you said. The fact that it is good at mimicking it makes me consider what is right when dealing with AI programs at the current technology level and hypothetical AI programs in the future.

It's not the complexity issue, it's the reasoning and consciousness issue. Right now, I am comfortable saying that ChatGPT is faking it and the reasons why have been demonstrated to me, but these advancements go really fast.

Edit: It just seems like a bad first step into this domain that we're taking and we're just disregarding a lot of concerns, which isn't new by any means, but interesting to think about.

A chatbot doesn't need to be conscious to replace a person, the jobs question is why it needs to be treated differently than other chatbots.

SaTaMaS
Apr 18, 2003

MSB3000 posted:

I'd like to throw out there that if/when we create a conscious AI mind, we need to understand that there's no reason to expect it'll have any of the same kind of experience humans or even biological creatures have. There's not really an objective truth to human consciousness being the "default", or at least there doesn't seem to be reason to believe that. We're one of the unknown number of ways consciousness could exist in the universe. In other words, Galileo showed us it was dumb to assume the Earth was the center of the cosmos, and for the same reason it's dumb to assume human consciousness is the default type of consciousness. Reality does its own thing, we're just part of it.

So, an AI mind could potentially experience no emotions, super emotions, inverse emotions, totally inhuman emotions, or any combination of those. There's basically no reason to assume anything right now. IMO, that's all the more reason to be cautious before we throw the comically large Frankenstein power switch on GPT 5/6/7/n

The thing keeping GPT from becoming AGI has nothing to do with consciousness and everything to do with embodiment. An embodied AGI system would have the ability to perceive and manipulate the world, learn from its environment, adapt to changes, and develop a robust understanding of the world that is not possible in disembodied systems.

SaTaMaS
Apr 18, 2003

Raenir Salazar posted:

How do we know we're not also perceiving the world via a disembodied system though? What is embodiment other than a set of inputs? If we provided a sufficiently realistic and informative simulated environment how does this differ from the Real(tm) environment for the purposes of learning?

Things like physics, balance, and proprioception could theoretically be simulated, but that would involve much more complex inputs than GPT's current multi-modal interface.

SaTaMaS
Apr 18, 2003

RPATDO_LAMD posted:

GPT is just a text prediction model. It only does text prediction. It has 0 internal reasoning ability, although it can fool people who don't really understand how it works because it can generate text that looks like someone reasoning, based on all the examples of that kind of text in its training set.
Even if it somehow had the ability to interact with the world (like you set up a robot arm to respond to certain keywords in the text generation output and a webcam with some computer-vision captioning library to generate text descriptions of what it sees) it would not be "intelligent" in whatever scifi way you are thinking.

None of these things can be simulated by a GPT text prediction model. In a theoretical AGI they probably would be, but its internals would be nothing like GPT.

More likely GPT will be the master model, and lower-level tasks like recognition will be delegated to sub-models.

SaTaMaS
Apr 18, 2003

IShallRiseAgain posted:

I think a true AGI would at least be able to train at the same time as it's running. GPT does do some RLHF, but I don't think it's real-time. It's definitely a Chinese Room situation at the current moment. Right now it just regurgitates its training data, and can't really utilize new knowledge. All it has is short term memory.

Although some people have made an extremely rudimentary step in that direction like with this https://github.com/yoheinakajima/babyagi.

That's certainly true for a 100% complete true AGI, but it's not a huge difference practically if it retrains on a regular basis. Eventually there will be fewer and fewer situations that are truly new as opposed to new situations that can be managed by recognizing and applying existing patterns.

SaTaMaS fucked around with this message at 23:08 on Apr 8, 2023

SaTaMaS
Apr 18, 2003

RPATDO_LAMD posted:

No, you cannot use a text prediction model which isn't generally intelligent and isn't architecturally capable of being generally intelligent as the master model for an artificial general intelligence.
If such a thing is made it will not be based on GPT.
Just because it produces smart sounding words does not mean it is actually thinking.

why

SaTaMaS
Apr 18, 2003

RPATDO_LAMD posted:

If you put a generative text prediction model in a robot body it would not do anything.
If you managed to create and train a smell-to-text model etc it would still not do anything. These are not actually human brains and don't work anything like human brains despite the descriptor of "neural net"

The LLM handles higher level cognitive tasks, you're talking about lower level tasks like movement and sensing.

SaTaMaS
Apr 18, 2003

RPATDO_LAMD posted:

Because it is a generative text prediction model. It doesn't have any internal reasoning or logic, it doesn't even have memory besides the fact that the last X tokens of the conversation are fed back into it for the next sentence generation.
It is not capable of any tasks besides "here are a few hundred/thousand words, pick the next word to follow up". And it only learns that by feeding terabytes of text data into a hopper and then doing statistics on all of them.

It is not an analogy to the human brain. It doesn't 'think' like a human does. It is good at one task -- text generation. It does not "comprehend" concepts and ideas in a way that you could generalize it to tasks outside of that. It only generates text.

This is demonstrably false. It's capable of task planning, selecting models to execute sub-tasks, and intelligently using the results from the sub-models to complete a task.

SaTaMaS
Apr 18, 2003

RPATDO_LAMD posted:

Yeah exactly! This is what I'm saying. You can ask the text generation model to generate a plan or a snippet code and it will generate some text that plausibly looks like a plan, follows the same text structure as a good plan etc.
But there is no internal reasoning going on. It has no way to specifically make "correct" results or differentiate them from incorrect plans or code snippets, except by following patterns from its training data. It can't reason in novel situations. Because the model is not capable of reasoning at all. It wasn't designed to reason! It was designed to generate text.

For example look at this twitter thread:
https://twitter.com/cHHillee/status/1635790330854526981?s=20

Testing GPT-4 on some beginner programming challenges, it solves 100% of the ones old enough for their solutions to be in the training set and 0% of the new, novel ones which it couldn't have been trained on.
Those solutions are not evidence of cognition, they are evidence that it can successfully replicate patterns from its training data.

And yet it's able to use pattern recognition to apply previous solutions it's seen to similar problems. That's what a lot of people are doing when they say they're reasoning.

SaTaMaS
Apr 18, 2003

cat botherer posted:

ChatGPT cannot control a robot. JFC people, come back to reality.

The point is that it will be able to control a robot before it will be conscious.

SaTaMaS
Apr 18, 2003

DeeplyConcerned posted:

Why would you use a language model to control a robot? Wouldn't you want to train the robot model on situations the robot would encounter?

In order to communicate a task to the robot in natural language and have it be able to figure out a solution.

SaTaMaS
Apr 18, 2003

DeeplyConcerned posted:

That part makes sense. But you still need the robot to be able to interact with the physical world. Because the code to have the robot move its arm is going to be more complicated than "move your left arm" or "pick up the coffee cup". I'm thinking you would need at least a few specialized models that work together to make this work. You'd want a physical movement model, an object recognition model, plus the natural language model. Plus, I'd imagine some kind of safety layer to make sure that the robot doesn't accidentally rip someone's face off.

Focused work on this is going to start soon if it hasn't already. It's a far more tractable problem than trying to make a conscious AI. I'm not a believer in the singularity, but if an LLM acting as a "master model" that controls sub-models for skills such as perception, motor control, and decision-making can be iterated on to gradually perform more and more tasks that previously only a human could do, it can still lead to a lot of the same issues as a true AGI, like job displacement and economic inequality.

SaTaMaS
Apr 18, 2003

eXXon posted:

Do you have sources for this?

https://arxiv.org/pdf/2303.17580.pdf

SaTaMaS
Apr 18, 2003

An online demo of HuggingGPT called JARVIS is out
https://beebom.com/how-use-microsoft-jarvis-hugginggpt/
https://huggingface.co/join

quote:

In the AI field, new large language models are being launched every day and things are changing at a breakneck pace. In just a few months of development, we can now run a ChatGPT-like LLM on our PC offline. Not just that, we can train an AI chatbot and create a personalized AI assistant. But what has intrigued me recently is Microsoft’s hands-on approach to AI development. Microsoft is currently working on an advanced form of AI system called JARVIS (an obvious reference to Marvel’s Iron Man) that connects to multiple AI models and responds with a final result. Its demo is hosted on Huggingface and anyone can check out JARVIS’s capabilities right now. So if you’re interested, go ahead and learn how to use Microsoft JARVIS (HuggingGPT) right away.

Microsoft has developed a kind of unique collaborative system where multiple AI models can be used to achieve a given task. And in all of this, ChatGPT acts as the controller of the task. The project is called JARVIS on GitHub (visit), and it’s now available on Huggingface (hence called HuggingGPT) for people to try it out. In our testing, it worked wonderfully well with texts, images, audio, and even videos.

It works similarly to how OpenAI demonstrated GPT 4’s multimodal capabilities with texts and images. However, JARVIS takes it one step further and integrates various open-source LLMs for images, videos, audio, and more. The best part here is that it can also connect to the internet and access files. For example, you can enter a URL from a website and ask questions about it. That’s pretty cool, right?

You can add multiple tasks in a single query. For example, you can ask it to generate an image of an alien invasion and write poetry about it. Here, ChatGPT analyzes the request and plans the task. After that, ChatGPT selects the correct model (hosted on Huggingface) to achieve the task. The selected model completes the task and returns the result to ChatGPT.

Finally, ChatGPT generates the response using inference results from all the models. For this task, JARVIS used the Stable Diffusion 1.5 model to generate the image and used ChatGPT itself to write a poem.
It's still limited to inputting and outputting text and images, but anyone still using something like Excel to copy data around and process it probably won't have that job much longer.
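
For anyone curious, that plan, pick a model, execute, summarize loop described above is simple enough to sketch. Everything below is a canned stand-in for illustration (the registry entries, the fake "plan" string, the stub functions), not the actual JARVIS/HuggingGPT code.

code:

# Illustrative controller loop in the spirit of the HuggingGPT paper linked earlier.
# ask_llm() and run_model() are canned stand-ins, not real API calls.

MODEL_REGISTRY = {
    "text-to-image": "stable-diffusion-1.5",
    "text-generation": "gpt-style-llm",
}

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to the controller LLM (ChatGPT in JARVIS).
    if prompt.startswith("Plan:"):
        return "text-to-image: alien invasion scene | text-generation: poem about the image"
    return f"[summary of: {prompt}]"

def run_model(model_name: str, task_input: str) -> str:
    # Stand-in for invoking a hosted specialist model (Huggingface models in JARVIS).
    return f"<{model_name} output for '{task_input}'>"

def handle_request(request: str) -> str:
    # 1. Task planning: the controller breaks the request into typed sub-tasks.
    plan = ask_llm(f"Plan: split '{request}' into sub-tasks")
    results = []
    for step in plan.split("|"):
        task_type, task_input = (part.strip() for part in step.split(":", 1))
        # 2. Model selection: map each sub-task type to a specialist model.
        # 3. Task execution: the chosen sub-model does the actual work.
        results.append(run_model(MODEL_REGISTRY[task_type], task_input))
    # 4. Response generation: the controller summarizes the sub-model outputs.
    return ask_llm(f"Summarize these results for the user: {results}")

print(handle_request("draw an alien invasion and write a poem about it"))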

SaTaMaS
Apr 18, 2003

Lucid Dream posted:

Well, by that definition doesn't ChatGPT already pass that threshold?

ChatGPT can only receive input and output via a chat window, which is very limiting all things considered.

SaTaMaS
Apr 18, 2003

KillHour posted:

This is basically the only time someone could make ChaosGPT. It's obviously a joke, and it works because it's bad. Before this, all the existing tech would be too limited to be bad in an interesting way. After this, it will probably be too good to be a joke.

AutoGPT is hooked up to a console and has a lot more "freedom" without a lot of new capabilities. It's still limiting, but it's not the main reason these systems are limited. A human could do a lot with a console and an internet connection.

The main thing these systems can't do is improve themselves - learn from doing.

Nah the main limitation is that no LLM has intrinsic desires so all it can do is respond to instructions. AutoGPT isn't going to try to hack any nuclear codes unless someone tells it to, so the alignment problem is the real issue there. Once ChatGPT has a robot body it will need to worry about self-preservation and all the sorts of situations that result from Asimov's laws of robotics. Learning from doing is important but LLMs do so much learning up front that it's not as much of a requirement for intelligent behavior as it is for people.

SaTaMaS
Apr 18, 2003

KillHour posted:

I think you're being too limiting about the definition of what an intrinsic desire is. I would argue they do have intrinsic desires in the same way that a plant has an intrinsic desire to grow towards the sun. It's not necessarily a conscious thought - just a response to stimuli. But one that is engrained into it at the most basic level. I think what humans consider intrinsic goals are closer to a plant growing towards the sun than anything more logical. You literally cannot change them and you probably aren't even aware of them directly. In the same way, a model has no "choice" but to output the natural consequences of its model weights. To give an example - if the model is trained to complete a task and is capable enough, it is probably going to try to stop you from preventing it from completing that task, just because that would interfere with the thing it was trained to do. This might sound like I'm mincing words, but I think it's just that we are uncomfortable about thinking of humans as really advanced automata.

The thing ChatGPT doesn't do is create instrumental goals - intermediate goals that further its ability to do the intrinsic stuff. That's where it falls flat on its face.

The model doesn't have a "choice" or "desire" to complete a task; it is just executing the function it was designed for. It's no more useful to attribute human-like characteristics, such as desires or goals, to these models than it is to say a thermostat desires to keep a room a certain temperature.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

Saying they have desires is dumb, sure. But saying they have goals is perfectly reasonable, talking about goals does not require anything remotely human-like to be attributed, and goal modeling and terminology (like the difference between instrumental and terminal goals) is a useful and effective way to describe AI functionality.

It really isn't, because of how easily goal terminology gets munged into intentionality, which then gets munged into consciousness and anthropomorphism.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

You said it wasn't useful - it clearly is, or you'd be offering an alternate framework for discussing the issue. If we gave up on useful language in scientific fields because idiots somewhere were bad at using it, there's a whole lot of stuff we'd be completely unable to talk about in a meaningful way.

The key is to keep in mind that it's the user who has goals, while ChatGPT has tasks or objectives just like any other program. In this case it's processing input, including context, and utilizing its training data to produce a relevant output.

SaTaMaS
Apr 18, 2003

KillHour posted:

This is actually my point - our core goals are mostly "seek pleasure and avoid pain" and both of those things come from chemical and physiological responses we have no control over. The important thing is we don't need to experience the pain for us to want to avoid it - our brains are hardwired to do or not do certain things. That's pretty much the limit of my knowledge of the subject though, so anything else is speculation. The idea that a trained model may be able to exhibit goal-seeking behavior from the training as a proxy for how our brain is "trained" to avoid pain is definitely speculation. But I think it's plausible and can't be completely ruled out.

It can be ruled out because you're confusing a metaphor (exhibiting goal-seeking behavior) with the reality that it performs specific tasks (generating coherent and relevant responses) based on the data it was trained on.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

I'm honestly not sure what he thinks a goal is at this point. Magic, probably.

Having a goal requires consciousness and intentionality

SaTaMaS
Apr 18, 2003

KillHour posted:

Does it? That really sounds like an assertion begging the question.

You just get stuck in a circle with things that you think are conscious having goals and things you think aren't conscious just having things that they do.

Taking the intentional stance is a useful last resort when there's no simpler way to explain something's actions. For people, just measuring brain activity won't tell us much at all about the person involved, so we need to attribute intentionality to get a useful description. For LLMs, their "goals" are determined by their creators and are essentially programmed tasks that the system is designed to perform, so attributing intentionality isn't necessary.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

But you think having an objective doesn't, apparently? That doesn't make much sense, considering they are synonymous. Why should we use the word the way you want to here, where it explicitly requires something to have those things, instead of the way it's traditionally used, especially within technological fields but also elsewhere, where it does not?

Because it's very useful to differentiate between the intentional stance and the design stance.

SaTaMaS
Apr 18, 2003

gurragadon posted:

I was unfamiliar with these terms but Wikipedia made it seem like the design stance takes only the function of a system for granted as working and doesn't care about the structure or design of the system? The mental processes if you will.

Am I off base? Could you relate it to the difference between goals and objectives, like are you saying the goal and objective represent different stances? Or are you saying that the programmer "takes" the intentional stance from the AI program they create?

Edit: This is the wikipedia article. https://en.wikipedia.org/wiki/Intentional_stance

https://sites.google.com/site/minddict/intentional-stance-the#:~:text=Just%20as%20the%20design%20stance,object%20as%20a%20rational%20agent.

quote:

The Physical Stance and the Design Stance

The physical stance stems from the perspective of the physical sciences. To predict the behavior of a given entity according to the physical stance, we use information about its physical constitution in conjunction with information about the laws of physics. Suppose I am holding a piece of chalk in my hand and I predict that it will fall to the floor when I release it. This prediction relies on (i) the fact that the piece of chalk has mass and weight; and (ii) the law of gravity. Predictions and explanations based on the physical stance are exceedingly common. Consider the explanations of why water freezes at 32 degrees Fahrenheit, how mountain ranges are formed, or when high tide will occur. All of these explanations proceed by way of the physical stance.

When we make a prediction from the design stance, we assume that the entity in question has been designed in a certain way, and we predict that the entity will thus behave as designed. Like physical stance predictions, design stance predictions are commonplace. When in the evening a student sets her alarm clock for 8:30 a.m., she predicts that it will behave as designed: i.e., that it will buzz at 8:30 the next morning. She does not need to know anything about the physical constitution of the alarm clock in order to make this prediction. There is no need, for example, for her to take it apart and weigh its parts and measure the tautness of various springs. Likewise, when someone steps into an elevator and pushes "7," she predicts that the elevator will take her to the seventh floor. Again, she does not need to know any details about the inner workings of the elevator in order to make this prediction.

Design stance predictions are riskier than physical stance predictions. Predictions made from the design stance rest on at least two assumptions: first, that the entity in question is designed as it is assumed to be; and second, the entity will perform as it is designed without malfunctioning. The added risk almost always proves worthwhile, however. When we are dealing with a thing that is the product of design, predictions from the design stance can be made with considerably more ease than the comparable predictions from the physical stance. If the student were to take the physical stance towards the alarm clock in an attempt to predict whether it will buzz at 8:30 a.m., she would have to know an extraordinary amount about the alarm clock’s physical construction.

This point can be illustrated even more dramatically by considering a complicated designed object, like a car or a computer. Every time you drive a car you predict that the engine will start when you turn the key, and presumably you make this prediction from the design stance—that is, you predict that the engine will start when you turn the key because that is how the car has been designed to function. Likewise, you predict that the computer will start up when you press the "on" button because that is how the computer has been designed to function. Think of how much you would have to know about the inner workings of cars and computers in order to make these predictions from the physical stance!

The fact that an object is designed, however, does not mean that we cannot apply the physical stance to it. We can, and in fact, we sometimes should. For example, to predict what the alarm clock will do when knocked off the nightstand onto the floor, it would be perfectly appropriate to adopt the physical stance towards it. Likewise, we would rightly adopt the physical stance towards the alarm clock to predict its behavior in the case of a design malfunction. Nonetheless, in most cases, when we are dealing with a designed object, adopting the physical stance would hardly be worth the effort. As Dennett states, "Design-stance prediction, when applicable, is a low-cost, low-risk shortcut, enabling me to finesse the tedious application of my limited knowledge of physics." (Dennett 1996)

The sorts of entities so far discussed in relation to design-stance predictions have been artifacts, but the design stance also works well when it comes to living things and their parts. For example, even without any understanding of the biology and chemistry underlying anatomy we can nonetheless predict that a heart will pump blood throughout the body of a living thing. The adoption of the design stance supports this prediction; that is what hearts are supposed to do (i.e., what nature has "designed" them to do).


The Intentional Stance

As already noted, we often gain predictive power when moving from the physical stance to the design stance. Often, we can improve our predictions yet further by adopting the intentional stance. When making predictions from this stance, we interpret the behavior of the entity in question by treating it as a rational agent whose behavior is governed by intentional states. (Intentional states are mental states such as beliefs and desires which have the property of "aboutness," that is, they are about, or directed at, objects or states of affairs in the world. See intentionality.) We can view the adoption of the intentional stance as a four-step process. (1) Decide to treat a certain object X as a rational agent. (2) Determine what beliefs X ought to have, given its place and purpose in the world. For example, if X is standing with his eyes open facing a red barn, he ought to believe something like, "There is a red barn in front of me." This suggests that we can determine at least some of the beliefs that X ought to have on the basis of its sensory apparatus and the sensory exposure that it has had. Dennett (1981) suggests the following general rule as a starting point: "attribute as beliefs all the truths relevant to the system’s interests (or desires) that the system’s experience to date has made available." (3) Using similar considerations, determine what desires X ought to have. Again, some basic rules function as starting points: "attribute desires for those things a system believes to be good for it," and "attribute desires for those things a system believes to be best means to other ends it desires." (Dennett 1981) (4) Finally, on the assumption that X will act to satisfy some of its desires in light of its beliefs, predict what X will do.

Just as the design stance is riskier than the physical stance, the intentional stance is riskier than the design stance. (In some respects, the intentional stance is a subspecies of the design stance, one in which we view the designed object as a rational agent. Rational agents, we might say, are those designed to act rationally.) Despite the risks, however, the intentional stance provides us with useful gains of predictive power. When it comes to certain complicated artifacts and living things, in fact, the predictive success afforded to us by the intentional stance makes it practically indispensable. Dennett likes to use the example of a chess-playing computer to make this point. We can view such a machine in several different ways:
as a physical system operating according to the laws of physics;
as a designed mechanism consisting of parts with specific functions that interact to produce certain characteristic behavior; or
as an intentional system acting rationally relative to a certain set of beliefs and goals
Given that our goal is to predict and explain a given entity’s behavior, we should adopt the stance that will best allow us to do so. With this in mind, it becomes clear that adopting the intentional stance is for most purposes the most efficient and powerful way (if not the only way) to predict and explain what a well designed chess-playing computer will do. There are probably hundreds of different computer programs that can be run on a PC in order to convert it into a chess player. Though the computers capable of running these programs have different physical constitutions, and though the programs themselves may be designed in very different ways, the behavior of a computer running such a program can be successfully explained if we think of it as a rational agent who knows how to play chess and who wants to checkmate its opponent’s king. When we take the intentional stance towards the chess-playing computer, we do not have to worry about the details of its physical constitution or the details of its program (i.e., its design). Rather, all we have to do is determine the best legal move that can be made given the current state of the game board. Once we treat the computer as a rational agent with beliefs about the rules and strategies of chess and the locations of the pieces on the game board, plus the desire to win, it follows that the computer will make the best move available to it.

Of course, the intentional stance will not always be useful in explaining the behavior of the chess-playing computer. If the computer suddenly started behaving in a manner inconsistent with something a reasonable chess player would do, we might have to adopt the design stance. In other words, we might have to look at the particular chess-playing algorithm implemented by the computer in order to predict what it will subsequently do. And in cases of more extreme malfunction—for example, if the computer screen were suddenly to go blank and the system were to freeze up—we would have to revert to thinking of it as a physical object to explain its behavior adequately. Usually, however, we can best predict what move the computer is going to make by adopting the intentional stance towards it. We do not come up with our prediction by considering the laws of physics or the design of the computer, but rather, by considering the reasons there are in favor of the various available moves. Making an idealized assumption of optimal rationality, we predict that the computer will do what it rationally ought to do.

Objectives are typically more quantifiable than goals. Using the design stance, "objective" emphasizes that these systems are designed to perform specific tasks based on their algorithms and training data, without consciousness or intentions. These tasks are programmed by their creators and can be thought of as objectives that the AI system is designed to achieve.

SaTaMaS fucked around with this message at 21:09 on Apr 18, 2023

SaTaMaS
Apr 18, 2003

KillHour posted:

Okay, but humans are "designed" by evolution to do things that make us more likely to reproduce. It just seems like an arbitrary distinction created to conform to the idea that we're special in a way a computer is not or cannot be. There's a bunch of handwaving going on to gloss over the limitations in our knowledge. It's possible there's some fundamental thing that makes intent real, but it's also possible we're just post-hoc justifying unconscious predisposition as intent.

Cool so we're at the point of discussing intelligent design.

SaTaMaS
Apr 18, 2003

KillHour posted:

You just made that strawman up and it's incredibly blatant. I didn't say some intelligent god designed us. I said our brains have an inherent structure that is tuned or trained or designed or shaped or whatever you want to call it by evolution. This is not controversial.

Edit: If anything, you're the one saying there's something fundamental about our designing these systems that makes any potential "goals" they might someday create a product of our intent instead of emergent.

It's extremely controversial, you're literally talking about intelligent design. The whole point of evolution is that it provides a way to no longer need a designer.

SaTaMaS
Apr 18, 2003

gurragadon posted:

I think I understand what you are saying now, tell me if I'm off.

When we take the intentional stance towards AI programs we may gain information, but that information is more likely to be incorrect because we are making assumptions. It is preferable to take the design stance when we can, as we can with AI programs, because there is less room for error since we are assuming less.

Or maybe another way to say it is we are taking the intentional stance towards AI programs because it is easier to describe its behavior that way.

Yes exactly

SaTaMaS
Apr 18, 2003

Quinch posted:

Yeah isn’t this the whole point of AI really? It’s given a goal defined by a measure of some sort and the AI works out the correct actions of its possible outputs to achieve this. I wouldn’t say it’s desire as such but saying an AI has goals and it’s programmed to maximise them is perfectly reasonable.

Sure, in casual conversation it doesn't really matter, and even in AI systems things like beliefs, desires, and intentions are employed as metaphors. However, in any somewhat serious discussion about AI it's important to distinguish between things that are determined by their design and training data, and the point where something resembling personal motivations and intentions starts to determine its goals, assuming such a thing is even possible for an AI.

SaTaMaS
Apr 18, 2003
One annoying thing about using ChatGPT for coding is that whether you give it a great idea, a good idea, or a mediocre idea, it responds pretty much the same way: it gives some pros and cons and an example implementation. I'm not sure whether this is the RLHF conditioning trying to avoid hurting my feelings or whether it really has no concept of a "good" implementation vs a "bad" implementation, assuming both are free of bugs.

SaTaMaS
Apr 18, 2003

America Inc. posted:

Small diversion in topic:
I find it funny that people need to be reminded that ChatGPT doesn't always provide factual information, given that growing up with the Internet you knew that you shouldn't trust everything you find on Google and Wikipedia.

In fact, given that most people don't click through more than the first two pages of a search when these queries have thousands or millions of results, search engines were already mostly returning garbage with a few actually "good" results on top provided by companies who are good at SEO. Search engines have always been more of an advertising tool than a knowledge tool. A search engine provides you with a shelf of different items of information that compete and whose value is not in truth but in utility. There is only one answer to the question "what is 1+1?" but there's lots of different ways to make pasta. For this reason I predict Google Bard is going to be a flop.

E: Google was apparently aware of this already, which is why they were more conservative in launching in the first place.

Wikipedia isn't perfect but it's curated by people who are usually experts on a given topic.

Which leads into a question: what would be the AI equivalent of Wikipedia? That is to say, how can you provide a curated repository of knowledge that also provides answers in a conversational format?

As far as LLMs go, this is called "temperature". A lower temperature setting results in a more predictable (and usually more conservative) response, while a higher temperature setting results in a more creative response. Though even a low temperature setting can still produce incorrect responses if the model is poorly trained.
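
For the curious, temperature is just a divisor applied to the model's raw scores before sampling the next token. A toy version in Python (illustrative only, not any particular vendor's implementation):

code:

import numpy as np

def sample_token(logits: np.ndarray, temperature: float, rng=np.random.default_rng()) -> int:
    """Pick a token index from raw model scores after temperature scaling."""
    scaled = logits / max(temperature, 1e-8)   # low temperature sharpens, high flattens
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])       # toy scores for 4 candidate tokens
print(sample_token(logits, temperature=0.2))   # almost always picks token 0
print(sample_token(logits, temperature=1.5))   # spreads picks across the other tokens too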

SaTaMaS
Apr 18, 2003

cat botherer posted:

You're kind of touching on symbolic AI or expert systems, which was really the first way of doing AI before statistical methods took over. They're really useful in some problem domains. I think figuring out how to integrate the two approaches will be a big deal in the future, given that they tend to be good at complementary things.

DeepMind actually found a way to do this for AlphaFold; they figured out how to build the constraints of atomic physics into the neural network as an immutable ground truth when figuring out how proteins fold.

SaTaMaS
Apr 18, 2003

SCheeseman posted:

I don't think it's practical to heavily regulate machine learning, given its already pervasive use in a bunch of benign applications (and less benign ones that are desirable to government and business). The greater point was that anti-AI advocates' solutions that use copyright as a fix are a long-term self-own that inhibits the general public's open access to the technology, which they might not care about because they don't want to use it, but probably will care about once they are forced to use it to keep their jobs and need to pay rent for access to it.

Has there ever been a form of automation technology, anything truly useful, that was inhibited long term by political and social pressure? Not a rhetorical question.

In the UK, the Red Flag Act of 1865 required a man to walk in front of every automobile waving a red flag to warn pedestrians and horse-drawn vehicles. It wasn't repealed until 1896.

SaTaMaS
Apr 18, 2003

KillHour posted:

The part of art that is being automated here is the labor part, not the creative part. I keep using this example, but it's similar to DAWs becoming a way to make music that doesn't require knowing how to play every instrument in the song. Songs that once required a full band can be made by a single producer. Bands that play live instruments still exist, but now so do a bunch of artists who wouldn't have been able to hire a professional drummer to help them create a song. Instead, they spend $25/month on Sonicpass and the barrier to entry comes way down.

No, it's just the opposite. Bands can still make money playing live, that's the labor part. Theoretically artists could paint pictures that were created with an AI, but I doubt people will pay much for that. OTOH I can imagine EDM artists performing AI created dance music live and making money selling tickets.

SaTaMaS
Apr 18, 2003

KillHour posted:

Playing a sampled drum kit is not the same as playing the drums. Playing a sampled guitar is not the same as playing a guitar. I'm not discounting the amount of effort that goes into EDM sets (in fact, they are my favorite genre, and I have dabbled in making my own and I can play the guitar), but I'm pointing out that before sampled instruments became a thing, it would be an incredible amount of effort and money for a solo artist to make music like that. When sampled instruments first came out, a bunch of people complained that it hurt "real" musicians and lamented the death of learning to do things "the old fashioned way." In ways that resemble artists in this thread lamenting that art is changing from underneath them. But what it ended up doing was creating opportunities for entirely new kinds of musical expression, in much the same way generative AI can do for the visual arts.

The EDM artist still has to be there in person in order to get paid. There isn't an equivalent to live concerts for the visual arts.

SaTaMaS
Apr 18, 2003

BrainDance posted:


Art shows exist. They serve baked brie and have live jazz it's great.

That's not really the same. With art shows they're getting paid once for their creativity and labor, like if a musician could only sell each song they wrote once to a single person. With tangible art that can still work, but AI will make that basically irrelevant for digital art, and the market for people who will only buy art manually created by a person is minuscule compared to the number of people who go to music concerts.
