Lucid Dream
Feb 4, 2003

That boy ain't right.

RPATDO_LAMD posted:

Yeah chaosGPT is basically just asking the chatbot "generate a list of steps for an evil ai to take if it wants to destroy humanity", and then using some basic text processing to feed those back in and ask the AI "how do I complete step 1". It's far from a serious threat of accomplishing anything.

The thing to keep in mind with the current generation of LLMs is that they're sort of like the glue that can hold various processes together that would traditionally have a human bottleneck, and the capabilities of these systems are basically only constrained by the ad-hoc APIs that people build on top of them. When you combine that with GPT's ability to recursively hallucinate tasks and plans all the way from high level goals to the specific implementations... It's not hard to imagine where this goes as this stuff matures. Right now these auto-gpt systems are mostly just hallucinating their plans, but we're already several steps down the path of them being able to touch the real world by using external agents to retrieve, act on and incorporate information for future recursive steps. Any one prompt execution isn't some kind of super-intelligent output, but these things can be strung together so that they can check each other's work and analyze problems.
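
Just to make that concrete, the loop RPATDO_LAMD is describing is roughly the sketch below. It's a toy version, not the actual ChaosGPT/AutoGPT code, and ask_llm is just a made-up stand-in for whatever chat-completion API you'd actually call.

code:

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    raise NotImplementedError("plug in your completion API here")

def run_agent(goal: str, max_steps: int = 5) -> str:
    # 1. Ask the model to break the goal into steps.
    plan = ask_llm(f"List the steps needed to achieve this goal:\n{goal}")
    steps = [line.lstrip("-*0123456789. ").strip() for line in plan.splitlines() if line.strip()]

    # 2. Feed each step back in, carrying forward the previous results,
    #    and ask "how do I complete step N?"
    context = ""
    for i, step in enumerate(steps[:max_steps], start=1):
        result = ask_llm(
            f"Goal: {goal}\nResults so far:\n{context}\n"
            f"How do I complete step {i}: {step}?"
        )
        context += f"\nStep {i} result: {result}"
    return context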

We're now at a point where reading a news story about a rogue AI system causing problems is a realistic possibility. It's still pretty far from a real existential threat... but nobody knows how long it will take to get from here to there, considering this stuff is improving in every dimension.


SaTaMaS
Apr 18, 2003

Lucid Dream posted:

Well, by that definition doesn't ChatGPT already pass that threshold?

ChatGPT can only receive input and output via a chat window, which is very limiting all things considered.

KillHour
Oct 28, 2007


This is basically the only time someone could make ChaosGPT. It's obviously a joke, and it works because it's bad. Before this, all the existing tech would be too limited to be bad in an interesting way. After this, it will probably be too good to be a joke.

SaTaMaS posted:

ChatGPT can only receive input and output via a chat window, which is very limiting all things considered.

AutoGPT is hooked up to a console and has a lot more "freedom" without a lot of new capabilities. It's still limiting, but it's not the main reason these systems are limited. A human could do a lot with a console and an internet connection.
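
To be clear about what "hooked up to a console" means in practice: it's basically a thin wrapper that asks the model for a shell command, runs it, and feeds the output back. Something like this toy sketch, where ask_llm is a made-up stand-in and you obviously wouldn't run arbitrary model output unsupervised:

code:

import subprocess

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    raise NotImplementedError("plug in your completion API here")

def console_step(goal: str, history: str) -> str:
    cmd = ask_llm(
        f"Goal: {goal}\nWhat has happened so far:\n{history}\n"
        "Reply with a single shell command to run next, and nothing else."
    ).strip()
    # Keep a human in the loop before executing anything the model suggests.
    if input(f"Run `{cmd}`? [y/N] ").strip().lower() != "y":
        return history + f"\n(skipped: {cmd})"
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=60)
    return history + f"\n$ {cmd}\n{out.stdout}{out.stderr}"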

The main thing these systems can't do is improve themselves - learn from doing.

KillHour fucked around with this message at 16:02 on Apr 17, 2023

duck monster
Dec 15, 2004

KillHour posted:

This is basically the only time someone could make ChaosGPT. It's obviously a joke, and it works because it's bad. Before this, all the existing tech would be too limited to be bad in an interesting way. After this, it will probably be too good to be a joke.

AutoGPT is hooked up to a console and has a lot more "freedom" without a lot of new capabilities. It's still limiting, but it's not the main reason these systems are limited. A human could do a lot with a console and an internet connection.

The main thing these systems can't do is improve themselves - learn from doing.

Although that limitation would appear to be an artefact of design. It can't remember poo poo. Orthodox Transformers aren't really *supposed* to learn. But it's not hard to imagine a fairly trivial update to the design to feed its buffers back into its training in real time.

SaTaMaS
Apr 18, 2003

KillHour posted:

This is basically the only time someone could make ChaosGPT. It's obviously a joke, and it works because it's bad. Before this, all the existing tech would be too limited to be bad in an interesting way. After this, it will probably be too good to be a joke.

AutoGPT is hooked up to a console and has a lot more "freedom" without a lot of new capabilities. It's still limiting, but it's not the main reason these systems are limited. A human could do a lot with a console and an internet connection.

The main thing these systems can't do is improve themselves - learn from doing.

Nah the main limitation is that no LLM has intrinsic desires so all it can do is respond to instructions. AutoGPT isn't going to try to hack any nuclear codes unless someone tells it to, so the alignment problem is the real issue there. Once ChatGPT has a robot body it will need to worry about self-preservation and all the sorts of situations that result from Asimov's laws of robotics. Learning from doing is important but LLMs do so much learning up front that it's not as much of a requirement for intelligent behavior as it is for people.

KillHour
Oct 28, 2007


duck monster posted:

Although that limitation would appear to be an artefact of design. It can't remember poo poo. Orthodox Transformers aren't really *supposed* to learn. But it's not hard to imagine a fairly trivial update to the design to feed its buffers back into its training in real time.

I know why it can't do it - I'm just stating that it can't. I'm not sure if feeding the data back would be enough though. Humans have both long term and short term memory (which is different from what they call LSTM in AI), and AI probably needs both "what just happened" and some way to self-modify to adjust to how well it worked. Otherwise, it would just go back to being as stupid as it started whenever it ran out of memory.

SaTaMaS posted:

Nah the main limitation is that no LLM has intrinsic desires so all it can do is respond to instructions. AutoGPT isn't going to try to hack any nuclear codes unless someone tells it to, so the alignment problem is the real issue there. Once ChatGPT has a robot body it will need to worry about self-preservation and all the sorts of situations that result from Asimov's laws of robotics. Learning from doing is important but LLMs do so much learning up front that it's not as much of a requirement for intelligent behavior as it is for people.

I think you're being too limiting about the definition of what an intrinsic desire is. I would argue they do have intrinsic desires in the same way that a plant has an intrinsic desire to grow towards the sun. It's not necessarily a conscious thought - just a response to stimuli. But one that is engrained into it at the most basic level. I think what humans consider intrinsic goals are closer to a plant growing towards the sun than anything more logical. You literally cannot change them and you probably aren't even aware of them directly. In the same way, a model has no "choice" but to output the natural consequences of its model weights. To give an example - if the model is trained to complete a task and is capable enough, it is probably going to try to stop you from preventing it from completing that task, just because that would interfere with the thing it was trained to do. This might sound like I'm mincing words, but I think it's just that we are uncomfortable about thinking of humans as really advanced automata.

The thing ChatGPT doesn't do is create instrumental goals - intermediate goals that further its ability to do the intrinsic stuff. That's where it falls flat on its face.

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

The main thing these systems can't do is improve themselves - learn from doing.

It's a little more complicated than that. If I ask GPT4 for help with a programming issue and it gives me incorrect output given the API I'm using, I can paste the documentation and it will "learn" and apply that knowledge to the task. It doesn't natively store that information long term, but the LLM itself doesn't necessarily have to store it and it could be retrieved by some higher level systems. The long term memory problem is far from solved, but I think we're already at the point where we have to split hairs about what counts as "learning".
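
The "learning" in that example is really just the documentation riding along in the prompt. A toy sketch of the idea, with ask_llm as a made-up stand-in rather than any real client code:

code:

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    raise NotImplementedError("plug in your completion API here")

def fix_code_with_docs(broken_code: str, api_docs: str, error_msg: str) -> str:
    # The "learned" knowledge only exists inside this prompt's context window.
    prompt = (
        "Here is the documentation for the API I'm using:\n"
        f"{api_docs}\n\n"
        "This code is wrong:\n"
        f"{broken_code}\n\n"
        f"It fails with: {error_msg}\n"
        "Rewrite the code so it matches the documentation."
    )
    return ask_llm(prompt)

A higher-level system just has to decide what to paste in; the model itself never changes.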

KillHour
Oct 28, 2007


Lucid Dream posted:

It's a little more complicated than that. If I ask GPT4 for help with a programming issue and it gives me incorrect output given the API I'm using, I can paste the documentation and it will "learn" and apply that knowledge to the task. It doesn't natively store that information long term, but the LLM itself doesn't necessarily have to store it and it could be retrieved by some higher level systems. The long term memory problem is far from solved, but I think we're already at the point where we have to split hairs about what counts as "learning".

Granted, but I'm being more precise about the meaning in that the model doesn't change. Giving me a textbook I can look things up in will make me give you the answer more often, but if I have to look it up every time, I'm not going to be able to use it to solve different but related problems. As you learn new things, you get more of an intuition from them. Just having the reference available probably isn't as good as if it was trained to give the answer correctly the first time. I'm not sure how you'd objectively test that with GPT though - would be interesting if you could.

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

Granted, but I'm being more precise about the meaning in that the model doesn't change. Giving me a textbook I can look things up in will make me give you the answer more often, but if I have to look it up every time, I'm not going to be able to use it to solve different but related problems. As you learn new things, you get more of an intuition from them. Just having the reference available probably isn't as good as if it was trained to give the answer correctly the first time. I'm not sure how you'd objectively test that with GPT though - would be interesting if you could.

There is no question that a system that could continue training the model in real time would be more powerful, but I'm not convinced it's necessary to have significant learning capabilities. Humans are really good at building higher level abstractions with tools, and I think a lot of folks are under-appreciating what you can do with access to a suite of cognitive building blocks like text summarization and vector embedding. The LLMs themselves don't necessarily have to be super-intelligent to power a super-intelligent system, in much the way any individual neuron isn't all that smart.
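
For example, "vector embedding as a building block" boils down to something like the sketch below. embed and ask_llm are made-up stand-ins for whatever embedding and completion models you'd plug in; the point is just how little glue code it takes to bolt a crude long-term memory onto an LLM.

code:

import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for an embedding model.
    raise NotImplementedError("plug in an embedding model here")

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a completion model.
    raise NotImplementedError("plug in a completion model here")

def answer_with_memory(question: str, notes: list[str], top_k: int = 3) -> str:
    # Embed the stored notes and the question, retrieve the closest notes
    # by cosine similarity, then hand them to the model as context.
    note_vecs = np.array([embed(n) for n in notes])
    q = embed(question)
    sims = note_vecs @ q / (np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q))
    best = [notes[i] for i in np.argsort(sims)[::-1][:top_k]]
    return ask_llm("Relevant notes:\n" + "\n".join(best) + f"\n\nQuestion: {question}")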

BrainDance
May 8, 2007

Disco all night long!

duck monster posted:

Although that limitation would appear to be an artefact of design. It can't remember poo poo. Orthodox Transformers aren't really *supposed* to learn. But it's not hard to imagine a fairly trivial update to the design to feed its buffers back into its training in real time.

I'm not disagreeing with you (this is a thing you could do now, literally with just a bunch of cron jobs or something to fine-tune it every night on what it learned that day)

But the hardware requirements are what makes that not practical with the very large models. Like, who's gonna pay for all those A100s? Cuz it'd be a lot, like a lot a lot, given how much it'd have to learn. Though we've gotten it down with LoRAs (which might be safer: if it learns something the wrong way and gets worse, you can just yank that day's LoRA out), but still, that's intense for something like GPT4. I don't even know how long it would take to do that even with OpenAI-type setups; the training time might still be over a day.
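
Concretely, the cron-job-plus-LoRA version would be something like the sketch below: a script scheduled nightly that trains one adapter per day on that day's logs. Everything in it is hypothetical (the paths, the train_lora helper, the log format); it's just to show the shape of the idea.

code:

from datetime import date
from pathlib import Path

def train_lora(base_model: str, dataset: Path, output: Path) -> None:
    # Hypothetical stand-in for an actual LoRA fine-tuning run.
    raise NotImplementedError("plug in your LoRA training code here")

def nightly_update() -> None:
    today = date.today().isoformat()
    dataset = Path(f"/var/llm/conversations/{today}.jsonl")  # hypothetical path to today's logs
    adapter = Path(f"/var/llm/adapters/{today}")              # one adapter directory per day
    if dataset.exists():
        train_lora("base-model", dataset, adapter)
        # If the model gets worse, "yanking that day out" is just deleting this adapter dir.

if __name__ == "__main__":
    # Scheduled with something like:  0 3 * * *  python nightly_lora.py
    nightly_update()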

I think the more practical way is a change in how transformers work, like an updated, more efficiently fine-tunable form of model, maybe one designed with a very intentional "memory spot" that can be added to quickly and that the model knows exactly what it is. Or just really upping the token limit to an insane degree, so that you really can fit everything it's learned in with the prompt.

Tei
Feb 19, 2011
There's a thing called "parasitic computing", or something like that.

You could send a challenge to a remote computer. The remote computer would use its resources to solve the challenge, then return an answer.
Like, you could send packets to a computer where the CRC is intentionally wrong, and the remote computer will ask for those packets again, except for the ones where the CRC is correct. Basically solving the problem of "sum all these numbers" for you. I don't remember exactly what this is called; parasitic computing?
A system A could parasitize a system B, using resources in B intended for other purposes, to get computation done for A.

Too bad networks are slow, so you could get the work done faster by computing it locally than by trying to abuse a remote computer.

But maybe an AGI can find a problem that 1) is very hard to solve, 2) requires very little data to be sent over the network, 3) can use parasitic computing, 4) can be sent to many different internet hosts, and 5) doesn't need an answer in nanoseconds, with a delay of whole seconds being good enough.

If there were something that matched these 5 conditions, a trickster AGI could expand its capabilities using servers connected to the internet. Until it's found and IP-banned, probably.
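
If I'm remembering the same thing, the original trick (Barabási and others, around 2001) encoded candidate solutions into TCP checksums, so the receiving host's normal checksum check did the work of rejecting wrong answers. Here's a toy simulation of that idea, no raw sockets or real networking, just to show the shape of it:

code:

def ones_complement_sum16(words: list[int]) -> int:
    # 16-bit ones'-complement sum, the arithmetic behind the TCP/IP checksum.
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return total

def remote_host(payload_words: list[int], checksum_field: int) -> bool:
    # The unwitting "server": it only responds if the checksum verifies.
    return ones_complement_sum16(payload_words + [checksum_field]) == 0xFFFF

def parasite_search(candidates: list[list[int]], checksum_field: int) -> list[list[int]]:
    # The "parasite": send every candidate; the ones the host acknowledges are,
    # by construction, exactly the ones satisfying the arithmetic constraint.
    return [c for c in candidates if remote_host(c, checksum_field)]

if __name__ == "__main__":
    # Which pairs of 16-bit words sum (ones'-complement) to 0x1234?
    target = 0x1234
    checksum_field = 0xFFFF - target  # verification succeeds only when the sum hits the target
    candidates = [[0x1000, 0x0234], [0x1000, 0x0235], [0x0034, 0x1200]]
    print(parasite_search(candidates, checksum_field))  # -> the two pairs that sum to 0x1234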

SaTaMaS
Apr 18, 2003

KillHour posted:

I think you're being too limiting about the definition of what an intrinsic desire is. I would argue they do have intrinsic desires in the same way that a plant has an intrinsic desire to grow towards the sun. It's not necessarily a conscious thought - just a response to stimuli. But one that is engrained into it at the most basic level. I think what humans consider intrinsic goals are closer to a plant growing towards the sun than anything more logical. You literally cannot change them and you probably aren't even aware of them directly. In the same way, a model has no "choice" but to output the natural consequences of its model weights. To give an example - if the model is trained to complete a task and is capable enough, it is probably going to try to stop you from preventing it from completing that task, just because that would interfere with the thing it was trained to do. This might sound like I'm mincing words, but I think it's just that we are uncomfortable about thinking of humans as really advanced automata.

The thing ChatGPT doesn't do is create instrumental goals - intermediate goals that further its ability to do the intrinsic stuff. That's where it falls flat on its face.

The model doesn't have a "choice" or "desire" to complete a task; it is just executing the function it was designed for. It's no more useful to attribute human-like characteristics, such as desires or goals, to these models than it is to say a thermostat desires to keep a room a certain temperature.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
Plants grow towards the sun because of a chemical reaction: sunlight makes plant cells reproduce slower, which in turn means that the cells on the other side grow faster and cause the plant as a whole to tilt towards the sun. At no point does anything resembling a "desire" or even a "response" come into the picture. Even humans have a billion autonomous functions which you do not and cannot think about; they're purely chemical reactions, or work by basic physical principles like osmosis. They do not require drive or motivation, they are simply built in a way where the intended result is a natural consequence of their existence.

Edit: By which I mean these processes are extremely dumb and cannot do anything like "stop someone from trying to interfere with them" because that isn't an input they can process in the first place.

Clarste fucked around with this message at 01:08 on Apr 18, 2023

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

SaTaMaS posted:

The model doesn't have a "choice" or "desire" to complete a task; it is just executing the function it was designed for. It's no more useful to attribute human-like characteristics, such as desires or goals, to these models than it is to say a thermostat desires to keep a room a certain temperature.

Saying they have desires is dumb, sure. But saying they have goals is perfectly reasonable, talking about goals does not require anything remotely human-like to be attributed, and goal modeling and terminology (like the difference between instrumental and terminal goals) is a useful and effective way to describe AI functionality.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

Saying they have desires is dumb, sure. But saying they have goals is perfectly reasonable, talking about goals does not require anything remotely human-like to be attributed, and goal modeling and terminology (like the difference between instrumental and terminal goals) is a useful and effective way to describe AI functionality.

It really isn't, because of how easily goal terminology gets munged into intentionality, which then gets munged into consciousness and anthropomorphism.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

SaTaMaS posted:

It really isn't, because of how easily goal terminology gets munged into intentionality, which then gets munged into consciousness and anthropomorphism.

You said it wasn't useful - it clearly is, or you'd be offering an alternate framework for discussing the issue. If we gave up on useful language in scientific fields because idiots somewhere were bad at using it, there's a whole lot of stuff we'd be completely unable to talk about in a meaningful way.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

You said it wasn't useful - it clearly is, or you'd be offering an alternate framework for discussing the issue. If we gave up on useful language in scientific fields because idiots somewhere were bad at using it, there's a whole lot of stuff we'd be completely unable to talk about in a meaningful way.

The key is to keep in mind that it's the user who has goals, while ChatGPT has tasks or objectives just like any other program. In this case it's processing input, including context, and utilizing its training data to produce a relevant output.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
You really think "objectives" isn't going to have the same problem as "goals"?

KillHour
Oct 28, 2007


Clarste posted:

Plants grow towards the sun because of a chemical reaction: sunlight makes plant cells reproduce slower, which in turn means that the cells on the other side grow faster and cause the plant as a whole to tilt towards the sun. At no point does anything resembling a "desire" or even a "response" come into the picture. Even humans have a billion autonomous functions which you do not and cannot think about; they're purely chemical reactions, or work by basic physical principles like osmosis. They do not require drive or motivation, they are simply built in a way where the intended result is a natural consequence of their existence.

Edit: By which I mean these processes are extremely dumb and cannot do anything like "stop someone from trying to interfere with them" because that isn't an input they can process in the first place.

This is actually my point - our core goals are mostly "seek pleasure and avoid pain" and both of those things come from chemical and physiological responses we have no control over. The important thing is we don't need to experience the pain for us to want to avoid it - our brains are hardwired to do or not do certain things. That's pretty much the limit of my knowledge of the subject though, so anything else is speculation. The idea that a trained model may be able to exhibit goal-seeking behavior from the training as a proxy for how our brain is "trained" to avoid pain is definitely speculation. But I think it's plausible and can't be completely ruled out.

SaTaMaS
Apr 18, 2003

KillHour posted:

This is actually my point - our core goals are mostly "seek pleasure and avoid pain" and both of those things come from chemical and physiological responses we have no control over. The important thing is we don't need to experience the pain for us to want to avoid it - our brains are hardwired to do or not do certain things. That's pretty much the limit of my knowledge of the subject though, so anything else is speculation. The idea that a trained model may be able to exhibit goal-seeking behavior from the training as a proxy for how our brain is "trained" to avoid pain is definitely speculation. But I think it's plausible and can't be completely ruled out.

It can be ruled out because you're confusing a metaphor (exhibiting goal-seeking behavior) with reality that it performs specific tasks (generating coherent and relevant responses) based on the data it was trained on.

KillHour
Oct 28, 2007


SaTaMaS posted:

It can be ruled out because you're confusing a metaphor (exhibiting goal-seeking behavior) with reality that it performs specific tasks (generating coherent and relevant responses) based on the data it was trained on.

I'm not misunderstanding what the computer is doing, I'm saying that we don't know what we're doing. Our brains could simply be very complex automata following the same natural consequences of math and we can't rule that out.

Like, you could train the system to give coherent but irrelevant (or intentionally deceiving) responses and from an outside perspective, that is indistinguishable from the system having the "goal" of gaslighting us. It's possible that what we think of as goals or desires or instinct are just the natural chemical biases in our brain connections, tuned over tens of millions of years.

Edit: I'm being reductionist - obviously our brains are a lot more complex and have a lot of hardware "features" these systems don't. But my point is we don't know where goals come from, so we can't exactly say what is missing to create them. I'm also not saying this is a correct theory. I just don't think it's an impossible one, unless you know something about brain physiology that I don't, which you might.

KillHour fucked around with this message at 17:46 on Apr 18, 2023

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
I'm honestly not sure what he thinks a goal is at this point. Magic, probably.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

I'm honestly not sure what he thinks a goal is at this point. Magic, probably.

Having a goal requires consciousness and intentionality

KillHour
Oct 28, 2007


SaTaMaS posted:

Having a goal requires consciousness and intentionality

Does it? That really sounds like an assertion begging the question.

You just get stuck in a circle with things that you think are conscious having goals and things you think aren't conscious just having things that they do.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

SaTaMaS posted:

Having a goal requires consciousness and intentionality

But you think having an objective doesn't, apparently? That doesn't make much sense, considering they are synonymous. Why should we use the word the way you want to here, where it explicitly requires something to have those things, instead of the way it's traditionally used, especially within technological fields but also elsewhere, where it does not?

SaTaMaS
Apr 18, 2003

KillHour posted:

Does it? That really sounds like an assertion begging the question.

You just get stuck in a circle with things that you think are conscious having goals and things you think aren't conscious just having things that they do.

Taking the intentional stance is a useful last resort when there's no simpler way to explain something's actions. For people, just measuring brain activity won't tell much at all about the person involved so we need to attribute intentionality for a useful description. For LLMs their "goals" are determined by their creators and are essentially programmed tasks that the AI system is designed to perform so attributing intentionality isn't necessary.

SaTaMaS
Apr 18, 2003

GlyphGryph posted:

But you think having an objective doesn't, apparently? That doesn't make much sense, considering they are synonymous. Why should we use the word the way you want to here, where it explicitly requires something to have those things, instead of the way it's traditionally used, especially within technological fields but also elsewhere, where it does not?

Because it's very useful to differentiate between the intentional stance and the design stance.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
What do you even mean by "intent"? Do I want to know? I get the feeling that like for "goal" you're using a nonstandard definition here.

SaTaMaS posted:

Because it's very useful to differentiate between the intentional stance and the design stance.

What's useful, and how? I don't actually understand what you're saying here.

SaTaMaS posted:

For LLMs their "goals" are determined by their creators and are essentially programmed tasks that the AI system is designed to perform so attributing intentionality isn't necessary.

All of this is, at best, technically misleading, and mostly seems irrelevant?

I feel like there's something you're trying to get at here that I am just fundamentally not grasping, and it isn't quite what you're actually saying.

Lucid Dream
Feb 4, 2003

That boy ain't right.

SaTaMaS posted:

For LLMs their "goals" are determined by their creators and are essentially programmed tasks that the AI system is designed to perform so attributing intentionality isn't necessary.

The LLMs don't have goals, but they do predict pretty darn well what a human would say if you asked them to come up with goals about different things.

gurragadon
Jul 28, 2006

SaTaMaS posted:

Because it's very useful to differentiate between the intentional stance and the design stance.

I was unfamiliar with these terms, but Wikipedia made it seem like the design stance takes only the function of a system for granted as working, while not caring about the structure or design of the system? The mental processes, if you will.

Am I off base? Could you relate it to the difference between goals and objectives, like are you saying the goal and objective represent different stances? Or are you saying that the programmer "takes" the intentional stance from the AI program they create?

Edit: This is the wikipedia article. https://en.wikipedia.org/wiki/Intentional_stance

SaTaMaS
Apr 18, 2003

gurragadon posted:

I was unfamiliar with these terms, but Wikipedia made it seem like the design stance takes only the function of a system for granted as working, while not caring about the structure or design of the system? The mental processes, if you will.

Am I off base? Could you relate it to the difference between goals and objectives, like are you saying the goal and objective represent different stances? Or are you saying that the programmer "takes" the intentional stance from the AI program they create?

Edit: This is the wikipedia article. https://en.wikipedia.org/wiki/Intentional_stance

https://sites.google.com/site/minddict/intentional-stance-the#:~:text=Just%20as%20the%20design%20stance,object%20as%20a%20rational%20agent.

quote:

The Physical Stance and the Design Stance

The physical stance stems from the perspective of the physical sciences. To predict the behavior of a given entity according to the physical stance, we use information about its physical constitution in conjunction with information about the laws of physics. Suppose I am holding a piece of chalk in my hand and I predict that it will fall to the floor when I release it. This prediction relies on (i) the fact that the piece of chalk has mass and weight; and (ii) the law of gravity. Predictions and explanations based on the physical stance are exceedingly common. Consider the explanations of why water freezes at 32 degrees Fahrenheit, how mountain ranges are formed, or when high tide will occur. All of these explanations proceed by way of the physical stance.

When we make a prediction from the design stance, we assume that the entity in question has been designed in a certain way, and we predict that the entity will thus behave as designed. Like physical stance predictions, design stance predictions are commonplace. When in the evening a student sets her alarm clock for 8:30 a.m., she predicts that it will behave as designed: i.e., that it will buzz at 8:30 the next morning. She does not need to know anything about the physical constitution of the alarm clock in order to make this prediction. There is no need, for example, for her to take it apart and weigh its parts and measure the tautness of various springs. Likewise, when someone steps into an elevator and pushes "7," she predicts that the elevator will take her to the seventh floor. Again, she does not need to know any details about the inner workings of the elevator in order to make this prediction.

Design stance predictions are riskier than physical stance predictions. Predictions made from the design stance rest on at least two assumptions: first, that the entity in question is designed as it is assumed to be; and second, the entity will perform as it is designed without malfunctioning. The added risk almost always proves worthwhile, however. When we are dealing with a thing that is the product of design, predictions from the design stance can be made with considerably more ease than the comparable predictions from the physical stance. If the student were to take the physical stance towards the alarm clock in an attempt to predict whether it will buzz at 8:30 a.m., she would have to know an extraordinary amount about the alarm clock’s physical construction.

This point can be illustrated even more dramatically by considering a complicated designed object, like a car or a computer. Every time you drive a car you predict that the engine will start when you turn the key, and presumably you make this prediction from the design stance—that is, you predict that the engine will start when you turn the key because that it is how the car has been designed to function. Likewise, you predict that the computer will start up when you press the "on" button because that it is how the computer has been designed to function. Think of how much you would have to know about the inner workings of cars and computers in order to make these predictions from the physical stance!

The fact that an object is designed, however, does not mean that we cannot apply the physical stance to it. We can, and in fact, we sometimes should. For example, to predict what the alarm clock will do when knocked off the nightstand onto the floor, it would be perfectly appropriate to adopt the physical stance towards it. Likewise, we would rightly adopt the physical stance towards the alarm clock to predict its behavior in the case of a design malfunction. Nonetheless, in most cases, when we are dealing with a designed object, adopting the physical stance would hardly be worth the effort. As Dennett states, "Design-stance prediction, when applicable, is a low-cost, low-risk shortcut, enabling me to finesse the tedious application of my limited knowledge of physics." (Dennett 1996)

The sorts of entities so far discussed in relation to design-stance predictions have been artifacts, but the design stance also works well when it comes to living things and their parts. For example, even without any understanding of the biology and chemistry underlying anatomy we can nonetheless predict that a heart will pump blood throughout the body of a living thing. The adoption of the design stance supports this prediction; that is what hearts are supposed to do (i.e., what nature has "designed" them to do).


The Intentional Stance

As already noted, we often gain predictive power when moving from the physical stance to the design stance. Often, we can improve our predictions yet further by adopting the intentional stance. When making predictions from this stance, we interpret the behavior of the entity in question by treating it as a rational agent whose behavior is governed by intentional states. (Intentional states are mental states such as beliefs and desires which have the property of "aboutness," that is, they are about, or directed at, objects or states of affairs in the world. See intentionality.) We can view the adoption of the intentional stance as a four-step process. (1) Decide to treat a certain object X as a rational agent. (2) Determine what beliefs X ought to have, given its place and purpose in the world. For example, if X is standing with his eyes open facing a red barn, he ought to believe something like, "There is a red barn in front of me." This suggests that we can determine at least some of the beliefs that X ought to have on the basis of its sensory apparatus and the sensory exposure that it has had. Dennett (1981) suggests the following general rule as a starting point: "attribute as beliefs all the truths relevant to the system’s interests (or desires) that the system’s experience to date has made available." (3) Using similar considerations, determine what desires X ought to have. Again, some basic rules function as starting points: "attribute desires for those things a system believes to be good for it," and "attribute desires for those things a system believes to be best means to other ends it desires." (Dennett 1981) (4) Finally, on the assumption that X will act to satisfy some of its desires in light of its beliefs, predict what X will do.

Just as the design stance is riskier than the physical stance, the intentional stance is riskier than the design stance. (In some respects, the intentional stance is a subspecies of the design stance, one in which we view the designed object as a rational agent. Rational agents, we might say, are those designed to act rationally.) Despite the risks, however, the intentional stance provides us with useful gains of predictive power. When it comes to certain complicated artifacts and living things, in fact, the predictive success afforded to us by the intentional stance makes it practically indispensable. Dennett likes to use the example of a chess-playing computer to make this point. We can view such a machine in several different ways:
as a physical system operating according to the laws of physics;
as a designed mechanism consisting of parts with specific functions that interact to produce certain characteristic behavior; or
as an intentional system acting rationally relative to a certain set of beliefs and goals
Given that our goal is to predict and explain a given entity’s behavior, we should adopt the stance that will best allow us to do so. With this in mind, it becomes clear that adopting the intentional stance is for most purposes the most efficient and powerful way (if not the only way) to predict and explain what a well designed chess-playing computer will do. There are probably hundreds of different computer programs that can be run on a PC in order to convert it into a chess player. Though the computers capable of running these programs have different physical constitutions, and though the programs themselves may be designed in very different ways, the behavior of a computer running such a program can be successfully explained if we think of it as a rational agent who knows how to play chess and who wants to checkmate its opponent’s king. When we take the intentional stance towards the chess-playing computer, we do not have to worry about the details of its physical constitution or the details of its program (i.e., its design). Rather, all we have to do is determine the best legal move that can be made given the current state of the game board. Once we treat the computer as a rational agent with beliefs about the rules and strategies of chess and the locations of the pieces on the game board, plus the desire to win, it follows that the computer will make the best move available to it.

Of course, the intentional stance will not always be useful in explaining the behavior of the chess-playing computer. If the computer suddenly started behaving in a manner inconsistent with something a reasonable chess player would do, we might have to adopt the design stance. In other words, we might have to look at the particular chess-playing algorithm implemented by the computer in order to predict what it will subsequently do. And in cases of more extreme malfunction—for example, if the computer screen were suddenly to go blank and the system were to freeze up—we would have to revert to thinking of it as a physical object to explain its behavior adequately. Usually, however, we can best predict what move the computer is going to make by adopting the intentional stance towards it. We do not come up with our prediction by considering the laws of physics or the design of the computer, but rather, by considering the reasons there are in favor of the various available moves. Making an idealized assumption of optimal rationality, we predict that the computer will do what it rationally ought to do.

Objectives are typically more quantifiable than goals. Using the design stance, "objective" emphasizes that these systems are designed to perform specific tasks based on their algorithms and training data, without consciousness or intentions. These tasks are programmed by their creators and can be thought of as objectives that the AI system is designed to achieve.

SaTaMaS fucked around with this message at 21:09 on Apr 18, 2023

KillHour
Oct 28, 2007


Okay, but humans are "designed" by evolution to do things that make us more likely to reproduce. It just seems like an arbitrary distinction created to conform to the idea that we're special in a way a computer is not or cannot be. There's a bunch of handwaving going on to gloss over the limitations in our knowledge. It's possible there's some fundamental thing that makes intent real, but it's also possible we're just post-hoc justifying unconscious predisposition as intent.

SaTaMaS
Apr 18, 2003

KillHour posted:

Okay, but humans are "designed" by evolution to do things that make us more likely to reproduce. It just seems like an arbitrary distinction created to conform to the idea that we're special in a way a computer is not or cannot be. There's a bunch of handwaving going on to gloss over the limitations in our knowledge. It's possible there's some fundamental thing that makes intent real, but it's also possible we're just post-hoc justifying unconscious predisposition as intent.

Cool so we're at the point of discussing intelligent design.

KillHour
Oct 28, 2007


SaTaMaS posted:

Cool so we're at the point of discussing intelligent design.

You just made that strawman up and it's incredibly blatant. I didn't say some intelligent god designed us. I said our brains have an inherent structure that is tuned or trained or designed or shaped or whatever you want to call it by evolution. This is not controversial.

Edit: If anything, you're the one saying there's something fundamental about our designing these systems that makes any potential "goals" they might someday create a product of our intent instead of emergent.

KillHour fucked around with this message at 21:29 on Apr 18, 2023

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
I think it's hilarious that my core criticism is that SaTaMaS is reading intent into things where no intent is being communicated, and that is how he responds to a post about evolutionary pressures. Really sort of drives the point home.

SaTaMaS
Apr 18, 2003

KillHour posted:

You just made that strawman up and it's incredibly blatant. I didn't say some intelligent god designed us. I said our brains have an inherent structure that is tuned or trained or designed or shaped or whatever you want to call it by evolution. This is not controversial.

Edit: If anything, you're the one saying there's something fundamental about our designing these systems that makes any potential "goals" they might someday create a product of our intent instead of emergent.

It's extremely controversial, you're literally talking about intelligent design. The whole point of evolution is that it provides a way to no longer need a designer.

gurragadon
Jul 28, 2006

SaTaMaS posted:

https://sites.google.com/site/minddict/intentional-stance-the#:~:text=Just%20as%20the%20design%20stance,object%20as%20a%20rational%20agent.

Objectives are typically more quantifiable than goals. Using the design stance, "objective" emphasizes that these systems are designed to perform specific tasks based on their algorithms and training data, without consciousness or intentions. These tasks are programmed by their creators and can be thought of as objectives that the AI system is designed to achieve.

I think I understand what you are saying now, tell me if I'm off.

When we take the intentional stance towards AI programs we may gain information, but that information is more likely to be incorrect because we are making assumptions. It is preferable to take the design stance when we can, as with AI programs, because there is less room for error since we are assuming less.

Or maybe another way to say it is that we take the intentional stance towards AI programs because it is easier to describe their behavior that way.

Edit: Thanks for the link too, better examples than wikipedia.

gurragadon fucked around with this message at 21:50 on Apr 18, 2023

KillHour
Oct 28, 2007


SaTaMaS posted:

It's extremely controversial, you're literally talking about intelligent design. The whole point of evolution is that it provides a way to no longer need a designer.

What are you talking about? :psyduck:

I have no idea what you're reading into what I'm saying but you are talking about something totally different than I am.

SaTaMaS
Apr 18, 2003

gurragadon posted:

I think I understand what you are saying now, tell me if I'm off.

When we take the intentional stance towards AI programs we may gain information, but that information is more likely to be incorrect because we are making assumptions. It is preferable to take the design stance when we can, as with AI programs, because there is less room for error since we are assuming less.

Or maybe another way to say it is that we take the intentional stance towards AI programs because it is easier to describe their behavior that way.

Yes exactly


GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

SaTaMaS posted:

It's extremely controversial, you're literally talking about intelligent design. The whole point of evolution is that it provides a way to no longer need a designer.

I genuinely don't think the problem is the words people are using at this point, I think it's the people who insist on interpreting them in the most insane possible way that are the problem.
