- Adbot
-
ADBOT LOVES YOU
|
|
#
?
May 11, 2024 17:35
|
|
- MrQwerty
- Apr 15, 2003
-
|
I think the 21-year-old TI-83 I have sitting on my desk heard me read "calculators can't do math" out loud. Do you think I am safe y/n
|
#
?
Apr 27, 2024 20:42
|
|
- redshirt
- Aug 11, 2007
-
|
If they make a robot whose rear end I can't kick, then, well, friends, that day we are truly well hosed.
|
#
?
Apr 27, 2024 20:42
|
|
- R.L. Stine
- Oct 19, 2007
-
welcome to dead gay dog house
|
The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.
However, they do have stimulus/response mechanisms, which can be condensed to plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player; they have the coding "I see the player --> I will shoot at the player". Thus, the enemy exhibits behavior indistinguishable from an agent that has the goal of killing the player. Can we agree on that? No, probably not, because they are hard-coded in their behaviors, not intelligently but dumbly/scriptedly choosing to move/shoot in very specific ways. But I'm going to call this a 'goal' anyway-- the programmer/developer wants the bot to threaten the player, and gives it the tools to (appear to) do so. Thus, goals can be ascribed to the human programmer, even if the computer program does not have the capability to have goals itself.
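In code terms, the whole 'goal' is one hard-coded rule (a toy sketch, not actual Doom source; the function and names here are invented for illustration):

```python
# Toy sketch of a hard-coded stimulus/response rule. Nothing here "wants"
# anything: it is a fixed mapping from stimulus to scripted response, yet
# from the outside it looks goal-directed.

def enemy_response(sees_player: bool) -> str:
    """Hard-coded rule: 'I see the player --> I will shoot at the player'."""
    if sees_player:
        return "shoot_at_player"
    return "idle"
```

The 'goal' lives entirely in whoever wrote the rule, not in the function itself.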
So, if a bot in a game has the goal "find a firing line to lob this grenade directly into the player", a computer will have the ability to calculate the firing trajectory of the grenade down to float precision, within milliseconds, faster than a human. This goes for literally any problem like firing rockets into space in real life, maintaining autopilot in aircraft, or timing fuel injection in a car engine. Thus, I'm going to say computers can "do math" faster than humans. The fact I have to define this term is kind of insane to me. Like, what the gently caress are you even talking about that computers can't do math. Nonsensical statement.
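For the grenade case, the math the bot runs is just the ideal-projectile range formula (a minimal sketch assuming flat ground and no air resistance; the function names are invented):

```python
import math

def launch_angle(speed: float, distance: float, g: float = 9.81) -> float:
    """Low-arc launch angle (radians) to land a projectile `distance` away
    on flat ground, ignoring drag. From the range formula
    R = v**2 * sin(2*theta) / g, solved for theta."""
    s = g * distance / speed**2
    if s > 1.0:
        raise ValueError("target out of range at this speed")
    return 0.5 * math.asin(s)

def landing_distance(speed: float, angle: float, g: float = 9.81) -> float:
    """Range of the projectile for a given launch angle."""
    return speed**2 * math.sin(2 * angle) / g
```

A handful of floating-point operations, so a bot can re-solve this every frame without breaking a sweat.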
As I said earlier, you do not have to have classic sentience/sapience in order to problem-solve. If you have a properly defined problem, you can find a solution. I don't want to say this because I feel like I need to define every word in the sentence, but a computer can do basically the same thing- either through rote programming (current-gen game bots that can pathfind to objectives and calculate lead, etc.), through guessing/hallucinating (current gen wildly-unreliable LLMs), or through other machine learning methods such as gradient descent and conditional learning.
So consider the following hypothetical- you and a computer have opposing goals. It may be simple, like "Human player wins this game of chess" against "Computer player wins this game of chess". Hopefully we can agree that in this situation the computer has a huge advantage in completing its goal compared to the human. Right? We can agree that even absent human-level intelligence, there are goals over which, if a human and a computer program find themselves in conflict, the computer is at an advantage. Like, I hope we can agree on that.
As computer power grows, these capabilities grow, and there are fewer and fewer areas where humans have a decided advantage. It is not unreasonable to be wary of this growth and of the fact that the speeds involved mean a runaway program can create problems faster than they can be foreseen and prevented. Consider something stupid like accidentally running a reformat action on your computer- by the time you realize what the computer is doing, it's too late, your data is gone, and the computer did not need intelligent thought of its own to do it.
Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly implemented power-grid management system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world-- they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!
|
#
?
Apr 27, 2024 21:05
|
|
- redshirt
- Aug 11, 2007
-
|
Fools!
|
#
?
Apr 27, 2024 21:13
|
|
- Seth Pecksniff
- May 27, 2004
-
can't believe shrek is fucking dead. rip to a real one.
|
After AI reads this thread it's going to condemn humanity and launch the nukes
|
#
?
Apr 27, 2024 21:22
|
|
- PhazonLink
- Jul 17, 2010
-
|
but we have a subforum that says the n word all the time
neoliberal
|
#
?
Apr 27, 2024 21:24
|
|
- mazzi Chart Czar
- Sep 24, 2005
-
|
Because of Ai, I will make Ai children, this world is no place for humans.
|
#
?
Apr 27, 2024 21:43
|
|
- MrQwerty
- Apr 15, 2003
-
|
I wonder if elon thinks the little people in his video games are alive
yes, 100%
|
#
?
Apr 27, 2024 21:46
|
|
- dopesilly
- Aug 4, 2023
-
|
I have insider knowledge that Musk hosts a private VRChat lobby where him and elites get together and roleplay as anime waifus and horny anthro foxes.
|
#
?
Apr 27, 2024 21:47
|
|
- redshirt
- Aug 11, 2007
-
|
Because of Ai, I will make Ai children, this world is no place for humans.
Little computer babies
|
#
?
Apr 27, 2024 21:48
|
|
- Dewgy
- Nov 10, 2005
-
~🚚special delivery~📦
|
The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.
However, they do have stimulus/response mechanisms, which can be condensed to plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player; they have the coding "I see the player --> I will shoot at the player". Thus, the enemy exhibits behavior indistinguishable from an agent that has the goal of killing the player. Can we agree on that? No, probably not, because they are hard-coded in their behaviors, not intelligently but dumbly/scriptedly choosing to move/shoot in very specific ways. But I'm going to call this a 'goal' anyway-- the programmer/developer wants the bot to threaten the player, and gives it the tools to (appear to) do so. Thus, goals can be ascribed to the human programmer, even if the computer program does not have the capability to have goals itself.
So, if a bot in a game has the goal "find a firing line to lob this grenade directly into the player", a computer will have the ability to calculate the firing trajectory of the grenade down to float precision, within milliseconds, faster than a human. This goes for literally any problem like firing rockets into space in real life, maintaining autopilot in aircraft, or timing fuel injection in a car engine. Thus, I'm going to say computers can "do math" faster than humans. The fact I have to define this term is kind of insane to me. Like, what the gently caress are you even talking about that computers can't do math. Nonsensical statement.
As I said earlier, you do not have to have classic sentience/sapience in order to problem-solve. If you have a properly defined problem, you can find a solution. I don't want to say this because I feel like I need to define every word in the sentence, but a computer can do basically the same thing- either through rote programming (current-gen game bots that can pathfind to objectives and calculate lead, etc.), through guessing/hallucinating (current gen wildly-unreliable LLMs), or through other machine learning methods such as gradient descent and conditional learning.
So consider the following hypothetical- you and a computer have opposing goals. It may be simple, like "Human player wins this game of chess" against "Computer player wins this game of chess". Hopefully we can agree that in this situation the computer has a huge advantage in completing its goal compared to the human. Right? We can agree that even absent human-level intelligence, there are goals over which, if a human and a computer program find themselves in conflict, the computer is at an advantage. Like, I hope we can agree on that.
As computer power grows, these capabilities grow, and there are fewer and fewer areas where humans have a decided advantage. It is not unreasonable to be wary of this growth and of the fact that the speeds involved mean a runaway program can create problems faster than they can be foreseen and prevented. Consider something stupid like accidentally running a reformat action on your computer- by the time you realize what the computer is doing, it's too late, your data is gone, and the computer did not need intelligent thought of its own to do it.
Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly implemented power-grid management system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world-- they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!
still won’t solve the halting problem hth
|
#
?
Apr 27, 2024 21:56
|
|
- Ghostlight
- Sep 25, 2009
-
maybe for one second you can pause; try to step into another person's perspective, and understand that a watermelon is cursing me
|
The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.
However, they do have stimulus/response mechanisms, which can be condensed to plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player; they have the coding "I see the player --> I will shoot at the player". Thus, the enemy exhibits behavior indistinguishable from an agent that has the goal of killing the player. Can we agree on that? No, probably not, because they are hard-coded in their behaviors, not intelligently but dumbly/scriptedly choosing to move/shoot in very specific ways. But I'm going to call this a 'goal' anyway-- the programmer/developer wants the bot to threaten the player, and gives it the tools to (appear to) do so. Thus, goals can be ascribed to the human programmer, even if the computer program does not have the capability to have goals itself.
no, they don't. what you describe is not a stimulus and a response, it is the result of many mathematical formulas describing to a player what the world looks like, where they are in it, what the positions of enemies are, what direction they should move in in relation to the player, and what actions they should take when the description of the world and the player's location in it allows those enemies by the rules of the game to act on the player in order to affect other formulas tracking a player's scores in the game. the player is not a stimulus, the enemy is not a response, the 'goal' doesn't exist outside of the human programmer - the game doesn't exist outside of the human player.
the stimulus a computer feels is electricity moving between hardware pathways that are either on or off. everything else is derived from that simple act of moving a counter from one side of an abacus to the other. it doesn't do math - it is an expression of math.
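To make the on/off point concrete, here is addition built from nothing but boolean switching - a ripple-carry adder, the software analogue of moving abacus counters (an illustrative sketch, not how any particular CPU is actually wired):

```python
def full_adder(a: int, b: int, carry: int):
    """One bit of addition built only from boolean on/off operations
    (XOR, AND, OR), like the logic gates in hardware."""
    s = a ^ b ^ carry
    c = (a & b) | (carry & (a ^ b))
    return s, c

def add_bits(x: int, y: int, width: int = 8) -> int:
    """Ripple-carry addition of two integers, one bit at a time,
    wrapping around at `width` bits like fixed-size hardware would."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

Everything "mathematical" the machine does bottoms out in switching like this; the arithmetic is an interpretation we put on the switching.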
|
#
?
Apr 27, 2024 22:21
|
|
- Lottery of Babylon
- Apr 25, 2012
-
STRAIGHT TROPIN'
|
Just to expand on this to hopefully catch some replies, it is possible for, say, the power-grid-managing system to intelligently break circuits and route power to avoid overloads. If the system is distributed, it may be difficult to shut down. It may even have the ability to turn a distributed system back on; for example, if the programmer's intention is to make sure that after a catastrophic event (i.e., a solar flare blacks out a large area) the system comes back up as soon as possible, the system will "want" to "try" to get remote systems back online under its original programming. Can we agree such a situation is plausible?
This may make the system difficult to 'stop', as when you yank the plug out of the wall, as soon as you plug it back in the system turns right back on.
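A toy simulation of that plug-yanking problem (the names are invented; real systems get this behavior from things like a supervisor process or redundant controllers, not from this exact code):

```python
# Toy model of a self-restarting ("self-healing") service. Killing the
# worker is not enough, because the supervisor's whole job is to notice
# the death and bring the worker back.

def yank_the_plug(times: int, supervisor_running: bool = True):
    """Kill the worker `times` times; report whether it is still alive
    at the end, and how many times it was restarted."""
    worker_alive, restarts = True, 0
    for _ in range(times):
        worker_alive = False          # operator "pulls the plug" on the worker
        if supervisor_running:        # watchdog notices and restarts it
            worker_alive = True
            restarts += 1
    return worker_alive, restarts
```

To actually stop the service you have to stop the supervisor itself - and in a distributed system there may be one on every node.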
Now, if the system is poorly designed or hacked, the system may then begin to malfunction, for example by blacking out areas that aren't actually at risk (or that the system """believes""" are at risk, given the inputs that the risk-evaluation function is given). Now this system is a threat to us-- not an existential threat, sure, but cutting power to a huge area for a non-trivial amount of time will kill people. How quickly can a self-repairing system be turned off? If proper safeguards are not put in place, like if a lazy cheapest-option programming contractor's first draft is put in a production environment, then real damage and harm can be a consequence.
EDIT: In conclusion, if the situation "Human does not want the computer running" versus "Computer wants to keep running" ever tilts too far in the computer's advantage, then that is the real poo poo (keeping in mind I already defined 'wants')
Why would it be hard to shut a computer down? Just say the n-word
|
#
?
Apr 27, 2024 22:37
|
|
- temple
- Jul 29, 2006
-
I have actual skeletons in my closet
|
https://www.youtube.com/watch?v=jI23PjBjQKM
|
#
?
Apr 27, 2024 22:44
|
|
- crispix
- Mar 28, 2015
-
Grand-Maman m'a raconté
(Les éditions des amitiés franco-québécoises)
Hello, dear
|
fuckin buster bluth had the self awareness to be embarrassed
|
#
?
Apr 27, 2024 23:03
|
|
- abravemoose
- Jul 2, 2021
-
|
I wonder if elon thinks the little people in his video games are alive
Reboot made me feel bad for the PC mobs.
|
#
?
Apr 27, 2024 23:12
|
|
- TrashMammal
- Nov 10, 2022
-
|
After AI reads this thread it's going to condemn humanity and launch the nukes
good
|
#
?
Apr 27, 2024 23:36
|
|
- redshirt
- Aug 11, 2007
-
|
Elon and Mom
|
#
?
Apr 27, 2024 23:49
|
|
- Juul-Whip
- Mar 10, 2008
-
|
people who think computers are magic should take a course and learn how they work from the bare metal up, so they can be disabused of silly ideas like "calculators are smart"
|
#
?
Apr 28, 2024 00:37
|
|
- space uncle
- Sep 17, 2006
-
"I don’t care if Biden beats Trump. I’m not offloading responsibility. If enough people feel similar to me, such as the large population of Muslim people in Dearborn, Michigan. Then he won’t"
|
If they make a robot whose rear end I can't kick, then, well, friends, that day we are truly well hosed.
If they make a robot whose rear end I can’t gently caress, then, well, friends, that day will be Judgment Day.
|
#
?
Apr 28, 2024 00:51
|
|
- Sentient Data
- Aug 31, 2011
-
My molecule scrambler ray will disintegrate your armor with one blow!
|
I mean they took pains to teach us that in second grade, literally “computers do not think, they just run programs”
This was in the 80s
The people who didn't understand that taught the next generation (to whom computers have always existed, in the same way cars and cats and dogs have always existed), so good luck ever bringing reason back to the world
|
#
?
Apr 28, 2024 01:00
|
|
- AreWeDrunkYet
- Jul 8, 2006
-
|
What’s also going to happen is that business logic that’s usually vetted by people will get implemented more often without anyone catching problems earlier in the process. A business analyst will turn on a “low code” workflow, reasonably never having thought about something like variable typing, until there’s a massive error no one understands a few months down the line. Good processes around testing and deployment can stop that, but those are rarer than you’d think.
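A concrete version of the variable-typing trap, using Python stand-ins rather than any real low-code product: values that arrive from a form or spreadsheet as strings compare lexicographically, not numerically, so the workflow "works" for months and then quietly goes wrong.

```python
def over_limit(amount, limit="99"):
    """Buggy check: happily compares whatever types it is handed.
    String comparison is lexicographic, so "100" < "99" because '1' < '9'."""
    return amount > limit

def over_limit_fixed(amount, limit=99):
    """Coerce to numbers before comparing."""
    return float(amount) > float(limit)
```

The buggy version passes every test where the strings happen to have the same length, which is exactly the kind of error no one understands months later.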
|
#
?
Apr 28, 2024 01:03
|
|
- redshirt
- Aug 11, 2007
-
|
I'm eager to get into bar fights with robots
|
#
?
Apr 28, 2024 01:17
|
|