Should I step down as head of twitter
This poll is closed.
Yes 420 4.43%
No 69 0.73%
Goku 9001 94.85%
Total: 9490 votes

 
coconono
Aug 11, 2004

KISS ME KRIS

If they can make a robot that properly throws rear end all of this can be avoided.


MrQwerty
Apr 15, 2003

I think the 21 year old TI-83 I have sitting on my desk heard me read "calculators can't do math" out loud do you think I am safe y/n

redshirt
Aug 11, 2007

If they make a robot whose rear end I can't kick, then, well, friends, that day we are truly well hosed.

kazil
Jul 24, 2005

Derpmph trial star reporter!

MrQwerty posted:

I think the 21 year old TI-83 I have sitting on my desk heard me read "calculators can't do math" out loud do you think I am safe y/n

do you have stairs in your house?

MrQwerty
Apr 15, 2003

kazil posted:

do you have stairs in your house?

Actually, yes :(

coconono
Aug 11, 2004

KISS ME KRIS

Do any of these smart guys even know how to throw rear end

R.L. Stine
Oct 19, 2007

welcome to dead gay dog house

Evilreaver posted:

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.

However, they do have stimulus/response mechanisms, which can be condensed to plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player, they have the coding "I see the player --> I will shoot at the player". Thus, the enemy exhibits behavior indistinguishable from an agent that has the goal of killing the player. Can we agree on that? No, probably not, because they are hard-coded in their behaviors, not intelligently but dumbly/scriptedly choosing to move/shoot in very specific ways. But I'm going to call this a 'goal' anyway-- the programmer/developer wants the bot to threaten the player, and gives them tools to (appear to) do so. Thus, goals can be ascribed to the human programmer, even if the computer program does not have the capability to have goals itself.
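The hard-coded "I see the player --> I will shoot at the player" rule described above can be sketched like this (a minimal sketch; every name is illustrative, not from any real game engine):

```python
# A hard-coded "goal": no planning, no intent, just a condition-action rule.
# enemy_tick, has_line_of_sight, and the action strings are all hypothetical.

def enemy_tick(enemy, player, has_line_of_sight):
    """One AI update: if the enemy sees the player, it shoots."""
    if has_line_of_sight(enemy, player):
        return "shoot_at_player"  # looks goal-directed, but is just a branch
    return "wander"

# The behavior is indistinguishable from "wants to kill the player",
# yet it is a single if-statement the programmer wrote.
```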



So, if a bot in a game has the goal "find a firing line to lob this grenade directly into the player", a computer will have the ability to calculate the firing trajectory of the grenade down to float precision, within milliseconds, faster than a human. This goes for literally any problem like firing rockets into space in real life, maintaining autopilot in aircraft, or maintaining fuel injectors in a car engine. Thus, I'm going to say computers can "do math" faster than humans. The fact I have to define this term is kind of insane to me. Like, what the gently caress are you even talking about that computers can't do math. Nonsensical statement.
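The grenade-trajectory claim is just closed-form ballistics. A sketch of the math a bot would run, assuming a flat world, no drag, and a target at launch height (function name is illustrative):

```python
import math

G = 9.81  # gravity, m/s^2

def launch_angle(distance_m, speed_ms):
    """Low-arc launch angle (radians) to hit a target at the same height,
    ignoring drag. Returns None when the target is out of range."""
    s = G * distance_m / speed_ms ** 2
    if s > 1.0:
        return None  # max flat range is v^2/g; target unreachable
    return 0.5 * math.asin(s)

# A 20 m lob at 25 m/s: one asin call, microseconds of work.
theta = launch_angle(20.0, 25.0)
```

One trig call versus a human eyeballing an arc: that is the whole "computers do math faster" point in miniature.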

As I said earlier, you do not have to have classic sentience/sapience in order to problem-solve. If you have a properly defined problem, you can find a solution. I don't want to say this because I feel like I need to define every word in the sentence, but a computer can do basically the same thing- either through rote programming (current-gen game bots who can pathfind to objectives and calculate lead/etc), through guessing/hallucinating (current gen wildly-unreliable LLMs), or through other machine learning methods such as gradient descent and conditional learning.
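The gradient descent mentioned above really is this dumb: a toy 1-D sketch (nothing here is from a real ML library), solving a "properly defined problem" by pure repeated arithmetic:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a 1-D function given its derivative.
    No understanding required, just iteration."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill
    return x

# Minimize f(x) = (x - 3)^2, whose derivative is 2(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges toward 3
```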

So consider the following hypothetical- you and a computer have opposing goals. It may be simple, like "Human player wins this game of chess" against "Computer player wins this game of chess". Hopefully we can agree that in this situation the computer has a huge advantage in completing its goal compared to the human. Right? We can agree that even absent human level intelligence, there are goals that if a human and computer program find themselves in conflict, the computer can be at an advantage. Like, I hope we can agree on that.

As computer power grows, these capabilities grow, and there are fewer and fewer areas where humans have a decided advantage. It is not unreasonable to be wary of this growth and of the fact that the speeds involved mean a runaway program can create problems faster than they can be foreseen and prevented. Consider something stupid like accidentally running a reformat action on your computer- by the time you realize what the computer is doing, it's too late, your data is gone, and the computer did not need intelligent thought of its own accord to do it.



Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly-implemented power grid managing system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world-- they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly-coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!

Seven Force
Nov 9, 2005

WARNING!

BOSS IS APPROACHING!!!

SEVEN FORCE

--ACTIONS--

SHITPOSTING

LOVE LOVE DANCING

Cable Guy posted:

Is that mummy..?

Return the flab

redshirt
Aug 11, 2007

Fools!

happyhippy
Feb 21, 2005

Playing games, watching movies, owning goons. 'sup
Pillbug

Basically: "Computer, create an adversary capable of beating Data!"

kazil
Jul 24, 2005

Derpmph trial star reporter!

happyhippy posted:

Basically: "Computer, create an adversary capable of beating Data!"

The kid from The Goonies?

happyhippy
Feb 21, 2005

Playing games, watching movies, owning goons. 'sup
Pillbug

kazil posted:

The kid from The Goonies?

Either. Didn't see either get beaten.

Seth Pecksniff
May 27, 2004

can't believe shrek is fucking dead. rip to a real one.
After AI reads this thread it's going to condemn humanity and launch the nukes

PhazonLink
Jul 17, 2010
but we have a subforum that says the n word all the time







neoliberal

sugar free jazz
Mar 5, 2008

because of ai I will no longer have children, this world is no place for children

mazzi Chart Czar
Sep 24, 2005
Because of Ai, I will make Ai children, this world is no place for humans.

AlmightyBob
Sep 8, 2003

I wonder if elon thinks the little people in his video games are alive

MrQwerty
Apr 15, 2003

AlmightyBob posted:

I wonder if elon thinks the little people in his video games are alive

yes, 100%

dopesilly
Aug 4, 2023
I have insider knowledge that Musk hosts a private VRChat lobby where he and other elites get together and roleplay as anime waifus and horny anthro foxes.

redshirt
Aug 11, 2007

mazzi Chart Czar posted:

Because of Ai, I will make Ai children, this world is no place for humans.

Little computer babies

Dewgy
Nov 10, 2005

~🚚special delivery~📦

Evilreaver posted:

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct. [...]

still won’t solve the halting problem hth
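There's a real sketch behind the halting-problem gag: assume a perfect `halts` oracle (hypothetical; no such total function can exist) and diagonalize against it:

```python
# Classic diagonalization sketch. 'halts' is a hypothetical oracle that
# supposedly decides whether a function halts; we build a function it
# must judge wrongly, so no such oracle can exist.

def paradox(halts):
    def trouble():
        if halts(trouble):   # oracle says trouble halts...
            while True:      # ...so loop forever, contradicting it
                pass
        return               # oracle says it loops, so halt: contradiction again
    return trouble
```

Whatever `halts` answers about `trouble`, `trouble` does the opposite.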

Ghostlight
Sep 25, 2009

maybe for one second you can pause; try to step into another person's perspective, and understand that a watermelon is cursing me



Evilreaver posted:

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.

However, they do have stimulus/response mechanisms, which can be condensed to plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player, they have the coding "I see the player --> I will shoot at the player". [...]
no, they don't. what you describe is not a stimulus and a response, it is the result of many mathematical formulas describing to a player what the world looks like, where they are in it, what the positions of enemies are, what direction they should move in relation to the player, and what actions they should take when the description of the world and the player's location in it allows those enemies, by the rules of the game, to act on the player in order to affect other formulas tracking a player's scores in the game. the player is not a stimulus, the enemy is not a response, the 'goal' doesn't exist outside of the human programmer - the game doesn't exist outside of the human player.
the stimulus a computer feels is electricity moving between hardware pathways that are either on or off. everything else is derived from that simple act of moving a counter from one side of an abacus to the other. it doesn't do math - it is an expression of math.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Evilreaver posted:

Just to expand on this to hopefully catch some replies, it is possible for, say, the power-grid-managing system to intelligently break circuits and route power to avoid overloads. If the system is distributed, it may be difficult to shut down. It may even have the ability to turn a distributed system back on, for example if the programmer-intention is to make sure after a catastrophic event (ie, solar flare blacks out a large area) the system comes back up as soon as possible, the system will "want" to "try" to get remote systems back online with original programming. Can we agree such a situation is plausible?

This may make the system difficult to 'stop', as when you yank the plug out of the wall, as soon as you plug it back in the system turns right back on.

Now, if the system is poorly designed or hacked, the system may then begin to malfunction, for example by blacking out areas that aren't actually at risk (or that the system """believes""" is at risk, given the inputs that the risk-evaluation function is given). Now this system is a threat to us-- not an existential threat, sure, but cutting power to a huge area for a non-trivial amount of time will kill people. How quickly can a self-repairing system be turned off? If proper safeguards are not put in place, like if a lazy cheapest-option programming contractor's first draft is put in a production environment, then real damage and harm can be a consequence.

EDIT: In conclusion, if the situation "Human does not want the computer running" versus "Computer wants to keep running" ever tilts too far in the computer's advantage, then that is the real poo poo (keeping in mind I already defined 'wants')

Why would it be hard to shut a computer down? Just say the n-word
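The quoted "turns right back on" behavior is just a watchdog loop. A toy sketch, with hypothetical names (`FlakyService`, `supervise`) standing in for the grid-management system:

```python
class FlakyService:
    """Hypothetical service that only stays up on its third start."""
    def __init__(self):
        self.starts = 0
        self.up = False
    def start(self):
        self.starts += 1
        self.up = self.starts >= 3

def supervise(service, max_restarts=5):
    """Toy watchdog: if the service is down, start it again.
    It 'wants' to keep running only in the scripted sense discussed above."""
    restarts = 0
    service.start()
    while not service.up and restarts < max_restarts:
        restarts += 1
        service.start()  # plug pulled? plug it straight back in
    return restarts
```

If the restart logic is distributed across many nodes, "just unplug it" stops working, which is the whole worry in the quote.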

temple
Jul 29, 2006

I have actual skeletons in my closet
https://www.youtube.com/watch?v=jI23PjBjQKM

crispix
Mar 28, 2015

Grand-Maman m'a raconté
(Les éditions des amitiés franco-québécoises)

Hello, dear

crispix
Mar 28, 2015

Grand-Maman m'a raconté
(Les éditions des amitiés franco-québécoises)

Hello, dear
fuckin buster bluth had the self awareness to be embarrassed

free hubcaps
Oct 12, 2009

Grok will undoubtedly be the most advanced AI when it comes to problem solving involving using slurs to save lives in entirely plausible, probable scenarios

abravemoose
Jul 2, 2021

AlmightyBob posted:

I wonder if elon thinks the little people in his video games are alive

Reboot made me feel bad for the PC mobs.

Buce
Dec 23, 2005

TrashMammal
Nov 10, 2022

Seth Pecksniff posted:

After AI reads this thread it's going to condemn humanity and launch the nukes

good

shyduck
Oct 3, 2003


lmao

AlmightyBob
Sep 8, 2003

motherboy x

redshirt
Aug 11, 2007

Elon and Mom

Juul-Whip
Mar 10, 2008

people who think computers are magic should take a course and learn how they work from the bare metal up so they can be disabused of silly ideas like calculators are smart

Data Graham
Dec 28, 2009

📈📊🍪😋



I mean they took pains to teach us that in second grade, literally “computers do not think, they just run programs”

This was in the 80s

space uncle
Sep 17, 2006

"I don’t care if Biden beats Trump. I’m not offloading responsibility. If enough people feel similar to me, such as the large population of Muslim people in Dearborn, Michigan. Then he won’t"


redshirt posted:

If they make a robot whose rear end I can't kick, then, well, friends, that day we are truly well hosed.

If they make a robot whose rear end I can’t gently caress, then, well, friends, that day will be Judgment Day.

Sentient Data
Aug 31, 2011

My molecule scrambler ray will disintegrate your armor with one blow!

Data Graham posted:

I mean they took pains to teach us that in second grade, literally “computers do not think, they just run programs”

This was in the 80s

The people who didn't understand that taught the next generation (to whom computers have always existed in the same way cars and cats and dogs have always existed), so good luck ever bringing reason back to the world

AreWeDrunkYet
Jul 8, 2006

What’s also going to happen is that business logic that’s usually vetted by people will get implemented more often without anyone noticing earlier in the process. A business analyst will turn on a “low code” workflow, reasonably never thinking about something like variable typing, until there’s a massive error no one understands a few months down the line. Good processes around testing and deployment can stop that, but those are rarer than you think.
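The kind of silent typing error being described, sketched with Python standing in for a hypothetical "low code" workflow (names are illustrative):

```python
# A workflow happily concatenates instead of adding because a form field
# arrived as a string. Nobody notices until the totals look insane.

def monthly_total(base_fee, usage_fee):
    return base_fee + usage_fee  # no type check: strings concatenate

# Vetted input: numbers behave.
monthly_total(100, 25)        # 125

# Months later a form starts sending strings:
monthly_total("100", "25")    # "10025" - a massive error no one understands
```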

redshirt
Aug 11, 2007

I'm eager to get into bar fights with robots


Captain Hygiene
Sep 17, 2007

You mess with the crabbo...



redshirt posted:

I'm eager to get into bar fights with robots

I thought we didn't serve their kind here
