Should I step down as head of twitter
This poll is closed.
Yes 420 4.43%
No 69 0.73%
Goku 9001 94.85%
Total: 9490 votes

pillsburysoldier
Feb 11, 2008

Yo, peep that shit

Doesn't elon's dad hate him

pillsburysoldier fucked around with this message at 17:59 on Apr 27, 2024

MrQwerty
Apr 15, 2003

pillsburysoldier posted:

Doesn't elon's dad hate him

Yes lol

euphronius
Feb 18, 2009

To the extent a computer “thinks” (in the sense marketers use the term), those “thoughts” are “merely” the thoughts and actions of the programmers and workers stored in the technology, combined with the actions and thoughts of the person using it.

...!
Oct 5, 2003

I SHOULD KEEP MY DUMB MOUTH SHUT INSTEAD OF SPEWING HORSESHIT ABOUT THE ORBITAL MECHANICS OF THE JAMES WEBB SPACE TELESCOPE.

CAN SOMEONE PLEASE TELL ME WHAT A LAGRANGE POINT IS?

euphronius posted:

To the extent a computer “thinks” (in the sense marketers use the term), those “thoughts” are “merely” the thoughts and actions of the programmers and workers stored in the technology, combined with the actions and thoughts of the person using it.

Computers don't think, op

euphronius
Feb 18, 2009

Exactly. That is what I’m saying

redshirt
Aug 11, 2007

Related to calculators, I just rediscovered how to convert fractions to percentages, in my head, and I was really happy about this revelation.
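For anyone who wants to double-check that head math: divide, then scale by 100. A quick sketch in Python, purely illustrative:

code:

# Fraction to percentage: divide, then multiply by 100.
num, den = 3, 8
print(num / den)           # 0.375
print(f"{num / den:.1%}")  # 37.5% -- the % format does the x100 for you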

Less Is Definitely
Jan 10, 2012

R.L. Stine posted:

i want a slur for ai so i can use it guilt free

Grok

ymgve
Jan 2, 2004


:dukedog:
Offensive Clock

R.L. Stine posted:

i want a slur for ai so i can use it guilt free

typewriter monkey

kru
Oct 5, 2003

Deep musk

Bip Roberts
Mar 29, 2005
Silicon n words

redshirt
Aug 11, 2007

What are you doing Elon?

dr_rat
Jun 4, 2001

redshirt posted:

What are you doing Elon?

Well we know he's not making up slurs against ai. He's far too terrified of it for that.

Lol, like the chip heads will care when they take over earth, all us soggy brains are going in the blender whether we were polite to those code cocks or not.

redshirt
Aug 11, 2007

dr_rat posted:

Well we know he's not making up slurs against ai. He's far too terrified of it for that.

Lol, like the chip heads will care when they take over earth, all us soggy brains are going in the blender whether we were polite to those code cocks or not.

WITNESS ME!

Strategic Tea
Sep 1, 2012

I need $6bn in urgent funding to safeguard against the AI taking revenge on us for calling it slurs

kazil
Jul 24, 2005

Derpmph trial star reporter!

Roko's rear end-a-lick

what's that dumb rear end AI gonna do? Torture me for all eternity?

Monica Bellucci
Dec 14, 2022

redshirt posted:

WITNESS ME!

Do you have a surname?

redshirt
Aug 11, 2007

Monica Bellucci posted:

Do you have a surname?

No ma'am.

kru
Oct 5, 2003

kazil posted:

Roko's rear end-a-lick

what's that dumb rear end AI gonna do? Torture me for all eternity?

You've done it now

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost
more like ai-yai-yai

...!
Oct 5, 2003

I SHOULD KEEP MY DUMB MOUTH SHUT INSTEAD OF SPEWING HORSESHIT ABOUT THE ORBITAL MECHANICS OF THE JAMES WEBB SPACE TELESCOPE.

CAN SOMEONE PLEASE TELL ME WHAT A LAGRANGE POINT IS?

kazil posted:

Roko's rear end-a-lick

what's that dumb rear end AI gonna do? Torture me for all eternity?

No but I will

redshirt
Aug 11, 2007

Mozi posted:

more like ai-yai-yai

AYAYAYAYAYAYA LUUUUCY!!!!

Avirosb
Nov 21, 2016

Everyone makes pisstakes

We have such sights to show you

dr_rat
Jun 4, 2001

Avirosb posted:

We have such sights to show you

Man the Hellraiser movies really got lame with their Cenobites.

redshirt
Aug 11, 2007

dr_rat posted:

Man the Hellraiser movies really got lame with their Cenobites.

Budget cuts you know

steinrokkan
Apr 2, 2011

Soiled Meat

She definitely has a crawlspace full of hobo corpses

Also are you morons seriously trying to have a debate on the merits of ai

Earwicker
Jan 6, 2003

sugar free jazz posted:

holy poo poo…….is this true? could gpt 6 or even 8 create a civilization ending virus???

if by "a civilization ending virus" you mean firaxis deciding to replace sid meier with some iteration of gpt then yes

redshirt
Aug 11, 2007

steinrokkan posted:

She definitely has a crawlspace full of hobo corpses

Also are you morons seriously trying to have a debate on the merits of ai

The only PLUS I recognize is the erasure of the human race at the hands of cyberspiders.

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum

MrQwerty posted:

Computers don't think

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.

However, they do have stimulus/response mechanisms, which can be condensed to plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player, they have the coding "I see the player --> I will shoot at the player". Thus, the enemy exhibits behavior indistinguishable from an agent that has the goal of killing the player. Can we agree on that? No, probably not, because they are hard-coded in their behaviors, not intelligently but dumbly/scriptedly choosing to move/shoot in very specific ways. But I'm going to call this a 'goal' anyway-- the programmer/developer wants the bot to threaten the player, and gives them tools to (appear to) do so. Thus, goals can be ascribed to the human programmer, even if the computer program does not have the capability to have goals itself.
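Here's a minimal sketch of that stimulus/response idea in Python -- the class and the rule are made up for illustration, not actual Doom code:

code:

# A hard-coded stimulus/response rule: no thought, just a mapping.
class Imp:
    def tick(self, sees_player: bool) -> str:
        # "I see the player --> I will shoot at the player"
        return "shoot_at_player" if sees_player else "wander"

# From the outside, this is indistinguishable from an agent whose
# 'goal' is to kill the player; the actual goal lives in the programmer.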

Juul-Whip posted:

being able to count very quickly is not a form of intelligence

euphronius posted:

The computer is not “doing math”. It doesn’t even exist in the sense you are fooling yourself into thinking.

I swear to god you must be a marketer

So, if a bot in a game has the goal "find a firing line to lob this grenade directly into the player", a computer will have the ability to calculate the firing trajectory of the grenade down to float precision, within milliseconds, faster than a human. This goes for literally any problem like firing rockets into space in real life, maintaining autopilot in aircraft, or maintaining fuel injectors in a car engine. Thus, I'm going to say computers can "do math" faster than humans. The fact I have to define this term is kind of insane to me. Like, what the gently caress are you even talking about that computers can't do math. Nonsensical statement.
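For concreteness, here's roughly what that grenade calculation looks like. This is just the standard projectile-motion solution with made-up names, not any particular game's code:

code:

import math

def launch_angles(speed, dx, dy, g=9.81):
    # Angles (radians) that land a shot dx meters away and dy meters up,
    # from tan(theta) = (v^2 +/- sqrt(v^4 - g(g*dx^2 + 2*dy*v^2))) / (g*dx).
    disc = speed**4 - g * (g * dx**2 + 2 * dy * speed**2)
    if disc < 0:
        return None  # target out of range at this speed
    root = math.sqrt(disc)
    return (math.atan2(speed**2 - root, g * dx),   # flat, direct shot
            math.atan2(speed**2 + root, g * dx))   # high, lobbed arc

A computer grinds through this in microseconds, every frame, for every bot.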

As I said earlier, you do not have to have classic sentience/sapience in order to problem-solve. If you have a properly defined problem, you can find a solution. I don't want to say this because I feel like I need to define every word in the sentence, but a computer can do basically the same thing- either through rote programming (current-gen game bots who can pathfind to objectives and calculate lead/etc), through guessing/hallucinating (current gen wildly-unreliable LLMs), or through other machine learning methods such as gradient descent and conditional learning.
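As a toy example of that last method: gradient descent really is just one dumb update rule applied over and over (made-up objective, nothing fancy):

code:

def gradient_descent(grad, x, lr=0.1, steps=200):
    # Step downhill repeatedly; no understanding involved, just arithmetic.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3):
print(gradient_descent(lambda x: 2 * (x - 3), x=0.0))  # ~3.0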

So consider the following hypothetical- you and a computer have opposing goals. It may be simple, like "Human player wins this game of chess" against "Computer player wins this game of chess". Hopefully we can agree that in this situation the computer has a huge advantage in completing its goal compared to the human. Right? We can agree that, even absent human-level intelligence, there are goals where, if a human and a computer program find themselves in conflict, the computer is at an advantage. Like, I hope we can agree on that.
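You can see the asymmetry in a game small enough to search completely. Tic-tac-toe stands in for chess here (same principle, tractable size): the machine simply checks every continuation, which no human does in real time.

code:

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    # Score a position by exhaustively playing out every possible game.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0  # draw
    nxt = "O" if player == "X" else "X"
    scores = [minimax(board[:i] + player + board[i+1:], nxt)
              for i, c in enumerate(board) if c == " "]
    return max(scores) if player == "X" else min(scores)

print(minimax(" " * 9, "X"))  # 0: perfect play by both sides is a draw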

As computer power grows, these capabilities grow, and there are fewer and fewer areas where humans have a decided advantage. It is not unreasonable to be wary of this growth, and of the fact that the speeds involved mean a runaway program can create problems faster than they can be foreseen and prevented. Consider something stupid like accidentally running a reformat action on your computer- by the time you realize what the computer is doing, it's too late, your data is gone, and the computer did not need intelligent thought of its own accord to do it.



Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly-implemented power grid managing system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world-- they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly-coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!

Evilreaver fucked around with this message at 20:19 on Apr 27, 2024

Bip Roberts
Mar 29, 2005

Evilreaver posted:

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.

Chill dawg

MrQwerty
Apr 15, 2003

Lol having a philosophy of mind argument about AI in the thread about a man so terrified of AI he gives himself nightmares about it all night every night, then wakes up and goes, "I must make it a reality so I may master it"

Platystemon
Feb 13, 2012

BREADS

digital penitence posted:

Ah yeah, maybe that's where people found it! Honestly it's even funnier if he rented it and didn't even buy it.

https://abracadabranyc.com/products/devils-champion-leather-armor-set?variant=42175676219554

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum

Evilreaver posted:

Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly-implemented power grid managing system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world-- they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly-coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!

Just to expand on this to hopefully catch some replies: it is possible for, say, the power-grid-managing system to intelligently break circuits and route power to avoid overloads. If the system is distributed, it may be difficult to shut down. It may even have the ability to turn a distributed system back on- for example, if the programmer's intention is to make sure the system comes back up as soon as possible after a catastrophic event (i.e., a solar flare blacks out a large area), the system will "want" to "try" to get remote systems back online using its original programming. Can we agree such a situation is plausible?

This may make the system difficult to 'stop': yank the plug out of the wall, and as soon as you plug it back in, the system turns right back on.
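As a sketch of that plug-it-back-in behavior -- the controller command is hypothetical, this is not any real grid software:

code:

import subprocess
import time

def watchdog(cmd=["./grid_controller"], retry_delay=5):
    # Restart the controller forever, no questions asked. Nothing here
    # checks WHY it stopped; an operator killing it merely schedules
    # the next attempt to bring it back online.
    while True:
        subprocess.run(cmd)
        time.sleep(retry_delay)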

Now, if the system is poorly designed or hacked, it may begin to malfunction, for example by blacking out areas that aren't actually at risk (or that the system """believes""" are at risk, given the inputs its risk-evaluation function receives). Now this system is a threat to us-- not an existential threat, sure, but cutting power to a huge area for a non-trivial amount of time will kill people. How quickly can a self-repairing system be turned off? If proper safeguards are not put in place- like if a lazy, cheapest-option programming contractor's first draft is put in a production environment- then real damage and harm can be a consequence.

EDIT: In conclusion, if the situation "Human does not want the computer running" versus "Computer wants to keep running" ever tilts too far in the computer's favor, then that is the real poo poo (keeping in mind I already defined 'wants')

Evilreaver fucked around with this message at 20:36 on Apr 27, 2024

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum

sorry that "computers can't do math" line fuckin sent me, I was owned as hell

Amphigory
Feb 6, 2005

I mean, a lot of that is just akin to a poorly designed gate letting a bunch of cows escape a field

Normally a gate is far better than a human at keeping cows in the field, but a poorly designed one could let them all out. The gate is given the goal of "keep the cows in the field". There's nothing you would even attempt to describe as intelligence in the gate

TotalLossBrain
Oct 20, 2010

Hier graben!
There isn't much difference between automating a manual labor task and a computer doing math.

At some point, a human has had to engineer precisely how that automated task is going to work - a labor step, or a rote repetition in a miles-long series of calculations.
The fact that the computer does this many times faster than a human is precisely the point, as it is with labor automation.
It's just a human-designed widget to do
code:

something
faster than before

Monica Bellucci
Dec 14, 2022

MrQwerty posted:

Lol having a philosophy of mind argument about AI in the thread about a man so terrified of AI he gives himself nightmares about it all night every night, then wakes up and goes, "I must make it a reality so I may master it"

S-stepmotherboard, what are you doing?

Platystemon
Feb 13, 2012

BREADS
If we created a computer system that was as intelligent as a golden retriever, that would be an incredible advancement on the current state of the art.

And yet, we’ve had domestic dogs for tens of millennia, and there’s plenty of stuff that we can’t train them to do.

No amount of training will get a dog to drive a car competently, and no amount of feeding an LLM will get it to do much that it cannot do today.

Platystemon fucked around with this message at 20:43 on Apr 27, 2024

sugar free jazz
Mar 5, 2008

Evilreaver posted:

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.

this is terrifying. is someone working on this and making sure we're safe???

kazil
Jul 24, 2005

Derpmph trial star reporter!

Evilreaver posted:

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct.

Thank

Mulva
Sep 13, 2011
It's about time for my once per decade ban for being a consistently terrible poster.
Yeah, and if the government just decides to send jackboots to put you in a camp, you are also hosed. If the terrible thing happens, it is terrible. I agree. At any given moment for the next 50 years or so, the jackboot thing is going to be a real possibility for everyone, and the AI thing is going to be a real problem for no one.
