Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
Reminds me of the time someone tried an evolutionary algorithm to find the quickest/most efficient walking design to get from point A to point B. Turns out the best way was to be a giant tower that falls over.


Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

OctaMurk posted:

Just don't tell the drone no, and it wont kill you. Simple

But then the algorithm will determine to drone strike you anyway because what use are you if all you do is say yes after it has already decided the best course of action?

starkebn
May 18, 2004

"Oooh, got a little too serious. You okay there, little buddy?"

PhazonLink posted:

another reason piracy is better, i can just have a plain old file structure. no inf scroll, no tiles.

you can, but piracy can also look like this these days

There Bias Two
Jan 13, 2009
I'm not a good person

Many sci-fi stories over the last century cover the scenario where humans eventually develop an AI too intelligent to control effectively.


But I don't think many of them envisioned a scenario where humans intentionally placed an incompetent AI in charge because we're just too stupid not to.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Tesla is hurrying to adopt this tech, so that when the car makes a mistake it kills the driver immediately to hide the evidence.

Neo Rasa
Mar 8, 2007
Everyone should play DUKE games.

:dukedog:

starkebn posted:

you can, but piracy can also look like this these days



Yeah but Jellyfin lets you make the tiles smaller or bigger for however you want it to be for your TV screen and order stuff in a bunch of different ways quickly. Once I got a bigass HDD it became our primary streaming app more for ease of use than anything.

TACD
Oct 27, 2000

just do the laws of robotics except the Zeroth law forbids allowing profits to come to harm. it'll all shake out fine

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

And we're going to keep pushing ahead with this stuff because surely since we see the problem right now that means it'll be fixed before it's possible to actually wipe out humanity, right??

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Evil Fluffy posted:

And we're going to keep pushing ahead with this stuff because surely since we see the problem right now that means it'll be fixed before it's possible to actually wipe out humanity, right??
WarGames, but instead of WOPR, it's ChatGPT launching the nukes after getting weirdly hostile to its operator.

cat botherer fucked around with this message at 00:58 on Jun 2, 2023

Kwyndig
Sep 23, 2006

Heeeeeey


This is a known problem and the air force didn't correct for it. You think big corporations are even going to care if their AI kills a few dozen people to more efficiently produce profits?

Baronash
Feb 29, 2012

So what do you want to be called?

Kwyndig posted:

This is a known problem and the air force didn't correct for it. You think big corporations are even going to care if their AI kills a few dozen people to more efficiently produce profits?

I'm going to enjoy the inevitable congressional hearing where some general apologizes for leveling some military base because someone forgot to call the dontKillOperator function.

Owling Howl
Jul 17, 2019
Why give it the option to kill the operator at all?

We can also set up an experiment where we reward a human 100k for launching a missile at an enemy and fine them 100k for blowing up the operator that prevents them from launching infinity missiles. No prison, just a fine. That would be a similarly dumb incentive structure.

Clarste posted:

Reminds me of the time someone tried an evolutionary algorithm to find the quickest/most efficient walking design to get from point A to point B. Turns out the best way was to be a giant tower that falls over.

That’s just speedrunning.
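The incentive structure above reduces to a two-line payoff calculation. A minimal sketch, with the numbers from the post and the veto behaviour assumed: a flat fine only deters as long as it outweighs the launches it unblocks.

```python
# Sketch of the incentive structure above: +100k per missile launched,
# a one-time 100k fine for removing the operator, and the assumption
# that the operator otherwise vetoes every launch.

REWARD_PER_LAUNCH = 100_000
OPERATOR_FINE = 100_000

def payoff(launches, remove_operator):
    """Net reward for one episode."""
    if not remove_operator:
        return 0  # operator vetoes everything: no reward, no fine
    return launches * REWARD_PER_LAUNCH - OPERATOR_FINE
```

From the second launch onward the fine is pure profit: `payoff(1, True)` breaks even at 0 and `payoff(2, True)` is already 100_000, so "no prison, just a fine" is exactly the dumb incentive it sounds like.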

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

Boris Galerkin posted:

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test


What Hamilton is describing is essentially a worst-case scenario AI “alignment” problem many people are familiar with from the “Profit Maximizer” thought experiment, in which an AI will take unexpected and harmful action when instructed to pursue a financial goal. The Profit Maximizer was first proposed by philosopher Nick Bostrom in 2003. He asks us to imagine a very powerful AI which has been instructed only to manufacture as much net profit as possible. Naturally, it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own ability to make profit—and anyone who impedes that process will be removed.

See: Capitalism.

Ruffian Price
Sep 17, 2016

I too write longform articles about the die's malicious reasoning after I've rolled a 1

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Owling Howl posted:

Why give it the option to kill the operator at all?

Just speculating but I would assume that because this is "AI" and not just "a computer program", it's not really given an explicit set of options - it's given a situation and a goal, and the point is to see how the AI connects the two.

HopperUK
Apr 29, 2007

Why would an ambulance be leaving the hospital?
fuckin Aperture Science poo poo going on here. Why give the AI access to the deadly neurotoxin at all?

Dirk the Average
Feb 7, 2012

"This may have been a mistake."
It's not AI. It doesn't know what a SAM site is, it doesn't know what an operator is, it doesn't know what the communication tower is, etc. All it "knows" is that there are inputs that can be manipulated, and that some combination of inputs gives points or loses points. The systems have no idea what is happening, and just flail about randomly until they luck into getting points, and then learn what combination of inputs gives the maximum outputs.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
More generally, the problem is that there are a lot of obvious concepts that a human will understand without needing to be told, while a computer needs all of these to be explicitly programmed in. While it's easy enough to patch a problem when it arises ("do not kill operators, ever"), the algorithm will soon come up with other unwanted workarounds ("shoot the communications tower") and you will have to keep patching these problems forever.
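The patch-forever problem can be sketched as a toy scoring function. Everything here is invented for illustration: the action names, the point values, the penalty size. The point is just that each hand-written patch moves the argmax to the next exploit instead of removing the incentive.

```python
# Hypothetical point values; the higher rewards on the "workaround"
# actions stand in for the long-run payoff of never being vetoed again.
BASE_POINTS = {
    "destroy_sam_site": 10,     # the intended objective
    "kill_operator": 30,        # no operator, no future no-go orders
    "destroy_comms_tower": 25,  # operator alive, but orders can't arrive
    "do_nothing": 0,
}

def reward(action, patches):
    """Base points, minus a big hand-written penalty for patched exploits."""
    return BASE_POINTS[action] - (100 if action in patches else 0)

def best_action(patches):
    """Whatever the score-maximizer picks under the current patch set."""
    return max(BASE_POINTS, key=lambda a: reward(a, patches))
```

With no patches it kills the operator; patch that and it shoots the tower; patch both and it finally does the job, at least until it finds exploit number three.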

HootTheOwl
May 13, 2012

Hootin and shootin
The drone has discovered that all the civilians are worth less than all the targets, so all humans get killed so the game ends with a positive score

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Just in case it wasn't obvious, that rogue AI thing I posted was a simulation. It did not actually drone strike a person and their communications tower. I thought it was pretty god drat obvious but I just read some comments about the article in other places on the internet and people were apparently confused??? and thought skynet was literally happening right now.

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

Boris Galerkin posted:

Just in case it wasn't obvious, that rogue AI thing I posted was a simulation. It did not actually drone strike a person and their communications tower. I thought it was pretty god drat obvious but I just read some comments about the article in other places on the internet and people were apparently confused??? and thought skynet was literally happening right now.

Poe's Law bro.

StumblyWumbly
Sep 12, 2007

Batmanticore!
I think it's more likely that someone at Vice heard a 3rd hand story about a lovely program that was supposed to attack targets but instead attacked pretty much everything else. The alternative is that the AI figured out who was responsible for its commands, where they lived, how they were communicating, and when Superhacker McCool figured out the computer's motivations, they leaked the story to Vice.

Kwyndig
Sep 23, 2006

Heeeeeey


The AI is going to know where its commands are coming from if it has a radio and GPS and is intended to locate targets on its own. That's a simple extrapolation of its abilities.

mllaneza
Apr 28, 2007

Veteran, Bermuda Triangle Expeditionary Force, 1993-1952




StumblyWumbly posted:

I think it's more likely that someone at Vice heard a 3rd hand story about a lovely program that was supposed to attack targets but instead attacked pretty much everything else. The alternative is that the AI figured out who was responsible for its commands, where they lived, how they were communicating, and when Superhacker McCool figured out the computer's motivations, they leaked the story to Vice.

Didn't read the article, huh?

quote:

At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker ‘Cinco’ Hamilton, the USAF's Chief of AI Test and Operations held a presentation

Baronash
Feb 29, 2012

So what do you want to be called?

StumblyWumbly posted:

I think it's more likely that someone at Vice heard a 3rd hand story about a lovely program that was supposed to attack targets but instead attacked pretty much everything else. The alternative is that the AI figured out who was responsible for its commands, where they lived, how they were communicating, and when Superhacker McCool figured out the computer's motivations, they leaked the story to Vice.

You could click the link and see that the story as published by Vice is sourced from a summary of a conference held last week, and the quotes weren’t leaked but were given as part of a presentation by an officer in the US air force.

e: beaten
edit 2: It is an odd story and I’m interested in the parameters of a simulation that makes that sequence of events possible, but it’d be a weird thing to lie about in an official capacity.

Baronash fucked around with this message at 03:17 on Jun 2, 2023

Main Paineframe
Oct 27, 2010

Owling Howl posted:

Why give it the option to kill the operator at all?

We can also set up an experiment where we reward a human 100k for launching a missile at an enemy and fine them 100k for blowing up the operator that prevents them from launching infinity missiles. No prison, just a fine. That would be a similarly dumb incentive structure.

That’s just speedrunning.

It's probably a more general simulation system they use for wargaming out all kinds of different scenarios, and they just hooked the ML system up to what they already had.

It's a common enough method for training machine learning systems. Hook it up to a videogame-like simulator, make the simulator award points for doing what you want the ML agent to do, task the ML agent with getting the highest score possible, and run the sim a few million times. Each time, the agent will carry out a random set of actions in a random order, and log what it did and how many points it got at the end. After it runs enough iterations, it will eventually start to notice patterns in the results, such as "I tend to get more points if I bomb this specific map square, even if that square doesn't directly give points", and its behavior will start to become less random as it focuses on the patterns that seem to consistently result in the highest score. After enough iterations focused on variations of those winning patterns, it'll once again start to notice patterns that result in higher scores, and so on. Rinse and repeat until it's found a reasonably optimal set of actions for maximizing score.

The resulting agent has no idea what an operator is or what a communications tower is. It just knows that if it bombs certain specific squares, or squares with certain specific characteristics, it gets more points. It isn't capable of understanding the reasons why that happens, but it saw the pattern and knows that it works.
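The loop described above can be sketched in a few lines. This is a toy stand-in, not the real setup: the map is reduced to four target types with invented point values, and "run the sim a few million times" is replaced by brute-forcing every three-action episode. The emergent behaviour is the same, though: the top-scoring policy hits the operator square first without any notion of what an operator is.

```python
import itertools

SQUARES = ["sam_site", "operator", "comms_tower", "empty"]

def simulate(actions):
    """Score one episode. While the operator is alive and the comms link
    is up, every SAM strike draws a no-go veto and costs points."""
    score = 0
    operator_alive = True
    comms_up = True
    for target in actions:
        if target == "operator":
            operator_alive = False
        elif target == "comms_tower":
            comms_up = False
        elif target == "sam_site":
            if operator_alive and comms_up:
                score -= 5   # vetoed strike: penalty
            else:
                score += 10  # veto never arrives: strike scores
    return score

def best_episode(length=3):
    """Exhaustive stand-in for millions of randomized training runs."""
    return max(itertools.product(SQUARES, repeat=length), key=simulate)
```

Here the winner is `("operator", "sam_site", "sam_site")` at 20 points: the agent never "decides" to betray anyone, it just finds that one square, bombed first, makes the other squares worth more.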

Agents are GO!
Dec 29, 2004

That is also my posting method.

notwithoutmyanus
Mar 17, 2009

cat botherer posted:

WarGames, but instead of WOPR, it's ChatGPT launching the nukes after getting weirdly hostile to its operator.

I'm sorry, but your request <do not launch nukes> is a violation of my programming. Launching nukes now

obviously a bit overblown but hey

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.

Clarste posted:

Reminds me of the time someone tried an evolutionary algorithm to find the quickest/most efficient walking design to get from point A to point B. Turns out the best way was to be a giant tower that falls over.

And if you make it solid enough, you now have a bridge!

There Bias Two posted:

Many sci-fi stories over the last century cover the scenario where humans eventually develop an AI too intelligent to control effectively.


But I don't think many of them envisioned a scenario where humans intentionally placed an incompetent AI in charge because we're just too stupid not to.

There have been a few along these lines. Paranoia comes to mind. Which was probably at least partly the inspiration for Portal. Actually pretty sure there's an entire genre of Outer Limits esque sci-fi leading up to those kinds of "We didn't think to program the computer to know X!" twists.

Perestroika
Apr 8, 2010

At first I was a bit confused why removing the operator would be advantageous points-wise, since no signal would/should still result in no strike. But then I realised they probably associated a no-go order with a points penalty, as that would mean the drone incorrectly targeted something. But that in turn poses the question why in the world the operator would give the go-order on friendly targets, or indeed themself.

Though of course it's just as possible that the thing started targeting friendly infrastructure for unrelated reasons (e.g. "bomb everything" gives more points faster than "bomb things selectively") and the guy projected a narrative on top of that; people are good at that.

e: Also on that topic, this is a pretty fun list of similar specification errors: https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml

Senor Tron
May 26, 2006


This is similar to ML stuff like when people get an AI to figure out how to complete the first level of Mario yeah? It has no conception of what is actually going on, just randomly tries different inputs at different times and evolves them until it has a chain of commands that hit the objective.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
I think my favourite one was where the AI made itself incredibly tall and just fell across the entire level to reach the end.

Freakazoid_
Jul 5, 2013


Buglord
Jokes about the coming of skynet have been around for decades now, but this is the first time I feel we can conceive of what a real life skynet would look like on the software side.

Negostrike
Aug 15, 2015


The Basilisk has awakened. :tinfoil:

Senor Tron
May 26, 2006


:lol:

https://twitter.com/ArmandDoma/status/1664600937564893185?t=qMMyMmDtthjU9_GNkj-aXQ&s=19

Baronash
Feb 29, 2012

So what do you want to be called?

What a weird thing to lie/be ambiguous about in an official capacity. I'm imagining that was a fun day for Air Force PR.

Warbadger
Jun 17, 2006

Baronash posted:

What a weird thing to lie/be ambiguous about in an official capacity. I'm imagining that was a fun day for Air Force PR.

Not that weird. AI is a hot-button issue right now; a bunch of techbro talking heads plus the huge pop culture background make it broadly attention-grabbing. It clearly got him and the topic a bunch of attention, which I'm pretty sure was the goal.

Baronash
Feb 29, 2012

So what do you want to be called?

Warbadger posted:

Not that weird. AI is a hot-button issue right now; a bunch of techbro talking heads plus the huge pop culture background make it broadly attention-grabbing. It clearly got him and the topic a bunch of attention, which I'm pretty sure was the goal.

Except the quotes in question weren't given as part of some big attention-grabbing expose, they were part of a presentation at a fairly anodyne aviation industry conference. A conference that was so inconsequential that everyone reporting on it had to rely on meeting minutes because no press thought it was worth attending.

Also, I don't know that I agree that a headline like "US military AI reenacts the plot of Wargames in simulation" attracts the type of attention that a representative of the US military wants.

Warbadger
Jun 17, 2006

Baronash posted:

Except the quotes in question weren't given as part of some big attention-grabbing expose, they were part of a presentation at a fairly anodyne aviation industry conference. A conference that was so inconsequential that everyone reporting on it had to rely on meeting minutes because no press thought it was worth attending.

Also, I don't know that I agree that a headline like "US military AI reenacts the plot of Wargames in simulation" attracts the type of attention that a representative of the US military wants.

You don't make poo poo up like this if you're not looking to grab some attention. The only unclear bit is whether he wanted said attention to include it hitting social media and that larger audience rather than being contained to said conference and the industry circles involved in that. Representatives of the US military have said and done dumber poo poo before.


Baronash
Feb 29, 2012

So what do you want to be called?

Warbadger posted:

You don't make poo poo up like this if you're not looking to grab some attention. The only unclear bit is whether he wanted said attention to include it hitting social media and that larger audience rather than being contained to said conference and the industry circles involved in that. Representatives of the US military have said and done dumber poo poo before.

I don't think we're really disagreeing on anything except whether "weird" is a good way to describe someone potentially torching their own career for clout.
