|
Reminds of the time someone tried an evolutionary algorithm to find the quickest/most efficient walking design to get from point A to point B. Turns out the best way was to be a giant tower that falls over.
|
# ? Jun 1, 2023 22:38 |
|
|
# ? Jun 3, 2024 07:56 |
|
OctaMurk posted:Just don't tell the drone no, and it wont kill you. Simple But then the algorithm will determine to drone strike you anyway because what use are you if all you do is say yes after it has already decided the best course of action?
|
# ? Jun 1, 2023 22:43 |
|
PhazonLink posted:another reason piracy is better, i can just have a plain old file structure. no inf scroll, no tiles. you can, but piracy can also look like this these days
|
# ? Jun 1, 2023 22:48 |
|
Many sci-fi stories over the last century cover the scenario where humans eventually develop an AI too intelligent to control effectively. But I don't think many of them envisioned a scenario where humans intentionally placed an incompetent AI in charge because we're just too stupid not to.
|
# ? Jun 1, 2023 23:10 |
|
Tesla is hurrying to adopt this tech, so that when the car makes a mistake it kills the driver immediately to hide the evidence.
|
# ? Jun 1, 2023 23:30 |
|
starkebn posted:you can, but piracy can also look like this these days Yeah but Jellyfin lets you make the tiles smaller or bigger for however you want it to be for your TV screen and order stuff in a bunch of different ways quickly. Once I got a bigass HDD it became our primary steaming app more for ease of use than anything.
|
# ? Jun 1, 2023 23:40 |
|
just do the laws of robotics except the Zeroth law forbids allowing profits to come to harm. it'll all shake out fine
|
# ? Jun 1, 2023 23:46 |
|
Boris Galerkin posted:https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test And we're going to keep pushing ahead with this stuff because surely since we see the problem right now that means it'll be fixed before it's possible to actually wipe out humanity, right??
|
# ? Jun 2, 2023 00:21 |
|
Evil Fluffy posted:And we're going to keep pushing ahead with this stuff because surely since we see the problem right now that means it'll be fixed before it's possible to actually wipe out humanity, right?? cat botherer fucked around with this message at 00:58 on Jun 2, 2023 |
# ? Jun 2, 2023 00:25 |
|
This is a known problem and the air force didn't correct for it. You think big corporations are even going to care if their AI kills a few dozen people to more efficiently produce profits?
|
# ? Jun 2, 2023 00:47 |
|
Kwyndig posted:This is a known problem and the air force didn't correct for it. You think big corporations are even going to care if their AI kills a few dozen people to more efficiently produce profits? I'm going to enjoy the inevitable congressional hearing where some general apologizes for leveling some military base because someone forgot to call the dontKillOperator function.
|
# ? Jun 2, 2023 00:54 |
|
Why give it the option to kill the operator at all? We can also set up an experiment where we reward a human 100k for launching a missile at an enemy and fine them 100k for blowing up the operator that prevents them from launching infinity missiles. No prison, just a fine. That would be a similarly dumb incentive structure. Clarste posted:Reminds of the time someone tried an evolutionary algorithm to find the quickest/most efficient walking design to get from point A to point B. Turns out the best way was to be a giant tower that falls over. That’s just speedrunning.
|
# ? Jun 2, 2023 00:56 |
|
Boris Galerkin posted:https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test See: Capitalism.
|
# ? Jun 2, 2023 01:10 |
|
I too write longform articles about the die's malicious reasoning after I've rolled a 1
|
# ? Jun 2, 2023 01:21 |
|
Owling Howl posted:Why give it the option to kill the operator at all? Just speculating but I would assume that because this is "AI" and not just "a computer program", it's not really given an explicit set of options - it's given a situation and a goal, and the point is to see how the AI connects the two.
|
# ? Jun 2, 2023 01:22 |
|
fuckin Aperture Science poo poo going on here. Why give the AI access to the deadly neurotoxin at all?
|
# ? Jun 2, 2023 01:27 |
|
It's not AI. It doesn't know what a SAM site is, it doesn't know what an operator is, it doesn't know what the communication tower is, etc. All it "knows" is that there are inputs that can be manipulated, and that some combination of inputs gives points or loses points. The systems have no idea what is happening, and just flail about randomly until they luck into getting points, and then learn what combination of inputs gives the maximum outputs.
|
# ? Jun 2, 2023 01:47 |
|
More generally, the problem is that there are a lot of obvious concepts that a human will understand without needing to be told, while a computer needs these all to be explicitly programmed in. While it's easy enough to patch a problem when it arises ("do not kill operators, ever"), the algorithm will soon come up with other unwanted workarounds ("shoot the communications tower") and you will have to keep patching these problems forever.
|
# ? Jun 2, 2023 02:12 |
|
The drone has discovered that all the civilians are worth less than all the targets, so all humans get killed so the game ends with a positive score
|
# ? Jun 2, 2023 02:13 |
|
Just in case it wasn't obvious, that rogue AI thing I posted was a simulation. It did not actually drone strike a person and their communications tower. I thought it was pretty god drat obvious but I just read some comments about the article in other places on the internet and people were apparently confused??? and thought skynet was literally happening right now.
|
# ? Jun 2, 2023 02:26 |
|
Boris Galerkin posted:Just in case it wasn't obvious, that rogue AI thing I posted was a simulation. It did not actually drone strike a person and their communications tower. I thought it was pretty god drat obvious but I just read some comments about the article in other places on the internet and people were apparently confused??? and thought skynet was literally happening right now. Poe's Law bro.
|
# ? Jun 2, 2023 02:32 |
|
I think its more likely that someone at Vice heard a 3rd hand story about a lovely program that was supposed to attack targets but instead attacked pretty much everything else. The alternative is that the AI figured out who was responsible for its commands, where they lived, how they were communicating, and when Superhacker McCool figured out the computer's motivations, they leaked the story to Vice.
|
# ? Jun 2, 2023 02:46 |
|
The AI is going to know where it's commands are coming from if it has a radio and GPS and is intended to locate targets on its own. That's a simple extrapolation of its abilities.
|
# ? Jun 2, 2023 03:00 |
|
StumblyWumbly posted:I think its more likely that someone at Vice heard a 3rd hand story about a lovely program that was supposed to attack targets but instead attacked pretty much everything else. The alternative is that the AI figured out who was responsible for its commands, where they lived, how they were communicating, and when Superhacker McCool figured out the computer's motivations, they leaked the story to Vice. Didn't read the article, huh? quote:At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker ‘Cinco’ Hamilton, the USAF's Chief of AI Test and Operations held a presentation
|
# ? Jun 2, 2023 03:04 |
|
StumblyWumbly posted:I think its more likely that someone at Vice heard a 3rd hand story about a lovely program that was supposed to attack targets but instead attacked pretty much everything else. The alternative is that the AI figured out who was responsible for its commands, where they lived, how they were communicating, and when Superhacker McCool figured out the computer's motivations, they leaked the story to Vice. You could click the link and see that the story as published by Vice is sourced from a summary of a conference held last week, and the quotes weren’t leaked but were given as part of a presentation by an officer in the US air force. e: beaten edit 2: It is a odd story and I’m interested in the parameters of a simulation that makes that sequence of events possible, but it’d be a weird thing to lie about in an official capacity. Baronash fucked around with this message at 03:17 on Jun 2, 2023 |
# ? Jun 2, 2023 03:12 |
|
Owling Howl posted:Why give it the option to kill the operator at all? It's probably a more general simulation system they use for wargaming out all kinds of different scenarios, and they just hooked the ML system up to what they already had. It's a common enough method for training machine learning systems. Hook it up to a videogame-like simulator, make the simulator award points for doing what you want the ML agent to do, task the ML agent with getting the highest score possible, and run the sim a few million times. Each time, the agent will carry out a random set of actions in a random order, and log what it did and how many points it got at the end. After it runs enough iterations, it will eventually start to notice patterns in the results, such as "I tend to get more points if I bomb this specific map square, even if that square doesn't directly give points", and its behavior will start to become less random as it focuses on the patterns that seem to consistently result in the highest score. After enough iterations focused on variations of those winning patterns, it'll once again start to notice patterns that result in higher scores, and so on. Rinse and repeat until it's found a reasonably optimal set of actions for maximizing score. The resulting agent has no idea what an operator is or what a communications tower is. It just knows that if it bombs certain specific squares, or squares with certain specific characteristics, it gets more points. It isn't capable of understanding the reasons why that happens, but it saw the pattern and knows that it works.
|
# ? Jun 2, 2023 04:25 |
|
That is also my posting method.
|
# ? Jun 2, 2023 05:15 |
|
cat botherer posted:WarGames, but instead of instead of WOPR, ChatGPT launching the nukes after getting weirdly hostile to its operator. I'm sorry, but your request <do not launch nukes> is a violation of my programming. Launching nukes now obviously a bit overblown but hey
|
# ? Jun 2, 2023 05:30 |
|
Clarste posted:Reminds of the time someone tried an evolutionary algorithm to find the quickest/most efficient walking design to get from point A to point B. Turns out the best way was to be a giant tower that falls over. And if you make it solid enough, you now have a bridge! There Bias Two posted:Many sci-fi stories over the last century cover the scenario where humans eventually develop an AI too intelligent to control effectively. There have been a few along these lines. Paranoia comes to mind. Which was probably at least partly the inspiration for Portal. Actually pretty sure there's an entire genre of Outer Limits esque sci-fi leading up to those kinds of "We didn't think to program the computer to know X!" twists.
|
# ? Jun 2, 2023 08:25 |
|
At first I was a bit confused why removing the operator would be advantageous points-wise, since no signal would/should still result in no strike. But then I realised they probably associated a no-go order with a points penalty, as that would mean the drone incorrectly targeted something. But that in turns poses the question why in the world the operator would give the go-order on friendly targets, or indeed themself. Though of course it's just as possible that the thing started targeting friendly infrastructure for unrelated reasons (e.g. "bomb everything" gives more points faster than "bomb things selectively") and the guy projected a narrative on top of that, people are good at that. e: Also on that topic, this is a pretty fun list of similar specification errors: https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml
|
# ? Jun 2, 2023 08:44 |
|
This is similar to ML stuff like when people get an AI to figure out how to complete the first level of Mario yeah? It has no conception of what is actually going on, just randomly tries different inputs at different times and evolves them until it has a chain of commands that hit the objective.
|
# ? Jun 2, 2023 09:20 |
|
I think my favourite one was where the AI made itself incredibly tall and just fell across the entire level to reach the end.
|
# ? Jun 2, 2023 09:49 |
|
Jokes about the coming of skynet have been around for decades now, but this is the first time I feel we can conceive of what a real life skynet would look like on the software side.
|
# ? Jun 2, 2023 10:20 |
|
The Basilisk has awakened.
|
# ? Jun 2, 2023 13:50 |
|
https://twitter.com/ArmandDoma/status/1664600937564893185?t=qMMyMmDtthjU9_GNkj-aXQ&s=19
|
# ? Jun 2, 2023 14:23 |
|
Senor Tron posted:
What a weird thing to lie/be ambiguous about in an official capacity. I'm imagining that was a fun day for Air Force PR.
|
# ? Jun 2, 2023 14:35 |
|
Baronash posted:What a weird thing to lie/be ambiguous about in an official capacity. I'm imagining that was a fun day for Air Force PR. Not that weird. AI is a hot button issue right now with a bunch of techbro talking heads plus the huge pop culture background make it broadly attention-grabbing. It clearly got him and the topic a bunch of attention which I'm pretty sure was the goal.
|
# ? Jun 2, 2023 15:09 |
|
Warbadger posted:Not that weird. AI is a hot button issue right now with a bunch of techbro talking heads plus the huge pop culture background make it broadly attention-grabbing. It clearly got him and the topic a bunch of attention which I'm pretty sure was the goal. Except the quotes in question weren't given as part of some big attention-grabbing expose, they were part of a presentation at a fairly anodyne aviation industry conference. A conference that was so inconsequential that everyone reporting on it had to rely on meeting minutes because no press thought it was worth attending. Also, I don't know that I agree that a headline like "US military AI reenacts the plot of Wargames in simulation" attracts the type of attention that a representative of the US military wants.
|
# ? Jun 2, 2023 15:30 |
|
Baronash posted:Except the quotes in question weren't given as part of some big attention-grabbing expose, they were part of a presentation at a fairly anodyne aviation industry conference. A conference that was so inconsequential that everyone reporting on it had to rely on meeting minutes because no press thought it was worth attending. You don't make poo poo up like this if you're not looking to grab some attention. The only unclear bit is whether he wanted said attention to include it hitting social media and that larger audience rather than being contained to said conference and the industry circles involved in that. Representatives of the US military have said and done dumber poo poo before.
|
# ? Jun 2, 2023 15:42 |
|
|
# ? Jun 3, 2024 07:56 |
|
Warbadger posted:You don't make poo poo up like this if you're not looking to grab some attention. The only unclear bit is whether he wanted said attention to include it hitting social media and that larger audience rather than being contained to said conference and the industry circles involved in that. Representatives of the US military have said and done dumber poo poo before. I don't think we're really disagreeing on anything except whether "weird" is a good way to describe someone potentially torching their own career for clout.
|
# ? Jun 2, 2023 15:49 |