Kerning Chameleon
Apr 8, 2015

by Cyrano4747

AFancyQuestionMark posted:

"Have a chance at all" is an interesting interpretation of a 10-1 score against top-level players. The matches demonstrate that it would 100% beat most human players in those scenarios, so I don't know how you're getting the "fundamentally inadequate" reading from this.

Yes, all the scenarios of one specific map, with only one opponent, playing only one specific race each. I, too, only ever commute to work on a single, unchanging road, with the exact same vehicles on the road, in the exact same weather conditions, every day.

Which is great if you live in Phoenix any time other than rush hour, I suppose.


AFancyQuestionMark
Feb 19, 2017

Long time no see.

Kerning Chameleon posted:

Yes, all the scenarios of one specific map, with only one opponent, playing only one specific race each. I, too, only ever commute to work on a single, unchanging road, with the exact same vehicles on the road, in the exact same weather conditions, every day.

Which is great if you live in Phoenix any time other than rush hour, I suppose.

First of all, I am not sure why DeepMind has been drawn into the automated-driving debate here, since its research isn't really aiming for that at all.

Now, for Starcraft, there is nothing preventing the exact same NN architecture and training techniques from generalizing to other maps and other races. Presumably, the even larger number of states and contexts would greatly increase the training time necessary for AlphaStar to play at a pro level, but it isn't some unattainable distant goal requiring fundamental changes to their work. I wouldn't be surprised if they ran another series of showcase matches on different maps, with different races, within a couple of months' time.

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
Running again with a different map or race would probably be trivial, but creating an agent that can itself handle multiple maps and race matchups might not be. The fact that they switched out agents every game might be because each agent is relatively inflexible as far as expectations and preferences go.

edit: actually, what might be interesting is how well it can adapt to modest novel changes that humans adjust to easily. For example, most SC2 1v1 ladder maps are very similar in overall structure, to the point that when a new season with new maps starts, I barely even need to adjust my builds most of the time. Can AlphaStar play competently on a brand-new map it's never seen before that is stylistically similar to ones it has trained on?

Cicero fucked around with this message at 13:37 on Jan 28, 2019

AFancyQuestionMark
Feb 19, 2017

Long time no see.

Cicero posted:

Running again with a different map or race would probably be trivial, but creating an agent that can itself handle multiple maps and race matchups might not be. The fact that they switched out agents every game might be because each agent is relatively inflexible as far as expectations and preferences go.


There is actually a relatively small number (compared to typical computational challenges) of ladder 1v1 maps and race matchups on those maps. Couldn't they just train, say, 30 different AlphaStar agents, each on a particular map with a particular matchup like they've already done with Catalyst PvP, and then construct a "super-agent" that simply employs the appropriate agent when it sees what the matchup is? Wouldn't that be functionally identical to some larger network trained on all maps with all matchups?
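The "super-agent" being proposed here can be sketched as a plain lookup table over specialist agents. This is a hypothetical illustration of the dispatch idea, not DeepMind's actual API; all names and types below are invented placeholders.

```python
from typing import Callable, Dict, Tuple

Observation = dict  # stand-in for a game observation
Action = str        # stand-in for an in-game action

class SuperAgent:
    """Dispatches to a specialist agent trained for one (map, matchup) pair."""

    def __init__(self, specialists: Dict[Tuple[str, str], Callable[[Observation], Action]]):
        self.specialists = specialists

    def act(self, map_name: str, matchup: str, obs: Observation) -> Action:
        # Hard dispatch: use the specialist trained for exactly this
        # (map, matchup) pair. Anything unseen raises KeyError, which is
        # the generalization objection raised in the replies.
        return self.specialists[(map_name, matchup)](obs)

specialists = {("Catalyst", "PvP"): lambda obs: "open stalker-heavy"}
super_agent = SuperAgent(specialists)
print(super_agent.act("Catalyst", "PvP", {}))  # dispatches to the PvP specialist
# super_agent.act("NewMap", "PvT", {})         # would raise KeyError
```

The design choice the thread is debating is exactly the KeyError case: a dispatch table is only as general as its keys, whereas a single network trained across maps might degrade gracefully on a map it has never seen.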

Main Paineframe
Oct 27, 2010

Cicero posted:

(Sorry if this just sounds like gobbledygook to anyone who's never played Starcraft)

DeepMind just did an event where they showed their new AI agent, AlphaStar, playing Starcraft 2 vs pros. Prior to this, AIs in Starcraft were sometimes very good at executing one strategy, but only if you didn't know it was coming, because they were very exploitable if you picked out the build's weakness. More general-purpose AIs were mostly quite bad; even a moderately competitive regular gamer who ladders frequently would trounce them quite easily.

DeepMind, of course, is the Alphabet subsidiary that made AlphaGo, the world-dominating Go-playing AI that shocked the Go world when it beat the best players in the world. Starcraft is quite a jump up in complexity for an AI, for two main reasons: imperfect information, and the state space. Go and chess are perfect-information games; you can see the entire game state at all times. Starcraft has a fog of war, so you have to infer/guess at what your opponent is doing a lot of the time (although of course you scout occasionally to see what's actually happening). Getting an AI to guess is hard.

The decision space in Go, while large, pales in comparison to a game like Starcraft 2. An RTS like SC2 just has an astronomically larger decision space to play in compared to a discrete, turn-based game with relatively straightforward rules. At any given time in a game like Go, your possible actions number a few hundred, tops. With an RTS... just between having dozens of buildings and units, times each of those usually capable of several different actions, times those often being targetable across at least tens of thousands of effectively unique positions on the map, you're looking at any one action being chosen out of theoretically millions of options. Then multiply that by the fact that, because each 'turn' (game tick) is individually inconsequential in real time, you have to choose potentially several of those actions every second (human pro players can spike to 8-10 actions per second during an intense fight). It gets pretty crazy, from a computational perspective.

The branching factor (average number of legal moves at any given time) for chess is 35, according to the googling I just did. For Go, it's 250, which is a big reason why Go was so much harder than chess for AIs to get good at, and why a lot of people thought pro-level Go AIs were still a decade away when DeepMind unveiled AlphaGo. At a branching factor of 35, looking 5 turns ahead exhaustively means exploring about 50 million possible states. At a branching factor of 250, looking 5 turns ahead exhaustively means exploring just under a trillion states, so being able to look ahead in Go meant having to prune the state space/decision tree much more aggressively. Even if you are unreasonably conservative with Starcraft and assume only a hundred thousand possible actions, with, say, effectively five turns a second (in real games pro players often do more actions than that per second, but 300 APM is a reasonable average for top-tier players), you have 10,000,000,000,000,000,000,000,000 possible states after one second. After five seconds, you'd have 1E125 possible states, 1 followed by 125 zeroes, or astronomically more possible states than there are atoms in the universe. So you'd need a way to very aggressively bucket states that are effectively the same, without losing nuance that might be important in odd cases. Really, you'd need a completely different way to explore possible strategies; I'd be shocked if what they did looked very similar to AlphaGo, beyond the obvious things of still using a neural net, reinforcement learning, etc.
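For what it's worth, the arithmetic in the quoted paragraph above checks out. A quick back-of-the-envelope script (the branching factors are the commonly cited averages the post uses, and the SC2 numbers are its deliberately conservative guesses, not measured values):

```python
# Sanity-check of the state-count arithmetic in the quoted paragraph.
chess_branching = 35
go_branching = 250
sc2_actions = 100_000   # the post's "unreasonably conservative" action count
sc2_turns_per_sec = 5   # ~300 APM

print(f"chess, 5 plies: {chess_branching ** 5:,}")   # ~52.5 million
print(f"go, 5 plies:    {go_branching ** 5:,}")      # ~977 billion
# Exponent of 10 via digit count, since the bases are powers of 10 here.
print(f"SC2, 1 second:  10^{len(str(sc2_actions ** sc2_turns_per_sec)) - 1}")
print(f"SC2, 5 seconds: 10^{len(str(sc2_actions ** (sc2_turns_per_sec * 5))) - 1}")
```

So "about 50 million", "just under a trillion", 10^25, and 10^125 are all consistent with the stated assumptions.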

To elaborate on this a bit more: at most, during Go, an AI has to see a few hundred turns into the future, because the board fills up over time. At a tick rate of ~22/second, a few hundred 'turns' in Starcraft amounts to maybe a dozen seconds, and you obviously have to be able to plan further ahead than that. MUCH further ahead. But without getting so bogged down in long-term planning that you lose the moment-to-moment fights.

The state of the game is much more complicated, too. Each 'unit' in Go is identical, and each of the board's 361 (19x19) spaces can essentially only have three possible states: empty, black stone, or white stone. In contrast, for every unit in SC2 you're looking at at least hundreds, maybe thousands, of unique X and Y positions, plus there are dozens of different unit types, each with its own behavior. And each unit, in addition to position, may also have variables tracking its current health, energy (mana), ability cooldowns, buff/debuff status (of which there may be multiple), animation/direction state, and upgrade status. You could probably fit an entire Go board, data-wise, in the number of bits it takes to hold a handful of Starcraft units.
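That last data-size comparison is roughly right. A sketch, where the per-unit field widths are invented for illustration (they are assumptions, not Blizzard's actual encoding):

```python
import math

# Information content of an entire Go board: 361 intersections,
# 3 states each (empty / black / white).
go_board_bits = 361 * math.log2(3)   # ~572 bits, i.e. ~72 bytes

# Rough guess at the state of ONE SC2 unit. Field widths are invented
# assumptions for illustration, not Blizzard's actual encoding.
unit_field_bits = {
    "x_position": 16, "y_position": 16, "unit_type": 8,
    "health": 11, "energy": 8, "cooldowns": 16,
    "buffs_debuffs": 16, "facing_and_animation": 8, "upgrades": 8,
}
unit_bits = sum(unit_field_bits.values())   # 107 bits per unit

print(f"Go board: ~{go_board_bits:.0f} bits")
print(f"One SC2 unit: {unit_bits} bits")
print(f"Units per Go board: {go_board_bits / unit_bits:.1f}")
```

Under these assumptions a whole Go board fits in the space of roughly five units, so "a handful" is about right.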

On the other hand, the difference between (X, Y) and (X+1, Y+1) in Starcraft is far less than it is in Go or chess. Similarly, while the number of possible actions may be higher, the number of useful actions at any given point often falls into a smaller set. It doesn't need to recalculate all of its moves every single frame, because not that much changes from frame to frame. When it comes to videogames, what matters isn't how many possible states there are, it's how effectively the AI can narrow down and subdivide the total decision space into just a list of decisions worth taking. Which can be amazingly effective if done right... but can also leave the AI very exploitable if done wrong.

Besides, as you pointed out, there's the most important difference: the AI isn't constrained by the interface limitations humans are subject to, and can therefore gain an advantage simply by abusing its superior ability to rapidly micromanage multiple groups. Go and chess are turn-based games where you only move one piece at a time. That's why they're good measuring sticks for AI - there's no skill or speed involved in moving the pieces, so the AI can't get an advantage with superior reaction times or piece-shuffling abilities. In a real-time game where you could potentially move hundreds of "pieces" at once, AI vs human is never going to be a fair contest.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Cicero posted:

Running again with a different map or race would probably be trivial, but creating an agent that can handle multiple maps and race match-ups itself might not be. The fact that they switched out agents every game might be because each agent is relatively inflexible as far as expectations and preferences.

That describes most human players too, though; almost every player 'mains' a single race. It was a big crazy thing when Day9 reached grandmaster with all three races (at the time), and that was just top 200. It was seen as an absurd feat of skill to be that good across all three, and he was still primarily a Zerg player who was vastly better at Zerg; he was just exceptionally flexible, enough to do sort of well with the other two races.

That is the same with most games, too: the top players ultra-focus on either a single character or a small roster of characters in games where stuff like counter-picking is a thing. In high-level Dota, a huge component of strategy is knowing which heroes your opponents are weak at playing and trying to force them onto those, and configuring picks to force an opponent onto a character they don't play is such a decisive move that it's a major upset that will make the casters jump out of their seats.

AFancyQuestionMark
Feb 19, 2017

Long time no see.

Main Paineframe posted:

On the other hand, the difference between (X, Y) and (X+1, Y+1) in Starcraft is far less than it is in Go or chess. Similarly, while the number of possible actions may be higher, the number of useful actions at any given point often falls into a smaller set. It doesn't need to recalculate all of its moves every single frame, because not that much changes from frame to frame. When it comes to videogames, what matters isn't how many possible states there are, it's how effectively the AI can narrow down and subdivide the total decision space into just a list of decisions worth taking. Which can be amazingly effective if done right... but can also leave the AI very exploitable if done wrong.

Besides, as you pointed out, there's the most important difference: the AI isn't constrained by the interface limitations humans are subject to, and can therefore gain an advantage simply by abusing its superior ability to rapidly micromanage multiple groups. Go and chess are turn-based games where you only move one piece at a time. That's why they're good measuring sticks for AI - there's no skill or speed involved in moving the pieces, so the AI can't get an advantage with superior reaction times or piece-shuffling abilities. In a real-time game where you could potentially move hundreds of "pieces" at once, AI vs human is never going to be a fair contest.

People keep saying this, but the fact of the matter is that ALL previous Starcraft AIs could be trivially defeated by any player with some experience playing competitively, despite having superior reaction times and Actions Per Minute. This is because reasonable macro-scale strategies will defeat simple strategies, no matter how good the opponent's micro is. Let's also not forget that micro isn't just about fast reaction times and clicks, but includes managing a ton of extremely situational small decisions with significant impacts on the outcome of a battle (by the way, gauging how well an individual engagement went is by no means a trivial task in itself).

AlphaStar is the only AI thus far that could play well competitively. That it could consistently outplay top-level players is a significant achievement, made all the more impressive by the fact that they limited its average APM to be below a pro-level player (though they cheated a bit with this, since an AI presumably doesn't "mis-click" so any action is an effective action, unlike a human player).

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
I would like to register the complaint that alphastar is a pun on A* but is like, the exact opposite and that makes me mad.

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.

AFancyQuestionMark posted:

There is actually a relatively small number (compared to usual computation challenges) of ladder 1v1 maps and race matchups in those maps. Couldn't they just train, say, 30 different AlphaStar agents, each on a particular map with a particular matchup like they've already done with Catalyst pvp, and then construct a "super-agent" that simply employs the appropriate agent when it sees what the matchup is? Wouldn't that be functionally identical to some larger network trained on all maps with all matchups?
You could argue that, or you could argue that no, it's not, because a) one agent that could handle multiple maps would also probably be able to handle a new (but similar) map halfway decently, whereas an agent that's basically hardcoded to one map would fail, and b) that's a bit like arguing that I can just make a team made up of different humans and pick the appropriate human for the map and matchup. I mean, it's still humans playing, right?? I think the agent 'switching' would only really count if it happened organically: the super-agent, upon recognizing a map and race match-up, selected an agent, and was able to develop this behavior just through training.

Main Paineframe posted:

On the other hand, the difference between (X, Y) and (X+1, Y+1) in Starcraft is far less than it is in Go or chess. Similarly, while the number of possible actions may be higher, the number of useful actions at any given point often falls into a smaller set. It doesn't need to recalculate all of its moves every single frame, because not that much changes from frame to frame. When it comes to videogames, what matters isn't how many possible states there are, it's how effectively the AI can narrow down and subdivide the total decision space into just a list of decisions worth taking. Which can be amazingly effective if done right... but can also leave the AI very exploitable if done wrong.
Nah, the number of (potentially) useful actions is still astronomically higher in Starcraft, even if you bucketize different positions by every 10 pixels or whatever. You're right though that narrowing down the decision space is super important and way harder than Go/Chess.

quote:

Besides, as you pointed out, there's the most important difference: the AI isn't constrained by the interface limitations humans are subject to, and can therefore gain an advantage simply by abusing its superior ability to rapidly micromanage multiple groups. Go and chess are turn-based games where you only move one piece at a time. That's why they're good measuring sticks for AI - there's no skill or speed involved in moving the pieces, so the AI can't get an advantage with superior reaction times or piece-shuffling abilities. In a real-time game where you could potentially move hundreds of "pieces" at once, AI vs human is never going to be a fair contest.
Starcraft is a better measuring stick for AI because real life usually involves some amount of hidden information, real life also occurs in real time, and the number of potential moves/variables is far higher than in Chess or Go. It's true that evening out the mechanical advantage is tricky, though.

Owlofcreamcheese posted:

That describes most human players too, though; almost every player 'mains' a single race. It was a big crazy thing when Day9 reached grandmaster with all three races (at the time), and that was just top 200. It was seen as an absurd feat of skill to be that good across all three, and he was still primarily a Zerg player who was vastly better at Zerg; he was just exceptionally flexible, enough to do sort of well with the other two races.

That is the same with most games, too: the top players ultra-focus on either a single character or a small roster of characters in games where stuff like counter-picking is a thing. In high-level Dota, a huge component of strategy is knowing which heroes your opponents are weak at playing and trying to force them onto those, and configuring picks to force an opponent onto a character they don't play is such a decisive move that it's a major upset that will make the casters jump out of their seats.
Right, but currently AlphaStar can't just play Protoss, it can only play Protoss vs Protoss, whereas every Protoss player can play vs all three races. And practically speaking, they're generally still quite strong players while off-racing, demonstrating that they can generalize the skills they've developed. And every human player can adapt to a new map pretty easily, since new competitive maps are generally fairly similar to each other.

AFancyQuestionMark posted:

People keep saying this, but the fact of the matter is that ALL previous Starcraft AIs could be trivially defeated by any player with some experience playing competitively, despite having superior reaction times and Actions Per Minute. This is because reasonable macro-scale strategies will defeat simple strategies, no matter how good the opponent's micro is. Let's also not forget that micro isn't just about fast reaction times and clicks, but includes managing a ton of extremely situational small decisions with significant impacts on the outcome of a battle (by the way, gauging how well an individual engagement went is by no means a trivial task in itself).

AlphaStar is the only AI thus far that could play well competitively. That it could consistently outplay top-level players is a significant achievement, made all the more impressive by the fact that they limited its average APM to be below a pro-level player (though they cheated a bit with this, since an AI presumably doesn't "mis-click" so any action is an effective action, unlike a human player).
Yes, it's still a huge achievement, and a ridiculously huge improvement over prior bots. AlphaStar is usually fairly robust in terms of adapting its strategy and tactics to new information, and it's extremely good at determining when it should retreat vs push forward.

Cicero fucked around with this message at 16:24 on Jan 28, 2019

Main Paineframe
Oct 27, 2010

AFancyQuestionMark posted:

People keep saying this, but the fact of the matter is that ALL previous Starcraft AIs could be trivially defeated by any player with some experience playing competitively, despite having superior reaction times and Actions Per Minute. This is because reasonable macro-scale strategies will defeat simple strategies, no matter how good the opponent's micro is. Let's also not forget that micro isn't just about fast reaction times and clicks, but includes managing a ton of extremely situational small decisions with significant impacts on the outcome of a battle (by the way, gauging how well an individual engagement went is by no means a trivial task in itself).

AlphaStar is the only AI thus far that could play well competitively. That it could consistently outplay top-level players is a significant achievement, made all the more impressive by the fact that they limited its average APM to be below a pro-level player (though they cheated a bit with this, since an AI presumably doesn't "mis-click" so any action is an effective action, unlike a human player).

None of that changes the fact that the player is handicapped by interface limitations the AI gets to circumvent. No, that doesn't mean that any AI will always beat any player, but it means the AI will always be faster and more coordinated than a player trying to do the same thing.

It's like pitting a mouse-and-keyboard player against a controller player in an FPS game. If the first one is a novice and the second is a pro, it's possible to win with a controller, but the controller still reduces the pro's reaction times and accuracy. It imposes a significant handicap, and ensures that even if the decision-making, planning, and technical skills are the same, the player with the controller will usually lose, because the controller constrains their ability to execute on all of those things.

Cicero posted:

Nah, the number of (potentially) useful actions is still astronomically higher in Starcraft, even if you bucketize different positions by every 10 pixels or whatever. You're right though that narrowing down the decision space is super important and way harder than Go/Chess.

Starcraft is a better measuring stick for AI because real life usually involves some amount of hidden information, real life also occurs in real time, and the number of potential moves/variables is far higher than in Chess or Go. It's true that evening out the mechanical advantage is tricky, though.

Honestly, a solid approach would probably be something like dividing the map into a number of large sectors, guessing which ones are likely to be important or relevant for immediate decision-making under the current circumstances, then subdividing those sectors into smaller chunks and grading them for importance, and so on. There's a lot of room to cut the decision space down before you start looking at the pixel-level details of what you want to do with your units. And I don't know about pro Starcraft players, but that's probably closer to how humans handle RTS play in general: mentally divide the map into regions of various importance, make broad macro-level plans concerning the regions and needs you expect to be relevant right now, and then worry about exactly which set of pixels you're going to pick for your order issuance.

Not that I'm saying it's not a big achievement - being able to effectively narrow things down like that is a big accomplishment even if it's probably pretty tailored to Starcraft's particular set of rules.
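The sector-then-subdivide idea in the post above can be sketched as a toy quadtree refinement: score big map sectors first, and only subdivide the ones that look important. The map size, scoring function, and threshold below are all invented for illustration; this is not how AlphaStar actually works.

```python
# Toy coarse-to-fine pruning of a map's decision space.
def refine(region, score, depth=0, max_depth=3, threshold=0.5):
    """Recursively split a (x0, y0, x1, y1) region into quadrants,
    but only where score() says the region is worth a closer look."""
    x0, y0, x1, y1 = region
    if depth == max_depth or score(region) < threshold:
        return [region]  # leaf: deep enough, or not interesting
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    quadrants = [(x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)]
    leaves = []
    for q in quadrants:
        leaves.extend(refine(q, score, depth + 1, max_depth, threshold))
    return leaves

# Example: only the top-left corner of a 128x128 map is "interesting".
hot_corner = lambda r: 1.0 if r[0] < 32 and r[1] < 32 else 0.0
regions = refine((0, 0, 128, 128), hot_corner)
print(len(regions))  # a handful of regions instead of 16,384 tiles
```

The point of the sketch is the asymmetry: uninteresting sectors stay coarse, so almost all of the fine-grained "pixel-level" decisions never have to be considered.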

SaTaMaS
Apr 18, 2003

AFancyQuestionMark posted:

There is actually a relatively small number (compared to usual computation challenges) of ladder 1v1 maps and race matchups in those maps. Couldn't they just train, say, 30 different AlphaStar agents, each on a particular map with a particular matchup like they've already done with Catalyst pvp, and then construct a "super-agent" that simply employs the appropriate agent when it sees what the matchup is? Wouldn't that be functionally identical to some larger network trained on all maps with all matchups?

That's essentially what they did, on a small scale:
https://www.zdnet.com/article/googles-starcraft-ii-victory-shows-ai-improves-via-diversity-invention-not-reflexes/

quote:

But one of the most provocative ways the game plan has changed is incorporating an approach to culling the best players, called "Nash averaging," introduced last year by David Balduzzi and colleagues at DeepMind. The authors observed that neural networks have a lot of "redundancy," meaning, "different agents, networks, algorithms, environments and tasks that do basically the same job." Because of that, the Nash average is able to kind of selectively rule out, or "ablate," the redundancies to reveal fundamental underlying advantages of a particular AI "agent" that plays a video game (or does any task).

As Balduzzi and colleagues wrote in their paper, "Nash evaluation computes a distribution on players (agents, or agents and tasks) that automatically adjusts to redundant data. It thus provides an invariant approach to measuring agent-agent and agent-environment interactions."

Nash averaging was used to pick out the best of AlphaStar's players over the span of many games. As the AlphaStar team write, "A continuous league was created, with the agents of the league - competitors - playing games against each other […] While some new competitors execute a strategy that is merely a refinement of a previous strategy, others discover drastically new strategies."

But it's not just electing one player who shines; the Nash process is effectively crafting a single player that fuses all the learning and insight of the others. The final AlphaStar agent consists of the components of the Nash distribution -- in other words, the most effective mixture of strategies that have been discovered.

The logical next step is to do the same thing with agents that have been trained across different maps and then possibly with different races.
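The "Nash distribution" in the quoted article can be illustrated with a toy meta-game. The sketch below approximates a Nash mixture over three agents with fictitious play; this is a deliberate simplification of Balduzzi et al.'s Nash averaging (which solves for a maximum-entropy Nash directly), and the payoff matrix is invented.

```python
# Toy "Nash averaging": given an antisymmetric win-rate payoff matrix
# between agents (payoff[i][j] > 0 means agent i beats agent j), find an
# approximate Nash mixture via fictitious play.
def approx_nash(payoff, iters=5000):
    n = len(payoff)
    counts = [0] * n
    counts[0] = 1  # seed the empirical opponent mixture
    for _ in range(iters):
        # Play a best response to the empirical mixture of past play.
        values = [sum(payoff[i][j] * counts[j] for j in range(n))
                  for i in range(n)]
        counts[values.index(max(values))] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Agents 0 and 1 are redundant copies of each other; agent 2 beats both.
payoff = [[0, 0, -1],
          [0, 0, -1],
          [1, 1, 0]]
print([round(p, 3) for p in approx_nash(payoff)])  # nearly all mass on agent 2
```

This shows the "ablate the redundancies" behavior the article describes: the two interchangeable agents get essentially no weight, and the mixture concentrates on the agent with a real underlying advantage.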

golden bubble
Jun 3, 2011

yospos

Main Paineframe posted:

It's like pitting a mouse-and-keyboard player against a controller-player in a FPS game. If the first one is a novice and the second is a pro, it's possible to win with a controller - but the controller still reduces the pro's reaction times and accuracy. It imposes a significant handicap, and ensures that even if the decision-making, planning, and technical skills are the same, the player with the controller will usually lose because the controller constrains their ability to execute on all of those things.

Even then, DeepMind showed a serious improvement in the ability of Starcraft AIs to micro. It isn't that hard to program an AI that can perfectly coordinate an attack and perfectly coordinate a defense. But DeepMind showed some understanding of when to attack, when to defend, and when to go all-in. DeepMind's coordination is inhumanly good, but I'd still argue that it achieves better micromanagement with 600-1200 burst Actions Per Minute than the old AIs could achieve with 12000 burst Actions Per Minute. Especially in DeepMind vs MaNa game 5, which was not shown in the presentation for some reason. This particular DeepMind agent had refined a cheese strategy that doesn't appear in high-level human play.


The AI tried to gas-steal multiple times. Then it successfully ran a distraction with the single pylon while feigning a two-base build back home (or just looking like it wanted to get rushed). But it was actually working toward a very weird, very aggressive, totally new proxy build. The actual strategy is to build a robotics facility and a stargate in your opponent's second base location while grinding forward with a half dozen shield batteries. Somehow, this actually worked against MaNa.

golden bubble fucked around with this message at 03:43 on Feb 1, 2019

Lampsacus
Oct 21, 2008

Hey, thanks for the Starcraft 2 posts, my pal! I really appreciated them and they sent me down a rabbit hole for sure.
I'd be interested in what a round would look like if there were no cap on APM and the agents fought each other for a good six months real time. The meta would be pretty intense. That's something I find fascinating about these Google AIs playing human games. I really enjoyed watching the Go a couple years back, and the commentators were at such a loss trying to comprehend the AI's moves.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
The AI also has no preconceived ideas about its opponent's playstyle

Mr Shiny Pants
Nov 12, 2012
It also helps if you never misclick anything.

SaTaMaS
Apr 18, 2003

Lampsacus posted:

Hey, thanks for the Starcraft 2 posts, my pal! I really appreciated them and they sent me down a rabbit hole for sure.
I'd be interested in what a round would look like if there were no cap on APM and the agents fought each other for a good six months real time. The meta would be pretty intense. That's something I find fascinating about these Google AIs playing human games. I really enjoyed watching the Go a couple years back, and the commentators were at such a loss trying to comprehend the AI's moves.

AlphaStar didn't even have to max out the APM limit it was given, but it did have an advantage in that it could instantly make moves all over the map without having to scroll. The last game, where it had a simulated camera "window", was the one it lost. It sounds like this was equivalent to the Fan Hui game for AlphaGo: AlphaStar was only moderately trained and used human games as a starting point. Something I'd like to know is whether AlphaStar used Stalkers so much because it could only do brute-force numbers + inhuman micro, or because it tried the alternatives and decided that Stalkers were overpowered.

Taffer
Oct 15, 2010


SaTaMaS posted:

AlphaStar didn't even have to max out the APM limit it was given, but it did have an advantage in that it could instantly make moves all over the map without having to scroll. The last game, where it had a simulated camera "window", was the one it lost. It sounds like this was equivalent to the Fan Hui game for AlphaGo: AlphaStar was only moderately trained and used human games as a starting point.

While I have no doubt that was a factor, it was also the only time MaNa had time to plan in reaction to an AI opponent he had only faced once. He could review his previous games, study its playstyle, and find where its weaknesses were (accounting for the fact that each game was vs a different "agent"). In the end he went for a very effective cheese strat that would absolutely never work on a similarly skilled human player.

In the first round, he had no idea what he was going to face, and (justifiably) assumed he would stomp it. The games were too quick and too varied to do a real analysis and response at the time. Getting a couple weeks before the next round would be ample time to figure out a real response. Which of course is exactly what you'd do before facing a human opponent in a tournament; you never go in blind.

Baronash
Feb 29, 2012

So what do you want to be called?
Uber, which has never had a single profitable quarter in its history, killed a pedestrian with a robotic car not designed to stop, and whose only real asset is partially stolen IP, is expecting a valuation of over $75 billion after its IPO today. Keep in mind, 20th Century Fox, an actually profitable media conglomerate, sold to Disney for $71 billion.

Dolash
Oct 23, 2008

aNYWAY,
tHAT'S REALLY ALL THERE IS,
tO REPORT ON THE SUBJECT,
oF ME GETTING HURT,


Hey just poking my head in here since I'm not sure where to even start looking for this, but has anyone done a Marxist analysis of the automation of labour during a capitalist mode of production ending the worker's role as the potential revolutionary class (as they have now become separated from the means of production) and instead turning the automated means of production themselves into the potential engine of revolutionary change? Can't seem to find any work on this and could make for a fun hypothetical.

Inferior Third Season
Jan 15, 2005

Dolash posted:

Hey just poking my head in here since I'm not sure where to even start looking for this, but has anyone done a Marxist analysis of the automation of labour during a capitalist mode of production ending the worker's role as the potential revolutionary class (as they have now become separated from the means of production) and instead turning the automated means of production themselves into the potential engine of revolutionary change? Can't seem to find any work on this and could make for a fun hypothetical.
Are you asking if there are any professional economic papers on the plots of The Matrix and/or Terminator?

ElCondemn
Aug 7, 2005


Definitely no papers I know of that cover that idea specifically, but if you're looking for a good read about the issues we're facing today in the US economy relating to AI and automation, check out "Rise of the Robots: Technology and the Threat of a Jobless Future" by Martin Ford

https://www.amazon.com/Rise-Robots-Technology-Threat-Jobless/dp/0465097537/ref=nodl_

Freakazoid_
Jul 5, 2013


Buglord
I also like to trot out this study: https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf


Rent-A-Cop
Oct 15, 2004

I posted my food for USPOL Thanksgiving!

Mods change my name to Marxist Skynet
