  • Locked thread
Xae
Jan 19, 2005

1337JiveTurkey posted:

It’ll definitely happen at the business level that some merchandising AI is gonna poo poo the bed and start ordering and stocking just the weirdest stuff until some human nominally supervising it notices.

That started happening 10 years ago.

Inventory management isn't strong AI, but it is still a rat's nest of rules and exceptions that might as well be, as far as anyone's ability to understand its logic goes.
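
To give a flavor of what I mean, here's a made-up sketch of that kind of rule-and-exception reorder logic (every rule, name, and threshold here is invented; it's not from any real system):

[code]
# Hypothetical sketch of rule-and-exception reorder logic.
# Every rule and threshold here is invented for illustration.

def reorder_quantity(sku, on_hand, weekly_sales, season, is_promo, store_region):
    """Return how many units to reorder for one SKU at one store."""
    base = max(0, 4 * weekly_sales - on_hand)  # rule: hold ~4 weeks of cover

    # Exception 1: promoted items get a flat uplift.
    if is_promo:
        base = int(base * 1.5)

    # Exception 2: seasonal goods are cut off outside their season.
    if sku.startswith("SWIM") and season != "summer":
        base = 0

    # Exception 3: one region has a legacy minimum-order quirk nobody remembers the reason for.
    if store_region == "NORTHEAST" and 0 < base < 6:
        base = 6

    # Exception 4: slow movers are capped so the warehouse doesn't fill up.
    if weekly_sales < 1:
        base = min(base, 2)

    return base


print(reorder_quantity("SWIM-TRUNK-M", on_hand=3, weekly_sales=2,
                       season="winter", is_promo=True, store_region="NORTHEAST"))
[/code]

Multiply that by a few hundred interacting rules and nobody can tell you why the system ordered what it ordered.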


GABA ghoul
Oct 29, 2011

1337JiveTurkey posted:

It’ll definitely happen at the business level that some merchandising AI is gonna poo poo the bed and start ordering and stocking just the weirdest stuff until some human nominally supervising it notices.

Can't be much worse than the guy currently responsible for it. Always orders one or two pairs of pants in normal, human sizes for every clothing store and then 20 pairs of L, XL, XXL, XXXXL. The elephant pants then hang around unsold for months till they are shredded and turned into cheap wall insulation or something. Rinse & repeat for all eternity. Seriously, it's like they haven't looked at a BMI distribution of the pants-buying population, ever.

(Sorry, I was just shopping. Don't mind me.)

Xae
Jan 19, 2005

Raspberry Jam It In Me posted:

Can't be much worse than the guy currently responsible for it. Always orders one or two pairs of pants in normal, human sizes for every clothing store and then 20 pairs of L, XL, XXL, XXXXL. The elephant pants then hang around unsold for months till they are shredded and turned into cheap wall insulation or something. Rinse & repeat for all eternity. Seriously, it's like they haven't looked at a BMI distribution of the pants-buying population, ever.

(Sorry, I was just shopping. Don't mind me.)

They account for this. They break it down by location and intended demographics.

So you're buying clothes they expect overweight people to wear.

Bar Ran Dun
Jan 22, 2006




Just heard NYK Line is going to be testing an automated bridge next year on a large container ship.

Teal
Feb 25, 2013

by Nyc_Tattoo

Hot Dog Day #82 posted:

Matt Damon’s Elysium could also be a good look at the future, since it has a lot of super poor people just kind of milling around between working lovely soulless jobs and being policed by an unsympathetic justice system staffed by robots.

Setting aside the fact that the mistreatment of the poor was a perfect example of 100% irrational, meaningless malice, remember the movie ended on the note that all it took for the world to flip from dystopia to utopia was for somebody to edit "worldAI.conf" and change "people = theRich;" to "people = everyone;".

The movie manages to be both irrationally pessimistic and optimistic at the same time.

Also I think Blomkamp must have gotten bullied a lot by some Australian as a kid.

GABA ghoul
Oct 29, 2011

Xae posted:

They account for this. They break it down by the location and the intended demographics.

So you're buying clothes they expect overweight people to wear.

I highly doubt that blue denim jeans are fat people clothes. S and M are always the first sizes to be sold out with everything in the shop and the leftover sale rack is always ~90% clothes in L+. It's just incredibly lovely inventory management. Sentient inventory AI when? Why not done?

Teal
Feb 25, 2013

by Nyc_Tattoo

Raspberry Jam It In Me posted:

I highly doubt that blue denim jeans are fat people clothes. S and M are always the first sizes to be sold out with everything in the shop and the leftover sale rack is always ~90% clothes in L+. It's just incredibly lovely inventory management. Sentient inventory AI when? Why not done?

That totally has been a thing for ages, but:
a) It takes investment in changing the store's procedures, training management to operate it, etc.; with the margins these places operate at and the value of the merchandise, they can afford to shred a LOT of fatboy pants before it outweighs the cost of that.
b) Even with a system like that in place you're always gonna end up with some extras, and consider that people are more likely to be able to fit into larger clothes if they can't get their exact size than the other way around, so larger sizes are the lower-risk option to overstock (see the rough sketch below).
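
Back-of-the-envelope version of (b), with numbers I made up just to show why the asymmetry pushes the excess toward the bigger sizes:

[code]
# Illustration of point (b): excess stock in larger sizes picks up extra demand
# from shoppers sizing up when their own size is sold out, while excess stock
# in smaller sizes doesn't. Every number here is invented.

salvage_loss = 5.0  # assumed loss per pair eventually shredded or cleared

def expected_writeoff(extra_units, own_sell_prob, size_up_prob):
    """Expected loss on extra_units: each unit sells either to its own size's
    demand or to a shopper sizing up from a sold-out smaller size."""
    p_sold = own_sell_prob + (1 - own_sell_prob) * size_up_prob
    return extra_units * (1 - p_sold) * salvage_loss

# 5 extra pairs in L: M shoppers size up half the time when M is gone (assumed).
print(expected_writeoff(5, own_sell_prob=0.3, size_up_prob=0.5))  # 8.75
# 5 extra pairs in S: almost nobody sizes down (assumed 10%).
print(expected_writeoff(5, own_sell_prob=0.3, size_up_prob=0.1))  # 15.75
[/code]

Same salvage cost per pair either way, but the expected write-off on the big sizes comes out smaller, so that's where the buffer goes.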

Mukaikubo
Mar 14, 2006

"You treat her like a lady... and she'll always bring you home."

Owlofcreamcheese posted:

I wonder how corrosive to society it is that current American pop culture fiction has been stuck on dystopias for several decades.

Like other times and places mixed it up; look back at 1960s sci-fi or Japanese sci-fi or whatever and you see them mixing it up between "the future is going to all go wrong and suck" and "here is a pretty okay future and we made up some stuff that might be cool if someone invented it," but the US has sorta just put its foot on the dystopia pedal and rarely let off.

Like it's so obvious whenever anyone mentions anything that everyone is so able to barf out 50 hit movies from the past 5 years about how awful everything is, but the opposite is always some weird lol anime or Doctor Who from England or a book from 1947. Like culture gives people so many words to talk about how doomed everything is and so few words to even think of any problem being solved or different problems existing in the future or whatever.

Like "the original star trek" is pretty much everyone's only touchstone for talking about anything hopeful, but everyone can rattle off brainlessly 50 movies about how bad every aspect of everything that exists is going to be soon.

I'll argue with you that it's not just American at all. You know The Three-Body Problem, by Cixin Liu? He's currently the dean of Chinese scifi writers, screamingly popular, and his masterwork to date is a trilogy of novels that is depressing and nihilistic about the future on a scale that puts most American scifi to shame.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Mukaikubo posted:

I'll argue with you that it's not just American at all. You know The Three-Body Problem, by Cixin Liu? He's currently the dean of Chinese scifi writers, screamingly popular, and his masterwork to date is a trilogy of novels that is depressing and nihilistic about the future on a scale that puts most American scifi to shame.

Eh, my issue isn't really that unhappy sci-fi exists or that people should only write utopias. It's that when there is some new thing, people love to go "it's like popular movie A," "it's like popular movie B," and if all the popular movies present a united front on what it's like, then that becomes the narrative.

Like when people say "aliens" there are maybe 3 cultural touchstones for what an alien is like. To the point that people have very strong opinions about how they would be, despite them being, so far, a totally 100% made-up thing.

His Divine Shadow
Aug 7, 2000

I'm not a fascist. I'm a priest. Fascists dress up in black and tell people what to do.
Not sure where this goes; this thread feels kinda relevant though.

https://www.buzzfeed.com/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway

quote:

This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.


Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

Because corporations lack insight, we expect the government to provide oversight in the form of regulation, but the internet is almost entirely unregulated. Back in 1996, John Perry Barlow published a manifesto saying that the government had no jurisdiction over cyberspace, and in the intervening two decades that notion has served as an axiom to people working in technology. Which leads to another similarity between these civilization-destroying AIs and Silicon Valley tech companies: the lack of external controls. If you suggest to an AI prognosticator that humans would never grant an AI so much autonomy, the response will be that you fundamentally misunderstand the situation, that the idea of an ‘off’ button doesn’t even apply. It’s assumed that the AI’s approach will be “the question isn’t who is going to let me, it’s who is going to stop me,” i.e., the mantra of Ayn Randian libertarianism that is so popular in Silicon Valley.

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse. When Uber wanted more drivers with new cars, its solution was to persuade people with bad credit to take out car loans and then deduct payments directly from their earnings. They positioned this as disrupting the auto loan industry, but everyone else recognized it as predatory lending. The whole idea that disruption is something positive instead of negative is a conceit of tech entrepreneurs. If a superintelligent AI were making a funding pitch to an angel investor, converting the surface of the Earth into strawberry fields would be nothing more than a long overdue disruption of global land use policy.

There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

There have been some impressive advances in AI recently, like AlphaGo Zero, which became the world’s best Go player in a matter of days purely by playing against itself. But this doesn’t make me worry about the possibility of a superintelligent AI “waking up.” (For one thing, the techniques underlying AlphaGo Zero aren’t useful for tasks in the physical world; we are still a long way from a robot that can walk into your kitchen and cook you some scrambled eggs.) What I’m far more concerned about is the concentration of power in Google, Facebook, and Amazon. They’ve achieved a level of market dominance that is profoundly anticompetitive, but because they operate in a way that doesn’t raise prices for consumers, they don’t meet the traditional criteria for monopolies and so they avoid antitrust scrutiny from the government. We don’t need to worry about Google’s DeepMind research division, we need to worry about the fact that it’s almost impossible to run a business online without using Google’s services.

It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing, which is selling their users’ data to advertisers. If you doubt that’s their goal, ask yourself, why doesn’t Facebook offer a paid version that’s ad free and collects no private information? Most of the apps on your smartphone are available in premium versions that remove the ads; if those developers can manage it, why can’t Facebook? Because Facebook doesn’t want to. Its goal as a company is not to connect you to your friends, it’s to show you ads while making you believe that it’s doing you a favor because the ads are targeted.

So it would make sense if Mark Zuckerberg were issuing the loudest warnings about AI, because pointing to a monster on the horizon would be an effective red herring. But he’s not; he’s actually pretty complacent about AI. The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior. Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations. Individuals often reevaluate their priorities after experiencing a personal wake-up call. What we need is for companies to do the same — not to abandon capitalism completely, just to rethink the way they practice it. We need them to behave better than the AIs they fear and demonstrate a capacity for insight.

pangstrom
Jan 25, 2003

Wedge Regret
Yeah, I came here to post that. There is a lot to that argument, and the US tax cuts are very much like strawberry fields lite, imo.

Harold Fjord
Jan 3, 2004
When Asimov thought up robots the first rule he gave them was "don't hurt people."

When we began establishing automated processes the first rule we gave them was "make us as much money as possible."

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

I don't see how this mentality is different from that of other corporations. Except that these move faster thanks to technology.

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost
"It’s easier to imagine the end of the world than to imagine the end of capitalism."

This very much rings true for me.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Nevvy Z posted:

When Asimov thought up robots the first rule he gave them was "don't hurt people."

Asimov mostly wrote books about how making an absolute law saying "don't hurt people" hurts people over and over in all sorts of 5-35 page logic puzzle mysteries about how a robot could do a thing that seemingly broke the rules. So I'm not really sure he thought the rules were a super effective idea.

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.

Teal posted:

Setting aside the fact that the mistreatment of the poor was a perfect example of 100% irrational, meaningless malice, remember the movie ended on the note that all it took for the world to flip from dystopia to utopia was for somebody to edit "worldAI.conf" and change "people = theRich;" to "people = everyone;".

The movie manages to be both irrationally pessimistic and optimistic at the same time.

Also I think Blomkamp must have gotten bullied a lot by some Australian as a kid.

Chappie has some fun takes on pretty much all of that, including AI-based law enforcement, the way that gang culture operates on a personal and psychological level shaping people from childhood, and that it's better to be a criminal than a slave. Also Hugh Jackman is the bad guy.

Science fiction with social themes gives people glimpses into a different world, and a point of reference outside the world they know, the less subtle the better, and I think that's laudable. Just need to have more fiction to give people ideas of what a world without capitalism would be like.

Tei
Feb 19, 2011

Mozi posted:

"It’s easier to imagine the end of the world than to imagine the end of capitalism."

This very much rings true for me.

It's wrong.

Here's how capitalism dies:

Machines build machines and goods. Automated AI systems manage companies. The company produces 1 million cars a day. It has 0 employees.
200 people own the company.
Nobody can buy cars, except these 200 guys. The warehouse is filled with 100 million cars. Nobody is buying.

Of these 200 people, one's kid rapes a homeless woman in the street in broad daylight, then runs her over with his car. Everything is recorded by high-fidelity 3D cameras from different angles. The kid is not sentenced, the news celebrates it, and public opinion is with him. The news has been painting the woman as a slut who was possibly trying to entrap the kid, so much so that even reasonable people have doubts.

Capitalism is dead. Democracy is dead. The justice system is dead. Very few hands have all the power and 99.99% of the human population have nothing to do and zero power. They can't even riot because the police are just too effective (in both a soft way and a brute-force way).

Rastor
Jun 2, 2001

AI can produce anything, even Something Awful forum posts

Harold Fjord
Jan 3, 2004

Owlofcreamcheese posted:

Asimov mostly wrote books about how making an absolute law saying "don't hurt people" hurts people over and over in all sorts of 5-35 page logic puzzle mysteries about how a robot could do a thing that seemingly broke the rules. So I'm not really sure he thought the rules were a super effective idea.

I think it's probably better to start with a rule along those lines that isn't 100% perfect than to start with the rule "more money, gently caress everyone"

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.
They never needed robots that can think or reason, they just need robots that can lift things.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Tei posted:

The kid is not sentenced, the news celebrates it, and public opinion is with him.

If the thing that everyone wants to happen happens, it's not really a failure of democracy. Like, it's a failure of society in a billion other ways to be in such a degenerate age or whatever, but it's not really a failure of democracy if the laws reflect the will of the people.

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost

In the situation you describe, capitalism would be alive and well - more than ever, now that Capital itself is unconstrained by any human sympathies.

What I meant by that instead is that the world as a whole is heading for a massive collapse as a consequence of the demands of eternal growth, but any change from the current path would also involve the deaths of billions of people, and it's hard to imagine what that would look like as a whole, at least for me.

Slavvy
Dec 11, 2012

I'm not convinced there's any path that doesn't involve billions dying at this point.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
I'm not convinced people aren't going to be really crushed if the future comes and some stuff is good and some stuff is bad and their childhood fantasy of megadeath punishing the wicked never comes.

Trabisnikof
Dec 24, 2005

What if we have megadeath and also the wicked aren't punished?

Xae
Jan 19, 2005

Mozi posted:

"It’s easier to imagine the end of the world than to imagine the end of capitalism."

This very much rings true for me.

Because as long as someone has something someone else wants and is willing to trade for it, Capitalism will exist in some form.

Even if it is two stone age tribes trading relics ripped from the ruins of skyscrapers.

Guavanaut
Nov 27, 2009

Looking At Them Tittys
1969 - 1998



Toilet Rascal
That's just trading; it's only Capitalism if you can extract wealth from capital, as well as through labor and barter.

You might have two stone age tribes selling ruined skyscraper futures, or offering joint-stock purchase offers on an expedition to visit a distant ruined skyscraper, but you normally need a more developed society for that. Maybe a future where the skyscrapers are ruined but all the Teletype Model 28s survived, which isn't implausible, and where the stone age tribes didn't immediately try using them as siege rams, which is less likely.

Slavvy
Dec 11, 2012

Owlofcreamcheese posted:

I'm not convinced people aren't going to be really crushed if the future comes and some stuff is good and some stuff is bad and their childhood fantasy of megadeath punishing the wicked never comes.

Good stuff: Homo sapiens lives to fight another day, capitalism dies in practice if not in law, and art, culture, and science are preserved and safe in the arcologies. Bad stuff: 99% of humanity is either dead or eating itself. Wickedness punished: arbitrary and neither here nor there.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Slavvy posted:

Good stuff: Homo sapiens lives to fight another day, capitalism dies in practice if not in law, and art, culture, and science are preserved and safe in the arcologies. Bad stuff: 99% of humanity is either dead or eating itself. Wickedness punished: arbitrary and neither here nor there.

Like if you grow old, are you going to be angry at the world when none of this hyperbolic stuff happens? Or will you be relieved? Or will it just be a postponed doomsday thing where you are 109 years old and you are still telling everyone that any minute now judgement day is coming?

Like when global warming gets bad and 100+ million people die, are you going to be running around yelling at everyone because it's the worst thing that ever happened in all of human history, but you wanted it to be total annihilation of all things?

TyroneGoldstein
Mar 30, 2005

Raspberry Jam It In Me posted:

Can't be much worse than the guy currently responsible for it. Always orders one or two pairs of pants in normal, human sizes for every clothing store and then 20 pairs of L, XL, XXL, XXXXL. The elephant pants then hang around unsold for months till they are shredded and turned into cheap wall insulation or something. Rinse & repeat for all eternity. Seriously, it's like they haven't looked at a BMI distribution of the pants-buying population, ever.

(Sorry, I was just shopping. Don't mind me.)

This is Costco in a nutshell. Good luck finding a medium (normal large), and everything else is essentially a drapery that was turned into a piece that makes you look like you're wearing a muumuu.

Slavvy
Dec 11, 2012

Owlofcreamcheese posted:

Like if you grow old, are you going to be angry at the world when none of this hyperbolic stuff happens? Or will you be relieved? Or will it just be a postponed doomsday thing where you are 109 years old and you are still telling everyone that any minute now judgement day is coming?

Like when global warming gets bad and 100+ million people die, are you going to be running around yelling at everyone because it's the worst thing that ever happened in all of human history, but you wanted it to be total annihilation of all things?

I'm willing to engage with you and I really hope you can find it in yourself to do the same instead of being evasive with a bunch of nonsense. Let's try again.

My view: humanity will shortly have a set of tools that allow a level of social transformation hitherto impossible but the structure of civilisation today is such that these tools will almost certainly be used to the detriment of 90% of humanity. This view is informed by millennia of precedent.

Your view: humanity will shortly have a set of tools that allow a level of social transformation hitherto impossible. Despite the structure of civilisation today and millennia of historical precedent, everything will be fine and largely exactly like before but better. This view is informed by __________.

Fill in the blank part and we'll go from there. I'm not baiting you, I genuinely want to understand if you have something besides feels to back up your argument.

Hieronymous Alloy
Jan 30, 2009


Why! Why!! Why must you refuse to accept that Dr. Hieronymous Alloy's Genetically Enhanced Cream Corn Is Superior to the Leading Brand on the Market!?!




Morbid Hound

Owlofcreamcheese posted:

I'm not convinced people aren't going to be really crushed if the future comes and some stuff is good and some stuff is bad and their childhood fantasy of megadeath punishing the wicked never come.

Where does Thundarr the Barbarian figure in all this?

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Slavvy posted:

My view: humanity will shortly have a set of tools that allow a level of social transformation hitherto impossible but the structure of civilisation today is such that these tools will almost certainly be used to the detriment of 90% of humanity. This view is informed by millennia of precedent.

I disagree with the assertion that there actually is millennia of precedent of tools being used to the detriment of 90% of humanity in any systematic way.

Look at something like world hunger or poverty or access to water or disease or whatever. It's stuff that goes up and down over the years, but it seems to be generally going down. Fewer people are in global poverty today than 10 years ago. Fewer people are starving. Fewer people have guinea worms.

Bad things happen too. Some years hunger goes way up, or disease. The US is a mess and we have way more hookworms than we have had in 100 years. There is no absolute rule that every single thing improves every single year and nothing ever goes backwards. But there is no rule that everything is bad forever and no one ever uses tools to make anything better.

Someone somewhere is fixing water access. The number of people without water has generally gone down over time. Either it's some dumb elite that doesn't know they were supposed to be Satan incarnate and is being nice when they shouldn't be, some elite that seems nice but has ulterior motives, or some non-elite fixing their own problem because water is such a widespread technology that it's within their reach now too. There hasn't been some rising tide of people having less and less water as elites grow in power. (Nor have elites only ever grown more powerful; that's another thing that sometimes gets better and sometimes gets worse year to year, for both political and technological reasons.) I bet in the US there will be fewer people with access to water in 5 years than there were 5 years ago. I bet globally the opposite.

GABA ghoul
Oct 29, 2011

It's hard to imagine the end of Capitalism because nobody has any idea what an alternative would even look like or how it would function. Outside of extreme scenarios like wars and social upheavals, we have never seen anything but free markets, private property, etc. since the dawn of civilization. Stuff like the centrally planned economies of the Warsaw Pact were more akin to State Capitalism than anything truly new.

If someone had a plausible vision of a future without Capitalism, maybe we could start imagining it and putting it into sci-fi shows, instead of vomiting out 10 new zombie shows per year. And I'm not talking about stuff like Star Trek, 'cause the whole premise of Star Trek society completely breaks apart if you think about it for more than ten seconds.

Hieronymous Alloy
Jan 30, 2009


Why! Why!! Why must you refuse to accept that Dr. Hieronymous Alloy's Genetically Enhanced Cream Corn Is Superior to the Leading Brand on the Market!?!




Morbid Hound
There are plenty of non-capitalistic societies in science fiction. See anything from Iain M. Banks' Culture series to Ursula K. Le Guin's The Dispossessed.

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

I disagree with the assertion that there actually is millennia of precedent of tools being used to the detriment of 90% of humanity in any systematic way.

Look at something like world hunger or poverty or access to water or disease or whatever. It's stuff that goes up and down over the years, but it seems to be generally going down. Fewer people are in global poverty today than 10 years ago. Fewer people are starving. Fewer people have guinea worms.

Bad things happen too. Some years hunger goes way up, or disease. The US is a mess and we have way more hookworms than we have had in 100 years. There is no absolute rule that every single thing improves every single year and nothing ever goes backwards. But there is no rule that everything is bad forever and no one ever uses tools to make anything better.

Someone somewhere is fixing water access. The number of people without water has generally gone down over time. Either it's some dumb elite that doesn't know they were supposed to be Satan incarnate and is being nice when they shouldn't be, some elite that seems nice but has ulterior motives, or some non-elite fixing their own problem because water is such a widespread technology that it's within their reach now too. There hasn't been some rising tide of people having less and less water as elites grow in power. (Nor have elites only ever grown more powerful; that's another thing that sometimes gets better and sometimes gets worse year to year, for both political and technological reasons.) I bet in the US there will be fewer people with access to water in 5 years than there were 5 years ago. I bet globally the opposite.

I think you made a compelling point, but I feel like you're ignoring a crucial factor that might have a stronger effect than Facebook or taxibots.

For the first time in history, we're observing the impending depletion of crucial resources without which we can hardly envision the existence of meaningful society, or even individual welfare. Fine; the energy scare since the big oil crisis turned out to be likely mostly false; renewables have exploded, and as long as we don't block the sun Matrix-style we can probably count on relatively affordable, relatively reliable sources of generally useful energy, means of some local travel, etc.

There's still a heap of kinda big deal issues; most obviously, global warming, which is projected to start making a lot of equatorial areas literal killzones within this century, not to mention all the obvious but not as easily quantifiable effects on farming and ocean ecosystems (which are another source of food we hugely rely on).

Then there's also topsoil depletion, aquifer depletion, the nutritional value of all plant produce decreasing with increasing CO2 levels, the emergence of antibiotic resistance in some serious illnesses, etc.

Basically, all the way through history The Rich didn't really have much to lose from letting the poor Kinda Languish Outta There; there was room, there was usually at least some form of nutritious if unsavory food the rich could sell to the poor to make themselves richer.

We're currently at a peak of abundance yet probably approaching a crush of scarcity that will really test the willingness not to use all these fantastic Poor People Destroying Tools.

GABA ghoul
Oct 29, 2011

Hieronymous Alloy posted:

There are plenty of non-capitalistic societies in science fiction. See anything from Iain M. Banks' Culture series to Ursula K. Le Guin's The Dispossessed.

These are utopian stories exploring broad philosophical themes; they are not meant to be plausible, consistent future societies, just like the world of 1984 was not meant to be a plausible totalitarian society.

I mean, people complain that we have 10 different TV shows about the end of the world, but 0 shows about the end of Capitalism. I can imagine what the end of the world would look like, but I can't imagine what the end of Capitalism would look like. I don't think anyone really can, outside of some broad, vague ideas.

Yuli Ban
Nov 22, 2016

Bot
Anyone following news on the front of generative adversarial networks? What they're doing these days is insane. Watching them in action is the first time I've felt jobs are at risk. I could imagine the possibility and talk of how future AI and robotics would lead to such an age, but until I discovered GANs, I never really had an idea of how it would happen. And the craziest thing is that, despite all the reassurances we've been giving ourselves about how robots will only do the jobs we don't want to do and that the future will be filled with artists, it's the creative jobs that might be going away first.

If I could have the chance to coin a term, "media synthesis" sounds good. I mean it sounds dystopian, but there are amazing possibilities as well.

There's a slew of news stories about media synthesis coming out in the past few days.
And there's also these older ones (some dating back to 2014!)



A year-old video talking about image synthesis:
https://www.youtube.com/watch?v=rAbhypxs1qQ

And a more recent one, this one from DeepMind:
https://www.youtube.com/watch?v=9bcbh2hC7Hw

GANs can generate gorgeous 1024x1024 images now
https://www.youtube.com/watch?v=XOxxPcy5Gr4


These are not images that are plucked from Google via a text-to-image search. The computer is essentially "imagining" these things and people based on images it's seen before. Of course, it took thousands of hours with ridiculously strong GPUs to do it, but it's been done.
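
If anyone wants the gist of how the adversarial part works, here's a bare-bones sketch in PyTorch (tiny toy networks and 28x28-sized images as placeholders; the 1024x1024 results above come from far bigger, progressively grown models):

[code]
# Bare-bones GAN training step: a generator learns to turn noise into images
# while a discriminator learns to tell its output apart from real images.
# Toy-sized networks for illustration only.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())     # noise -> fake image
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                       # image -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):                # real_images: (batch, 784) scaled to [-1, 1]
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator: push real images toward "real", generated ones toward "fake".
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real_images), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: try to make the discriminator call its output "real".
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test on random "real" data; an actual run would loop over an image dataset.
print(train_step(torch.rand(16, img_dim) * 2 - 1))
[/code]

The "imagining" is just the generator mapping random vectors to images after enough rounds of that game.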

Oh, and here's image translation.
https://www.youtube.com/watch?v=D4C1dB9UheQ

Once you realize that AI can democratize the creation of art and entertainment, the possibilities really do become endless— for better and for worse. I choose to focus on the better.
You see, I can't draw for poo poo. My level now isn't much better than a 9-year-old art student's, and I've not bothered to practice to get any better because I just can't seem to get depth right in my drawings, and my hand seems to not want to make any line look natural. Yet I've always imagined making a comic. I'm much more adept at writing and narrative, so if only I didn't have to worry about drawing— you know, the part that defines comics as comics— I'd be in the clear.

GANs could help me do that. With an algorithm of that sort, I could generate stylized people who look hand-drawn, set them in different poses, and [i]generate panels in a variety of art styles[/i]. It's not the same as one of those filters that takes a picture of a person and makes it look like a cartoon by adding vexel or cel-shading; it's actually generating an image of a person from scratch, but defying realistic proportions in favor of cartoon/anime ones.

Right now, I don't know how possible that is. But the crazy thing is that I don't think we'll be waiting long for such a thing.

And it's certainly not the only aspect of media synthesis that's on the horizon.
[url=https://deepmind.com/blog/wavenet-generative-model-raw-audio/]WaveNet: A Generative Model for Raw Audio[/url]
Lyrebird claims it can recreate any voice using just one minute of sample audio
Want realistic-sounding speech without hiring voice actors? There's an algorithm for that too.

Japanese AI Writes a Novel, Nearly Wins Literary Award
Want an epic, thought-provoking novel or poem but have virtually no writing skills? There's an algorithm for that too. And if you're like me and you prefer to write your own novels/stories, then there's going to be an algorithm that edits them better than any professional and turns that steaming finger turd into a polished platinum trophy.


OpenAI’s co-founder Greg Brockman thinks that in 2018 we will see “perfect” video synthesis from scratch and speech synthesis. I don't think it'll be perfect, but definitely near perfect.

All this acts as a base for the overarching trend: a major disruption in the entertainment industry. Algorithms can already generate fake celebrities, so how long before cover models are out of a job? We can create very realistic human voices and it's only going to get better; once we fine-tune intonation and timbre, voice actors could be out of a job too. The biggest bottleneck towards photorealism and physics-based realism in video games is time and money, because all those gold-plated pixels in the latest AAA game cost thousands of dollars each. At some point, you reach diminishing returns on the time and money invested, so why not use algorithms to fill in that gap? If you have no skills at creating a video game, why not use an algorithm to design assets and backgrounds for you? If we get to a point where it's advanced enough, it could even code the drat game for you.

I don't see all of this happening in the 2020s. The applications that require static images, enhanced motion, or limited dynamic media will certainly take off. In ten years, I bet the entire manga industry in Japan will be crashing and burning while American hold-outs cling bitterly to canon-centric relevance as the plebs generate every disturbing plotline they can imagine with beloved characters.

The early 2020s will be a time of human creativity enhanced by algorithms. A time where you can generate assets, design objects without needing to hire anyone, and refine written content while still maintaining fully human control over the creative process. I can already see the first major YouTube animation with 1 million+ views that's mostly animated by a team with AI filling in a lot of the blanks to give it a more professional feel, alongside generating the voices for the characters. Dunno when it'll happen, but it will happen very soon. Much sooner than a lot of people are comfortable with. But don't expect to type "Make me a comic" and get a full-fledged story out of it. The AI will generate the content for you, but it's up to you to make sense of it and choose what you think works. So you have to generate each panel yourself, choosing the best ones, choosing good colors, and then you have to organize those panels. The AI won't do it for you, because early '20s AI will likely still lack common sense and narrative-based logic.


TLDR: researchers are working on and refining algorithms that can generate media, including visual art, music, speech, and written content. In the early 2020s, content creators will be using this to bring professionally done works to life with teams as small as possible. It may be possible for a single person with no skills in art, voice acting, or music to put together a long-running comic with voice acting and original music, using only a computer program and their own skill at writing a story. This will likely be the first really major, really public example of automation takin' teh jerbs.

Teal
Feb 25, 2013

by Nyc_Tattoo

Yuli Ban posted:

Anyone following news on the front of generative adversarial networks?

Yep, I work with a startup doing document processing, and while our current pipeline doesn't use any GANs in particular, they're currently the holy grail of what we would like to eventually use. It's just rather high-effort, and we won't know if it's particularly feasible for our use case until we sink the development time into it, and everyone who can work on neural nets is hella busy with the ones we already have.

There's a big "hurrah" and "ooh" in the Slack a few times a week as somebody links another crazy paper about Another Thing GANs do Crazy Well.

Neural nets are kind of a meme within the machine learning community in general; for example, my lecturers at university are a bit salty about them because they're steamrolling all the other successes of machine learning from the last 20 years, and GANs are simply the bleeding edge of that right now.


Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Teal posted:

There's still a heap of kinda big deal issues;

Yeah, of course. It's a nihilist truth that you can't promise that everything is going to go well. Eventually entropy wins. Every problem we ever solve has another problem behind it, and we will win till we lose. The future between now and the end of time is a series of ever-worse issues that we will either solve or not solve, until one is so bad that everything solves itself when everyone in existence dies simultaneously and there are no more next problems.

Like there isn't really any magic combination of things we can do to avert all disaster forever. We can do our best to do what we can to prevent stuff, and do what we can to survive stuff, but there is no year when bad things are going to stop happening. In the next 100 years some really hosed up stuff is gonna happen and people will die. Like literally no matter what. From a thousand different awful things. It's very good to do what is possible to stop any of it! But if you are talking about the future on time scales of decades and centuries and stuff, you can't say that there isn't going to be unimaginable suffering. Ever. Like literally unimaginable, even. Someone from 1800 couldn't even think of a bunch of the awful things that have happened in the last 100 years. It's really only over very short amounts of time into the future that you can reasonably hope to keep things under control to the point that no major bad things happen, and only to a point even then.

  • Locked thread