Teal
Feb 25, 2013

by Nyc_Tattoo

Tiny Brontosaurus posted:

Yes, going out in society is a social activity. Maybe you trudge and sulk and shrink away hissing anytime someone makes eye contact, but I know the cashiers who work at my store and talk to them, and frequently run into people I know or have some reason to chat with the other people there. I wonder just how much of human existence you've decided you're too good for.


That's easy mode, yo. You should develop skills for coping with strangers when you don't have a structured activity and someone else making the introductions for you. And PSA: if you don't teach your kids how to shop for food, they'll grow up to be fat takeout addicts like you.

This is another case of "introducing robots to this defunct system will make it worse by disabling a non-solution to its problems".
Just like a society that produces more food than ever, with more living space (and the elasticity to produce more) and more energy (and the elasticity to increase energy efficiency), somehow still has its bottom half struggling to make ends meet while staying fixated on "we need more people doing SOMETHING so they are technically useful" (aka "BUT THE JERBS, THE JERBSSS"), the grocery store somehow serving as some sort of crucial social glue is again a symptom of failure and degradation, and it shouldn't enjoy some sort of protection from the scurry robots that are about to uproot another of our historically crucial social virtues.
The time spared can be used for actual, real socialisation, including socialisation with strangers; you can take your kids for a walk and engage other strangers out walking with their kids, enjoying their own Time They Don't Have To Spend Buying Groceries.
If everyone spends the extra time ignoring their kids and watching Netflix then yeah, you have a problem, but you should focus on working on that problem rather than clinging to a patchy "virtue born of a lack of other options".

Teal
Feb 25, 2013

by Nyc_Tattoo
I am still not seeing the inherent advantage of grocery shopping as a socially bonding, community bolstering experience.
Initially it was socially relevant because the people selling you poo poo were usually the ones who grew it or built it. The social contact came with a promise of quality: if you swindle somebody, they know where you live, and they will make sure all your other customers know it soon, too.
We've got a butcher around the corner and I know I can buy raw sausage from him and count on not making GBS threads my guts out; I am glad to know him personally.
I can't get mad at the grocery store clerk over the quality of the bagged toast bread he sells me when it's bread that's baked in Hungary and shipped to the Czech Republic across Austria, even if I wanted to, and bread is the one thing you would expect to stay mostly local (and I still buy other sorts of bread locally). Raw meat is from Poland, vegetables are from China or god knows where, fruit is from Spain. The clerk sells stuff they have absolutely no quality control over (and they can't even throw out bad shipments without the oversight of chain management, whom I have no chance of meeting).
Being social towards the clerk while shopping makes sense only if the produce is actually local.
And when it's about meeting local people just for the sake of meeting them, I don't see how it's better to meet them in a grocery store rather than while watching a sports event or a theatre play or a concert, or just walking a dog in the park, or loving whatever.
Shopping sucks and I will gladly have my homogenized Hungarian bread provided by a blissfully ignorant multicopter rather than by a wage-slaving supermarket clerk who hates his job and his life.

Teal
Feb 25, 2013

by Nyc_Tattoo

This is gonna end in some weird not-so-horror future where nobody knows for sure if their job still has a genuine purpose or if they're one of the 95% of people kept working a fake job while the robots do the real work, isn't it?

Teal fucked around with this message at 16:48 on Feb 7, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

I think this has come up before, but is the 2010s the ideal peak of human society, or are there any automation technologies from earlier times we should have banned/restricted/made illegal?

Like apparently 350,000 people were employed as AT&T telephone switchboard operators. Should they have had their job protected by law and should we mandate we return that sector to human hands? Or is it only future technology based job loss that is bad and all the ones in the past were good? Are there ones in the future that might be good and are there ones in the past that were bad? What is the metric?

Man, imagine how many loving jobs there could be if we employed people to do IP routing.

Teal
Feb 25, 2013

by Nyc_Tattoo
https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage

Can somebody please pick this apart as totally untrue FUD, scaremongering and slander so I won't have trouble sleeping at night?

Teal
Feb 25, 2013

by Nyc_Tattoo
https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn

What are we going to do when literally no video/audio recording is going to be acceptable as reliable evidence of something that actually happened?

Other than furiously beating it to [insert your fantasy crush] doing [insert your unmentionable gross fetishes] infinitely, forever.

Teal
Feb 25, 2013

by Nyc_Tattoo

CrazySalamander posted:

https://imgur.com/gallery/gkLFz

Algorithms have granted us a tantalizing look at Harry Potter and the Portrait of what Looked Like a Large Pile of Ash.

Jokes aside, it seems like it was at least semi human directed- they describe training predictive keyboards on the Harry Potter books, so what I presume occurred was the human would pick a letter and hit enter if the word had promise or another letter if they wanted to force it in a different direction. The biggest clue to this effect is the quote "The password was 'BEEF WOMEN'," two words that I would lay money never occurred next to each other in the Harry Potter books, let alone in all caps.

It's a bit irritating that they make it sound like it was completely auto generated- as random as it seems it is most definitely not typical of a markov style walk of this length. It is way too cogent for that.

The impression I got was that they took something like the Google Android keyboard and trained it on a Harry Potter data set. Those offer you a selection of words to use next, above the keyboard itself. Then they (mostly?) tapped at these (possibly with more options than the Google keyboard usually fits into the single row offered on the display).
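If anyone's curious what that mechanism boils down to, here's a minimal sketch in plain Python: just word-pair counts over a toy corpus, with made-up names, and nothing to do with whatever tool they actually used.

code:
import re
from collections import Counter, defaultdict

def train_suggester(text, k=3):
    """Count which words follow which word, keyed on the previous word."""
    words = re.findall(r"[a-z']+", text.lower())
    follow = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1
    # keep only the top-k follow-ups, like the row of suggestions above a keyboard
    return {w: [cand for cand, _ in c.most_common(k)] for w, c in follow.items()}

corpus = "the password was not the same as the password he had written down"
suggest = train_suggester(corpus)
print(suggest["the"])  # ['password', 'same'] -- tap one, repeat, get "prose"

A real predictive keyboard has a much fancier model underneath, but the interaction is the same: the human picks from the suggestions, so the human is still doing the steering.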

I could totally see this becoming a "productivity"-increasing tool for the Amazon "ironic erotica" schlock that Chuck Tingle puts out.

Also for tabloids.

I hope the AIs won't be too distraught when they realize that their specialisation in assisting Creative Writing is a job nobody wanted to pay for to begin with.

Teal fucked around with this message at 14:20 on Dec 13, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo
Wait, 1/20? As in, 5% of the people there are homeless? Holy poo poo :wth:

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

Rich companies won't "help" anyone ever, but giant companies like walmart that rely on there being millions of low income shoppers will always do everything in their power to make sure there is low income shoppers. That doesn't make them your friend if they champion funding to make sure american society doesn't get TOO poor, because they also will champion causes to make sure american society doesn't get too well off for them either. Like there is a reason walmart is one of the world's largest donor of food to food banks while also having a huge worker population on food stamps.

I'm not sure I find "they don't want you to starve to death, they want you in a state of controlled languish" particularly reassuring.

Teal
Feb 25, 2013

by Nyc_Tattoo

Hot Dog Day #82 posted:

Matt Damon’s Elysium could also be a good look at the future, since it has a lot of super poor people just kind of milling around between working lovely soulless jobs and being policed by an unsympathetic justice system staffed by robots.

Setting aside the fact that the mistreatment of the Poor was a perfect example of 100% irrational, meaningless malice, remember that the movie ended on the note that all it took for the world to flip from dystopia to utopia was for somebody to edit "worldAI.conf" and change "people = theRich;" to "people = everyone;".

The movie manages to be both irrationally pessimistic and irrationally optimistic at the same time.

Also I think Blomkamp must have gotten bullied a lot by some Australian as a kid.

Teal
Feb 25, 2013

by Nyc_Tattoo

Raspberry Jam It In Me posted:

I highly doubt that blue denim jeans are fat people clothes. S and M are always the first sizes to be sold out with everything in the shop and the leftover sale rack is always ~90% clothes in L+. It's just incredibly lovely inventory management. Sentient inventory AI when? Why not done?

That totally has been a thing for ages but
a) It takes investment in changing the store's procedures, training the management to operate it, etc.; with the margins these places operate at and the value of the merchandise, they can afford to shred a LOT of fatboy pants before the loss outweighs the cost of that investment.
b) Even with a system like that in place you're always gonna end up with some extras, and consider that people are more likely to be able to fit into larger clothes when they can't get their exact size than the other way around, so larger sizes are the lower-risk option to overstock on.

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

I disagree with the assertion that there actually is millennia of precedent of tools being used to the detriment of 90% of humanity in any systematic way.

Look at something like world hunger or poverty or access to water or disease or whatever. It's stuff that goes up and down over the years with it seemingly generally going down. Less people are in global poverty today than were 10 years ago. Less people are starving. Less people have guinea worms.

Bad things happen too. Some years hunger goes way up, or disease. The US is a mess and we have way more hook worms than we have in 100 years. There is no absolute rule every single thing improves every single year and nothing ever goes backwards. But there is no rule that everything is bad forever and no one ever uses tools to make anything better.

Someone somewhere is fixing water access. The amount of people without water has gone generally down overall over time. Either some dumb elite that doesn't know better that they were supposed to be satan incarnate and are being nice when they shouldn't be, some elite that is being seemingly nice but has ulterior motives or some non-elite that is fixing their own problem because water is such a widespread technology that it's within their reach now too. There hasn't been some rising tide of people having less and less water as elites grow in power. (nor have elites always grown only more powerful, with that being another thing that sometimes gets better and sometimes gets worse year to year, for both political and technological reasons). I bet in the US there will be less people with access to water in 5 years than there was 5 years ago. I bet globally the opposite.

I think you made a compelling point, but I feel like you're ignoring a crucial factor which might have a stronger effect than Facebook or taxibots.

For the first time in history, we're watching the impending depletion of crucial resources without which we can hardly envision the existence of a meaningful society, or even individual welfare. Fine; the energy scare since the big oil crisis turned out to be mostly false; renewables have exploded, and as long as we don't block out the sun Matrix-style we can probably count on relatively affordable, relatively reliable sources of generally useful energy, means of some local travel, etc.

There's still a heap of kinda-big-deal issues; most obviously global warming, which is projected to start making a lot of equatorial areas literal killzones within this century, not to mention all the obvious but less easily quantifiable effects on farming and ocean ecosystems (another source of food we hugely rely on).

Then there's also topsoil depletion, aquifer depletion, the nutritional value of all plant produce decreasing with increasing CO2 levels, the emergence of antibiotic resistance in some serious illnesses, etc.

Basically, all the way through history The Rich didn't really have much to lose from letting the poor Kinda Languish Outta There; there was room, and there was usually at least some form of nutritious if unsavory food the rich could sell to the poor to make themselves richer.

We're currently at a peak of abundance yet probably approaching a crush of scarcity that will really test the willingness not to use all these fantastic Poor People Destroying Tools.

Teal
Feb 25, 2013

by Nyc_Tattoo

Yuli Ban posted:

Anyone following news on the front of generative adversarial networks?

Yep, I work with a startup for document processing and while our current pipeline doesn't utilize any GANs in particular, it's currently the holy grail of what we would like to eventually use. It's just rather high-effort and we won't know if it's particularly feasible for our use case until we sink the development time into it, and everyone who can work on neural nets is hella busy with the ones we already have.

There's a big "hurrah" and "ooh" in the Slack a few times a week as somebody links another crazy paper about Another Thing GANs do Crazy Well.

Neural nets are kind of a meme within the machine learning community in general; my lecturers at university, for example, are a bit salty about them, as they're steamrolling all the other successes of machine learning from the last 20 years, and GANs are simply the bleeding edge of that right now.

Teal
Feb 25, 2013

by Nyc_Tattoo

muon posted:

but it does not imply that the GAN "knows" how to make a horse/zebra walk like humans can easily imagine. Now we could design a neural net to learn how to do that, but **the only way we know how to do that today requires thousands and thousands of labelled data sets of horses walking** before we could even begin to create that model. Neural networks need to have nuanced semantic understanding and ability to generalize to really be able to help create content in the way you're describing. That's an extremely difficult task that I don't know we'll solve by the 20s.

While I essentially agree with the notion that GANs aren't quite self-redeeming or an end-all, and there's obviously a ton of stuff left to do, the bolded part is actually one of the things GANs are meant to (and to some degree already do) eliminate: meticulously specific, hand-crafted labels based not just on our understanding of the data but also on thorough reprocessing of it all to convey that information with some degree of explicitness.

The idea behind GANs is that if you wanted to teach one how horses walk, you'd just throw at it a massive pile of unlabelled footage that has some kind of horse walking somewhere in it. The "game" between the two neural nets that make up the structure is that the generative part tries to come up with fake footage of walking horses while the other one learns to distinguish the fakes from the originals, so the first one learns things like Euclidean geometry and eventually horse mechanics and horse biology, because the other one gives it the feedback (in less explicit terms) that "no, this caterpillar thing that moves by shifting its volume through space like an amoeba definitely isn't a horse and this sample is a bad fake".

GANs ideally don't need labelled data (although it can help), and that's part of what makes them so incredibly powerful; sometimes they work excellently even in cases where we can't be arsed to get the labels done, or where we literally don't have explicit enough understanding of the task at hand to come up with labels.
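To make the shape of that game concrete, here's a minimal Keras/TensorFlow sketch of one adversarial training step. Nothing in it comes from any real project; the dimensions are arbitrary and random vectors stand in for the "unlabelled footage", it just shows the two-network loop.

code:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 32    # size of the random noise the generator starts from
frame_dim = 64     # stand-in for a flattened "frame"; purely made up

# generator: noise -> fake "frame"
generator = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(frame_dim, activation="tanh"),
])
# discriminator: "frame" -> logit saying real or fake
discriminator = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(frame_dim,)),
    layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_batch):
    noise = tf.random.normal([tf.shape(real_batch)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_batch, training=True)
        fake_logits = discriminator(fakes, training=True)
        # discriminator learns to call real 1 and fake 0...
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # ...while the generator learns to make the discriminator call fakes 1
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

# unlabelled "footage": random vectors here, real frames in the actual use case
real_frames = (np.random.rand(256, frame_dim).astype("float32") * 2) - 1
for _ in range(100):
    train_step(real_frames)

The point of the sketch is just the shape of the game: nowhere does anyone hand the generator a label saying "this is how a horse's leg bends"; the only supervision is the discriminator's real-versus-fake verdict.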

Teal
Feb 25, 2013

by Nyc_Tattoo

Yuli Ban posted:

I mentioned this myself, that in the 20s, these algorithms will most assuredly generate increasingly amazing content but they will not understand said content and it's up to you to make sense of it all.

To use the comic example again, I could get the algorithm to generate a person in the style of Jack Kirby; first they have to consume thousands of picture of a person and then thousands of works from Kirby, and then eventually it'll understand how to warp the dimensions of a real human to fit a more stylized version, all to create "Dr. Minsky". I could get the algorithm to make this synthetic comic book Minsky take a variety of poses that I feed into it, or maybe alter his hair color or his wardrobe based on text inputs. I can add speech bubbles (I mean, I could do that really), and then I could create another person in the same manner. GANs will be able to do this by the early 2020s. It seems like a long-stretch, but there's a reason why I'm calling it— mostly due to the fact an actual machine learning expert whose pulse is firmly on the field is saying that what's being done in the labs is about right for the course, and he's still conservative about these things.

They won't be able to actually generate a full comic, start to finish, that makes sense. It's up to me to put every panel in order and fill in the text boxes. It's up to me to decide which panel works and which one doesn't. It's up to me to make the story, the reason why these characters are made in the first place. Sort of as if you have fully automated production of paper, but it's up to you to turn it into a book and sell it. It's true when you say that GANs are limited and will still have limits. I'm just saying that we'll still be able to do amazing things within the confines of these limits.

It's not going to be until the 2030s, at the absolute earliest, that we'll start seeing truly creative AI in any real way. I'd love to be wrong, but that's the way I'm seeing it at the moment.

While I agree it's a fool's bet, I feel like assigning decades to this kind of advance is meaningless considering how much of an insane leap we have made since 2000. You never know when somebody's gonna come up with a trick that will allow them to feed the whole of Wikipedia through an Nvidia Titan X and get "Hello, I'm God, I'm here to save you from yourselves." at the console prompt.

"Machine learning relies on huge amounts of data with labels for everything, always, a LOT of them" was borderline axiomatic, heck, even five years ago, and look where we are now.

Teal fucked around with this message at 01:10 on Dec 21, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

Also if a guy is in a room with a bunch of how to answer Chinese questions flowchart books and answers Chinese questions does anyone really understand Chinese etc etc etc

Only because "people need to write some non-dystopic fiction" was mentioned recently, I will use this opportunity to plug blindsight (you can legally read the whole Peter Watts' 2006 novel in the link, he released it for free), which is an incredibly puzzling and elaborate scifi exploration of possible inhuman intelligence (even though the artificial one isn't focus, although present and relevant). Chinese room is a very relevant concept through it, and while it's rather drastic and bleak, it's still a future I'd actually like to exist in.

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

Reading the title and trying to think of which book that is I thought maybe it was either

WWW: Wake, which is about the internet coming alive after China seals off its section of the internet, and the perspective of having two individuals is what sparks a thoughtless computer into having a sense of self (and then a blind little girl gets bionic eyes that don't work but let her see the flow of the internet, and that does something else I can't remember because I read this book like 12 years ago).

OR

That it was some short story I can't remember the name of, about a blind man being a world-class super genius and science realizing that his brain had rewired to use his visual cortex for problem solving, and science figures out a technique to do that repeatably, so people end up having to remove their kids' eyes because no amount of study will ever make you as good as an eyeless genius. Until it spreads to nearly all skilled labor and only unskilled laborers have eyes.

It's actually named after the neurological phenomenon called "blindsight", which occurs exactly once in the story but is rather aptly indicative of the wild, nihilistic, soul-brutalizing ride the book takes you on. After I finished it I started having some mild depressive episodes. I really don't recommend it to people who have confirmed psychological dissociation problems or general nihilism/existential dread issues.

Teal
Feb 25, 2013

by Nyc_Tattoo


I cannot loving handle this

Teal fucked around with this message at 12:39 on Dec 21, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Raspberry Jam It In Me posted:

I don't get it. What is it trying to say? Is this a schizophrenia thing?

It's a right-wing blog trying to imply that the UK Labour Party (lefties) has this overly pessimistic, negative view of a future involving automation, whereas the UK Conservative Party can lead us to a shining vision of a perfect future where everyone drives Smart cars across fields and meets up with robots while wearing VR goggles for no apparent reason.

It's basically a really vulgar, cheap satire of the two sides of the argument in this thread, except meant seriously, specifically lampooning one side, and tied to a particular political party.

Teal
Feb 25, 2013

by Nyc_Tattoo

Trabisnikof posted:

lol if you think these AI tools will be free to use.

If the current trends regarding bleeding-edge neural net development tools (Keras, TensorFlow), as well as the academic community around them (how much of it gets pushed to arXiv, for instance), are indicative of anything, then yeah, this very well might be the most "free to use" by default technological field there's ever been.

Google and Facebook openly publish a lot (if not most) of their neural net research.

OpenAI literally has it in their mission statement.

At least so far, AI has been ahead of various other computer tools like media editing, control systems, maths, etc. when it comes to sharing your discoveries and tools with the broad community free of charge.

A lot of these fancy rear end showy experiments you can literally fork off GitHub and train on your own data on a consumer-grade GPU (sadly preferably Nvidia, and sadly preferably with closed-source drivers).

It's a lot more "free" and accessible than a fuckton of other STEM fields where computerized tools have been a must for decades.

Teal fucked around with this message at 23:37 on Dec 21, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Trabisnikof posted:

Of course you stick in all this nonsense about "intentionally horded" to avoid the core part of the criticism, that when a machine replaces a job, rarely can that worker afford to buy that machine outright instead.

It isn't "intentional hording" for expensive machines to replace workers, but that doesn't change the fact that when a job is automated it isn't the worker who will end up owning the machine. Sure, at least the cost of entry to software is lower than mechanical machines, but the one-sided relationship is the same. When software replaces a job the worker can't solve their problems by just buying the software, even if they can afford it.

I'd argue that relative to the profit most people can expect, artistry is pretty drat expensive as is, especially if we're talking digital art where you're expected to shell out thousands of dollars for a basic Cintiq and then hundreds more for the professional software.
Current "AI Software" costs are, depending on the accessibility level you want (which is however inverse to the kind of power you can get out of it, as the bleeding edge poo poo is all mostly in papers and github repos), either literally nothing as bunch of the best poo poo is FOSS, or actually included in the costs of your Photoshop and whatnot, as ML based filters and algorithms make it into that as drag and drop plugins. Hardware wise all of those GAN examples can be trained within order of days on a GTX1070 or some such, in computer that's otherwise equivalent of mid-range gaming machine. And there I'm talking about training your own nets with your own data (which would mean you're trying to adapt a specific artistic style for example); run-time wise, neural nets are at the point where for some uses (item recognition, or in more vulgar case, facial recognition) they can run realtime on a smart phone.

I'm not gonna pretend I can predict it will stay this way forever but the current trend implies neural net based intelligent tools might become one of the most "available for everyone" tools there's ever been, presuming the AI community doesn't somehow magically flip from mostly-FOSS to mostly-proprietary (worth noting it's been mostly progressing towards being more open), presuming we're not talking about people becoming too poor to own a consumer laptop.

Teal fucked around with this message at 10:03 on Dec 22, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Tei posted:

I have a problem here.

You guys talk like owning a robot is all it would take for a poor person to be at the same level as a rich guy who owns a robot.

Here's something for you:
https://www.cnet.com/news/facebook-closes-19-billion-deal-for-whatsapp/

Facebook paid 19 billion dollars for an IRC client. Writing an IRC client is not very hard; all it takes is about 2 hours of programming.

http://archive.oreilly.com/pub/h/1968

So what's going on here? Why did Facebook pay 19 billion for a "robot" that is really cheap? It seems that you don't care about the code itself (the robot), you care about the captured customer base.

The poor can own robots, the means of production, but that's useless to them. They will still be poor. There's more in play here than owning the means of production. There's also something to be said about having contact with the people who provide raw materials, having bargaining power, having marketing muscle, and so on.


Let's not be naive. A university student can own a printer, and that will not save him from paying $300 for a crappy book his teacher wrote. Just owning a means of production doesn't make people peers of the people with power.

Facebook didn't pay 19 billion for a (bad) IRC client; it paid 19 billion for a competing social media network with circa 0.6 billion users at the time.

The value of WhatsApp wasn't in the company, or the tool, but in the clients. Companies literally buy companies for the sake of inheriting their clients all the time, and the plasticity of social networks is particularly limited, as people usually hesitate to jump the fence away from something their friends use.

Facebook needs the people so it can datamine their communication and show them ads. It didn't buy a robot, it bought a well-grown (I'm sorry to put it as rudely as that, but in this context it's the most fitting analogy) spying network.

This wasn't a purchase of software or a tool; it was the purchase of a company, its know-how, and most importantly, its market share.

Facebook buying a copy of "Photoshop That Can Phone Less Relevant Details In Automatically" doesn't make my copy of that work any worse. If they buy out whoever was developing the tool, the community will fork the source code of the last free release and carry on from there (that is, if it was something worth keeping around).

And yes, of course, a bigger company will be able to get more data for their machine learner, and big computation farms and all that, but that applied to shovels and bulldozers as well, except right now you can build a bulldozer in your living room and some of them are as good as if not better than the best ones Facebook (who also happens to be contributing to the DIY effort themselves, but that just collapses the whole analogy irreparably) is toying with.

Teal fucked around with this message at 17:16 on Dec 22, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Doctor Malaver posted:

A friend who develops on MS Azure was telling me today how it's insane how many new services they announce on a weekly basis. Face recognition, face mood recognition, text summarizing, text analysis sentence by sentence... He tries to stay up to date but too much is going on. And it all costs like a fraction of a cent per use. And that's just Azure, AWS is bigger and better.

Which reminds me: MS Azure and AWS both throw considerable sums of completely free credit (free off the bat, not a free bonus on top of something else you bought) at startups that show promise. That's obviously an advertising scheme/market move, but it still means that if you get lucky and are convincing, you can get a fuckton of compute (including the GPU time neural nets need) done for literally free.

Teal
Feb 25, 2013

by Nyc_Tattoo
A coherent story is hard to do via AI because to understand the basic building blocks of storytelling, like justice, redemption, wanderlust and wonder, you'd already have to be slowly nearing Artificial General Intelligence levels. This whole topic has slowly turned from "scary and feasible" into "really nice but currently not quite there".

Obviously, you can always do something like Dishonored and quasi-script all the spectral facets of whatever flavor of sadistic murderer the player is, but the amount of content needed for that will quickly get unmanageable even on an AAA budget. And yeah, procedural generation and AI are helping and we're definitely gonna see more and more of it, but I think it will start from there: letting the designer order literally man-decades of content for their massive, convoluted meta-script. Only slowly will the AI move upwards from making new textures and models to rooms to buildings to cities, and only after that to meaningful off-script quests and characters and whatnot.

It's a question whether you count Dwarf Fortress, which is a weird case where you keep wondering if there's actual depth and thought to the world or if you're starting to hallucinate due to the brain damage the interface has inflicted on you. There's poo poo like legends and entire story arcs, and all of the world is generated procedurally, but then somebody uses gold worth more than their literal life to carve a picture of themselves eating cheese and you realize it's more cleverly dressed noise than something coherent.

Teal
Feb 25, 2013

by Nyc_Tattoo

Yuli Ban posted:

And now witness this.
 
Remember that time about a year and a half ago when a man fed Blade Runner into a neural network?
 
Turns out that the neural net was able to remember Blade Runner and managed to transpose its aesthetic onto other films.
The Neural Net That Recreated ‘Blade Runner’ Has the Movie Stuck in Its Memory
The AI that made ‘Blade Runner: Auto-encoded’ transposed the aesthetic of the movie onto other sci-fi classics

Okay I don't like this article, and while I love what the guy is toying with, I hate the way he talks about it.

Autoencoders are more like a glorified data compression algorithm that does its earnest best to figure out new patterns that help it squeeze just a smidge more info about the stuff it's meant to learn into the horribly inadequate amount of memory it has. The side effect of that is that indeed, in the ideal case, it does learn the "essence" of the input data and finds for you the patterns you'd struggle to uncover through classic engineery methods, and you can sift through the data in this new bastardized form, get a kind of "machine's notes" on it, and then for instance run clustering on that representation.

It's not the "mind of an AI"; it's more like making a classroom full of kinda special elementary graders write CliffsNotes on a book, telling all of them "just describe whichever bits of each chapter you found the most interesting", and then averaging the results.

It doesn't give insight into how AI works, or insight into how eyesight works; it gives insight into what the lowest-entropy description of a thing looks like in a purposefully insufficient language for describing it.
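The whole "compress through a bottleneck, then reconstruct" idea fits in a handful of lines. A toy Keras sketch, with random arrays standing in for frames and sizes picked arbitrarily, not the setup from the article:

code:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

frame_dim = 32 * 32 * 3   # a small flattened frame; purely illustrative
bottleneck = 64           # the "horribly inadequate amount of memory"

autoencoder = tf.keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(frame_dim,)),
    layers.Dense(bottleneck, activation="relu"),    # the compressed "notes"
    layers.Dense(512, activation="relu"),
    layers.Dense(frame_dim, activation="sigmoid"),  # attempted reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")

frames = np.random.rand(1024, frame_dim).astype("float32")  # stand-in data
autoencoder.fit(frames, frames, epochs=3, batch_size=64)    # target == input

# the "machine's notes": activations at the bottleneck layer
encoder = tf.keras.Model(autoencoder.input, autoencoder.layers[1].output)
codes = encoder.predict(frames[:10])  # this is what you'd cluster or browse

The blurry reconstructions are the part the article shows off; the bottleneck codes are the part you'd actually poke at if you wanted the "notes".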

Teal
Feb 25, 2013

by Nyc_Tattoo

Tei posted:

Let me use this space to whine about human memory. Is like a lovely compression algorithm.

The critical difference between human memory and an autoencoder is that we build the representation on the basis of abstract yet meaningful parallels, and there's no proof this will ever emerge from feeding a "large enough" neural-net-based autoencoder of this type "enough data".

The foundation of your memory of a piece of cake you ate yesterday is an understanding of cocoa, which fills in the flavour, the colour, the scent, the likely texture cocoa-based cakes have. You don't need to remember the angles the slice had, because you've seen a whole cake before and have seen a circle sliced into wedges countless times. You don't need to think about whether it came served on porcelain or on a patch of snow, because you know cakes come on platters, and those are usually made of porcelain.

Feeding all those connections into a neural net in a meaningful manner, without "stubs", might easily turn out simply not to work; even if you have the resources to make it huge and give it all the data and time it might possibly need, NNs come with no guarantee of ever converging to the global minimum (e: of training error, which is in this case extremely hard to define). It might simply not figure out the right connections in things (which is often the case, and the best you can do is shrug and keep trying new combinations of parameters and data representations, and eventually something else entirely).

Teal fucked around with this message at 10:54 on Dec 27, 2017

Teal
Feb 25, 2013

by Nyc_Tattoo

Tei posted:

It doesn't seem to be a huge difference. Or maybe I am too dumb to understand any of this.

It doesn't sound like autoencoders work very differently from human memory, only that human memory uses another subsystem (a tokenizer/lexer phase) that autoencoders don't include.

It would be easier to formulate the differences in exact and specific terms if we had a complete understanding of how our mind and memory work.

For one thing, what the other guy said: one of the biggest hurdles of deep learning development that a lot of people are trying to figure a way around is that NNs are generally very spatially sensitive. For some specific input space (for instance an XY raster image), if you want to distinguish an apple no matter where it is in the picture, you ideally need to show the net a more or less uniform distribution of apples in all possible positions in the image; if all your samples had an apple in the bottom-right corner, you'd hit a wonderful accuracy and all, but a new sample with an apple in the top-left corner would get misclassified. Similarly, our autoencoder will learn to put the characters in the cinematically common placements, but it's gonna be really weirded out once it encounters something hosed up or purposefully weird.
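The usual cheap band-aid for exactly that is to fake the uniform distribution by randomly shifting your training images around; in Keras that's a few lines (a generic example, nothing to do with the Blade Runner model, and model/x_train/y_train are hypothetical names):

code:
import tensorflow as tf

# randomly slides each training image up to 20% in either direction,
# so the net sees "apples" in many positions instead of one corner
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    width_shift_range=0.2,
    height_shift_range=0.2,
    fill_mode="nearest",
)
# model.fit(augment.flow(x_train, y_train, batch_size=32), epochs=10)

It doesn't teach the net anything about position in general; it just brute-forces the training distribution into covering more of it, which is kind of the point being made here.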

Other than that, though, there's the whole "comprehension" versus "Chinese room" problem; a lot of the neural net poo poo can look super impressive, but if you go back, analyse the results and toy with some different or purposefully weird testing data, you quickly realize that it's less that you have some sorta amazing AI on your hands and more that the problem you were presenting it with was actually loving stupid, and that it found a really cheap way to pretend it comprehends while it's just smooth brute force.

Teal
Feb 25, 2013

by Nyc_Tattoo

Malcolm XML posted:

this is not true

max-pooling gives invariance to a bunch of things assuming you've trained correctly

That's a bit of a simplification; it does give decent spatial invariance in position, but if the position actually matters to you, or if some complex relation between positions matters to you (for example, text), you'll have to figure out various hacky stuff that often makes the whole deal harder and harder to train. Yeah, the U-Net model is all the rage, but I don't think it's quite where we're gonna end up.
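Purely as a toy (plain numpy, nothing framework-specific): max-pooling eats a shift as long as the feature stays inside the same pooling window, and stops helping the moment it crosses a window boundary, which is where the hacky stuff starts.

code:
import numpy as np

def max_pool_1d(x, size=4):
    # non-overlapping max pooling over windows of `size`
    return x.reshape(-1, size).max(axis=1)

a = np.zeros(16); a[5] = 1.0   # a detected "feature" at position 5
b = np.zeros(16); b[6] = 1.0   # same feature shifted by one (same window)
c = np.zeros(16); c[9] = 1.0   # shifted further, into the next window

print(max_pool_1d(a))  # [0. 1. 0. 0.]
print(max_pool_1d(b))  # [0. 1. 0. 0.]  <- shift invisible after pooling
print(max_pool_1d(c))  # [0. 0. 1. 0.]  <- shift visible again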

quote:

you don't need to be perfect to replace a bunch of repetitive jobs esp if you can control for things like rotation and scaling using such tricks as "lenses" and "orienting" like all the fruit classification systems do

Yeah, that's true. Again, I'm less critiquing the usefulness and practical applications of today's NNs and more trying to explain (in a probably quite badly worded way) how they are not quite there when it comes to the idea of them dreaming up movies at the drop of a hat (not even conceptually yet).

Teal
Feb 25, 2013

by Nyc_Tattoo

Owlofcreamcheese posted:

The future is now



Is this the leaked villain from the next Bond movie?

Cybernetic prosthesis and a pet duck?

Teal
Feb 25, 2013

by Nyc_Tattoo
Awful news for all of us who hoped their tomato-picking job would be safe from automation.

The presented success rate of 75% isn't that impressive, though. How often do you drop a tomato you're attempting to pick, guys?

Teal fucked around with this message at 17:33 on Jan 1, 2018

Teal
Feb 25, 2013

by Nyc_Tattoo
Guys you're reading too much into the thing I said about dropping, I just really wanted to make a post where I get to say both tomato and automation

Teal
Feb 25, 2013

by Nyc_Tattoo
Roomba-style lawnmowers are already a thing, and they're taking care of people's lawns right now. For now, you have to mark the areas they're not meant to mow down (flower patches, produce) with wires pinned to the ground.
Next thing, they will be able to tell what not to mow down autonomously, without you having to mark it.
Next thing after that, they might as well also learn to distinguish and pluck weeds from areas they shouldn't mow but can still reach (you could probably literally pull that off with single-motor shears on a single-motor flip-out arm).
Next thing after that, they will tweet "YOUR poo poo IS READY FOR PICKING, MEATBAG" and attach photos of the particularly ripe automatos when they encounter some.

It will definitely be more of a niche thing for nerds and enthusiasts, and the practical impact will be less "self-sufficiency produce for them pyorple" and more white-collar dads lazing in a recliner, watching the sucker go (and getting up to get it unstuck once in a while).

I feel like the most socially optimistic pitch I can make is that automato pickers will be super relevant for vertical farming and hydroponics. If they let designers pack the stuff into racks that don't have to be human-accessible at all, stacked in stupidly dense rooms and buildings that never need to facilitate the entry of a person-shaped person and can focus purely on the density of the produce and the efficient use of light and water, the productivity (which is already pretty impressive in current designs) will shoot through all kinds of roofs.

So the hope here is that automatoes will be so cheap you'll easily be able to afford some real organic automato with a side of soylent even when living off your UBI.

Teal fucked around with this message at 10:36 on Jan 2, 2018

Teal
Feb 25, 2013

by Nyc_Tattoo

Hm, the automato-picking machine will need a lot of work before it can pick tomatoes while simultaneously providing the sex appeal of a shirtless worker.

Teal
Feb 25, 2013

by Nyc_Tattoo

Tasmantor posted:

Vertical farming also gains in being able to be close to the final destination, don't have to ship it far if it was grown in the 'burbs.

Will SOMEBODY PLEASE think of the truck drivers?

Teal
Feb 25, 2013

by Nyc_Tattoo

Blue Star posted:

Cant wait to be constantly misgendered by software as it scans my big ugly mannish trans woman face. Gonna be rad. :)

I wouldn't be so worried; we had a demo of an apparently cutting-edge (at the time) recognizer in the hallway of our university a few months back, and it tagged me as an almost perfect 50/50 impossible-to-gender case even though I never made any effort to make that distinction difficult; I guess I just had a regular fat fucker face and long hair.

I feel like gender-reading will return a lot of "ambiguous" even among cis people, let alone among trans people, who usually do try to give off specific cues.

It did pin my age to within 2 years, though, and the emotion reading was pretty great too.

Teal
Feb 25, 2013

by Nyc_Tattoo

paternity suitor posted:

Ew, now I'm imagining some sort of app that scans people's reaction to your appearance and gives you a ranking at the end of the day so that you know what to wear...those shoes cost you 2 attraction points Rachel.

Why not cut out the meaty middlemen and just train a machine that will tell you that you look like poo poo on its own, before you head out?

I'm borderline ignorant of how to look good, but I wouldn't mind pointing a camera at myself and getting

"Tie just looks like poo poo, dump it"
"Shirt collar is crooked"
"Socks don't match"

Asking a fellow person how you look is awkward as gently caress, but I would feel less bothered taking advice from a cold, unfeeling heap of algebra to help me choose between a paper sack and a noose.

Teal
Feb 25, 2013

by Nyc_Tattoo

Tasmantor posted:

Algorithms have been nothing but good for us so far! If only one could tell me how to dress :downs:

Who writes that winner? Apple, Google or Amazon gonna get the right to tell us all how we should dress? gently caress silicon valley I don't think anyone should be able to tell you how to dress, not fashion mags, not highschool kid, not some weird group think you believe in and sure as gently caress not the people who think "move fast and break things" is a cool attitude.

Firstly, there are some basic aesthetics that the vast majority of people can agree on; it's not like you have to appeal to Vogue to point out that someone's shirt is inside out.

Secondly, this is easily something you could end up being able to build on your own (or as a small community), feeding it a dataset of "looks you like" and "looks you don't like". Even today's neural nets won't have the slightest issue learning basic things like clashing colors.
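A rough sketch of that DIY version, assuming Keras and two folders of photos you've sorted by hand into data/like/ and data/dislike/ (the folder names and sizes are made up for illustration):

code:
import tensorflow as tf
from tensorflow.keras import layers

# photos sorted by hand into data/like/ and data/dislike/
train = tf.keras.preprocessing.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32, label_mode="binary")

# reuse a pretrained image backbone so a small personal dataset is enough
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),      # probability of "like"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)

Whether the result agrees with Vogue is a separate question, but mechanically it's a weekend project, not a Manhattan Project.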

Teal
Feb 25, 2013

by Nyc_Tattoo

Guavanaut posted:

But like all purely aesthetic choices, what if wearing your shirt inside out becomes the new coolness?

Thinking about it, being told by an AI to flip your shirt the right way out might be exactly the thing that triggers that trend.

Teal
Feb 25, 2013

by Nyc_Tattoo

Inescapable Duck posted:

And not even bringing up how automated beauty standards based on celebrities and popular opinion are absolutely going to poo poo on minorities, racial and sexual.

I get the feeling that kinda poo poo will get better rather than worse. Once somebody lays down a formula that hands out a rating, even if it's learned from a dataset, there can be a fact-driven resistance to the results along the lines of "hey, bitch, you clearly are not representing X enough", and techies will be a lot more able to comprehend/accept that than some trash fashion magazine of old.

Teal
Feb 25, 2013

by Nyc_Tattoo

Tasmantor posted:

You don't like t-shirts and jeans, they are comfortable and cheap, they endure for a reason.

What the gently caress is with this thread? Automation is useful at making work easier so we have more time to be humans but all you goony dicks want machines to be human for you so you can capitalism harder.

Like gently caress me, if it's so trivial to dress well that an AI could be made overnight, then just loving learn how to dress yourself. It will take you ten minutes on YouTube, and then day to day, go nuts, ask another flesh bag if you look like poo poo. Sure you got bullied a lot in highschool, but that was because you're awkward as hell and kids are cruel Basterds. You're adults now and people like interacting with other people, we are social monkeys, they won't hurt you. The world isn't a game of DND, you don't dress to min-max on some loving spreadsheet, grow up.

I mean at least it's automation of something you are talking about but your analysis of it is "yes I am a social retard, I would love an ai mum to dress me and tell me I am special!". The paperback sci fi about a world of man babies, dressed by all loving algorithms and growing food in their yards by letting a specialised green thumb quad copter do the work, writes itself. It's the saddest most pathetic future you could go for, one where people exist to watch tools live for them instead of work for them.

Somewhere in all the bitching about people only seeing bad futures, with no work but the same work or die society, you all switched to letting the tail wag the dog.

There are a lot of people who opt out of social interaction altogether if they can. If you can ease at least one of the things they really don't enjoy doing, and it ends up being what tips them from not bothering to bothering to be social, you've done them a service (and the common good one too, unless you happen to prefer the eugenic view that it's better to wait till they off themselves).

You sound like somebody trying very aggressively and dismissively to lecture people about dealing with problems you've never experienced. If you have the goodwill and an idea of how to help the asocial (whom you definitely can't call simply a product of technology, as they have been recognized pretty much all the way through recorded history) in a more organic way, by all means, go do that! But don't scream and flail because somebody else came up with a crutchy, tech-based solution.

Teal fucked around with this message at 12:07 on Jan 7, 2018
