|
Tiny Brontosaurus posted:Yes, going out in society is a social activity. Maybe you trudge and sulk and shrink away hissing anytime someone makes eye contact, but I know the cashiers who work at my store and talk to them, and frequently run into people I know or have some reason to chat with the other people there. I wonder just how much of human existence you've decided you're too good for. This is another case of "introducing robots to this defunct system will make it worse by disabling a non-solution to its problems". Just as a society that produces more food than ever, with more living space (and elasticity to produce more) and energy (and elasticity to increase energetic efficiency), somehow still has its bottom half struggling to make ends meet while fixating on "we need more people doing SOMETHING so they are technically useful" (aka "BUT THE JERBS, THE JERBSSS"), going to the grocery store somehow being some sort of crucial social glue is again a symptom of failure and degradation that shouldn't enjoy some sort of protection from the scary robots that are about to uproot another of our historically crucial social virtues. The time spared can be used for actual real socialisation, including socialisation with strangers - you can take your kids for a walk and engage other strangers out walking with their kids, enjoying their own Time They Don't Have To Spend Buying Groceries. If everyone spends the extra time ignoring their kids and watching Netflix then yeah, you have a problem, but you should focus on working on that problem rather than clinging to a patchy "virtue made out of a lack of other options".
|
# ¿ Feb 7, 2017 10:52 |
|
I am still not seeing the inherent advantage of grocery shopping as a socially bonding, community-bolstering experience. Initially it was socially relevant because the people selling you poo poo were usually the ones who grew it or built it. The social contact came with a promise of quality - if you swindle somebody, they know where you live, and they will make sure all your other customers know it soon, too. We've got a butcher around the corner and I know I can buy raw sausage from him and count on not making GBS threads my guts out, so I am glad to know him personally. I can't get mad at the grocery store clerk about the quality of the bagged toast bread he sells me when it's bread baked in Hungary and shipped to the Czech Republic across Austria, even if I wanted to, and bread is the one thing you would expect to stay mostly local (and I still buy other sorts of bread locally). Raw meat is from Poland, vegetables are from China or god knows where, fruit is from Spain. The person sells stuff they have no quality control over whatsoever (and they can't even throw out bad shipments without oversight from chain management, whom I have no chance of ever meeting). Social-towards-clerk shopping makes sense only if the produce is actually local. And when it's about just meeting local people for the sake of meeting them, I don't see how it's better to meet them in a grocery store rather than while watching a sports event or a theatre play or a concert, or just walking a dog in the park, or loving whatever. Shopping sucks and I will gladly have my homogenized Hungarian bread provided by a blissfully ignorant multicopter rather than by a wage-slaving supermarket clerk who hates his job and his life.
|
# ¿ Feb 7, 2017 14:25 |
|
Jesus Horse posted:France has a fake jobs program This is gonna end in some weird not-so-horror future where nobody knows for sure if their job still has a genuine purpose or if they are among the 95% of people kept working a fake job while the robots do the real work, isn't it? Teal fucked around with this message at 16:48 on Feb 7, 2017 |
# ¿ Feb 7, 2017 16:46 |
|
Owlofcreamcheese posted:I think this has come up before but is the 2010s the ideal peak of human society or is there any automation technologies from earlier times we should have banned/restricted/made illegal? Man, imagine how many loving jobs there could be if we employed people to do IP routing.
|
# ¿ Feb 7, 2017 17:06 |
|
https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage Can somebody please pick this apart as totally untrue FUD, scaremongering and slander so I won't have trouble sleeping at night?
|
# ¿ Mar 1, 2017 22:33 |
|
https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn What are we going to do when literally no video/audio recording is going to be acceptable as reliable evidence of something that actually happened? Other than furiously beating it to [insert your fantasy crush] doing [insert your unmentionable gross fetishes] infinitely, forever.
|
# ¿ Dec 12, 2017 13:40 |
|
CrazySalamander posted:https://imgur.com/gallery/gkLFz The impression I got was that they took something like the Google Android keyboard and trained it with a Harry Potter dataset. Those offer you a selection of words to use next above the keyboard itself. Then they (mostly?) tapped through these (possibly with more options than the Google keyboard usually fits into the single row offered on display). I could totally see this becoming a "productivity"-increasing tool for Amazon "ironic erotica" schlock like Chuck Tingle puts out. Also for tabloids. I hope the AIs won't be too distraught when they realize that their specialisation in assisting Creative Writing is a job nobody wanted to pay for to begin with. Teal fucked around with this message at 14:20 on Dec 13, 2017 |
# ¿ Dec 13, 2017 14:16 |
|
Wait, 1/20? As in, 5% of the people there are homeless? Holy poo poo
|
# ¿ Dec 14, 2017 21:14 |
|
Owlofcreamcheese posted:Rich companies won't "help" anyone ever, but giant companies like walmart that rely on there being millions of low income shoppers will always do everything in their power to make sure there is low income shoppers. That doesn't make them your friend if they champion funding to make sure american society doesn't get TOO poor, because they also will champion causes to make sure american society doesn't get too well off for them either. Like there is a reason walmart is one of the world's largest donor of food to food banks while also having a huge worker population on food stamps. I'm not sure that I find "they don't want you to starve to death, they want you in a state of controlled languish" particularly reassuring.
|
# ¿ Dec 15, 2017 00:21 |
|
Hot Dog Day #82 posted:Matt Damon’s Elysium could also be a good look at the future, since it has a lot of super poor people just kind of milling around between working lovely soulless jobs and being policed by an unsympathetic justice system staffed by robots. The fact that the mistreatment of the Poor was a perfect example of 100% irrational, meaningless malice aside, remember the movie ended on the note that all it took for the world to flip from dystopia to utopia was for somebody to edit "worldAI.conf" and change "people = theRich;" to "people = everyone;". The movie manages to be both irrationally pessimistic and optimistic at the same time. Also I think Blomkamp must have gotten bullied a lot by some Australian as a kid.
|
# ¿ Dec 20, 2017 11:49 |
|
Raspberry Jam It In Me posted:I highly doubt that blue denim jeans are fat people clothes. S and M are always the first sizes to be sold out with everything in the shop and the leftover sale rack is always ~90% clothes in L+. It's just incredibly lovely inventory management. Sentient inventory AI when? Why not done? That totally has been a thing for ages, but a) it takes investment in changing the store's procedures, training the management to operate it, etc.; with the margins these places operate at and the value of the merchandise, they can afford to shred a LOT of fatboy pants before it outweighs the value of that investment. b) Even with a system like that in place you're always gonna end up with some extras, and consider that people are more likely to fit into larger clothes when they can't get their exact size than the other way around, so L+ is the lower-risk option to overstock on.
|
# ¿ Dec 20, 2017 13:17 |
|
Owlofcreamcheese posted:I disagree with the assertion that there actually is millennia of precedent of tools being used to the detriment of 90% of humanity in any systematic way. I think you made a compelling point but I feel like you're ignoring a crucial factor which might have a stronger effect than facebook or taxibots. For the first time in history, we're observing the impending depletion of crucial resources without which we can hardly envision the existence of a meaningful society or even individual welfare. Fine; the energy scare since the big oil crisis turned out to be likely mostly false; renewables have exploded and as long as we don't block out the sun Matrix-style we can probably count on relatively affordable, relatively reliable sources of generally useful energy, means of some local travel, etc. There's still a heap of kinda-big-deal issues; most obviously global warming, which is projected to start making a lot of equatorial areas literal killzones within this century, not to mention all the obvious but not as easily quantifiable effects on farming and ocean ecosystems (another source of food we hugely rely on). Then there's also topsoil depletion, aquifer depletion, the nutritional value of all plant produce decreasing with increasing CO2 levels, the emergence of antibiotic resistance among some serious illnesses, etc. Basically, all the way through history The Rich didn't really have much to lose from letting the poor Kinda Languish Outta There; there was room, there was usually at least some form of nutritious if unsavory food the rich could sell to the poor to make themselves richer. We're currently at a peak of abundance yet probably approaching a crunch of scarcity that will really test the willingness not to put all these fantastic Poor People Destroying Tools to use.
|
# ¿ Dec 20, 2017 23:15 |
|
Yuli Ban posted:Anyone following news on the front of generative adversarial networks? Yep, I work with a startup doing document processing, and while our current pipeline doesn't utilize any GANs in particular, they're currently the holy grail of what we would like to eventually use. It's just rather high-effort and we won't know if it's particularly feasible for our use case until we sink the development time into it, and everyone who can work on neural nets is hella busy with the ones we already have. There's a big "hurrah" and "ooh" in the Slack a few times a week as somebody links another crazy paper about Another Thing GANs Do Crazy Well. Neural Nets are kind of a meme within the Machine Learning community in general, and for example my lecturers at university are a bit salty about them as they're steamrolling all the other successes of machine learning from the last 20 years, and GANs are simply the bleeding edge of that right now.
|
# ¿ Dec 21, 2017 00:27 |
|
muon posted:but it does not imply that the GAN "knows" how to make a horse/zebra walk like humans can easily imagine. Now we could design a neural net to learn how to do that, but **the only way we know how to do that today requires thousands and thousands of labelled data sets of horses walking** before we could even begin to create that model. Neural networks need to have nuanced semantic understanding and ability to generalize to really be able to help create content in the way you're describing. That's an extremely difficult task that I don't know we'll solve by the 20s. While I essentially agree with the notion that GANs aren't quite a self-redeeming end-all and there's obviously a ton of stuff left to do, the bolded part specifically is actually one of the things GANs are meant to (and already do, to some degree) eliminate: meticulously specific hand-crafted labels based on not just our understanding of the data but also a thorough reprocessing of it all to convey that information with some degree of explicitness. The idea behind GANs is that if you wanted to teach a GAN how horses walk, you'd just throw at it a massive pile of unlabelled footage that has some kind of horse walking somewhere in it. The "game" between the two neural nets that make up the structure is that the generative part tries to come up with fake footage of walking horses while the other one learns to distinguish the fakes from the originals, so the first one learns things like Euclidean geometry and eventually horse mechanics and horse biology, because the other one gives it the feedback (in less explicit terms) that "no, this caterpillar thing that moves by shifting its volume through space like an amoeba definitely isn't a horse and this sample is a bad fake".
GANs ideally don't need labelled data (although it can help) and that's part of what makes them so incredibly powerful; sometimes they work excellently even in cases where we can't be arsed to get the labels done, or we literally don't have the explicit understanding of the task at hand to come up with labels.
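To make that two-player "game" concrete, here's a toy sketch in plain Python - a completely made-up 1-D stand-in where the "horse footage" is just numbers clustered around 4.0, with hand-derived gradients instead of any real framework; nothing here reflects how production GANs are actually built:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: a made-up stand-in for horse footage - just numbers near 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator g(z) = a*z + b starts out producing garbage around 0.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) starts out clueless.
w, c = 0.1, 0.0
lr = 0.01

history = []
for step in range(3000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0,
    # i.e. ascend the gradient of log d(x) + log(1 - d(g(z))).
    x = real_sample()
    fake = a * random.gauss(0.0, 1.0) + b
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * x - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push d(fake) toward 1 (fool the critic),
    # i.e. ascend the gradient of log d(g(z)) with respect to a and b.
    z = random.gauss(0.0, 1.0)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w
    history.append(b)

# The noise z is zero-mean, so the mean of the fakes is just b; average
# the tail of training to smooth out the usual GAN oscillation.
b_avg = sum(history[-1000:]) / 1000.0
```

Since nobody ever labels a sample here, the only supervision the generator gets is the critic's "this doesn't look real" gradient - which is the point of the whole arrangement.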
|
# ¿ Dec 21, 2017 01:04 |
|
Yuli Ban posted:I mentioned this myself, that in the 20s, these algorithms will most assuredly generate increasingly amazing content but they will not understand said content and it's up to you to make sense of it all. While I agree it's a fool's bet, I feel like assigning decades to this kind of advance is meaningless considering how insane a leap we have made since 2000. You never know when somebody's gonna come up with a trick that will allow them to feed the whole of Wikipedia through an nvidia Titan X and get "Hello, I'm God, I'm here to save you from yourselves." on the console prompt. "Machine learning relies on huge amounts of data with labels for everything, always, a LOT of them." was borderline axiomatic, heck, even five years ago, and look where we are now. Teal fucked around with this message at 01:10 on Dec 21, 2017 |
# ¿ Dec 21, 2017 01:08 |
|
Owlofcreamcheese posted:Also if a guy is in a room with a bunch of how to answer Chinese questions flowchart books and answers Chinese questions does anyone really understand Chinese etc etc etc Only because "people need to write some non-dystopic fiction" was mentioned recently, I will use this opportunity to plug Blindsight (you can legally read the whole of Peter Watts' 2006 novel at the link; he released it for free), which is an incredibly puzzling and elaborate scifi exploration of possible inhuman intelligence (even though the artificial kind isn't the focus, it is present and relevant). The Chinese room is a very relevant concept throughout it, and while it's rather drastic and bleak, it's still a future I'd actually like to exist in.
|
# ¿ Dec 21, 2017 01:30 |
|
Owlofcreamcheese posted:Reading the title and trying to think of which book that is I thought maybe it was either It's actually named after the neurological phenomenon called "blindsight", which occurs exactly once in the story but is rather aptly indicative of the wild, nihilistic, soul-brutalizing ride the book takes you on. After I finished it I started having a bout of depressive episodes. I really don't recommend it to people who have confirmed psychological dissociation problems or general nihilism/existential-dread issues.
|
# ¿ Dec 21, 2017 09:37 |
|
I cannot loving handle this Teal fucked around with this message at 12:39 on Dec 21, 2017 |
# ¿ Dec 21, 2017 12:36 |
|
Raspberry Jam It In Me posted:I don't get it. What is it trying to say? Is this a schizophrenia thing? It's a right-wing blog trying to imply that the UK Labour Party (lefties) has this overly pessimistic, negative view of the future involving automation, whereas the UK Conservative Party can lead us to this shining vision of a perfect future where everyone drives Smart cars through fields and meets up with robots while wearing VR goggles for no apparent reason. It's basically a really vulgar, cheap satire of the two sides of the argument in this thread, except meant seriously, specifically lampooning one side, and tied to a particular political party.
|
# ¿ Dec 21, 2017 14:14 |
|
Trabisnikof posted:lol if you think these AI tools will be free to use. If the current trends regarding bleeding-edge neural net development tools (Keras, TensorFlow), as well as the academic community around them (how much of it gets pushed to arxiv, for instance), are indicative of anything, yeah, they very well might be the most by-default "free to use" technological field there's ever been. Google and Facebook openly publish a lot (if not most) of their neural net research. OpenAI literally has it in their mission statement. At least so far, AI has been ahead of various other computer tools like media editing, control systems, maths, etc. when it comes to sharing your discoveries and tools with the broad community free of charge. A lot of these fancy rear end showy experiments you can literally fork off Github and train with your own data on a consumer-grade (sadly, preferably nvidia, and sadly, preferably closed-source drivers) GPU. It's a lot more "free" and accessible than a fuckton of other STEM fields where computerized tools have been a must for decades. Teal fucked around with this message at 23:37 on Dec 21, 2017 |
# ¿ Dec 21, 2017 23:35 |
|
Trabisnikof posted:Of course you stick in all this nonsense about "intentionally horded" to avoid the core part of the criticism, that when a machine replaces a job, rarely can that worker afford to buy that machine outright instead. I'd argue that relative to the profit most people can expect, artistry is pretty drat expensive as is, especially if we're talking digital art, where you're expected to shell out thousands of dollars for a basic Cintiq and then hundreds more for the professional software. Current "AI software" costs are, depending on the accessibility level you want (which is, however, inverse to the kind of power you can get out of it, as the bleeding-edge poo poo is all mostly in papers and github repos), either literally nothing, as a bunch of the best poo poo is FOSS, or already included in the cost of your Photoshop and whatnot, as ML-based filters and algorithms make it in as drag-and-drop plugins. Hardware-wise, all of those GAN examples can be trained within days on a GTX1070 or some such, in a computer that's otherwise the equivalent of a mid-range gaming machine. And there I'm talking about training your own nets with your own data (which would mean you're trying to adapt a specific artistic style, for example); run-time wise, neural nets are at the point where for some uses (item recognition, or in the more vulgar case, facial recognition) they can run realtime on a smartphone. I'm not gonna pretend I can predict it will stay this way forever, but the current trend implies neural net based intelligent tools might become one of the most "available for everyone" tools there's ever been, presuming the AI community doesn't somehow magically flip from mostly-FOSS to mostly-proprietary (worth noting it's been mostly progressing towards being more open), and presuming we're not talking about people becoming too poor to own a consumer laptop. Teal fucked around with this message at 10:03 on Dec 22, 2017 |
# ¿ Dec 22, 2017 10:00 |
|
Tei posted:I have a problem here. Facebook didn't pay 19 billion for a (bad) IRC client, it paid 19 billion for a competing social media network with, at the time, ca. 0.6 billion users. The value of WhatsApp wasn't in the company, or the tool, but in the clients. Companies literally buy companies for the sake of inheriting their clients all the time, and the plasticity of social networks is particularly limited, as people usually hesitate to jump the fence from something their friends use. Facebook needs the people so it can datamine their communication and show them ads. It didn't buy a robot, it bought a well-grown (I'm sorry to put it as rudely as that, but in this context it's the most fitting analogy) spying network. This wasn't a purchase of software or a tool, it was a purchase of a company, their know-how, and most importantly, their market share. Facebook buying a copy of "Photoshop That Can Phone Less Relevant Details In Automatically" doesn't make my copy of that work any worse. If they buy out whoever was developing the tool, the community will fork the source code of the last free release and carry on from there (that is, if it was something worth keeping around). And yes, of course, a bigger company will be able to get more data for their machine learner and big computation farms and all that, but that applied to shovels and bulldozers as well, except right now you can build a bulldozer in your living room and some of them are as good if not better than the best ones Facebook (who also happens to be contributing to the DIY effort themselves, but that just collapses the whole analogy irreparably) is toying with. Teal fucked around with this message at 17:16 on Dec 22, 2017 |
# ¿ Dec 22, 2017 17:07 |
|
Doctor Malaver posted:A friend who develops on MS Azure was telling me today how it's insane how many new services they announce on a weekly basis. Face recognition, face mood recognition, text summarizing, text analysis sentence by sentence... He tries to stay up to date but too much is going on. And it all costs like a fraction of a cent per use. And that's just Azure, AWS is bigger and better. Which reminds me, MS Azure and AWS both throw considerable sums of completely free credit (free right off the bat, not a free bonus on top of something bought) at startups that show promise, which is obviously an advertising scheme/market move, but it still means that if you get lucky and are convincing, you can get a fuckton of compute (including the GPUs, which is the stuff Neural Nets need) done for literally free.
|
# ¿ Dec 22, 2017 23:33 |
|
A coherent story is hard to do via AI because to understand basic building blocks of storytelling like justice, redemption, wanderlust and wonder, you'd already have to be slowly nearing Artificial General Intelligence levels. This whole topic slowly turned from scary-and-feasible to really-nice-but-currently-not-quite-there. Obviously, you can always do something like Dishonored and quasi-script all the spectral facets of what flavor of sadistic murderer the player is, but the amount of content needed for that will quickly get unmanageable even on an AAA budget; and yeah, procedural generation and AI are helping and we're definitely gonna see more and more of it, but I think it will start from there: letting the designer order literal man-decades of content to fill out their massive convoluted meta-script, and only slowly will the AI move upwards from making new textures and models to rooms to buildings to cities, and only after that meaningful off-script quests and characters and whatnot. It's a question whether you count Dwarf Fortress, which is a weird case where you keep wondering if there's actual depth and thought to the world or if you're starting to hallucinate due to the brain damage you've suffered from the interface. There's poo poo like legends, entire story arcs, all of the world generated procedurally, but then somebody uses gold worth more than their literal life to carve a picture of themselves eating cheese and you realize it's more cleverly dressed noise than something coherent.
|
# ¿ Dec 26, 2017 00:38 |
|
Yuli Ban posted:And now witness this. Okay, I don't like this article, and while I love what the guy is toying with, I hate the way he talks about it. Autoencoders are more like a glorified data compression algorithm that does its earnest to figure out new patterns that will help it fit just a smidge more info about the stuff it's meant to learn into the horribly inadequate amount of memory it has. The side effect of that is that indeed, in the ideal case, it does learn the "essence" of the input data and finds for you the patterns that you'd struggle to uncover through classic engineery methods, and you can sift through the data in this new bastardized form and get a kind of "machine's notes" on it, and then for instance run clustering on that representation. It's not "the mind of an AI", it's more like making a classroom full of kinda special elementary graders write cliffnotes on a book, telling all of them "just describe whichever bits of each chapter you found the most interesting", and then averaging the results. It doesn't give insight into how AI works, or insight into how eyesight works; it gives insight into what's the lowest-entropy description of a thing in a purposefully insufficient language for describing it.
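To make the "glorified compression" framing concrete, here's a toy sketch (pure Python, made-up 4-pixel "images" driven by one hidden factor - nothing like the convolutional autoencoders the article is about) of squeezing 4 numbers through a 1-number bottleneck:

```python
import random

random.seed(1)

# Made-up data: each 4-pixel "image" is one hidden factor t times a fixed
# pattern, plus noise - so a 1-number bottleneck can capture its essence.
PATTERN = [1.0, 2.0, 3.0, 4.0]

def sample():
    t = random.uniform(-1.0, 1.0)
    return [t * p + random.gauss(0.0, 0.05) for p in PATTERN]

# Encoder: h = w . x   (4 numbers squeezed into 1)
# Decoder: x_hat[i] = v[i] * h   (1 number blown back up to 4)
w = [random.uniform(-0.1, 0.1) for _ in range(4)]
v = [random.uniform(-0.1, 0.1) for _ in range(4)]

def reconstruction_loss(x):
    h = sum(wj * xj for wj, xj in zip(w, x))
    return sum((xi - vi * h) ** 2 for xi, vi in zip(x, v))

data = [sample() for _ in range(200)]
initial = sum(reconstruction_loss(x) for x in data) / len(data)

lr = 0.01
for epoch in range(200):
    for x in data:
        h = sum(wj * xj for wj, xj in zip(w, x))
        err = [xi - vi * h for xi, vi in zip(x, v)]
        grad_h = sum(ei * vi for ei, vi in zip(err, v))
        for i in range(4):
            # plain gradient descent on the squared reconstruction error
            v[i] += lr * 2.0 * err[i] * h
            w[i] += lr * 2.0 * grad_h * x[i]

final = sum(reconstruction_loss(x) for x in data) / len(data)
```

After training, the bottleneck value h is essentially "how much of the pattern is in this image" - the machine's notes are the lowest-entropy description available to it, not a window into a mind.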
|
# ¿ Dec 26, 2017 22:57 |
|
Tei posted:Let me use this space to whine about human memory. Is like a lovely compression algorithm. The critical difference between human memory and an autoencoder is that we build the representation on the basis of abstract yet meaningful parallels, and there's no proof this will ever work by feeding a "large enough" neural net based autoencoder of this type "enough data". The foundation of your memory of a piece of cake you ate yesterday is an understanding of cocoa, which fills in the flavour, the colour, the scent, the likely texture cocoa-based cakes have. You don't need to remember the angles the slice had, because you've seen a whole cake before and have seen a circle sliced into wedges countless times. You don't need to think about whether it came served on porcelain or a patch of snow, because you know cakes come on platters, and those are usually made of porcelain. Feeding all those connections in a meaningful manner without "stubs" into a neural net might easily turn out to simply not work; even if you have the resources to make it huge and give it all the data and time it might possibly need, NNs come without a guarantee of ever converging to the global minimum (e: of training error, which is in this case extremely hard to define); it might simply not figure out the right connections in stuff (and that's often the case, and the best you can do is shrug and keep trying new combinations of parameters and data representations and eventually something else entirely). Teal fucked around with this message at 10:54 on Dec 27, 2017 |
# ¿ Dec 26, 2017 23:42 |
|
Tei posted:It don't seems to be a huge difference. Or maybe I am too dumb to understand any of this. It would be easier to formulate the differences in exact and specific terms if we had a complete understanding of how our mind and memory work. For one thing, what the other guy said: one of the biggest hurdles of Deep Learning development that a lot of people are trying to figure a way around is that NNs are generally very spatially sensitive, and for some specific input space (for instance an XY raster image), if you want to distinguish an apple no matter where it is in the picture, you ideally need to show them a more or less uniform distribution of apples in all possible positions in the image - if all your samples had an apple in the bottom right corner, you'd hit a wonderful accuracy and all, but a new sample with an apple in the top left corner would get misclassified. Similarly to that, our autoencoder will learn to put the characters in the cinematically common placements, but it's gonna be really weirded out once it encounters something hosed up or purposefully weird. Other than that, though, there's the whole "comprehension" versus "chinese room" problem; a lot of the neural net poo poo can look super impressive, but if you go back and analyse the results and toy with some different or purposefully weird testing data, you quickly realize that it's less that you have some sorta amazing AI on your hands, and more that the problem you were presenting it with was actually loving stupid, and that it found a really cheap way to pretend it comprehends while it's just smooth brute force.
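The bottom-right-apple failure mode is easy to demo with a deliberately dumb stand-in classifier - this toy nearest-neighbour pixel matcher (made-up 6x6 binary "images", not a neural net) confidently finds the apple where the training data had it and whiffs on the same apple moved to the other corner:

```python
SIZE = 6

def blank():
    # an empty 6x6 image
    return [[0] * SIZE for _ in range(SIZE)]

def with_apple(row, col):
    # an "apple" is just a 2x2 block of on-pixels at (row, col)
    img = blank()
    for r in range(row, row + 2):
        for c in range(col, col + 2):
            img[r][c] = 1
    return img

def distance(a, b):
    # raw pixel-by-pixel mismatch count
    return sum(abs(a[r][c] - b[r][c]) for r in range(SIZE) for c in range(SIZE))

# Training set: apples only ever appear bottom-right, plus an empty negative.
train = [
    (with_apple(4, 4), "apple"),
    (with_apple(3, 3), "apple"),
    (blank(), "no apple"),
]

def classify(img):
    # nearest neighbour by pixel distance
    return min(train, key=lambda ex: distance(img, ex[0]))[1]

print(classify(with_apple(4, 4)))  # apple where the training data had it -> apple
print(classify(with_apple(0, 0)))  # same apple, top-left corner -> no apple
```

The matcher isn't wrong by its own logic - the shifted apple genuinely looks more like the blank training image pixel-for-pixel than like any bottom-right apple - which is the whole problem.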
|
# ¿ Dec 28, 2017 00:58 |
|
Malcolm XML posted:this is not true That's a bit of a simplification; it does give decent spatial invariance in position, but if the position actually matters to you, or if some complex relation of the positions matters to you (for example, text), you'll have to figure out various hacky stuff that makes the whole deal often harder and harder to train. Yeah, the U model is all the rage, but I don't think it's quite where we're gonna end up. quote:you don't need to be perfect to replace a bunch of repetitive jobs esp if you can control for things like rotation and scaling using such tricks as "lenses" and "orienting" like all the fruit classification systems do Yeah, that's true; again, I'm less critiquing the usefulness and practical use of today's NNs and more trying to explain (in a probably quite badly worded way) how we're not quite there when it comes to the idea of them dreaming up movies at the drop of a hat (not even conceptually yet).
|
# ¿ Dec 28, 2017 11:43 |
|
Owlofcreamcheese posted:The future is now Is this the leaked villain from the next Bond movie? Cybernetic prosthesis and a pet duck?
|
# ¿ Dec 28, 2017 11:46 |
|
Awful news for all of us who hoped their tomato-picking job would be safe from automation. The presented success rate of 75% isn't that impressive though. How often do you drop a tomato you're attempting to pick, guys? Teal fucked around with this message at 17:33 on Jan 1, 2018 |
# ¿ Jan 1, 2018 17:30 |
|
Guys you're reading too much into the thing I said about dropping, I just really wanted to make a post where I get to say both tomato and automation
|
# ¿ Jan 1, 2018 19:26 |
|
Roomba lawnmowers are already a thing, and they're already taking care of people's lawns. Right now, you have to mark the areas they're not meant to mow down (flower patches, produce) with wires pinned to the ground. Next thing, it will be able to tell what not to mow down autonomously, without you having to mark it. Next thing after that, it might as well also learn to distinguish and pluck weeds from areas it shouldn't mow but can still reach (you could probably literally pull that off with single-motor shears on a single-motor flip-out arm). Next thing after that, it will tweet "YOUR poo poo IS READY FOR PICKING, MEATBAG" and attach photos of the particularly ripe automatos when it encounters some. It will definitely be more of a niche thing for nerds and enthusiasts, and the practical impact will be less "self-sufficiency produce for them pyorple" and more white-collar dads lazing in a recliner, watching the sucker go (and getting up to get it unstuck once in a while). I feel like the most socially optimistic pitch I can make is that automato pickers will be super relevant for vertical farming and hydroponics. If they enable the designers to pack the stuff into racks that don't have to be human-accessible at all, allowing it to be stacked in stupidly dense rooms and buildings that will never have to facilitate the entry of a person-shaped person and can focus purely on density of the produce and efficient use of light and water, the productivity (which is already pretty impressive in current designs) will shoot through all kinds of roofs. So the hope here is that automatoes will be so cheap you'll easily be able to afford some real organic automato on the side of your soylent even when living off your UBI. Teal fucked around with this message at 10:36 on Jan 2, 2018 |
# ¿ Jan 2, 2018 10:27 |
|
Hm, the automato-picking machine will need a lot of work before it can pick tomatoes while simultaneously providing the sex appeal of a shirtless worker.
|
# ¿ Jan 2, 2018 20:36 |
|
Tasmantor posted:Vertical farming also gains in being able to be close to the final destination, don't have to ship it far if it was grown in the 'burbs. Will SOMEBODY PLEASE think of the truck drivers?
|
# ¿ Jan 3, 2018 12:59 |
|
Blue Star posted:Cant wait to be constantly misgendered by software as it scans my big ugly mannish trans woman face. Gonna be rad. I wouldn't be so worried; we had a demo of an at-the-time apparently cutting-edge recognizer in the hallway of our university a few months back and it tagged me as almost perfectly 50/50, impossible to gender, even though I never made any effort to make that distinction difficult; I guess I had a regular fat fucker face and long hair. I feel like gender-reads will return a lot of "ambiguous" even among cispeople, let alone among transpeople, who usually do try to give off specific cues. It did pin my age within 2 years though, and the emotion reading was pretty great too.
|
# ¿ Jan 6, 2018 11:10 |
|
paternity suitor posted:Ew, now I'm imagining some sort of app that scans people's reaction to your appearance and gives you a ranking at the end of the day so that you know what to wear...those shoes cost you 2 attraction points Rachel. Why not cut out the meaty middlemen and just train a machine that will tell you that you look like poo poo on its own, before you head out? I'm borderline ignorant of how to look good, but I wouldn't mind pointing a camera at myself and getting "That tie just looks like poo poo, dump it", "Shirt collar is crooked", "Socks don't match". Asking a fellow person how you look is awkward as gently caress, but I would feel less bothered taking advice from a cold unfeeling heap of algebra to help me with the choice between a paper sack and a noose.
|
# ¿ Jan 6, 2018 11:19 |
|
Tasmantor posted:Algorithms have been nothing but good for us so far! If only one could tell me how to dress Firstly, there's some basic aesthetics that the vast majority of people can agree on; it's not like you have to appeal to Vogue to point out someone's shirt is inside out. Secondly, this is easily something you could end up being able to build on your own (or as a small community), feeding it a dataset of "looks you like" and "looks you don't like". Even today's neural nets won't have the slightest issue learning basic things like clashing colors.
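For what it's worth, the clashing-colors case really is about as basic as learning gets - here's a toy sketch (one-feature logistic regression in plain Python, with a completely made-up "clash" rule of hues more than 100 degrees apart standing in for real taste data):

```python
import math
import random

random.seed(2)

def hue_distance(h1, h2):
    # shortest angular distance on the 360-degree color wheel
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

# Made-up ground truth, purely for the toy: two hues "clash" when they're
# more than 100 degrees apart. A real system would learn from rated outfits.
def clashes(h1, h2):
    return hue_distance(h1, h2) > 100

def predict(w, b, h1, h2):
    x = hue_distance(h1, h2) / 180.0  # scale feature into [0, 1]
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# One-feature logistic regression learning the rule from examples.
w, b = 0.0, 0.0
lr = 0.1
data = [(random.uniform(0, 360), random.uniform(0, 360)) for _ in range(500)]

for epoch in range(200):
    for h1, h2 in data:
        x = hue_distance(h1, h2) / 180.0
        y = 1.0 if clashes(h1, h2) else 0.0
        p = predict(w, b, h1, h2)
        # standard logistic-loss gradient step
        w += lr * (y - p) * x
        b += lr * (y - p)

correct = sum((predict(w, b, h1, h2) > 0.5) == clashes(h1, h2) for h1, h2 in data)
accuracy = correct / len(data)
```

Accuracy should land near 1.0 on this toy because the feature is cleanly separable; real taste obviously isn't, which is where the "looks you like" dataset would come in.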
|
# ¿ Jan 6, 2018 13:52 |
|
Guavanaut posted:But like all purely aesthetic choices, what if wearing your shirt inside out becomes the new coolness? Thinking about it, being told by an AI to flip your shirt the right way out might be exactly the thing that triggers that trend.
|
# ¿ Jan 6, 2018 14:07 |
|
Inescapable Duck posted:And not even bringing up how automated beauty standards based on celebrities and popular opinion are absolutely going to poo poo on minorities, racial and sexual. I get the feeling that kinda poo poo will get better rather than worse. Once somebody lays down a formula that hands out a rating, even if on the basis of learning from a dataset, there can be a fact-driven resistance to the results along the lines of "hey, bitch, you clearly are not representing X enough", and techies will be a lot more able to comprehend/accept that than some trash fashion magazine of old.
|
# ¿ Jan 7, 2018 11:26 |
Tasmantor posted:You don't like t-shirts and jeans, they are comfortable and cheap, they endure for a reason. There are a lot of people who opt out of social interaction altogether if they can. If you can ease at least one of the things they really don't enjoy doing, and it ends up being what tips them between bothering and not bothering to be social, you did them a service (and the common good, unless you also happen to prefer some eugenic view that it's better to wait till they off themselves). You sound like somebody trying to very aggressively and dismissively lecture people about dealing with problems you've never experienced. If you have the good will and an idea of how to help the asocial (whom you definitely can't call simply a product of technology, as they have been recognized pretty much all the way through recorded history) in a more organic way, by all means, go do that! But don't scream and flail because somebody else came up with a crutchy tech-based solution. Teal fucked around with this message at 12:07 on Jan 7, 2018 |
# ¿ Jan 7, 2018 11:35 |