|
I'm old enough that I honestly wouldn't know that those names are all fake.
|
# ? Jan 25, 2018 00:50 |
|
Cumpo is playing downtown this weekend.
|
# ? Jan 25, 2018 03:07 |
|
I don't get it.
|
# ? Jan 25, 2018 04:01 |
Metal Geir Skogul posted:I don't get it. The image is from this tweet: https://twitter.com/botnikstudios/status/955870327652970496 i.e. more predictive keyboard shenanigans! E: Or just neural network shenanigans? I thought Botnik Studios only did predictive text generation, huh.
|
|
# ? Jan 25, 2018 04:27 |
|
https://www.youtube.com/watch?v=r6zZPn-6dPY A demonstration that a single neural network trained as a GAN can generate a variety of images. All images at the same panel position are generated from the same latent variable, so its meaning is preserved across different classes.
|
# ? Jan 26, 2018 00:08 |
|
Tunicate posted:https://www.youtube.com/watch?v=r6zZPn-6dPY With the low resolution and how fast the images morph I didn't notice it at first, but I paused and uh.. the images are actually pretty disturbing (well at least the animal ones)
|
# ? Jan 26, 2018 00:23 |
|
i would love that aesthetic for a horror game
|
# ? Jan 26, 2018 02:10 |
|
I've always thought this particular creature had achieved the best looking completely alien system of locomotion. https://www.youtube.com/watch?v=AUXc6mckGLE
|
# ? Jan 26, 2018 02:14 |
|
End of Shoelace posted:i would love that aesthetic for a horror game I mean, neural networks have nailed the uncanny valley effect. It's so close, yet so wrong.
|
# ? Jan 26, 2018 03:12 |
|
I could totally believe most of these
|
# ? Jan 26, 2018 15:15 |
|
Baddwurds is a genius self-contained joke; a perfect band name. Beachfeel is my favorite surf-themed chillwave artist. I Love You, the Wait must be a Killing Joke cover band. |
# ? Jan 26, 2018 15:38 |
|
Jonathan Mushboy. Scenemy. Goof Alibi. MANACE.
|
# ? Jan 26, 2018 17:09 |
|
I would totally see Benus Jackson.
|
# ? Jan 26, 2018 20:02 |
|
Boy/Boys play Pet Shop Boys in the style of Boy George and vice versa.
|
# ? Jan 26, 2018 20:18 |
|
Dave Dump McMan is an old favorite. Does it just show how immature I am that this procedurally generated text stuff (the Harry Potter chapter, those Seinfeld and X-Files scripts, the recipes) is the funniest thing in the world to me? I'm just laughing my rear end off, unendingly. I guess this would also explain why I like so many of Clickhole's "random insanity" articles. |
# ? Jan 26, 2018 20:40 |
|
I think "random" humor is super hard to do well, but you can use "tricks" like having an NN or Markov generator spit weird poo poo at you, and it'll manage to clear the bar. Imagine that Coachella roster written by hand; it would be incredibly try-hard and have very few actually funny band names. I called it constrained writing earlier in the thread, which I still think it is (they're very often heavily curated), but they do hit the funny bone more often than not. And yeah, Woebin: you're right that it's not 100% handmade.
|
# ? Jan 26, 2018 20:49 |
|
Procedurally-generated "Um, actually..." posts.
|
# ? Jan 26, 2018 21:08 |
|
There's some great ones where they combine two corpora, such as the erowid recruiter (drug trip reports + recruiter emails): https://twitter.com/erowidrecruiter/status/560559080289222656 https://twitter.com/erowidrecruiter/status/947343560847831040 https://twitter.com/erowidrecruiter/status/841360717433434112 Probably curated as well, but they're excellent.
|
# ? Jan 26, 2018 21:19 |
|
IRC bots can and will be brutal. https://twitter.com/Tobyslop/status/687002587971858432
|
# ? Jan 26, 2018 21:35 |
Especially with Markov chains, curation is pretty much impossible not to do, because so much of the generated content is complete nonsense and the actually funny bits are the little gems hidden in the noise. I'm actually not sure what the signal-to-noise ratio is with well-trained neural networks; I'd like to imagine that they create good stuff every time, but the truth is probably less fantastic. I don't personally mind cherry-picking entries, but at the same time I've got to admit that knowing an entry is heavily edited does diminish the funniness.

Like Krankenstyle said, it's just really hard to do absurd humour well, and procedural generation kinda bypasses the "this is trying too hard" problem entirely. And yeah, combining two source texts can result in amazing things, like those dinosaur plants posted earlier.

Some Markov-chained stuff from my twitter "bots": https://twitter.com/chaingenerator/status/953201759090077696 https://twitter.com/MtGmarkov/status/945379488644427776
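For anyone curious, the core of a word-level Markov text generator is tiny. Here's a minimal Python sketch of the general idea (not the code behind those bots; `build_chain`, `generate`, and the `order` parameter are just names for this illustration):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain, starting from a random observed state."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:  # dead end: this state was never continued
            break
        nxt = rng.choice(followers)
        out.append(nxt)
        state = (*state[1:], nxt)
    return " ".join(out)
```

Feed it a big corpus and most of the output is grammatical-ish nonsense, which is exactly why the curation step is unavoidable.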
|
|
# ? Jan 26, 2018 21:38 |
|
https://twitter.com/deepdrumpf/status/728317897412579328 https://twitter.com/DeepDrumpf/status/788918639810478080 |
# ? Jan 27, 2018 08:03 |
|
Your Computer posted:With the low resolution and how fast the images morph I didn't notice it at first, but I paused and uh.. the images are actually pretty disturbing (well at least the animal ones) more procedural generation, but this time it's fake celebrities! https://www.youtube.com/watch?v=XOxxPcy5Gr4 here's a fun game: play this video at 2x speed and pause anywhere; see what kind of faces you get https://www.youtube.com/watch?v=f8xSD4HO_8k&t=178s here's a slower video if you want to watch the morphing in action https://www.youtube.com/watch?v=36lE9tV9vm0&t=297s
|
# ? Jan 28, 2018 02:39 |
|
The podcast The F-Plus, which is usually dedicated to reading weird poo poo they find on the internet like a modern version of the classic Awful Link of the Day, did their latest episode about reading scripts generated by Botnik. Harry Potter and the Portrait of What Looked Like a Large Pile of Ash is pretty good but I think the procedurally-generated West Wing episode is my favorite. "But that's not going to be something the American president of America will sign in America! This is boring to me. Donna! Donna. Donna. Donna? Donna?! Donna, help Donna help Donna, help Donna." https://thefpl.us/episode/274
|
# ? Jan 28, 2018 03:07 |
|
End of Shoelace posted:heres a fun game: play this video at 2x speed and pause anywhere; see what kind of faces you get oh you know, just typical celeb photos
|
# ? Jan 28, 2018 04:02 |
|
Your Computer posted:oh you know, just typical celeb photos Elton John?
|
# ? Jan 28, 2018 06:00 |
|
Stephen Hawking on a bad hair day?
|
# ? Jan 28, 2018 06:32 |
|
Phil Spector on a relatively GOOD hair day?
|
# ? Jan 30, 2018 20:37 |
|
John Oliver?
|
# ? Jan 30, 2018 21:00 |
|
This was an incredible idea, my GF and I have been reading this all day and lolling and some of the recipes are delicious as well
|
# ? Jan 30, 2018 21:02 |
|
7c Nickel posted:https://twitter.com/deepdrumpf/status/728317897412579328 State of the Union lookin good
|
# ? Jan 30, 2018 21:07 |
|
Oh, content: Mandelbulb videos (renders of the Mandelbrot set fractal, except a 3D version). The best ones involve both zooming through it with the camera at different scales, while simultaneously tuning the parameters that generate the whole fractal to disturb the surface's location (the "level set"). Like below: https://www.youtube.com/watch?v=Yb5MRbgNKSk "The Intricacies of Mechanoid Eyeballs HD" https://www.youtube.com/watch?v=jYsbFreUMkg Even though there's a human involved, the shapes are all procedurally generated, since the person tweaking the parameters really has no idea what the result is going to look like until they try it. |
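For the curious, the escape-time core behind these renders is short. Here's a minimal Python sketch of the usual power-8 "triplex" iteration (real renderers pair this with a distance estimator and ray marching; the function name and defaults here are just for illustration):

```python
import math

def mandelbulb_escape(cx, cy, cz, power=8, max_iter=20, bailout=2.0):
    """Iterate the triplex power map v <- v^power + c in spherical
    coordinates; return how many steps until |v| exceeds the bailout."""
    x, y, z = 0.0, 0.0, 0.0
    for i in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return i
        theta = math.acos(z / r) if r > 0 else 0.0
        phi = math.atan2(y, x)
        rn = r ** power
        # "Raise to a power" in spherical coordinates, then add c
        x = rn * math.sin(power * theta) * math.cos(power * phi) + cx
        y = rn * math.sin(power * theta) * math.sin(power * phi) + cy
        z = rn * math.cos(power * theta) + cz
    return max_iter  # never escaped: treat the point as inside the set
```

Tweaking `power` (or the angle formulas) is the kind of parameter-fiddling those videos animate; the surface shifts in ways you can't really predict until you render it.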
# ? Jan 30, 2018 21:11 |
|
All the way back in the 1970s, the project TALESPIN was coming up with Aesop's-fable-style moral tales about woodland creatures. The creator wanted to curate only the sensible stories out of his project, but the best ones were actually the "mis-spun" tales resulting from unspoken common knowledge not being inferred. Read the bolded parts below for the best mis-spun stories: quote:Mis-spun Tales Poor gravity |
# ? Jan 30, 2018 22:00 |
|
I don't understand how that is possible using 1970's computing technology. I don't believe it. Do you have any more information on this project? I can't find any.
|
# ? Jan 30, 2018 22:56 |
"Poor gravity had neither legs, wings, nor friends." Ariong posted:I don't understand how that is possible using 1970's computing technology. I don't believe it. Do you have any more information on this project? I can't find any. I could find this article: https://dl.acm.org/citation.cfm?id=1624452 Looks like it's readable here: https://www.ijcai.org/Proceedings/77-1/Papers/013.pdf The article was probably auto-OCR'd from a print scan, though, so the text has gems like this: "GLGRGB WAS VERY THIRSTY . GEORGE WANTED TO GET NEAR SOME wATER. GEURG E WALKED FROM HI S PATCH OF GROUND ACROSS THE MEADOW ThKOUGH THE VALLEY TO A RIVER BANK. " So it seems like a real thing, from the mid-seventies? Very interesting! |
|
# ? Jan 31, 2018 00:13 |
|
Ariong posted:I don't understand how that is possible using 1970's computing technology. I don't believe it. Do you have any more information on this project? I can't find any. We put a man on the moon with 1960s computing technology. This is just some string manipulation.

Getting logic-based AI to work is largely the same now as it always was. Classic AI was not data-driven, so you didn't need terabytes of training data. The big AI winter happened as early as 1984, *after* the big hype about neural networks for solving problems died down. That winter slowly ended as people realized that there are still all sorts of applications for even the most limited AI (such as the dumber "big data" statistics-based ones that are popular today, which are hungry for as much high-speed input from the internet as they can process). That sort of AI was succeeding in an increasing number of niche areas, towards ubiquity, and now there's a whole internet full of new opportunities to use it and show it off. There was nowhere for pretty generated images and Gaston lyrics to go where they'd have been quite as appreciated in the 1970s, versus now with twitter.

The big difference that you see today is not that there's some giant body of new research everyone knows all about, or some massive code library that took decades for researchers to build up that you now can't build a product without, or even modern computer speeds -- mostly it's the suddenly increased domain of problems being tried.

Also, if you want more information about TALESPIN in particular, you can just Google any of the excerpts I quoted above to get full articles. Mostly old ones, so .pdfs without highlightable text. |
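To give a feel for how little machinery a TALESPIN-style system needs, here's a toy Python sketch (emphatically not the original program; the plan library and all names are made up for illustration): a character pursues a goal via hand-written plans, the "story" is just a trace of the plan, and a mis-spun tale falls out whenever a common-sense rule is missing from the library.

```python
# Tiny TALESPIN-flavored sketch: stories are traces of symbolic plans.
# The second plan is deliberately missing the common-sense precondition
# "water does not hold you up" -- running it yields a mis-spun tale.
PLANS = {
    "quench_thirst": ["walk to the river", "drink from the river"],
    "cross_river": ["walk onto the water"],  # missing common-sense rule!
}

def tell(actor, goal):
    """Narrate a goal as a sequence of plan steps, assuming success."""
    steps = PLANS.get(goal, [])
    story = [f"{actor} wanted to {goal.replace('_', ' ')}."]
    story += [f"{actor} decided to {step}." for step in steps]
    story.append(f"{actor} was happy.")  # the planner never doubts itself
    return " ".join(story)
```

No training data, no statistics: just symbol pushing over a hand-built knowledge base, which is exactly the kind of thing 1970s hardware could do.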
# ? Jan 31, 2018 04:48 |
|
It's easy to forget that ELIZA was created in the mid-60s. One of the best arguments against the transhumanist singularity dorks is the simple fact that things like AI and speech recognition have been stagnant for decades even with the exponential growth of computing power and memory.
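For scale, the whole trick behind ELIZA fits in a few lines. Here's a minimal sketch in the spirit of Weizenbaum's program (not a reconstruction of the original DOCTOR script; these rules and reflections are made up):

```python
import re

# Swap first-person words for second-person before echoing them back
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules, tried in order; the last one is a catch-all
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
```

That's basically it: keyword matching plus pronoun reflection, no understanding anywhere, which is why it ran fine in the mid-60s.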
|
# ? Jan 31, 2018 05:02 |
|
Guy Mann posted:It's easy to forget that ELIZA was created in the mid-60s. One of the best arguments against the transhumanist singularity dorks is the simple fact that things like AI and speech recognition have been stagnant for decades even with the exponential growth of computing power and memory. This article came out towards the end of the 80's AI winter: "Elephants don't play chess" The title of this famous article by Rodney Brooks comes from the fact that in a research setting most AI agents lived out their robotic lives inside of some abstract puzzle world like chess or checkers, nothing at all like the natural physical world that natural brains evolved to deal with. It's impossible to understand our instincts without the context where they came from. Elephants are considered smart but they don't do anything like playing chess. It highlighted what I think is the reason for that stagnancy in AI you mentioned as persisting today - there is a difference between intelligence (problem solving) and minds (people). Researchers mostly only try to create the former, because it's far more profitable to do problem solving super well and sell it to industry. There is very little push for actually making a mind. I've seen researchers who do focus on it, but there aren't many. They focus on the more reasonable problem of simulating animals, ecosystems, and nature instead of directly going for the grand challenge of language-capable human minds. They know the reality that we are nowhere near ready to even simulate lower animals yet, even ants, from a robotics control standpoint or social reasoning or otherwise. So where to begin towards that? Right now if we want to test an AI in the natural world instead of a chessboard we have to use a robot. That sucks for a variety of reasons. 
The robot costs a million dollars and has a poor understanding of self-preservation, and will happily shear its own arms off or throw itself down stairs if it misunderstands a goal, and the very first time that happens you're out a million dollars. You also can't afford to have an ecosystem of robots, or better yet a gene pool of them, swarming around by the millions, trying things out, living and dying and letting natural selection work out the best form of intelligence over generations. We simply do not have time to wait for that just to run a single experiment. Lastly, even with perfect robots, we have very limited ability to train them because we'd have to sculpt the world around them just so, with earthmovers and construction engineers and elaborate movie set artists and then tear it all down when it's time to tweak the scenario. It's much better to try out AIs in a virtual setting that resembles the natural world. But to this day, we don't really have that as an option at all. You might be thinking of beautiful and interactive video game worlds that are full of AIs, but for the most part those worlds are pretty limited too -- even as a player you usually can't do things like burrow a hole through the ground, a stone, or any individual polygon and re-shape it and re-work it for something else like our ancestors learned to do. You can't rip up the shirt you're wearing and use it to plug a leak. You can't re-purpose whatever you find in games, so neither can an AI. Minecraft is a game where I thought they would finally break this barrier since you can reorganize volumes of material around freely, which in turn should affect the AI's goals such as pathfinding and eventually have the AIs building structures and art. 
But the AI in that game is totally limited to just pathfinding and nothing else, and not even pathfinding that includes planning for future possibilities like moving material around to build a bridge or remove a wall - the AIs are simply forbidden from using the game's main mechanic! Only the human players are allowed to place blocks -- instead of leaving some "safe zones" where bots cannot touch your work, they just can't do anything anywhere. Only a few modders of Minecraft decided to try out AI-on-environment interaction (one mod procedurally generated novel cities that reflect the needs of the organisms who built them), or AI-on-AI interaction (which could have generated culture and competition, to further give those cities meaning). Due to Minecraft being closed source, those mods were lost to obscurity when the game updated. Due to inherent round-off error, the silly blocks game can never simulate physically realistic things like rotating motions. But for the game that brought procedural generation into the mainstream, it's surprising the procedural-cities idea did not take off. I personally switched from majoring in AI to majoring in Computer Graphics as I found out how crucial physics simulation was going to be towards AI's emergence. Most of my labmates took an additional step outside of CS entirely, to the Applied Math department, once they realized most of today's physically simulated virtual reality stuff was actually happening over there. We don't have the tools to pursue AI until we can simulate the world a little more faithfully and flexibly. Most AI researchers are not even interested in making a mind, and for every AI university class there's a completely different umbrella of what they consider to be relevant to the topic, to the point where the course title "AI" has lost all meaning. Is it about advanced search trees? First order logic? Bayesian statistics? Language and animal reasoning? Tracking faces in videos?
Simulating springy surfaces with iterative methods to elastically "snap" a smart selection tool? Evolving a gene pool to find solutions to TSP? Looking at a planner in PROLOG? Making a particle swarm to solve the scheduling problem? Using layers of autoencoders and then backpropagation / deep learning to try to blend images together? I have personally seen every single one of these topics squeezed into AI curriculums, and barely a single one has anything to do with making a mind. More of them have to do with procedural generation, but that's a broader topic and an easier threshold to pass. |
# ? Jan 31, 2018 05:28 |
|
From the neuroscience side, it's funny to see computer scientists rediscovering poo poo that's in the cortex.
|
# ? Jan 31, 2018 05:49 |
|
Noblesse Obliged posted:Hello You should probably ignore that.
|
# ? Jan 31, 2018 07:03 |
|
|
|
Dumb Lowtax posted:We put a man on the moon with 1960's computing technology. This is just some string manipulation The fact that this is real is crazy.
|
# ? Jan 31, 2018 08:05 |