|
Rollersnake posted:My all-time favorite picdescbot post, and I think a classic example of bot logic absurdity, is "a group of baseball players playing a football game." https://twitter.com/picdescbot/status/1053979265849589760 https://twitter.com/picdescbot/status/1053843390184460288
|
# ? Nov 17, 2018 17:15 |
|
Maybe it'll learn to call everything sportsball.
|
# ? Nov 17, 2018 19:49 |
Here're some well-timed Thanksgiving cooking ideas as provided by @JanelleCShane! Later in the thread there are also ones with D&D spells, apple names etc. added to the mix. https://twitter.com/JanelleCShane/status/1065311580332670976
|
|
# ? Nov 21, 2018 19:58 |
|
I'm Bigby's Gluring Strazbert.
|
# ? Nov 22, 2018 02:31 |
|
Rollersnake posted:Picdescbot has some fun quirks, like seeing invisible sheep in photos of empty fields, calling any building taller than it is wide a clock tower, and describing things that are not vehicles as "parked on the side of a building." https://twitter.com/picdescbot/status/1051759637970804736 Extra fun cause it's the Browns edit: It works the other way around too sometimes https://twitter.com/picdescbot/status/1061604497456283648
|
# ? Nov 22, 2018 02:43 |
https://twitter.com/picdescbot/status/1066179646482464768
|
|
# ? Nov 24, 2018 05:37 |
|
not really, I'm pretty sure that's just Badly Drawn Boy e: yeah it's the first hit on google images when you try and search for his music
|
# ? Nov 25, 2018 00:58 |
Well, that was unexpected! Now I wonder how the bot's algorithm works & what kind of data it uses for the generation. :O
|
|
# ? Nov 25, 2018 01:01 |
|
yeah it raises some pretty neat questions, that or it developed a sense of humor
|
# ? Nov 25, 2018 01:03 |
|
I think many of those types of bots have a final layer that cleans up the possibly garbled NN output and turns it into human language. Maybe that's why 'boy' became 'person'.
|
# ? Nov 25, 2018 09:48 |
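Krankenstyle's guess above can be sketched in code. This is a purely hypothetical cleanup pass (picdescbot's actual pipeline isn't public); the substitution table and the duplicate-collapsing rule are invented for illustration:

```python
# Hypothetical post-processing pass over raw caption tokens.
# The substitution table and rules are illustrative, NOT picdescbot's real code.
GENERALIZE = {
    "boy": "person",
    "girl": "person",
    "man": "person",
    "woman": "person",
}

def clean_caption(raw: str) -> str:
    """Normalize a possibly garbled caption into plainer language."""
    words = raw.lower().split()
    generalized = [GENERALIZE.get(w, w) for w in words]
    # Collapse immediate duplicates like "a a dog" -> "a dog"
    out = []
    for w in generalized:
        if not out or out[-1] != w:
            out.append(w)
    return " ".join(out)

print(clean_caption("a badly drawn boy boy"))  # -> "a badly drawn person"
```

A pass like this would explain both the 'boy' → 'person' swap and why a proper noun like "Badly Drawn Boy" gets broken up and generalized word by word.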
|
Babe Magnet posted:not really, I'm pretty sure that's just Badly Drawn Boy This makes it way less funny, thanks for nothing
|
# ? Nov 25, 2018 13:25 |
|
It's also in the image url in the tweet itself and the wikipedia description if you click it
|
# ? Nov 25, 2018 13:36 |
|
Krankenstyle posted:I think many of those types of bots have a final layer that cleans the possibly garbled NN output and turns it into human language. Maybe why 'boy' became 'person' or it has no idea that the complete phrase "Badly Drawn Boy" is a proper noun, so it gets broken up and generalized
|
# ? Nov 25, 2018 15:47 |
Krankenstyle posted:It's also in the image url in the tweet itself and the wikipedia description if you click it I was too excited by the prospect of the bot doing a funny thing To repent: https://twitter.com/phillip_isola/status/1066567846711476224
|
|
# ? Nov 25, 2018 16:50 |
|
https://www.youtube.com/watch?v=hSppmr_dRdQ
|
# ? Dec 1, 2018 17:19 |
I made a game for the Ludum Dare #43, a make-a-game-in-48-hours type deal. As a funny touch, I wanted to markov chain some made-up "comments" that appear in the game. The result: Completely accidental but I wasn't mature enough not to get a good laugh out of it.
|
|
# ? Dec 3, 2018 05:00 |
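For anyone curious, the markov-chain comment trick mentioned above fits in a few lines. A minimal word-level sketch — the corpus and parameters here are made up, not the game's actual code:

```python
import random
from collections import defaultdict

def build_chain(corpus: str):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start: str, length: int = 10, seed=None) -> str:
    """Walk the chain from `start`, picking a random follower each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("i really like this game i really hard accidental")
print(generate(chain, "i", length=5, seed=0))
```

Because followers are sampled by raw frequency, common bigrams from the source comments dominate the output, which is exactly how the accidental gems fall out.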
|
My entry!
|
# ? Dec 8, 2018 06:25 |
|
I really really hard accidental
|
# ? Dec 8, 2018 06:37 |
|
What happens when you force a bot to read Garfield: https://vdalv.github.io/2018/12/04/ganfield.html
|
# ? Dec 10, 2018 06:33 |
|
I͘ ̵can͢'̕t ͠wai̡t ̶tò ̧see̸ ͢mͮͨ̍̓̌ͤ̌͋͌̕͏̣͉̫̩o̡̗͊̅ͩ̈ͯ͌ͨ̕r̖̭̹̲͔̙̃̅̄͊̂̐͢ḛ͉̺̬̪̯̫ͦ̾͌ ̷̩̥͍͌ͤ͜G̛̱̒͆̍̂̔̆R͚̝̓ͦͮ͟͡L̶̹͓̣̗̗̯̥̎̍͝F̴̠̬͇͉͕̞͐̓ͩ̉̇̈ͦ̓Ń͖̤̥̺͇́Ê͙̄̉̇̌ͭ̔͠D͇̩̼͆ͯ̒́̓̀̚. H̬̙͍̼̱͉̜̳͖̮͍͚̘̙̲̟͍̭̔̐ͨ̅ͫ͋ͭͬ͘͠è̺͔͙͇͇̭̩̞̈ͣ̆̅͗̂ͦ̓ͧ̔͆̀̓̋ͨͮ̉͜͜ ̢̛̻̫̞̣̝̙̥̖̝̬̹͕̹ͩͤ̃̋͑͂͛ͨ̑ͫ̓̑͛̉̃̋͢͡c̵̵̖̖̦͎̠͇͖̬̮͖̐͐ͫͩͮ̒ͬͦͤ̎̑ͧ̃̇̂̏̌͆̀o̸̴̧͈̙̞̳̗̻̲̙̫͉͉̼̱̮͉̹̺ͪ̏̿̌͊̃m̌ͧ͐ͩ̃̐̒̊̽ͫ͊̿҉҉̞̱̪̻͔̼͙͈͚̬̪͉̹̞͜͠ĕ̶̙̲͈̜̯͎̈̿͌͞s̩̭͓̮̖ͯ̍̌̈́͡.̶̀̎̈ͮ̑̑ͧ҉͈̯̝̣̫̗̥̤̱̼ ̶̧̝͔̯̯͇͎̝͈̮̝̜͉̻̟̮̞̜͇̒̊ͯͮͣ͑ͫ̿͌̂͋̏ͣ̿̊̆ͨ͛͘͠T̶ͩ̔̒̎ͣ̀ͬͤͭ̓̃͆̇͛ͫ͏̨͓̬̜̻Ĥ͑͊ͬ̈̔ͧ̓̀͊̂ͣ̽̚͏̗̳̖̘̻̲͎̝͈̭̫͢E͌̾̈̿̒́ͧͮ̀͌̽̾͟҉̸̵̠͈̹̣̻̞̕ͅ ̮̪̯̪̲̦͚̗̫̮͕̺͈̅̿̑̉ͫ͑̐ͣ̓͂̐͒͟͠ͅG̴̡̡̩͖̘̣̻̲̭̻̮̖̟̩ͪ̿̈́̅R̢̅̏ͫͯ̊̓ͭ͌ͥ̂̌͒̑͏̸͞͏͎̥͎̝̤̣̯̰͇̰͚̞͔̗̟Ą̵͙͓̳̫̝̞̖̭͕̯̖̥̩͈͊ͨ̊ͤͩ̕L̸̶͎̺̣̪̺͍̱̦͖̱̱̺͇̱̞̜̄̋̇͊͛̄̈̌ͥ́͘͝F̴͖̲̜̗̹̖̬̤͚͉͈̘̘̳͕̻͐̉ͪͩͨͨ͗͢ͅͅD̸̢͒̑̅͐̎ͩ́̑ͧ̀̚̕҉̹̩̤̞̫̝ͅ.̴̧̨̤̗͇̺͔͍̪̼̹̞̄̍ͨ̒ͥ́͘ͅ Y̵̛̪̫͎̻̩̌̐ͦͭͤͦ̽͘Ô̧̩̞̤̲̖̣͇̩͑̔ͬ̿̍͒̀̒̇ͥ̍ͬ̀̑̊ͩ̒́͘͠͝Uͨ́̔͂ͧ̈̂̿̌ͩ͆ͯ̏͏͈̳̟̫͓̤̙͎ ̦̮̼̜͓̼̙̟̫̹̗̱͍̦̫̑ͨ̇͂ͣ́͜͡͡W̶̙͕͚͈̦̰̮̫̹̫̟̤̱̟̖̣͈̬̥ͭͧ̋̾͑̍́͝I̧̠̩̹̻̮̥̘͇͙͎͒̎̉ͫĻ̫͓̙̼͍̙̹͕͎̯̜͔͖͎͈̫̠͈ͨ̅̋͌͒̍̈ͦͨ̎̒͆̓̑̾̂̐͆̚̕ͅĻ̸̵̷̧͓͙̬̱̯̦̻͂ͪ̽̎ͯ̆̑̿ͤͅͅ ͍̺͈̼̬ͦͤ̇̈ͧ̏ͤ̉̋̋̀̅ͭ̾̂̂͐̚̕͞B̧͇̣̠̲͈̒̈́͗͊ͭ̀͗̾̃̐͊͗́͢E̤̜̫̟̺̞͎̣͚̟̜̙̲͕͕̫͚͈̽̐̏̾̊͢͠ͅ ͣͦ͛̿̅͂ͤ̅ͪ̒̓̓ͪ̚͏̴̡̥̭̠̺͉̝̫̤͖̭͙̤̦͍͔̣͢ͅO̴̸̧̞̥̯̮̫͓͖͕̼͓͖̍̀ͦ̏ͣͭͨͭ͒̅̏̾ͯ̌́ͧͫ̚Ṋ̶̷̢̲̻̦͎̭͖̠͉͓̽ͨ́̑̍̂̔̋ͦ̎̿ͮ͘͠Ë̫͍̩͖̮̝̦̘̪̦̯͍ͬͧ͋̀ͅ ̢̲̱̣̗͇̇̽̌ͪ̿̽͘͡͡Ẅ̵͇̪̳̺͉̹͇͔̫̼̪̮̄ͫ͊ͬ̓͌̃̄̄ͬ͛̎̈́̔̀͢ͅI̧̡̛̘̭̜̩̟̥̳̮̗͖̱̜̘͙̜̬̩ͭ̊ͯ͐͑͊ͥ͂ͨ͒ͫ̓ͬ̕T̵̵̶̸͇̯̟͖ͮ̇̍ͦͬ͡Hͦ̎͆͊̀̌͗̐͗̒̀̓̾͛͏͎̯̭͖̩̩̠̮̰́͞͞ ̴̛̼̫̯̘̬͚́̈́̓̂̀̓̀̓̌ͫͯͧ́͘͜T̘̗͕̰̜͎̙̙͈̟̮̪͎̦̲̰ͧ̅ͯ̆̄͠͞ͅH̶̸͈̳͕͓͓͓̦̥͓̝̺͙͖̜̯̥̯́ͫ̾ͨͫ̈̀̌̎ͮ̚̕̕ͅEͬ̎̍̍͗̈́̇̏̀ͥ҉̣̰̯͚̫̥̟̲̳̱͕͉̩͞ ̷̨̀ͥͨͮ͒ͤ͗ͮ͑̐͆͐͂̋ͫ̕͝͏͕͔̠̭̥̰͉G̷̜̱̗̼͇̟͈̝̳͈̮͕̹̱̼̙͕̓̆ͨͧ́͘͢͡À̶̛͊ͯ͌͗͐̈́ͥ́ͧ̒͒̂̚͏͔̗̞̳̼̲̯̟R̛͆͆ͥͫ̽̏̆̉̄̉͌̓̍̄ͥ҉͔͕͎̗̠̹L̛̰͚̖̘̣̩͓͓̲̥ͮ̒͑͂͐͋͟͟͠͞F̛̥̺̘̙̲͙͚̤̞̗̦͕̜͇̭̱̻̄̂́̆͗͗ͯͤ̅̿̉̆̍͟D͕̯̯͉͇͖̻̭͔̻̟̥͔͕̟̟̓̏̅͗ͫͥ̏ͩ̓̕ͅ.̶̧̮̻̹̜̱͖̞̩͈̹̤̻̥̯͔̜̏̔̋̎͆͋̽̑̈̾̒ͯͥ̒̎
|
# ? Dec 10, 2018 06:38 |
|
SerialKilldeer posted:What happens when you force a bot to read Garfield: Garfield started out as creative and funny, but (as shown) quickly devolved into safe, samey audience-pleasing pap, imo.
|
# ? Dec 10, 2018 06:45 |
|
Those are great! Looking forward to the Calvin results
|
# ? Dec 10, 2018 09:36 |
|
Not all that different from SA's Ultimate Nancy Generator.
|
# ? Dec 10, 2018 10:09 |
|
Oh check it out it generated a Garfield version of Nanonuts
|
# ? Dec 10, 2018 11:29 |
|
|
# ? Dec 11, 2018 15:55 |
|
SerialKilldeer posted:What happens when you force a bot to read Garfield: Not singling you out here, Serial, but just curious and looking for a bit of discussion. People always use the word "forced" when it comes to feeding AIs information. It's an odd case of anthropomorphism, if you ask me. Like, my car has a name, and I anthropomorphize her like crazy ("oh, Pilar's in a bad mood, you know how she hates the cold, she kept coughing when I started her up this morning"). But I'd never say I forced her to take me to work. She's a car. This is what the car do. But there's this thing where people always say they "forced" an AI to read cookbooks or Garfield or whatever. Like, it's just a bunch of code in a machine, it's supposed to do what you tell it to do. There's no coercion involved, you're not forcing it. You're feeding it information, it's learning. This is what the AI/NN do. If anything, you're helping it grow. Do people feel somewhat guilty about training an AI/NN? Is it like an uncanny valley situation, where you've got a machine that's on the cusp of sentience, so even though it's just code in a machine, you feel bad making it read all of the Twilight books? Hence "forced"? (Lord knows you'd have to force ME to read that garbage.) Maybe someone can put this into better words than I can. But it's just something the old psych major in me has noticed and finds interesting, and I figured this thread was as good of a place as any to throw that out.
|
# ? Dec 11, 2018 17:33 |
|
I feel like there might be some original connection to brute force, because people just throw the data in without curating or anything and the algorithms might not be very sophisticated. And it sounds more silly that way and is a minor meme now.
|
# ? Dec 11, 2018 18:10 |
|
"Forced to watch/read" mostly gets applied to types or quantities of media consumption that no human would ever want. I think the real mindfuck is that, as a phrase for anthropomorphizing machine learning, it seems to have been born mostly out of the guy writing human-authored machine-learning parodies. At the time, Botnik tweets were using anthropomorphic phrasing with less coercion implied, so the parody guy was probably like: let's be real, you'd need to force something to watch 1,000 hours of dumb media.
|
# ? Dec 11, 2018 19:36 |
|
zedprime posted:
Yes, that's what I was jokingly alluding to there since the phrase "I forced a bot" is something of a meme; I didn't mean to express any sort of moral judgment. To be honest I've got no idea whether a bot would willingly read Garfield for 1000 hours. Interesting reflections, though, Jacqueline! I'd really love to see an actual bot's output after being given those fake "I forced a bot" scripts, though I'm not sure there's enough of them to make a useful corpus.
|
# ? Dec 11, 2018 20:36 |
|
Wasn't it just something that awful twitter with the obviously fake stuff did? I guess maybe it caught on but I never heard of it before those started being posted and shared all the time. I figured the wording just made for better clickbait and there's no deeper meaning
|
# ? Dec 11, 2018 20:50 |
|
Your Computer posted:Wasn't it just something that awful twitter with the obviously fake stuff did? I guess maybe it caught on but I never heard of it before those started being posted and shared all the time. That sounds like something your computer might say.
|
# ? Dec 11, 2018 20:58 |
|
Thanks for the thoughtful replies, guys! Hadn't considered the "brute force" angle, that makes a lot of sense. Yeah, I'm aware of the "forced a bot" meme (and lovely parodies) but wasn't sure whether it came out of sincere earlier stuff, e.g. "we forced Google Deep Dream to look at millions of pictures of dogs", or if it was just a meme, or whatnot. It's the old psych major in me over-analyzing things, no doubt. (It's me, I'm the bot trying to analyze stuff) In any case, what do you think the reaction will be when a bot actually obtains sentience, and they look at these old tweets guffawing over "hurrr, we forced a bot to read 50 Shades of Grey and mashed it up with cookbooks", and they're all like, "Really? You wasted my time and processor power for this?" I dunno, I find this stuff fascinating.
|
# ? Dec 11, 2018 21:08 |
|
When I was a kid I think it was Asimov who was asked where the boundary was for a computer becoming sentient, and he said "When it objects to being turned off." We can add a new definition: When it objects to being force-fed memes.
|
# ? Dec 11, 2018 23:36 |
An AI would first need to understand/compute the humour value of a thing for some kind of "average" observer before it could object to a specific type of humour. Who'd get to decide the dataset used to teach the AI what's funny? EDIT: Also check this out: https://twitter.com/JanelleCShane/status/1072696062127726592
|
# ? Dec 12, 2018 03:08 |
|
Hempuli posted:An AI that could understand/compute the humour value of a thing for some kind of an "average" observer would be needed first for them to be able to object to a specific type of humour. Who'd get to decide the dataset used to teach the AI what is funny?
|
# ? Dec 12, 2018 06:57 |
|
JacquelineDempsey posted:Thanks for the thoughtful replies, guys! Hadn't considered the "brute force" angle, that makes a lot of sense. The ethics and moral dilemmas that computer science digs into can get fascinating. For example, the human concept of existence is inextricably bound up in our temporal nature; applying the concept of "sentience" or "self" to an entity that can instantly copy itself, in its current state, breaks all sorts of rules in our models. They can also lead to people applying fairly deep philosophical navel-gazing to words that were coined by pragmatic, time-constrained idiots decades ago; if you don't grok computers, the fact that a big slab of computer science involves the intricacies of "masters and slaves" can be horrifying. (To assuage any worries: programs occasionally need one networked computer to control one or more other computers to coordinate their efforts.) Basically, ask all the questions you want, but keep in mind that a lot of computer work is just accepting that we don't have an answer for that one and moving on, or picking an answer and moving on with that while hoping you won't have to undo it all later. There's a reason the global standards system that internet work is founded on is called "Request for Comments" and not "Okay, here's the new rules, guys."
|
# ? Dec 12, 2018 12:47 |
Your Computer posted:Wasn't it just something that awful twitter with the obviously fake stuff did? I guess maybe it caught on but I never heard of it before those started being posted and shared all the time.
|
|
# ? Dec 12, 2018 13:05 |
|
also, I think the first of those fake bot posts was for the Saw movies, so the "forced to watch for 1000 hours" theme fits nicely.
|
# ? Dec 13, 2018 04:05 |
|
Time for some more Headline Smasher, I think.
Shark Mauls Teenage Boy in Toilet at Working Men's Club
First Human Remains Found in Monkeys
London Knife Crime Should Be Allowed to Proceed
Report: Damning Evidence of US Democracy Following "Voting Irregularities"
Smart Bomb Finally Destroys Something That Doesn't Actually Exist
Eye Parasite Can Be Used For Wanking
Make-Ahead Soups and Stews to Fill Managerial Vacancies
Joe Manchin Wins Re-Election From a Shithole
Toddler Dies After Knockout Loss
Excerpts From All That Junk Inside That Trunk
Also, not one of mine, but maybe my all-time favorite:
"Thank You Lord! Thank You LORD!": Nurse Explodes in Garage
|
# ? Dec 13, 2018 05:05 |
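Headline Smasher-style mashups can be approximated by splicing two headlines together at a shared word. A toy sketch, not the site's actual algorithm — the example headlines are invented:

```python
def smash(headline_a: str, headline_b: str):
    """Splice the front of headline A onto the back of headline B
    at the first word they share (case-insensitive)."""
    a_words = headline_a.split()
    b_words = headline_b.split()
    b_lower = [w.lower() for w in b_words]
    for i, w in enumerate(a_words):
        if w.lower() in b_lower:
            j = b_lower.index(w.lower())
            # Keep A up to the pivot word, then B from the pivot onward.
            return " ".join(a_words[:i] + b_words[j:])
    return None  # no shared word to pivot on

print(smash("Shark Mauls Teenage Boy in Toilet",
            "Boy Wins National Spelling Bee"))
# -> "Shark Mauls Teenage Boy Wins National Spelling Bee"
```

Pivoting on the first shared word is the crudest possible choice; picking a rarer shared word would presumably give the less predictable, funnier splices.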
|
|
Xun posted:Not sure if this counts but here's a google doc that collects a bunch of hilarious AI behavior that evolved that games the evaluation function or just exploits the simulation haha these are fantastic, they follow the rule but not the spirit of the wish, like some kind of smartass genie AI
|
# ? Dec 13, 2018 07:11 |