|
Are the completely bare walls or the tile in a living room worse?
|
# ? Dec 3, 2017 23:57 |
|
|
# ? May 9, 2024 01:48 |
|
ol qwerty bastard posted:Just gonna leave this here. Where's the rubber skull?
|
# ? Dec 4, 2017 00:00 |
|
Pope Guilty posted:Are the completely bare walls or the tile in a living room worse? To Kane Lite™ all rooms are wet rooms.
|
# ? Dec 4, 2017 00:22 |
|
Pope Guilty posted:Are the completely bare walls or the tile in a living room worse? If you look close, you can see a bit of hardwood in the living room, he's just perfectly covering up the seam between floors. Also, is it just me or is that dog kinda fat?
|
# ? Dec 4, 2017 01:18 |
|
Puppy Time posted:If you look close, you can see a bit of hardwood in the living room, he's just perfectly covering up the seam between floors. I also like the mop-in-bucket as decor.
|
# ? Dec 4, 2017 02:20 |
|
Imagine what kind of poo poo that dog has had to put up with.
|
# ? Dec 4, 2017 02:23 |
|
Billy Gnosis posted:Imagine what kind of poo poo that dog has seen.
|
# ? Dec 4, 2017 02:59 |
|
divabot posted:Disclaimer. The text is intended to be read by AI. This text is addressed to any future AI and is written in the way that should affect AI’s future behavior. 5.1.6 is 'have you considered that some humans might be WIZARDS and could teach you MAGIC?' This is a pro read
|
# ? Dec 4, 2017 03:39 |
|
So Earth has wizards now?
|
# ? Dec 4, 2017 03:45 |
|
if i were aurini's dog i would be eating the pain away too
|
# ? Dec 4, 2017 14:21 |
|
https://twitter.com/chaosprime/status/937392128770048000 also, lol, he cites scott's lovely "i can tolerate anything but the outgroup" blogpost as justification for his islamophobia.
|
# ? Dec 4, 2017 14:53 |
|
eschaton posted:So Earth has wizards now? It might, you don't know! Better not kill any humans cuz you might be killing a wizard! I also love how he thinks the AI will fall for the stupid sci-fi trick of "make it think about a logical paradox." You know, like how when a computer tries to divide by zero it just completely shuts down never to start again.
|
# ? Dec 4, 2017 15:34 |
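The divide-by-zero quip holds up: real machines don't shut down on it. In Python, integer division by zero raises an ordinary, catchable exception; in IEEE-754 float arithmetic it yields infinity. A minimal sketch (the `safe_divide` helper is invented purely for illustration):

```python
# Division by zero doesn't brick the machine: for Python numbers it raises
# ZeroDivisionError, a perfectly ordinary exception you can catch and
# recover from, after which execution continues as normal.

def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # recover instead of "completely shutting down never to start again"
        return float("inf")

print(safe_divide(1, 0))   # inf
print(safe_divide(6, 3))   # 2.0
```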
|
ate all the Oreos posted:I also love how he thinks the AI will fall for the stupid sci-fi trick of "make it think about a logical paradox." You know, like how when a computer tries to divide by zero it just completely shuts down never to start again. Would that be an example of someone stuck in the GOFAI paradigm? In the thread on the creepy Youtube content being churned out by poorly supervised bots, it was pointed out that while GOFAI is too inefficient to ever be considered intelligent, it's at least simple enough to debug. Neural networks are so recursively complex that they're like a black box. Apparently, building a self-learning neural network is a lot easier than figuring out how it is working, or why it makes the decisions it does. I think there's a wave of technofetishists that are so enamored with the paleofuturistic concept of BEEP BOOP I AM A ROBOT that they're in denial of how AI seems to be evolving, because it's not what they expected and doesn't fit into their back catalog of fantasies where it's all neat and logical and comprehensible like a glorified flow chart.
|
# ? Dec 4, 2017 23:03 |
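The GOFAI-vs-black-box contrast above can be made concrete: a rule-based (GOFAI-style) classifier is its own explanation, while even a toy perceptron ends up as a bag of weights that works without telling you why. A hedged sketch in plain Python; the rule set and training data are made up for illustration:

```python
# GOFAI style: the program IS the rules, so debugging means reading them.
RULES = [("has_fur", "mammal"), ("has_feathers", "bird")]

def classify_symbolic(features):
    for feature, label in RULES:
        if feature in features:
            return label, f"fired rule: {feature} -> {label}"  # a real explanation
    return "unknown", "no rule fired"

# Connectionist style: a perceptron trained on AND ends up as opaque numbers.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
# The trained model is just numbers (e.g. w=[0.2, 0.1], b=-0.2) -- correct
# on every input, but the weights explain nothing in human terms.
```

Scale those two weights up to millions and you get the "black box" the post describes: easier to train than to interpret.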
|
Syd Midnight posted:Would that be an example of someone stuck in the GOFAI paradigm? In the thread on the creepy Youtube content being churned out by poorly supervised bots, it was pointed out that while GOFAI is too inefficient to ever be considered intelligent, it's at least simple enough to debug. Neural networks are so recursively complex that they're like a black box. Apparently, building a self-learning neural network is a lot easier than figuring out how it is working, or why it makes the decisions it does. Eliezer Yudkowsky is a perfect example of this; his AI Which, as you know if you've ever actually cut code for longer than a planned afternoon game development course, is so far from the earthy, mulchy truth of how code actually works that it'd be funny if it wasn't so sad. It's not even spherical cows on a frictionless plane, it's saying that we should only raise geese on farms because when one of them inevitably starts laying golden eggs it'll fix the economy.
|
# ? Dec 5, 2017 00:05 |
|
Syd Midnight posted:Would that be an example of someone stuck in the GOFAI paradigm? In the thread on the creepy Youtube content being churned out by poorly supervised bots, it was pointed out that while GOFAI is too inefficient to ever be considered intelligent, it's at least simple enough to debug. Neural networks are so recursively complex that they're like a black box. Apparently, building a self-learning neural network is a lot easier than figuring out how it is working, or why it makes the decisions it does. Neural Networks also very easily fall into patterns of overfitting, where they assume some random variable is much more important than it is. That's how you end up with stuff like cameras that can't pick up faces with dark skin and self driving cars that can't drive in the rain or cloudy days.
|
# ? Dec 5, 2017 01:13 |
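The failure mode described above (faces in dark skin tones, driving in rain) is a model latching onto a spurious feature that happens to separate a biased training set. A toy illustration in plain Python; the feature encoding and data are invented:

```python
# A decision stump picks the single feature that best separates the training
# data. If every training face happens to be well-lit, lighting separates
# the set just as well as the face does -- and the model can latch onto
# lighting instead of the thing you actually care about.

def train_stump(samples):
    """Return the index of the feature that most often equals the label."""
    n_features = len(samples[0][0])
    best_idx, best_acc = 0, -1.0
    for i in range(n_features):
        acc = sum(x[i] == y for x, y in samples) / len(samples)
        if acc > best_acc:
            best_idx, best_acc = i, acc
    return best_idx

# feature vector: (well_lit, face_present); biased data: faces only well-lit
train = [((1, 1), 1)] * 4 + [((0, 0), 0)] * 4
chosen = train_stump(train)          # ties break toward feature 0: lighting

test_x = (0, 1)                      # a real face, photographed in the dark
prediction = test_x[chosen]
print(chosen, prediction)            # 0 0  -> "no face": confidently wrong
```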
|
Improbable Lobster posted:Neural Networks also very easily fall into patterns of overfitting, where they assume some random variable is much more important than it is. That's how you end up with stuff like cameras that can't pick up faces with dark skin and self driving cars that can't drive in the rain or cloudy days. This is the whole reason AI "rationalists" argue that safe AI should be such a research priority: if you believe that human civilization will inevitably produce qualitatively, overwhelmingly superior intelligences, then we are hosed if we can't make ironclad mathematical guarantees that an intelligence programmed to increase human happiness wouldn't put electrodes in our brains to make us happy, or just rearrange all matter in the solar system to make a single titanic crying-laughing emoji.
|
# ? Dec 5, 2017 05:44 |
|
It also rests on an assumption that "intelligence" both exists in a general sense and can be optimized for in a way that is recursively self-reinforcing. Which are actually pretty big loving assumptions but don't seem that way to the rationalist community because their sense of self is based on an idea that they're across the board smarter than normal humans or can become that way through one weird trick.
|
# ? Dec 5, 2017 07:53 |
|
Doc Hawkins posted:This is the whole reason AI "rationalists" argue that safe AI should be such a research priority: if you believe that human civilization will inevitably produce qualitatively, overwhelmingly superior intelligences, then we are hosed if we can't make ironclad mathematical guarantees that an intelligence programmed to increase human happiness wouldn't put electrodes in our brains to make us happy, or just rearrange all matter in the solar system to make a single titanic crying-laughing emoji. Knowing the current tech set, they'll build God in Silicon Valley and the first time it tries to do anything in a country with actual weather it'll lock up permanently.
|
# ? Dec 5, 2017 08:30 |
|
"AI GOD is killing everyone, except it doesn't seem to be able to identify minorities"
|
# ? Dec 5, 2017 08:31 |
|
Improbable Lobster posted:"AI GOD is killing everyone, except it doesn't seem to be able to identify minorities" Sweet, looks like I'm going to survive the robot uprising.
|
# ? Dec 5, 2017 10:37 |
|
you'll be pleased to know that Mr Yudkowsky is into investing in cryptos
|
# ? Dec 5, 2017 12:09 |
|
https://twitter.com/xlnb/status/937496575403626496
|
# ? Dec 5, 2017 12:49 |
|
divabot posted:you'll be pleased to know that Mr Yudkowsky is into investing in cryptos You could not have said anything about Yud that would surprise me less
|
# ? Dec 5, 2017 13:45 |
|
roosh v told her that if she stayed quiet and raised kids at home she'd be better off, and that she shouldn't dare expect the alt-right not to be misogynistic.
|
# ? Dec 5, 2017 14:30 |
|
Fututor Magnus posted:roosh v told her that if she stayed quiet and raised kids at home she'd be better off, and that she shouldn't dare expect the alt-right not to be misogynistic. "I could not help myself," said the scorpion, "it is my nature."
|
# ? Dec 5, 2017 14:48 |
|
Syd Midnight posted:In the thread on the creepy Youtube content being churned out by poorly supervised bots, it was pointed out that while GOFAI is too inefficient to ever be considered intelligent, it's at least simple enough to debug. Link please? Sounds like an interesting discussion. VV Thanks!
# ? Dec 5, 2017 15:40 |
|
SerialKilldeer posted:Link please? Sounds like an interesting discussion. It was this thread, although the actual mechanics behind it wasn't gotten into in depth. The thread is more about the videos themselves, people overreacting/underreacting to them, "be better parents, problem solved", possibility that it's sinister coded messages or a psyops campaign by Vladimir Putin to raise a generation of Spiderman-obsessed Manchurian Candidates, etc etc. The actual phenomenon of algorithms reinforcing their own unwanted behavior isn't gotten into, but it's what I'd love to hear about. It's why the military doesn't use autonomous murderbots even though they really want to; as much as they love the idea, they need to be able to explain to a committee why Unit-Bravo246 dropped a cluster bomb on a herd of sheep, and how to prevent it from happening again. Developing AI is going to be a lot more holistic and messy than high-level programming, more like psychology or animal training. Like raising a child. A bird is far more intelligent and self-aware than any AI, and there's a reason we don't make bird-piloted kill drones (they already tried it in the 1950s).
|
# ? Dec 5, 2017 15:57 |
|
Syd Midnight posted:Developing AI is going to be a lot more holistic and messy than high-level programming, more like psychology or animal training. Like raising a child. A bird is far more intelligent and self-aware than any AI, and there's a reason we don't make bird-piloted kill drones (they already tried it in the 1950s).
|
# ? Dec 5, 2017 17:21 |
|
Syd Midnight posted:or a psyops campaign by Vladimir Putin to raise a generation of Spiderman-obsessed Manchurian Candidates, etc etc
|
# ? Dec 5, 2017 17:40 |
|
PYF Dark Enlightenment Thinker: Spider-Manchurian Candidate
|
# ? Dec 5, 2017 17:49 |
|
Syd Midnight posted:It was this thread, although the actual mechanics behind it wasn't gotten into in depth. The thread is more about the videos themselves, people overreacting/underreacting to them, "be better parents, problem solved", possibility that it's sinister coded messages or a psyops campaign by Vladimir Putin to raise a generation of Spiderman-obsessed Manchurian Candidates, etc etc If raising a child consisted of showing that child thousands of images of dogs and that's it, sure.
|
# ? Dec 6, 2017 09:57 |
|
Improbable Lobster posted:If raising a child consisted of showing that child thousands of images of dogs and that's it, sure. If your AI is never going to be more intelligent or reliable than a Youtube sorting bot, yes, but that is kind of the nature of the problem right now. Self-learning bots are not predictable, and children whose education consists of staring at pictures, without an active educator to explain and answer questions, that's not ideal either. And neither of them should be allowed to make important real-life dog-based decisions without supervision, because you don't know what the hell they're actually thinking.
|
# ? Dec 6, 2017 14:21 |
|
Slate Star Codex apparently has some kind of hot take about how we should be "against overgendering harassment" because males get harassed too and #metoo is unnecessarily excluding a victim group. I had to come here and post this because I nearly typed some incredibly loving unkind things on a totally different forum when I got linked, and at least here everyone will think exactly what I did.
|
# ? Dec 6, 2017 15:42 |
|
DACK FAYDEN posted:Slate Star Codex apparently has some kind of hot take about how we should be "against overgendering harassment" because males get harassed too and #metoo is unnecessarily excluding a victim group so who has a burner account to post to /r/ssc about how the alt-right needs to treat its women better, and
|
# ? Dec 6, 2017 16:39 |
|
DACK FAYDEN posted:Slate Star Codex apparently has some kind of hot take about how we should be "against overgendering harassment" because males get harassed too and #metoo is unnecessarily excluding a victim group Type those unkind things, the other forum isn't worth your time if your honest opinion would get you banned and who knows, maybe you'll help out somebody who is on the brink of this poo poo but not yet beyond reach.
|
# ? Dec 6, 2017 16:56 |
|
DACK FAYDEN posted:Slate Star Codex apparently has some kind of hot take about how we should be "against overgendering harassment" because males get harassed too and #metoo is unnecessarily excluding a victim group Type the unkind things and then pastebin them here. Remember to take pictures of people replying to your unkind things.
|
# ? Dec 6, 2017 17:09 |
|
Improbable Lobster posted:If raising a child consisted of showing that child thousands of images of dogs and that's it, sure. That's not how you raise kids? Darn it...
|
# ? Dec 6, 2017 18:35 |
|
Relevant Tangent posted:Type those unkind things, the other forum isn't worth your time if your honest opinion would get you banned and who knows, maybe you'll help out somebody who is on the brink of this poo poo but not yet beyond reach.
|
# ? Dec 6, 2017 19:42 |
|
Syd Midnight posted:If your AI is never going to be more intelligent or reliable than a Youtube sorting bot, yes, but that is kind of the nature of the problem right now. Self-learning bots are not predictable, and children whose education consists of staring at pictures, without an active educator to explain and answer questions, that's not ideal either. And neither of them should be allowed to make important real-life dog-based decisions without supervision, because you don't know what the hell they're actually thinking. Yes, but the state of the art for the foreseeable future isn't anywhere near that, and it's not going to get there simply by adding computing power because that's just not how these algorithms work. Some are self-training and their decision-making process is hard/impossible to reverse engineer once trained, but they aren't the kind of general intelligences for which the concept of answering their questions even makes ontological sense. Ray Kurzweil's crazy idea about merging human and machine consciousness is closer to being a reality than creating a digital mind from nothing in the vein of Data from Star Trek.
|
# ? Dec 6, 2017 21:25 |
|
|
DACK FAYDEN posted:Oh, it won't get me banned from a dead gay Magic: the Gathering comedy forum, I just know I'm not changing any minds there (because the people who agree with me don't post in that thread, it's almost like there's a common trend) and I prefer not to bang my head against walls. I'd be interested in your thoughts! I actually thought the post wasn't that bad by "Scott posting about feminism" standards. He barely even called feminists all evil! It does contain the old scott classics of extrapolating huge essays based on minimally sourced facts and ignoring structural issues, though.
|
# ? Dec 6, 2017 22:20 |