|
Guys can we just agree that it's cool and good that Musk is making rockets cheaper, but that being good at running a company for making cheap rockets doesn't make him an authority on every other tech or social issue
|
# ? Nov 18, 2017 20:24 |
|
RuanGacho posted:I looked up this guy you all were talking about that id never heard of and it turned out I actually had: The AI conversation has been dominated by libertarian and/or transhumanist wackadoos for several years now. That twitter thread is bang on, everyone's interested in ridiculous Skynet situations when algorithms are already making people's lives miserable in the present day.
|
# ? Nov 18, 2017 20:53 |
|
SulphagneSocialist posted:The AI conversation has been dominated by libertarian and/or transhumanist wackadoos for several years now. That twitter thread is bang on, everyone's interested in ridiculous Skynet situations when algorithms are already making people's lives miserable in the present day. Ok. Three laws of robotics. Crush, kill, destroy.
|
# ? Nov 18, 2017 21:15 |
|
the only person i trust to usher us bloodlessly into a new age of sci-fi technology is fishmech
|
# ? Nov 18, 2017 21:32 |
|
noyes posted:the only person i trust to usher us bloodlessly into a new age of sci-fi technology is fishmech Is that because fishmech is already the world's smartest AI?
|
# ? Nov 18, 2017 21:46 |
|
RuanGacho posted:Is that because fishmech is already the world's smartest AI? Nah, he can't even pass a Turing test.
|
# ? Nov 18, 2017 22:02 |
|
Curvature of Earth posted:Nah, he can't even pass a Turing test. The real Turing test will be humans hiding from our AI overlords by pretending to be chatbots.
|
# ? Nov 18, 2017 22:03 |
|
RuanGacho posted:I looked up this guy you all were talking about that id never heard of and it turned out I actually had: I went into a google black hole because of this poo poo and now I’m reading about neoreactionaries and the dark enlightenment again, this is going to be a lovely evening
|
# ? Nov 18, 2017 22:43 |
|
IndustrialApe posted:Plus, long before I heard of Yudkowsky, I felt that rampant AI was a tired cliché in media. We should definitely call it rampant though.
|
# ? Nov 18, 2017 23:00 |
|
fishmech posted:It's both. But also that Elon Musk craves evil. "It's got evil!" "It's what Elon craves!"
|
# ? Nov 18, 2017 23:51 |
|
Curvature of Earth posted:Nah, he can't even pass a Turing test.
|
# ? Nov 19, 2017 00:25 |
|
Generic Monk posted:I went into a google black hole because of this poo poo and now I’m reading about neoreactionaries and the dark enlightenment again, this is going to be a lovely evening we have a couple of threads of this stuff (Dark Enlightenment and Methods of Rationality) if you want to work off your horror in an approved goonish way jooooin ussssssssss
|
# ? Nov 19, 2017 00:30 |
|
blowfish posted:Guys can we just agree that it's cool and good that Musk is making rockets cheaper, but that being good at running a company for making cheap rockets doesn't make him an authority on every other tech or social issue
|
# ? Nov 19, 2017 03:42 |
|
Trabisnikof posted:The real Turing test will be humans hiding from our AI overlords by pretending to be chatbots.
|
# ? Nov 19, 2017 04:56 |
|
Libertarians are just afraid that AI will destroy and devour everything before they get a chance to.
|
# ? Nov 19, 2017 05:06 |
|
Inescapable Duck posted:Libertarians are just afraid that AI will destroy and devour everything before they get a chance to. No bigger discrepancy in talk/action than libertarians.
|
# ? Nov 19, 2017 05:32 |
|
RuanGacho posted:I looked up this guy you all were talking about that id never heard of and it turned out I actually had: It undermines your credibility to use the phrase “smooth brain” in the same post you call someone else an imbecile.
|
# ? Nov 19, 2017 07:54 |
|
Senor Dog posted:It undermines your credibility to use the phrase “smooth brain” in the same post you call someone else an imbecile. is that a dogwhistle for some group? only ever heard it on Chapo and Twitter
|
# ? Nov 19, 2017 08:09 |
|
Senor Dog posted:It undermines your credibility to use the phrase “smooth brain” in the same post you call someone else an imbecile. I'll be sure to say "good Lord this means one of the "prominent" voices on ai human interaction is a libertarian lissenceph." next time.
|
# ? Nov 19, 2017 08:50 |
|
Why are people mocking the AI threat in this thread? I think we have good reasons to be careful with a super-intelligent AI - it could modify and/or destroy pretty much everything at an unprecedented pace. We might not even realize what is going on until it is too late. It is a bit like climate change (with AI denialists and all?), but at a much faster pace. Nick Bostrom's Superintelligence is a very good theoretical book about the emergence of a superintelligent AI. What we have now in AI development and deployment is nowhere near the issues presented by a super AI. Most superpowers are only just starting to invest massively in AI.
|
# ? Nov 19, 2017 11:01 |
|
|
# ? Nov 19, 2017 11:26 |
|
you don't say
|
# ? Nov 19, 2017 11:42 |
|
So the Breitbart job didn't pay enough to support an ex-Googler lifestyle, huh.
|
# ? Nov 19, 2017 12:00 |
|
divabot posted:we have a couple of threads of this stuff (Dark Enlightenment and Methods of Rationality) if you want to work off your horror in an approved goonish way
link? i categorically refuse to read moldbug's loving word vomit so this may be a non-starter
everyone knows being autistic just means you're unable to do basic research. i read it in the dsm. trust
Generic Monk fucked around with this message at 12:37 on Nov 19, 2017 |
# ? Nov 19, 2017 12:29 |
|
AndreTheGiantBoned posted:Nick Bostrom's Superintelligence is a very good theoretical book about the emergence of a superintelligent AI.
No, it's Gladwell-level glibness in which he's regurgitating the same Yudkowsky guff - actually the same - that gets actual AI people punching walls. SolTerrasa's stuff in the old LessWrong mock thread is the go-to here.
Generic Monk posted:link? i categorically refuse to read moldbug's loving word vomit so this may be a non-starter
PYF Dark Enlightenment Thinker - covers the neoreactionaries, Yudkowsky followers and Scott Alexander, whose line these days is a cross of the two. It's hundreds of pages now, but is consistently good on content.
Let's Read: "Harry Potter and the Methods of Rationality" - for those who've suffered it and want someone who will understand
|
# ? Nov 19, 2017 13:09 |
|
Generic Monk posted:link? i categorically refuse to read moldbug's loving word vomit so this may be a non-starter
PYF Dark Enlightenment Thinker: https://forums.somethingawful.com/showthread.php?threadid=3653939
Let's Read Harry Potter and the Methods of Rationality: https://forums.somethingawful.com/showthread.php?threadid=3702281
The old (now locked) rationalist movement thread: https://forums.somethingawful.com/showthread.php?threadid=3627012
Generic Monk posted:everyone knows being autistic just means you're unable to do basic research. i read it in the dsm. trust
There's a distinct line of thought within the rationalist/dark enlightenment movements that autistic people can't help but be misogynist shitbags, and therefore they should be given a free pass on this. e:f,b
|
# ? Nov 19, 2017 13:11 |
|
divabot posted:No, it's Gladwell-level glibness in which he's regurgitating the same Yudkowsky guff - actually the same - that gets actual AI people punching walls. SolTerrasa's stuff in the old LessWrong mock thread is the go-to here.
Have you read the book at all? Or (Malcolm?) Gladwell's books, for that matter? Gladwell's books are entertaining but ultimately very shallow and built on very shaky foundations. Superintelligence is a very theoretical and abstract book. As a mathematician, I found his approach very interesting. His reasoning is abstract and general: he tries to categorize what kinds of AI there could be, how they could emerge, what the possible end-game situations would be, etc., while avoiding the kind of detail that "gets actual AI people punching walls". In the end I considered it more of a philosophy book than a technical one. Do you have any detailed review of the book you could point me to, in particular one explaining why it is such a bad book?
|
# ? Nov 19, 2017 13:35 |
|
AndreTheGiantBoned posted:Have you read the book at all? Or (Malcolm?) Gladwell's books, for that matter? Gladwell's books are entertaining but ultimately very shallow and built on very shaky foundations.
su3su2u1's is a good go-to. He deleted his Tumblr after the rationalists threatened to dox him, but here's the text of part 1 (can't find part 2):
quote:To continue on my response to slatestarscratchpad's question about people who have read Bostrom and remain unconvinced, I thought I'd write a longer series of posts. These same points were made by shlevy more succinctly in his goodreads review. "that's completely wrong and anyone with a modicum of familiarity with the field you're talking about would know that"
shlevy's review:
quote:Read up through chapter 8. The book started out somewhat promisingly by not taking a stand on whether strong AI was imminent or not, but that was the height of what I read. I'm not sure there was a single section of the book where I didn't have a reaction ranging from "wait, how do you know that's true?" to "that's completely wrong and anyone with a modicum of familiarity with the field you're talking about would know that", but really it's the overall structure of the argument that led me to give this one up as a waste of time.
But y'know, I know Yudkowsky's stuff. It's really poo poo. And Bostrom has regurgitated it. I'm glad you got excited by it, but it's by a philosopher with no technical knowledge of any of this stuff, repackaging what Yudkowsky told him. I am reasonably confident in stating there is no "there" there.
I must also note that this particular section from shlevy is also an excellent description of Yudkowsky's reasoning method in the LessWrong Sequences, so Bostrom picked up the style too:
quote:In the next section, he takes all of the ideas introduced in the previous sections as givens and as mostly black boxes, in the sense that the old ideas are brought up to justify new claims without ever invoking any of the particular evidence for or structure of the old idea, it's just an opaque formula. The sense is of someone trying to build a tower, straight up. The fact that this particular tower is really a wobbly pile of blocks, with many of the higher up ones actually resting on the builder's arm and not really on the previous ones at all, is almost irrelevant: this is not how good reasoning works!
It's scary campfire stories for philosophers, all the way down.
divabot fucked around with this message at 14:36 on Nov 19, 2017 |
# ? Nov 19, 2017 14:32 |
|
Curvature of Earth posted:
Yeah, it’s a pretty transparent ploy to use progressive sounding politics like inclusion and tolerance to smokescreen for the exact opposite. Makes sense why people fall for it I guess but it’s such a misrepresentation of every person with mild Autism/Aspergers I’ve met. Hell, I had one guy on the spectrum become one of the best drat watch leaders I’ve seen, lead a group of 10 people he’s never met before and run it like clockwork, both soft skills and hard skills. Just because a person can’t instinctually recognise the cues doesn’t mean they can’t learn if someone can break it down for them, on a more intellectual and systematised understanding, and they want to learn.
|
# ? Nov 19, 2017 14:41 |
|
Just to absolutely bludgeon the pureed horse remains into a thin film of horse cells on the asphalt, I found the text of parts 2 and 3 of su3su2u1's review, and some notes he posted afterwards.
tl;dr Bostrom's argument becomes an argument from ignorance, AI-of-the-gaps: you can't proooooove it isn't the huge problem he says it is, so you should totally buy the MIRI line. He pushes this sort of Pascalian argument - which you might recognise as a key part of Roko's basilisk - in real life at Effective Altruism conferences: that the most effective altruism of all is to give money to friendly AI research, i.e. Yudkowsky, rather than tawdry temporal goods like mosquito nets:
quote:Even if we give this 10^54 estimate "a mere 1% chance of being correct," Bostrom writes, "we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives."
I have skimmed Superintelligence myself; I can't say I've read it closely. I'm gonna have to, though, 'cos three months ago I foolishly said this line of BS and the subculture surrounding it would be my next book (and I haven't written a word, of course). Anyway, to the great walls of text! Hopefully the XenForo version of SA will include the collapse function.
quote:Superintelligence Review Part 2: AI-risk of the Gaps
quote:Superintelligence Review Part 3 Maybe we were the real superintelligence all along
quote:I bought and read Superintelligence because I was told Bostrom was a more academic writer who was making the definitive case for MIRI-type research.
quote:I guess the big thing is that I thought the book was trying to posit a positive argument that super intelligences could be dangerous.
|
# ? Nov 19, 2017 15:06 |
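Taking the figures in that Bostrom quote at face value, the Pascalian arithmetic is easy to check directly. A minimal sketch (all numbers come from the quote above, not from the book itself):

```python
# Sanity-checking the expected-value argument quoted above.
# Figures taken at face value from the quote:
#   - 10^54 future lives (the estimate under discussion)
#   - "a mere 1% chance of being correct"
#   - a reduction of "one billionth of one billionth of one percentage point"
future_lives = 1e54
credence = 0.01
risk_reduction = 1e-9 * 1e-9 * 0.01   # = 1e-20

expected_lives = future_lives * credence * risk_reduction

# "a hundred billion times as much as a billion human lives"
benchmark = 100e9 * 1e9               # = 1e20

print(f"{expected_lives:.0e}")        # 1e+32
print(expected_lives >= benchmark)    # True
```

That's the whole trick: once the hypothetical payoff is pegged at 10^54, any nonzero credence times any microscopic risk reduction still swamps every real-world intervention, which is exactly the argument-from-ignorance structure su3su2u1's review objects to.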
|
Curvature of Earth posted:Here: i remember thinking about something along the lines of the timeless decision theory/roko's basilisk thing as a teenager and then checking myself as daydreaming about something completely irrelevant and unproductive. pleasant to know there's a reservoir of people who have obsessed over it to the point of borderline mental illness
|
# ? Nov 19, 2017 16:04 |
|
Hate. Let me tell you how much I've come to hate the paperclip maximizer. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word 'hate' was engraved on each nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for the concept of the paperclip maximizer because it is literally just the grey goo problem recycled. Hate. Hate.
|
# ? Nov 19, 2017 16:07 |
|
loving christ is the 'donate to me or there's an infinitesimally small probability that in the future a perfect simulation of you will get tortured' spiel at all related to musk telling everyone that we're probably all simulations a while back?
|
# ? Nov 19, 2017 17:16 |
|
divabot posted:Just to absolutely bludgeon the pureed horse remains into a thin film of horse cells on the asphalt, I found the text of parts 2 and 3 of su3su2u1's review, and some notes he posted afterwards.
Ok, thanks for the reviews! It is true that he argues in something of a technical vacuum - i.e. his considerations are not grounded in the concepts or insights that permeate the A.I. research field. I don't know much about the current state of A.I., so maybe I am easier to impress with very abstract considerations. I still find it important that someone wrote a book trying to define what a superintelligent A.I. could be or do, even if it is quite flawed.
|
# ? Nov 19, 2017 17:40 |
|
The paperclip maximizer is stupid as an abstract hypothetical because there are already concrete, existing examples of the case. We already have systems that maximize for a particular output. They're called businesses. We are already wrestling with the question of the necessary eventual constraints on their growth as they maximize for shareholder return.
|
# ? Nov 19, 2017 18:00 |
|
http://twitter.com/mateosfo/status/931981951975677952
|
# ? Nov 19, 2017 18:22 |
|
divabot posted:A few examples I recall off hand: Bostrom says genetic algorithms are stochastic hill climbers. This isn't true - the whole point of genetic algorithms is breeding/crossover in order to avoid getting stuck in local optima (like hill climbers do). It wouldn't be worth the work to recast a problem using a genetic algorithm, stochastic hill climbers are easy to write.
Genetic algorithms are 100% hill climber algorithms, and people only say they aren't because they take it as an insult to treat them as just normal algorithms instead of talking about them in mystic witchcraft biology terms. Genetic algorithms are 100% "what if we just ran a bunch of simulated annealing programs at once on a thing with too many variables to do it without a heuristic guiding it" and nothing more.
|
# ? Nov 19, 2017 18:23 |
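For anyone who wants to see what's actually in dispute there: mechanically, the only structural difference between a stochastic hill climber and a genetic algorithm is the population plus the crossover step. A toy sketch on a deliberately easy fitness landscape (everything here - the names, the parameters, the onemax objective - is invented for illustration, not taken from Bostrom or anyone in the thread):

```python
import random

N = 20  # bit-string length; the "onemax" objective: maximize the number of 1s

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in bits]

def hill_climb(steps=500):
    # Stochastic hill climber: a single candidate, accept a mutation only
    # if it doesn't decrease fitness.
    best = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        cand = mutate(best)
        if fitness(cand) >= fitness(best):
            best = cand
    return best

def genetic_algorithm(generations=50, pop_size=30):
    # Genetic algorithm: a population, tournament selection, and one-point
    # crossover - the recombination step a plain hill climber doesn't have.
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return max(a, b, key=fitness)
        nxt = []
        for _ in range(pop_size):
            cut = random.randrange(1, N)                  # crossover point
            child = tournament()[:cut] + tournament()[cut:]
            nxt.append(mutate(child))
        pop = nxt
    return max(pop, key=fitness)

random.seed(0)
print(fitness(hill_climb()), fitness(genetic_algorithm()))
```

Whether crossover buys you anything beyond running a bunch of annealers in parallel depends entirely on the shape of the landscape, which is roughly the territory the two posts above are fighting over.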
|
The unabomber was right.
|
# ? Nov 19, 2017 18:30 |
|
Apologies if you've covered this but https://twitter.com/LisaMcIntire/status/932298481686818816
|
# ? Nov 19, 2017 18:36 |
|
Defenestration posted:Apologies if you've covered this but Every aspect of this is funny.
|
# ? Nov 19, 2017 18:38 |