suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!
Guys can we just agree that it's cool and good that Musk is making rockets cheaper, but that being good at running a company for making cheap rockets doesn't make him an authority on every other tech or social issue :shrug:


Sulphagnist
Oct 10, 2006

WARNING! INTRUDERS DETECTED

RuanGacho posted:

I looked up this guy you all were talking about that id never heard of and it turned out I actually had:


He's an imbecile. You know this 100% if you've read any of that.

E: good Lord this means one of the "prominent" voices on ai human interaction is a libertarian smooth brain.

The AI conversation has been dominated by libertarian and/or transhumanist wackadoos for several years now. That twitter thread is bang on, everyone's interested in ridiculous Skynet situations when algorithms are already making people's lives miserable in the present day.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

SulphagneSocialist posted:

The AI conversation has been dominated by libertarian and/or transhumanist wackadoos for several years now. That twitter thread is bang on, everyone's interested in ridiculous Skynet situations when algorithms are already making people's lives miserable in the present day.

Ok. Three laws of robotics. Crush, kill, destroy.

noyes
Nov 10, 2017

by FactsAreUseless
the only person i trust to usher us bloodlessly into a new age of sci-fi technology is fishmech

RuanGacho
Jun 20, 2002

"You're gunna break it!"

noyes posted:

the only person i trust to usher us bloodlessly into a new age of sci-fi technology is fishmech

Is that because fishmech is already the world's smartest AI?

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

RuanGacho posted:

Is that because fishmech is already the world's smartest AI?

Nah, he can't even pass a Turing test.

Trabisnikof
Dec 24, 2005

Curvature of Earth posted:

Nah, he can't even pass a Turing test.

The real Turing test will be humans hiding from our AI overlords by pretending to be chatbots.

Generic Monk
Oct 31, 2011

RuanGacho posted:

I looked up this guy you all were talking about that id never heard of and it turned out I actually had:


He's an imbecile. You know this 100% if you've read any of that.

E: good Lord this means one of the "prominent" voices on ai human interaction is a libertarian smooth brain.

I went into a google black hole because of this poo poo and now I’m reading about neoreactionaries and the dark enlightenment again, this is going to be a lovely evening

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

IndustrialApe posted:

Plus, long before I heard of Yudkowsky, I felt that rampant AI was a tired cliché in media.

We should definitely call it rampant though.

Absurd Alhazred
Mar 27, 2010

by Athanatos

fishmech posted:

It's both. But also that Elon Musk craves evil.

"It's got evil!"

"It's what Elon craves!"

shrike82
Jun 11, 2005

Curvature of Earth posted:

Nah, he can't even pass a Turing test.

:drat:

divabot
Jun 17, 2015

A polite little mouse!

Generic Monk posted:

I went into a google black hole because of this poo poo and now I’m reading about neoreactionaries and the dark enlightenment again, this is going to be a lovely evening

we have a couple of threads of this stuff (Dark Enlightenment and Methods of Rationality) if you want to work off your horror in an approved goonish way

jooooin ussssssssss

Vegetable
Oct 22, 2010

blowfish posted:

Guys can we just agree that it's cool and good that Musk is making rockets cheaper, but that being good at running a company for making cheap rockets doesn't make him an authority on every other tech or social issue :shrug:
You ask too much of this thread

luminalflux
May 27, 2005



Trabisnikof posted:

The real Turing test will be humans hiding from our AI overlords by pretending to be chatbots.
Please don't give Charlie Brooker more ideas for Black Mirror

Ghost Leviathan
Mar 2, 2017
Probation
Can't post for 14 hours!
Libertarians are just afraid that AI will destroy and devour everything before they get a chance to.

cowofwar
Jul 30, 2002

by Athanatos

Inescapable Duck posted:

Libertarians are just afraid that AI will destroy and devour everything before they get a chance to.

No bigger discrepancy in talk/action than libertarians.

feller
Jul 5, 2006


RuanGacho posted:

I looked up this guy you all were talking about that id never heard of and it turned out I actually had:


He's an imbecile. You know this 100% if you've read any of that.

E: good Lord this means one of the "prominent" voices on ai human interaction is a libertarian smooth brain.

It undermines your credibility to use the phrase “smooth brain” in the same post you call someone else an imbecile.

Analytic Engine
May 18, 2009

not the analytical engine

Senor Dog posted:

It undermines your credibility to use the phrase “smooth brain” in the same post you call someone else an imbecile.

is that a dogwhistle for some group? only ever heard it on Chapo and Twitter

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Senor Dog posted:

It undermines your credibility to use the phrase “smooth brain” in the same post you call someone else an imbecile.

I'll be sure to say "good Lord this means one of the "prominent" voices on ai human interaction is a libertarian lissenceph." next time.

AndreTheGiantBoned
Oct 28, 2010
Why are people mocking the AI threat in this thread?

I think we have good reasons to be careful with a super-intelligent AI - it could modify and/or destroy pretty much everything at an unprecedented pace. We might not even realize what is going on until it is too late. It is a bit like climate change (AI denialists and all), but at a much faster pace.

Nick Bostrom's book Superintelligence is a very good theoretical treatment of the emergence of a superintelligent AI.

What we have now in AI development and deployment is nowhere near the issues a super AI would present. Most superpowers are only just starting to invest massively in AI.

shrike82
Jun 11, 2005

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

you don't say

Ruffian Price
Sep 17, 2016

So the Breitbart job didn't pay enough to support an ex-Googler lifestyle, huh.

Generic Monk
Oct 31, 2011

divabot posted:

we have a couple of threads of this stuff (Dark Enlightenment and Methods of Rationality) if you want to work off your horror in an approved goonish way

jooooin ussssssssss

link? i categorically refuse to read moldbug's loving word vomit so this may be a non-starter


everyone knows being autistic just means you're unable to do basic research. i read it in the dsm. trust

Generic Monk fucked around with this message at 12:37 on Nov 19, 2017

divabot
Jun 17, 2015

A polite little mouse!

AndreTheGiantBoned posted:

Nick Bostrom's book Superintelligence is a very good theoretical treatment of the emergence of a superintelligent AI.

No, it's Gladwell-level glibness in which he's regurgitating the same Yudkowsky guff - actually the same - that gets actual AI people punching walls. SolTerrasa's stuff in the old LessWrong mock thread is the go-to here.

Generic Monk posted:

link? i categorically refuse to read moldbug's loving word vomit so this may be a non-starter

PYF Dark Enlightenment Thinker - covers the neoreactionaries, Yudkowsky followers and Scott Alexander, whose line these days is a cross of the two. It's hundreds of pages now, but is consistently good on content.
Let's Read: "Harry Potter and the Methods of Rationality" - for those who've suffered it and want someone who will understand

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Generic Monk posted:

link? i categorically refuse to read moldbug's loving word vomit so this may be a non-starter
Here:
PYF Dark Enlightenment Thinker https://forums.somethingawful.com/showthread.php?threadid=3653939

Let's Read Harry Potter and the Methods of Rationality
https://forums.somethingawful.com/showthread.php?threadid=3702281

The old (now locked) rationalist movement thread
https://forums.somethingawful.com/showthread.php?threadid=3627012

Generic Monk posted:

everyone knows being autistic just means you're unable to do basic research. i read it in the dsm. trust

There's a distinct line of thought within the rationalist/dark enlightenment movements that autistic people can't help but be misogynist shitbags, and therefore they should be given a free pass on this.

e:f,b

AndreTheGiantBoned
Oct 28, 2010

divabot posted:

No, it's Gladwell-level glibness in which he's regurgitating the same Yudkowsky guff - actually the same - that gets actual AI people punching walls. SolTerrasa's stuff in the old LessWrong mock thread is the go-to here.

Have you read the book at all? Or (Malcolm?) Gladwell's books, for that matter? Gladwell's books are entertaining but ultimately very shallow and built on very shaky foundations.

Superintelligence is a very theoretical and abstract book. As a mathematician, I found his approach very interesting. His reasoning is abstract and general, and he tries to categorize what kinds of AI there could be, how they could emerge, what the possible end-game situations would be, etc., avoiding going into the details which "get actual AI people punching walls". In the end I considered it more of a philosophy book than a technical one.

Do you have any detailed review of the book that you could point me to, in particular one explaining why it is such a bad book?

divabot
Jun 17, 2015

A polite little mouse!

AndreTheGiantBoned posted:

Have you read the book at all? Or (Malcolm?) Gladwell's books, for that matter? Gladwell's books are entertaining but ultimately very shallow and built on very shaky foundations.

Superintelligence is a very theoretical and abstract book. As a mathematician, I found his approach very interesting. His reasoning is abstract and general, and he tries to categorize what kinds of AI there could be, how they could emerge, what the possible end-game situations would be, etc., avoiding going into the details which "get actual AI people punching walls". In the end I considered it more of a philosophy book than a technical one.

Do you have any detailed review of the book that you could point me to, in particular one explaining why it is such a bad book?

su3su2u1's is a good go-to. He deleted his Tumblr after the rationalists threatened to dox him, but here's the text of part 1 (can't find part 2):

quote:

To continue my response to slatestarscratchpad’s question about people who have read Bostrom and remain unconvinced, I thought I’d write a longer series of posts. These same points were made more succinctly by shlevy in his goodreads review: “that’s completely wrong and anyone with a modicum of familiarity with the field you’re talking about would know that”

Part 1: Igon value problems.

In a now famous review of a Malcolm Gladwell book, Steven Pinker coined the phrase “igon value problem” to refer to Gladwell’s tendency to expound at length on various topics while also making very superficial errors - e.g., when discussing statistics, Gladwell refers to igon values instead of eigenvalues.

Bostrom’s Superintelligence is loaded with these “igon value” problems. Chapters 1 and 2 are particularly bad for this, as they talk a lot about past AI and the current state of AI.

A few examples I recall off hand: Bostrom says genetic algorithms are stochastic hill climbers. This isn’t true - the whole point of genetic algorithms is breeding/crossover in order to avoid getting stuck in local optima (like hill climbers do). If they really were just hill climbers it wouldn’t be worth the work to recast a problem as a genetic algorithm; stochastic hill climbers are easy to write.

He says you can compare many types of algorithms because they are doing “maximum likelihood estimation” but most of the algorithms he lists can do more than maximum likelihood, and some of the algorithms he lists are non-parametric (decision trees).

Bostrom says that machine learning algorithms are making mathematically well specified trade offs from an “ideal Bayesian agent,” which I’ve expounded on at length on my tumblr blog. There is no controlled approximation for an “ideal Bayesian agent.” There is no mathematical sense of “how far from an ideal Bayesian agent” a model is.

These aren’t isolated mistakes, these igon value issues happen all over the place.

Now, on one hand, these mistakes aren’t huge, and a lot of the thrust is mostly correct if you squint a bit and ignore the details (which are misleading or just outright wrong).

But at the same time, the rest of the book is speculation that is only grounded in Bostrom’s understanding of AI. Do I trust someone with an igon value understanding to reliably extrapolate from the state of the art today? My answer is no, I don’t.
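
To make the genetic-algorithm point above concrete, here's a toy sketch - mine, purely illustrative, on a trivial bit-counting objective rather than a real benchmark. A stochastic hill climber mutates one candidate and keeps non-worsening moves; a genetic algorithm keeps a population and splices parents together with crossover. That recombination step is the part su3su2u1 says Bostrom's description leaves out.

code:

# Stochastic hill climber vs. genetic algorithm on the same toy bitstring
# problem, just to show the structural difference: the GA's crossover
# recombines partial solutions across a population, which is exactly the
# mechanism a lone hill climber doesn't have.
import random

N = 40

def fitness(bits):                        # toy objective: count of 1-bits
    return sum(bits)

def hill_climb(steps=2000):
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        y = x[:]
        y[random.randrange(N)] ^= 1       # flip one bit
        if fitness(y) >= fitness(x):      # keep only non-worsening moves
            x = y
    return fitness(x)

def genetic_algorithm(pop_size=40, generations=100):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]    # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]     # crossover: splice two parents
            if random.random() < 0.05:
                child[random.randrange(N)] ^= 1   # occasional mutation
            children.append(child)
        pop = children
    return max(fitness(ind) for ind in pop)

random.seed(0)
print("stochastic hill climber:", hill_climb())
print("genetic algorithm:      ", genetic_algorithm())

On an objective this trivial both land in the same place, which is fine; the point is only that the two loops are structurally different things, whatever you think crossover actually buys you on harder problems.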

shlevy's review:

quote:

Read up through chapter 8. The book started out somewhat promisingly by not taking a stand on whether strong AI was imminent or not, but that was the height of what I read. I'm not sure there was a single section of the book where I didn't have a reaction ranging from "wait, how do you know that's true?" to "that's completely wrong and anyone with a modicum of familiarity with the field you're talking about would know that", but really it's the overall structure of the argument that led me to give this one up as a waste of time.

Essentially, the argument goes like this: Bostrom introduces some idea, explains in vague language what he means by it, traces out how it might be true (or, in a few "slam-dunk" sections, *several* ways it might be true), and then moves on. In the next section, he takes all of the ideas introduced in the previous sections as givens and as mostly black boxes, in the sense that the old ideas are brought up to justify new claims without ever invoking any of the particular evidence for or structure of the old idea, it's just an opaque formula. The sense is of someone trying to build a tower, straight up. The fact that this particular tower is really a wobbly pile of blocks, with many of the higher up ones actually resting on the builder's arm and not really on the previous ones at all, is almost irrelevant: this is not how good reasoning works! There is no broad consideration of the available evidence, no demonstration of why the things we've seen imply the specific things Bostrom suggests, no serious engagement with alternative explanations/predictions, no cycling between big-picture overviews and in-detail analyses. There is just a stack of vague plausibilities and vague conceptual frameworks to accommodate them. A compelling presentation is a lot more like clearing away fog to note some rocky formations, then pulling back a bit to see they're all connected, then zooming back in to clear away the connected areas, and so on and so forth until a broad mountain is revealed.

This is not to say that the outcome Bostrom fears is impossible. Even though I think many of the specific things he thinks are plausible are actually much less so than he asserts, I do think a kind of very powerful "unfriendly" AI is a possibility that should be considered by those in a position to really understand the problem and take action against it if it turns out to be a real one. The problem with Bostrom's presentation is that it doesn't tell us anything useful: We have no reason to suspect that the particular kinds of issues he proposes are the ones that will matter, that the particular characteristics he ascribes to future AI are ones that will be salient, indeed that this problem is likely enough, near enough, and tractable enough to be worth spending significant resources on at all at the moment! Nothing Bostrom is saying compellingly privileges his particular predictions over many many possible others, even if you take as a given that extraordinarily powerful AI is possible and its behavior hard to predict. I continually got the sense (sometimes explicitly echoed by Bostrom himself!) that you could substitute in huge worlds of incompatible particulars for the ones he proposed and still make the same claims. So why should I expect anything particular he proposes to be worthwhile?

Edit: After chatting about this a bit with some friends, I should add one caveat to this review. This is praising with bold damnation if ever there were such a thing, but this book has made me more likely to engage with AI as an existential risk by being such a clear example of what had driven me away up until now. Now that I can see the essence of what's wrong with the bad approaches I've seen, I'll be better able to seek out the good ones (and, as I said, I do think the problem is worth serious investigation). So, I guess ultimately Bostrom succeeded at his goal in my case?

But y'know, I know Yudkowsky's stuff. It's really poo poo. And Bostrom has regurgitated it. I'm glad you got excited by it, but it's by a philosopher with no technical knowledge of any of this stuff, repackaging what Yudkowsky told him. I am reasonably confident in stating there is no "there" there.

I must also note that this particular section from shlevy is also an excellent description of Yudkowsky's reasoning method in the LessWrong Sequences, so Bostrom picked up the style too:

quote:

In the next section, he takes all of the ideas introduced in the previous sections as givens and as mostly black boxes, in the sense that the old ideas are brought up to justify new claims without ever invoking any of the particular evidence for or structure of the old idea, it's just an opaque formula. The sense is of someone trying to build a tower, straight up. The fact that this particular tower is really a wobbly pile of blocks, with many of the higher up ones actually resting on the builder's arm and not really on the previous ones at all, is almost irrelevant: this is not how good reasoning works!

It's scary campfire stories for philosophers, all the way down.

divabot fucked around with this message at 14:36 on Nov 19, 2017

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.

Curvature of Earth posted:


There's a distinct line of thought within the rationalist/dark enlightenment movements that autistic people can't help but be misogynist shitbags, and therefore they should be given a free pass on this.

e:f,b

Yeah, it’s a pretty transparent ploy to use progressive-sounding politics like inclusion and tolerance as a smokescreen for the exact opposite. It makes sense why people fall for it, I guess, but it’s such a misrepresentation of every person with mild Autism/Aspergers I’ve met.

Hell, I had one guy on the spectrum become one of the best drat watch leaders I’ve seen - he led a group of 10 people he’d never met before and ran it like clockwork, both soft skills and hard skills. Just because a person can’t instinctually recognise the cues doesn’t mean they can’t learn, if someone can break it down for them into a more intellectual and systematised understanding, and they want to learn.

divabot
Jun 17, 2015

A polite little mouse!
Just to absolutely bludgeon the pureed horse remains into a thin film of horse cells on the asphalt, I found the text of parts 2 and 3 of su3su2u1's review, and some notes he posted afterwards.

tl;dr Bostrom's argument becomes argument from ignorance, AI-of-the-gaps: you can't proooooove it isn't the huge problem he says it is, so you should totally buy the MIRI line. He pushes this sort of Pascalian argument - which you might recognise as a key part of Roko's basilisk - in real life at Effective Altruism conferences, that it's totally the most effective altruism to give money to friendly AI research, i.e. Yudkowsky, rather than tawdry temporal goods like mosquito nets:

quote:

Even if we give this 10^54 estimate "a mere 1% chance of being correct," Bostrom writes, "we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives."
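
The arithmetic in that quote does check out, which is exactly the problem - pick a big enough made-up number and any probability you like still leaves you with a moral obligation the size of a galaxy. A quick back-of-envelope sketch (mine, using only the figures that appear in the quote):

code:

# Back-of-envelope check of the expected-value claim quoted above, using only
# the numbers from the quote itself.
potential_future_lives = 10 ** 54        # the 10^54 estimate from the quote
chance_estimate_correct = 0.01           # "a mere 1% chance of being correct"
risk_reduction = 1e-9 * 1e-9 * 1e-2      # a billionth of a billionth of one percentage point

expected_lives = potential_future_lives * chance_estimate_correct * risk_reduction
benchmark = 1e11 * 1e9                   # "a hundred billion times ... a billion human lives"

print(f"expected lives saved: {expected_lives:.0e}")   # ~1e+32
print(f"benchmark:            {benchmark:.0e}")        # 1e+20
print(expected_lives > benchmark)                      # True, by twelve orders of magnitude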

I have skimmed Superintelligence myself, but I can't say I've read it closely. I'm gonna have to, though, 'cos three months ago I foolishly said this line of BS and the subculture surrounding it would be my next book (and I haven't written a word, of course).

anyway, to the great walls of text! Hopefully the XenForo version of SA will include the collapse function.

quote:

Superintelligence Review Part 2: AI-risk of the Gaps

There is a theological position called “God-of-the-Gaps,” a type of argument from ignorance, where you push God’s interventions increasingly into the gaps in scientific knowledge. God was involved in keeping the solar system stable until we learned more about stability and chaos in dynamical systems. Maybe now God intervenes in brains, or in abiogenesis, etc. Wherever science has the greatest uncertainty, you can stick in God, because, hey, it hasn’t been ruled out.

Bostrom’s book is basically “AI-risk of the Gaps.” In the handful of cases where explicit pathways forward are clear, Bostrom concedes that there may be less risk of a “fast takeoff” - this is especially true when he discusses simulated evolutionary environments for intelligence (he suggests it would take more than a century of Moore’s law doublings to arrive there), and when he discusses biological and mechanical augmentation of humans.

With whole brain emulation, Bostrom tells us we’ll be able to see the advances in technologies like high-throughput, high-resolution microscopy. We’ll see this on the horizon, etc.

The real danger, Bostrom tells us, is machine AI. But the reason it seems dangerous is that it’s much less defined. Maybe machine AI will take a dedicated research team making slow, incremental progress. OR, maybe some hacker in his basement will invent AI tomorrow and kill us all (because, hey, it hasn’t been ruled out.) The process is uncertain, so we are free to imagine whatever scenario we want, including very dangerous ones.

A superintelligence, we are told, might be so much smarter than humans that the relationship won’t be like Einstein vs. the rest of us; rather, it will relate to us the way we relate to plants. How does Bostrom know? He doesn’t - it’s uncertain, so he can put as much risk as he wants there because, hey, it hasn’t been ruled out.

And it’s telling that he makes zero attempts to qualify how intelligent a “super intelligence” might be. In a several hundred page book about how powerful a computer system might be, computational complexity is mentioned ZERO times. The fundamental unpredictability of chaotic systems? Zero times. Areas like these where we have solid mathematical results to help us try to figure out what problems might be tractable vs. what problems aren’t are totally ignored. Instead it is imagined that a superintelligence is a wizard with unlimited power to do whatever it wants (because, hey, it hasn’t been ruled out.)

The problem is that this isn’t grounded in any realistic conception of an actual system (occasionally, he makes handwaving mentions of literally uncomputable ideal Bayesian agents). No one knows how such a strong AI might work, so you can pile on any assumption you want about the risks.

Will a take off be fast or slow? Well, with machine AI it could be fast, because hey it hasn’t been ruled out.

These assumptions often seem quite wild, but hey, it hasn’t been ruled out. As one example, Bostrom suggests we could build a machine with the goal of building 100 paper clips. But, because it’s an ideal Bayesian, it will never give 100% probability to the hypothesis “I’ve built 100 paper clips.” So it might use its mighty power to turn the Earth into a giant counting device that repeatedly counts the same 100 paperclips over and over. But why stop at the Earth? It could take over a region expanding at the speed of light to turn as much of the universe as possible into a machine that counts the same 100 paper clips over and over.

It seems ludicrous to me that a system that can’t tell how many paperclips it has previously manufactured will somehow be able to accomplish other tasks. Such an agent will put non-zero weight on the probability that any conceivable action will prevent it from building any paper clips at all. Faced with even a tiny chance of failing at its goal, wouldn’t it make no moves at all? It seems ludicrous to imagine we’ve cracked the problem of human-level intelligence but the resulting agents can’t count paperclips. But hey, it hasn’t been ruled out.

And that is why reviewers say the argument seems like a lot of suppositions, caveats, maybes, all piled on top of each other in an unsatisfactory way. Bostrom is finding those areas where we don’t really know how these things will work, and he is conjecturing about the worst possible outcomes. When he talks about how to control these things, he can shoot down any control scheme the reader can think of, because he is free to reimagine how the AI might work to get around such a control scheme, etc. It’s all just supposition built in the gaps in our knowledge.

It’s important for me to be clear about my position here- I’m not saying that smarter than human systems can never be built. On the contrary, it seems very probable that they can be. However, there are obvious constraints on how intelligent a superintelligence might be. Computational complexity limits our ability to solve even well defined, well understood problems. Chaos limits our ability to predict the future even when we have a complete understanding of a physical system. Well known mathematical results in fields like optimization and linear programming put hard constraints on how effectively a system can be optimized along multiple axes.

A serious attempt to understand the risks and rewards of superintelligent AI needs to grapple with these limits, instead of finding the gaps in our understanding and insisting those gaps contain existential risk.
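
Since the review keeps gesturing at the chaos point, here's how cheap it is to demonstrate - a toy sketch of mine, not su3su2u1's. The logistic map is a one-line system we understand perfectly, and a rounding-error mistake about the starting state still destroys long-range prediction no matter how much intelligence you throw at it:

code:

# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
# The "model" here is exact; only the knowledge of the initial state is off,
# by one part in a trillion.
r = 4.0
x_true = 0.3
x_pred = 0.3 + 1e-12

for step in range(1, 61):
    x_true = r * x_true * (1 - x_true)
    x_pred = r * x_pred * (1 - x_pred)
    if step % 20 == 0:
        print(f"step {step}: actual={x_true:.6f}  predicted={x_pred:.6f}")

# By around step 40-50 the prediction is uncorrelated with the actual
# trajectory, even with a perfect model. No amount of extra compute recovers
# information that was never in the measurement in the first place.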

quote:

Superintelligence Review Part 3: Maybe we were the real superintelligence all along

There are a lot of points I could keep making about this book, but I wanted to close on this, which is another probably common argument that I don’t see addressed particularly well in the book.

Bostrom’s definition of a general superintelligence (in so much as he even attempts to give one), is a general problem solver much better at multiple cognitive tasks than some human baseline. He picks a baseline of 2014 I think (or whatever year the book was published), and says “someday humans will be much smarter than this.”

But I want to pose a different question - what if we had picked the year 1905? Einstein’s miracle year. He proved the existence of atoms, developed special relativity and published the photoelectric paper for which he won his Nobel. All in one year. Widely regarded as one of the all-time smartest humans.

However, if it came down to a cognitive competition between me, in 2015, with all of my tools at my disposal (my laptop, my computers, etc.) vs. Einstein in 1905 with all of his tools, I would crush him. Most educated physicists would. Being able to simulate and solve differential equations numerically and to use computer algebra systems would give me an edge he simply couldn’t come back from, even in physics. Stockfish would allow me to crush the grandest of 1905’s grand masters. Wikipedia would give me an insurmountable edge in general knowledge, even if you removed all the future knowledge I shouldn’t have yet. This isn’t because I am smarter unaided than Einstein, it’s because I have a lot of technology.

Mankind is growing smarter at a pretty rapid pace. Moore’s law depends on it. Without the modern software we use to design chips, we couldn’t keep the improvements coming fast enough. We are already in the cycle of build a chip -> faster tool -> better chip -> faster tool. We are already in a takeoff and it’s been gaining speed.

The claims department at work is staffed by superintelligences. Compared to the claims department from 5 years ago they are each handling 10x the case load, and their reserving suggestions and claim management practices are dramatically better (reserves are significantly more accurate, patient outcomes are significantly better, etc.). And this is especially incredible because it’s entirely the same people. New computers, more data, better methods.

The argument for AGI has to be strong enough to show that an AGI will be able to grow much faster than humans already are.

The most interesting chapter in Bostrom’s book was the chapter in which he summarizes Robin Hanson’s views on AGI/emulations and how they might transform society. I’m sure I could find nits to pick with Hanson’s work, but I think the important point he makes is that these technologies will be transformative to society.

And I think that is the real danger - not existential risk, but driving our society in sub-optimal directions because technology and our own intelligence are changing so rapidly.

Imagine a state government that takes years to revisit regulations trying to deal with our increasingly superintelligent rate of change. Trying to keep an insurance company from raising rates based on socioeconomic status is going to be hard when the insurer can infer your socioeconomic status by quickly reading all your social media postings. Or imagine trying to regulate the service industry when individual services can coordinate with customers so fast that unlicensed services can spring up, get paid, and vanish in a matter of days or minutes (Uber, AirBNB and the like).

Consider free markets in the abstract. Economists know that market failures (asymmetric information, etc.) exist, but the free market generally provides good solutions because it can be hard to consistently exploit them. But imagine being able to comb through people’s search histories and the like specifically for low-information buyers to hawk your product to. Things like Google AdWords are only going to get better and better.

quote:

I bought and read Superintelligence because I was told Bostrom was a more academic writer who was making the definitive case for MIRI-type research.

What I expected was a serious, positive argument for why a super intelligence would be dangerous. A philosopher wrote this book, so I expected serious attempts to define super intelligence and to work within some sort of formal system.

What I found instead was an argument that no one could yet prove a super intelligence would be safe. This seems like a classic argument from ignorance fallacy. Whenever concrete technical details are available (whole brain emulation, cognitive enhancement), Bostrom admits the case for danger seems less severe. So Bostrom pushes the risk to machine AI, where the situation is more uncertain. Worse, Bostrom doesn’t try to push back on that uncertainty with the little bits of technical information we have (chaos theory limits predictive power, as does computational complexity); instead he revels in it. A takeover situation, borrowed from Yudkowsky, involves an AI “solving molecular nanotechnology” - something many scientists think is an impossible technology. There might well be a case that super intelligences are dangerous, but this is not the way to make the argument.

1/5 stars.

quote:

I guess the big thing is that I thought the book was trying to posit a positive argument that super intelligences could be dangerous.

Instead, it’s more of a negative argument/argument from ignorance- “you can’t prove these things aren’t dangerous.”

I’m not suggesting Bostrom isn’t aware of it, but I don’t think you can really develop a convincing argument from ignorance.

(Also, of course, there are those sorts of avoidable mistakes in the early chapters describing the current state of AI/CS)
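
And since the "100 paper clips" example quoted above leans so hard on the agent never being certain: that part is trivially true of any Bayesian update with imperfect sensors, and trivially harmless. A toy sketch of mine, in exact rational arithmetic so there's no floating-point fudge:

code:

# Posterior probability that the 100 paper clips really exist, after 50
# consecutive "yes, I count 100" readings from a sensor that's right 99% of
# the time. Exact arithmetic, so "never reaches 1" is literal.
from fractions import Fraction

p = Fraction(1, 2)                 # prior: 50/50 that the paper clips exist
accuracy = Fraction(99, 100)       # sensor reliability

for _ in range(50):
    # Bayes' rule for one more confirming observation
    p = (accuracy * p) / (accuracy * p + (1 - accuracy) * (1 - p))

print(p == 1)                      # False: certainty is approached, never reached
print(float(1 - p))                # residual doubt, roughly 1.7e-100

Getting from "the posterior is 1.7e-100 short of certainty" to "so it converts the light cone into a paperclip recount" is the step that never actually gets argued.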

Generic Monk
Oct 31, 2011

Curvature of Earth posted:

Here:
PYF Dark Enlightenment Thinker https://forums.somethingawful.com/showthread.php?threadid=3653939

Let's Read Harry Potter and the Methods of Rationality
https://forums.somethingawful.com/showthread.php?threadid=3702281

The old (now locked) rationalist movement thread
https://forums.somethingawful.com/showthread.php?threadid=3627012


There's a distinct line of thought within the rationalist/dark enlightenment movements that autistic people can't help but be misogynist shitbags, and therefore they should be given a free pass on this.

e:f,b

i remember thinking about something along the lines of the timeless decision theory/roko's basilisk thing as a teenager and then catching myself daydreaming about something completely irrelevant and unproductive. pleasant to know there's a reservoir of people who have obsessed over it to the point of borderline mental illness

Dmitri-9
Nov 30, 2004

There's something really sexy about Scrooge McDuck. I love Uncle Scrooge.
Hate. Let me tell you how much I've come to hate the paperclip maximizer. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word 'hate' was engraved on each nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for the concept of the paperclip maximizer because it is literally just the grey goo problem recycled. Hate. Hate.

Generic Monk
Oct 31, 2011

loving christ is the 'donate to me or there's an infinitesimally small probability that in the future a perfect simulation of you will get tortured' spiel at all related to musk telling everyone that we're probably all simulations a while back?

AndreTheGiantBoned
Oct 28, 2010

divabot posted:

Just to absolutely bludgeon the pureed horse remains into a thin film of horse cells on the asphalt, I found the text of parts 2 and 3 of su3su2u1's review, and some notes he posted afterwards.

tl;dr Bostrom's argument becomes argument from ignorance, AI-of-the-gaps: you can't proooooove it isn't the huge problem he says it is, so you should totally buy the MIRI line. He pushes this sort of Pascalian argument - which you might recognise as a key part of Roko's basilisk - in real life at Effective Altruism conferences, that it's totally the most effective altruism to give money to friendly AI research, i.e. Yudkowsky, rather than tawdry temporal goods like mosquito nets:


I have skimmed Superintelligence myself, but I can't say I've read it closely. I'm gonna have to, though, 'cos three months ago I foolishly said this line of BS and the subculture surrounding it would be my next book (and I haven't written a word, of course).

anyway, to the great walls of text! Hopefully the XenForo version of SA will include the collapse function.

Ok, thanks for the reviews!
It is true that he argues a bit in a technical vacuum - i.e. his considerations are not grounded in the concepts or insights that permeate the A.I. research field. I don't know much about the current state of A.I., so maybe I am easier to impress with very abstract considerations.
I still find it important that someone wrote a book trying to define what a superintelligent A.I. could be or do, even if it is quite flawed.

Bar Ran Dun
Jan 22, 2006




The paperclip maximizer is stupid as an abstract hypothetical because there are already existing concrete examples of the case. We already have systems that maximize for a particular output. They're called businesses. We are already wrestling with the question of the necessary eventual constraints on their growth as they maximize for shareholder return.

Arsenic Lupin
Apr 12, 2012

This particularly rapid💨 unintelligible 😖patter💁 isn't generally heard🧏‍♂️, and if it is🤔, it doesn't matter💁.


http://twitter.com/mateosfo/status/931981951975677952

:same:

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

divabot posted:

A few examples I recall off hand: Bostrom says genetic algorithms are stochastic hill climbers. This isn’t true - the whole point of genetic algorithms is breeding/crossover in order to avoid getting stuck in local optima (like hill climbers do). If they really were just hill climbers it wouldn’t be worth the work to recast a problem as a genetic algorithm; stochastic hill climbers are easy to write.


Genetic algorithms are 100% hill climber algorithms, and people only say they aren't because they take it as an insult to treat them as just normal algorithms instead of talking about them in mystic witchcraft biology terms. Genetic algorithms are 100% "what if we just ran a bunch of simulated annealing programs at once on a thing with too many variables to do it without a heuristic guiding it" and nothing more.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo
The unabomber was right.

Defenestration
Aug 10, 2006

"It wasn't my fault that my first unconscious thought turned out to be-"
"Jesus, kid, what?"
"That something smelled delicious!"


Grimey Drawer
Apologies if you've covered this but

https://twitter.com/LisaMcIntire/status/932298481686818816


suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Every aspect of this is funny.
