|
Holy geez there's really sixty plus pages of people trying to explain that one bad post about metagaming play-pretend future cyber-extortion to each other here, that's awful, someone explain the cryonics thing to me and why these guys think they're going to freeze their brains and omnipotent future people with apparently nothing better to do will preserve them throughout infinity like the precious timeless jewels that they are.
|
# ¿ Jan 4, 2015 07:02 |
|
bartlebyshop posted:It has to be true that cryonics works otherwise death is inescapable and that's just too spooky.

I mean, I figured that; I'm just curious as to the elaborate future timeline wherein your great-great-great-grandson from one generation short of the era where they finally do work out the technology to unfuck frozen dead brains from olden tymes is going to pay to keep your dead rear end on ice instead of selling it to Purina for puppy chow. I don't even keep my old VHS tapes. Like is it one of those things where the millennium is always one year off and cyber-Christ's kingdom of heaven is prophesied to arrive in our lifetimes, or do they actually have a hosed-up speculative timeline for all this nonsense like the Kurzweil guys?
|
# ¿ Jan 4, 2015 07:13 |
|
I predict that in the next 20 years someone is going to invent an absolutely incredible Jetsons remake about our eternal monarchist holodeck bitcoin future where Zynga has cured death.
|
# ¿ Jan 4, 2015 07:27 |
|
OK being indolent hyperrich and using all your money to turn yourself into an eternal ice mummy I can understand, they're just bitching out on having their wives and servants cryogenically preserved alive alongside them and that's fine, for losers.
|
# ¿ Jan 4, 2015 07:47 |
|
I bet you're paying for your army of terra cotta warriors with welfare money too, aren't you?
|
# ¿ Jan 4, 2015 09:04 |
|
Sham bam bamina! posted:A brain that goes days without oxygen and is then frozen, fracturing its tissues, has not sustained irreversible damage.

Yeah but see when we invent the nanites we'll be able to inject them into rotten-rear end dead tissue and they'll just get to work with tiny screwdrivers and tubes of krazy glue like in Osmosis Jones
|
# ¿ Jan 5, 2015 06:12 |
|
bartlebyshop posted:More the former than the latter. A lot of them think strong/Godlike AI is happening in the next century so you wouldn't have to hope your 10xth grandkid cares about you. If modern medicine advances enough that they all live to 150 then their kid would of course want to resurrect daddy. Undoubtedly they also have some tortured logic where future generations, being more perfect utilitarians, will want to resurrect the frozen dead to increase overall utilons.

hey I just got around to reading this and it was pretty good, thanks
|
# ¿ Jan 5, 2015 12:16 |
|
Germstore posted:If there isn't some really huge breakthrough we have at best another 1000x increase in transistor density left. Where do they think the processing power for god AIs is going to come from?

No you see, transistors don't matter, technology finds a way to grow exponentially, the arc of history bends towards infinity technology. As every techno-historian knows, past performance is the guarantee of future returns. Also innovation works a lot like research points in 4X games.

A Wizard of Goatse fucked around with this message at 14:09 on Jan 5, 2015 |
# ¿ Jan 5, 2015 14:02 |
|
Crewmine posted:i'm glad that the hive brain of humanity will be ready by 2040, I was a little concerned that every robot in existence is an actual retard but this graph has renewed my faith in scientific endeavour

Allow me to enlighten you with a quote by some 1940s guy saying that some day we may hope to have computers a mere tenth the size of ENIAC; this proves that the part of prognosticating about the future that makes you look dumb to future generations is the part where you are not wildly, baselessly optimistic about very specific fantasy scenarios.

Back for more later but I've got to take my flying car to the food pill factory on Venus
|
# ¿ Jan 5, 2015 14:41 |
|
I might like this one a little better, where Kurzweil compensates for the tiny tiny sampling range of the era where computers have existed at all by showing the clear historical trend from the Cambrian era to the telephone... and beyond
# ¿ Jan 5, 2015 16:52 |
|
Dzhay posted:Looks more like an exponential curve on a log chart, which is even "better".

The architecture being fundamentally totally different is probably gonna be a pretty significant barrier to making a machine 'think like a human', though, you can get a more or less functional purpose-built mimic like Siri but you're not going to emulate adrenalgland.exe on transistors much more effectively than you're going to get a software-based engine simulation that'll drive you to Walgreens. Your endocrine system is not reducible to bits, nor is much of the rest of you, far as we can tell, save for isolated neurons.

The singularitarian/transhumanist stuff relies totally on the belief that there is no such thing as a difference of quality, only quantity, so if you only dream up enough logic gates you will have a person, a civilization, God, etc.
# ¿ Jan 5, 2015 17:01 |
|
Applewhite posted:What's even the point of building a human-like AI anyway? Isn't the whole point of robots is that they're too stupid to care that they're trapped doing boring, repetitive work all the time? It seems like building a human AI is just an overly complicated way of accomplishing exactly the same thing humans do every time we reproduce.

So nerds can fill the void left by the absence of God and, ideally, download their souls and live forever in His all-knowing benevolent embrace in technoheaven.
|
# ¿ Jan 5, 2015 17:06 |
|
Germstore posted:We could produce the silicon today if we knew what configuration to create it in, but that's like saying we could dump a bunch of neurons in a protein soup and get intelligence. The only process that we know of to create intelligence took almost 4 billion years (or an omnipotent force, pbuh).

We have developed intelligent artificial lifeforms, and it took a shitload less time than billions of years; we were doing it just fine on a platform we know with absolute certainty can support "human-like intelligence" before we even knew what electricity was, because this reinventing-the-wheel bleepy bloopy robot poo poo is for chumps who aren't interested in useful innovation and only really care about becoming immortal data.
# ¿ Jan 5, 2015 17:17 |
|
Nolanar posted:It looks to me like the singularity hit when we got writing and the wheel simultaneously a few thousand years ago in a single event. I wonder how the chart would look if we split those multi-event events into their component data points.

Where is, like, metallurgy? Y'know, the thing we named half our technological eras after? How is 'art' a single definable point, and is the Chauvet cave his idea of an early city? So many questions; only the cybertronic superbrain could fathom the ways of singularitarian science.
# ¿ Jan 5, 2015 22:16 |
|
Jimson posted:Me and my buddy used to have these talks back and forth with each other, where I would make some comment about how I was gonna break his nose, or some poo poo, and he always responded with. And what would I be doing that whole time? More or less, When your sitting there trying to punch me in the face, whats to stop me from just hitting you or knocking you around.

Yudkowsky's got this whole AI-in-a-box "thought experiment" that basically proposes that even under totally controlled conditions a godputer can always hypnotize a human with sophistry to get whatever it wants, because they're just so very very smart. He claims to have proven this himself, via roleplaying the AI in chatrooms with one of his true believers roleplaying the computer technician trying to keep Skynet on a leash. He will never, ever release the chatlogs, because he hates fun.
|
# ¿ Jan 6, 2015 08:15 |
|
Sham bam bamina! posted:You forgot the part where he lost repeatedly to people outside his weird fancult and had to stop offering the challenge.

lmao I missed that, when'd it happen, linku. Was he still just trying his whole schtick of 'the AI promises to burn you in effigy a squintillion times and then in a Twilight Zone reversal you realize that it was the AI keeping you in the box all along oh noooooo'?
|
# ¿ Jan 6, 2015 10:25 |
|
quote:I didn't like the person I turned into when I started to lose.

Oh poo poo you done gone and unleashed the Beast
|
# ¿ Jan 6, 2015 10:59 |
|
Namarrgon posted:I don't think I understand the challenge. So you sit in a private chat with Yud. He roleplays an AI and you roleplay a random schmuck that knows they shouldn't let the AI out? This seems trivial so I'm getting unsure if I'm missing something.

You just aren't brilliant and rational enough a Bayesian to be fast-talked into thinking you're actually imaginary and the roleplay is real
# ¿ Jan 6, 2015 12:55 |
|
Dr Cheeto posted:Some people argue that faster-than-light spacecraft will not be constructed within the next few decades, but let me talk about what color upholstery we should put on those bad boys.

P.S. none of us are physicists and my interior design background consists of spraypainting my recliner so it looks like it was dipped in gold
|
# ¿ Jan 17, 2015 22:06 |
|
So, basically, everything's reducible to a single number representing all its, like, ability points, and if you make the numbers fight the bigger number will always win. Since AIs have the biggest, most made-up numbers, they can do anything!
|
# ¿ Jan 21, 2015 20:22 |
|
Toph Bei Fong posted:If the computer is running a simulation of a "better" universe then... that universe will replace this universe? the computers running inside the simulation will somehow be able to transcend the laws of physics in this one while running inside the simulation?

What if the whole universe is, like, an electron in an atom inside a really heavily modded build of SimCity, man
|
# ¿ Jan 22, 2015 06:52 |
|
SolTerrasa posted:I'm so misunderstood~

It's still not too late to trap him in a room with a sixth-grade bully and give him some lessons in perspective
# ¿ Jan 23, 2015 17:22 |
|
Pavlov posted:See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him.

Guy isn't a hypocrite or a huckster or an idiot, he's just a third-rate theologian whose proof of the existence of computergod amounts to this: we had abacuses in 1900 and now we have iPhones; therefore in another hundred years or less everything we know about physics or observed reality will be so much tribal superstition and we'll have pulled off a Civilization tech win where all your dreams will, naturally, come true.

It's not inconsistent, it's just unsupported and insupportable, because it relies on a sneering equivalence between any observations available to modern man and the witch doctor blaming evil spirits for making the harvest fail, and on an interpretation of Bayes' theorem very close to "if you can phrase an argument in the form of a sufficiently large made-up number then it must be true."
|
# ¿ Jan 28, 2015 03:48 |
|
Dr Cheeto posted:Does he ever even update his priors? Like, for all the sloppy blowjobs he gives Baye's he seems to be pretty bad at taking advantage of its greatest strength.

He's not like out-and-out hostile to the scientific method, it's more that whenever the science doesn't support his fantasies he takes the lofty position of the man from the year 40,000 softly chuckling to himself about how those savages used to believe the world worked. See also: computer simulation that effectively creates another, larger universe in order to get extra processing power; cryonics; how the human brain works; how AIs might work; Moore's Law as a more absolute law of physics than the mere properties of electrons.

He doesn't feel the need to actually support his extremely specific and wrong claims because history will inevitably vindicate him and prove everyone else wrong without further effort on his part; being wrong and an idiot to the 21st century is no biggie because if folks from the 21st century are so smart why aren't they god robots
# ¿ Jan 28, 2015 20:57 |
|
I'm glad that we've finally gotten to the real juicy dirt on this guy, namely that he programs badly (?)
|
# ¿ Jan 29, 2015 03:38 |
|
Lottery of Babylon posted:No, no, you see, the alt text obviously means we should make fun of Roko and those rationalwiki meanies who were foolish enough to create and spread the dangerous basilisk, not the poor innocent Lesswrongers they terrorized.

Holy poo poo lmao
|
# ¿ Jan 29, 2015 23:31 |
|
Imagine being the other guy in the "wow you think that accomplished mathematician is a genius... do you think I'm a genius too?" conversation, just knowing whatever response you give is going to be the basis for a thousand-word blog entry
|
# ¿ Jan 31, 2015 05:35 |
|
Why did you think anyone would want to read all that
|
# ¿ Feb 2, 2015 07:13 |
|
corn in the bible posted:Mostly Yudkowsky seems like every freshman Comp Sci major I've ever seen.

p much, except most of the compsci majors grow up eventually
|
# ¿ Feb 2, 2015 07:56 |
|
LemonDrizzle posted:The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated."

Yeah he seems just sort of oblivious to the idea that respected scientific ideas get tested more than once, and wants to invent controlling for variables and peer review as his own personal Grand Theory of Thinkology complete with terrible 'cool'-sounding catchphrase.
|
# ¿ Feb 4, 2015 01:00 |
|
su3su2u1 posted:I'm very drunk, but had an insight...

This implies ever promising a real deliverable in the first place; so long as the terminators aren't stomping on your bones, MIRI's doin' its job
|
# ¿ Feb 5, 2015 06:03 |
|
Most of the people ITT read it and you care enough about it to point out when it's updated, so I'll bet you do too; stop pretending you're cooler than your friends
|
# ¿ Feb 19, 2015 23:06 |
|
Namarrgon posted:Yeah, but, after the first one launches, it doesn't really matter what was promised, you still don't really have a reason to launch anyway except spite.

A lot of MAD involved deliberately cultivating a culture of systemic pointless spite (or its bureaucratic equivalent, cutting informed human choice out of the loop as much as possible just in case someone might grow a conscience at the last minute), because if the enemy calculated that your side wouldn't be evil enough to reflexively launch the counterattack then an overwhelming first strike becomes much more attractive and low-risk than waiting around for you to come to the same conclusion about them. If it's really a bluff they might recognize and call the bluff: you fail to kill them, they win. The only way for MAD to work is to be absolutely committed to retaliation from the outset, much like the best way for the AI-box scenario to work is to not give a poo poo about stupid simulated-self hypotheticals from the outset.
# ¿ Feb 26, 2015 23:09 |
|
So what the gently caress is the point of this thing, I stopped reading a few links in but is it just the world's most longwinded way for the authors to jerk off or what

yeah that was the point where I bailed
|
# ¿ Feb 27, 2015 07:18 |
|
A Man With A Plan posted:Oh hey a few more posts in the LessWrong thread!

I think the idea Yudkowsky has (although I might be getting this from all the other kurzweilian apocalypse cultists who're constantly cribbing off each other) is that if you're a good boy then one day the godputer will torrent an .iso of your mind and your digisoul will live forever in machine heaven as part of it. Whether that means you can then go forth and play god in the real world a la Joseph Smith or it's just simulated whores forever for your emulation is up to your own personal stroke fantasy because generally by that point the AI's power level has gone over 9000 and it's consumed the entirety of the universe's resources and moved on to creating other, bigger universes via technomagic. Presumably any 'normal people' whose definition of 'normal' involves a physical body or living brain have been rendered down for their raw materials by this point.
# ¿ Feb 27, 2015 07:36 |
|
Sham bam bamina! posted:
Which is a pretty difficult and useful thing to be able to do in an exhaustive or persuasive manner, yes, and one that all the people making noise about their brilliant ideas that the whole stupid world is too stupid to appreciate fail at
|
# ¿ Feb 27, 2015 19:28 |
|
you don't post it in this thread where people spent pages bitching about a man's coding chops, you talk about the funny thing but you don't post it

quote:This essay is an outré, madness, a tragic, cruel fantasy, an eruption of inner rage, on how the oppressed desperately dream of being the oppressor.
|
# ¿ Feb 27, 2015 19:43 |
|
Friendly Tumour posted:they're the exact reason why transhumanism will never become a force of change in the world and i hate hate hate hate

please do elaborate on how transhumanism could become a force of change in the world
|
# ¿ Mar 1, 2015 01:35 |
|
bartlebyshop posted:Transhumanism is like libertarianism in that it could probably get a lot farther in the world if all its fans just stopped talking about it and embarrassing themselves.

I'm pretty sure the libertarians could at least succeed in creating a mercenary oligarchy; the basic short-term goals of libertarianism are practically achievable, just nobody in their right mind wants to.

Friendly Tumour posted:this is gbs not dnd
|
# ¿ Mar 1, 2015 01:38 |
|
transhumanism is libertarianism if laissez-faire economics meant everyone's a nine-dicked cyberdragon and no loving duh
|
# ¿ Mar 1, 2015 03:34 |