|
SolTerrasa posted:No, I used monkeypatching. Their approach reduces to "can I prove that if I cooperate, the other agent will cooperate?" So FairBot examines the memory of the other bot, then patches in guaranteed cooperation into all those instances, then checks if the other bot would cooperate, then cooperates if it does. Pretty boring, but it works, and I cannot fathom why you'd try it their way instead. So it's a typical MIRI paper: they give up on the practical problem as impossible in their introduction. Instead, they have roughly 20 pages of a really silly formal system, with math showing trivial things (if you write most of their theorems out in English words, they seem trivial). Their formalism is of zero practical importance, and does more to obscure than enlighten. Meanwhile, the practical problem can be solved in a fairly straightforward fashion if you are a bit clever about it. It's like that 100-page decision theory paper Yud wrote: 100 pages of totally unnecessary background, and he fails to formalize his decision theory.
|
# ? Nov 22, 2014 06:33 |
|
|
# ? May 4, 2024 09:44 |
|
Keep building the bamboo runways until the cargo planes come, MIRI.
|
# ? Nov 22, 2014 07:08 |
|
SolTerrasa posted:Pretty boring, but it works and I cannot fathom why you'd try it their way instead. With wooden rifles, wooden planes, we all wait for John Frum. e: f;b
|
# ? Nov 22, 2014 07:53 |
|
During a recent Wikipedia Wander, I came across some surprising information about a very simple modification you can do to your body that on average confers an extra thirteen years of life: become a eunuch. So, presumably, Mr. Yud and all his death-fearing followers have already elected to undergo this procedure, since it would certainly be the rational thing to do. Or is he still intent on lording it over us puny-brained mortals with his more highly evolved poly relationships?
|
# ? Nov 22, 2014 08:44 |
|
ol qwerty bastard posted:During a recent Wikipedia Wander, I came across some surprising information about a very simple modification you can do to your body that on average confers an extra thirteen years of life: become a eunuch.
|
# ? Nov 22, 2014 12:33 |
|
BobHoward posted:Some fun-hating reddit mod went nuclear on that subthread. I take it the Big Yud had a little meltdown there? Here you go http://i.imgur.com/uvJkRoT.jpg
|
# ? Nov 22, 2014 14:38 |
So do you think this is motivated by terror of the Basilisk concept or by his increasing irritation at this ridiculous thing being used to poke fun at his Harry Potter/sophomore blogging audience?
|
|
# ? Nov 22, 2014 16:11 |
|
Nessus posted:So do you think this is motivated by terror of the Basilisk concept or by his increasing irritation at this ridiculous thing being used to poke fun at his Harry Potter/sophomore blogging audience? Can't it be both?
|
# ? Nov 22, 2014 16:22 |
|
Nessus posted:So do you think this is motivated by terror of the Basilisk concept or by his increasing irritation at this ridiculous thing being used to poke fun at his Harry Potter/sophomore blogging audience? I think he's probably still a little scared of the basilisk, but mostly I think he's just afraid Less Wrong will never live this down, which it won't, ever.
|
# ? Nov 22, 2014 16:47 |
|
Lesswrong would probably find it way easier to live it down if Eliezer didn't have a hilarious meltdown every time someone on the internet mentioned it.
|
# ? Nov 22, 2014 17:07 |
|
How do you type the sentence "let me post this handy histogram of contributors to the RationalWiki article" without recoiling in horror at what you have become
|
# ? Nov 22, 2014 21:03 |
|
Lottery of Babylon posted:How do you type the sentence "let me post this handy histogram of contributors to the RationalWiki article" without recoiling in horror at what you have become
|
# ? Nov 22, 2014 21:27 |
|
I feel kind of bad for David Gerard. He wrote the definitive article about Roko's Basilisk not just to mock the idea, but to reassure frightened LessWrong members. He wrote some very even-handed articles about LessWrong and Yudkowsky themselves. Gerard is himself a member of LessWrong and is inclined to sympathize with them. And what does he get in return? Shrieking accusations of propaganda and sabotage.
|
# ? Nov 22, 2014 22:22 |
|
SubG posted:By being the particular kind of douchebag who has a histogram of the contributors to a RationalWiki article in the first place. He didn't have a histogram in the first place. He made it for that post with bash scripting.
|
# ? Nov 22, 2014 22:29 |
|
|
# ? Nov 24, 2014 02:22 |
|
Is the most recent Elementary Yud feed? Fake edit: it is 30% memes. Real edit: fake MIRI is the suspect in the murder of an AI engineer. Sherlock keeps saying "AI doesn't exist, at least in the way you're talking about." Who in this thread is writing for this show? Serious Cephalopod fucked around with this message at 07:22 on Nov 25, 2014 |
# ? Nov 25, 2014 07:12 |
|
Serious Cephalopod posted:fake MIRI is the suspect in the murder of an AI engineer. Sherlock keeps saying "AI doesn't exist, at least in the way you're talking about." Except then at the end of the episode it turns out that maybe it does? Also, the AI in that episode was really irritating to me; they go on about it passing the Turing test, but half the time it sounds about as convincing as ELIZA. And how did it always know when someone was talking to it and never respond when people talked to themselves or each other?
|
# ? Nov 25, 2014 07:46 |
|
Tiggum posted:Except then at the end of the episode it turns out that maybe it does? Also, the AI in that episode was really irritating to me; they go on about it passing the Turing test, but half the time it sounds about as convincing as ELIZA. And how did it always know when someone was talking to it and never respond when people talked to themselves or each other? At the end of the episode it felt to me that Sherlock coming up with an answer and the machine responding with "I don't understand..." was meant to distinguish his emotional issues from actual machine-level emotionlessness, which Sherlock thinks he admires. The machine was only in the presence of multiple people at the end of the episode, right? Could just be a directional-mic thing. Also, while dealing with typical TV writing makes it hard to tell, so far it looks to me like everyone who is responding to the computer like it's human is deluding themselves, but in a very human and easy way. It's hard to tell right next to a Turing test where the interviewer knows he's talking with a machine.
|
# ? Nov 25, 2014 08:00 |
|
http://www.overcomingbias.com/2014/12/ai-boom-bet-offer.html We've got Robin Hanson putting his money where his mouth is on AI FOOM. "Recently, lots of people have been saying 'this time is different', and predicting that we'll see a huge rise in jobs lost to automation, even though we've heard such warnings every few decades for centuries." He's willing to bet at 20:1 odds that the percentage of the US economy in the computer hardware/software sector won't rise above 5%, from its current position around 2%, before 2025. No takers, unsurprisingly, not even Big Yud. At Google on Thursday I heard Ray Kurzweil talk. It was at one of those confidential meetings, so no quotes and no video and no context, but one thing he said was to reaffirm his belief that we're on track for the singularity by 2030. He also seems to believe that that's the consensus opinion among AI people, which is... well, probably not a lie so much as an indication of what sorts of people choose to talk to him. I'm inclined to be generous to Kurzweil because he only seems to make predictions he'll be able to verify, which I admire. I wonder if someone could convince Hanson to extend his bet another five years and get Kurzweil to take him up on it. I wonder if Kurzweil knows about the crazy side of singularitarians.
|
# ? Dec 6, 2014 21:19 |
|
Kurzweil is his own crazy side of singularitarians, though.
|
# ? Dec 6, 2014 21:25 |
|
BobHoward posted:Kurzweil is his own crazy side of singularitarians, though. Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts?
|
# ? Dec 6, 2014 21:27 |
|
A singularity is a sign that your model doesn't apply past a certain point, not infinity arriving in real life
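A toy illustration of the point (my numbers, not the poster's): the growth model dy/dt = y² with y(0) = 1 has the exact solution y(t) = 1/(1 − t), which blows up at t = 1. The "singularity" lives in the model, not in anything the model describes; it just marks where the model stops being usable:

```python
# Exact solution of dy/dt = y^2 with y(0) = 1. The divergence at t = 1
# is a property of the equation, not of any real quantity it models.
def y(t):
    return 1.0 / (1.0 - t)

for t in [0.0, 0.5, 0.9, 0.99]:
    print(t, y(t))  # grows without bound as t approaches 1
```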
|
# ? Dec 6, 2014 21:31 |
|
SolTerrasa posted:Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts? He's literally a guy who pops hundreds of pills a day and injects other supplements directly into his bloodstream in some kind of self-designed program to extend his life long enough so that he can get his brain uploaded into a computer (a tech he's been predicting for a long time now) and thereby become immortal. It's become clear over the years that this is his religion. He has an extreme fear of death and he's too rational to go for supernatural religion, so he's desperately casting about for a technological afterlife. In other words he's so invested in the desire for a tech/biotech singularity to happen that he allows his desire to override his rationality - when Kurzweil talks about accelerating HW capabilities and falling costs and so forth, you always have to be aware that he may have fooled himself into putting a ridiculously rosy interpretation on things. He also likes to conflate biological evolution, cultural developments, and technological developments into a kind of inevitable march-of-progress that will result in Thing X by Year Y, where X is something related to being able to upload his brain or extend his life, and Y fits on the timeline to keep Kurzweil alive long enough to see that day.
|
# ? Dec 6, 2014 21:49 |
|
SolTerrasa posted:Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts? I don't know about nuts, but we could spend all night enumerating all the poo poo wrong with that.
|
# ? Dec 7, 2014 02:20 |
|
SubG posted:
You could make exactly the same graph in the 1700's, yet somehow there wasn't a singularity then. It's also completely trivial because you can make the graph look like whatever you want just by what you choose to be an "event", since "Time vs Time" is so easy to manipulate. Lottery of Babylon fucked around with this message at 03:04 on Dec 7, 2014 |
# ? Dec 7, 2014 02:34 |
|
Lottery of Babylon posted:You could make exactly the same graph in the 1700's, yet somehow there wasn't a singularity then. It's also completely trivial because you can make the graph look like whatever you want just by what you choose to be an "event", since "Time vs Time" is so easy to manipulate.
|
# ? Dec 7, 2014 02:40 |
Projecting that trend line forwards gives zero (or negative, lol) "time til next event"s - and by this graph that happened 20-30 years ago. Why are we waiting for the singularity if it happened in the early 90s?
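For what it's worth, the arithmetic checks out on made-up numbers. Suppose (hypothetically; these are not Kurzweil's actual data) each gap between "events" is half the previous one. A straight-line fit to the gaps crosses zero at a finite date, even though the event sequence itself never reaches it:

```python
# Hypothetical data: gaps between successive "events" halve each time.
gaps = [40, 20, 10, 5, 2.5]           # years until the next event
years = [1900, 1940, 1960, 1970, 1975]  # when each gap was observed

# Least-squares line gap = a*year + b, fitted by hand (no numpy needed).
n = len(years)
mean_x = sum(years) / n
mean_y = sum(gaps) / n
a = sum((x - mean_x) * (g - mean_y) for x, g in zip(years, gaps)) / \
    sum((x - mean_x) ** 2 for x in years)
b = mean_y - a * mean_x
zero_crossing = -b / a
print(round(zero_crossing))  # 1980: the fitted line says "no time left" here
```

With these numbers the gaps are exactly linear in the year, so the fit is perfect and the line hits zero in 1980, which is precisely the thread's complaint: the extrapolation announces a singularity at a date the underlying process only ever approaches.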
|
|
# ? Dec 7, 2014 03:55 |
|
SolTerrasa posted:Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts? Kurzweil gave a talk when I was attending undergrad, and I stayed after for a conversation he had with some professors. His talk was fine, but while interacting with the professors he went full-crackpot pretty fast. Uploading brains in 10 years (this was significantly more than 10 years ago), living forever, etc.
|
# ? Dec 7, 2014 04:32 |
|
SolTerrasa posted:At Google on Thursday I heard Ray Kurzweil talk. It was at one of those confidential meetings, so no quotes and no video and no context, but one thing he said was to reaffirm his belief that we're on track for the singularity by 2030. He also seems to believe that that's the consensus opinion among AI people, which is... Well, probably not a lie so much as an indication of what sorts of people choose to talk to him. I'm inclined to be generous to Kurzweil because he only seems to make predictions he'll be able to verify, which I admire. This sums up Kurzweil completely:
|
# ? Dec 7, 2014 04:44 |
|
BobHoward posted:He's literally a guy who pops hundreds of pills a day and injects other supplements directly into his bloodstream in some kind of self-designed program to extend his life long enough so that he can get his brain uploaded into a computer (a tech he's been predicting for a long time now) and thereby become immortal. It's become clear over the years that this is his religion. He has an extreme fear of death and he's too rational to go for supernatural religion, so he's desperately casting about for a technological afterlife. The pills and supplements thing doesn't sound too weird for someone wanting to min/max their health, if it's backed by medical information. If it's all "Well, medical science doesn't want to admit that X herb can prevent arterial plaque, but injecting it into your blood is a common practice among those enlightened enough to know about it" then I'm amazed he hasn't destroyed his liver yet. It's kind of like a weird hard-science version of what Bruce Lee did. Lee got hardcore into healthy eating and more or less lived his life by only eating what was essential for his body and to maintain his physical regimen. Kurzweil seems more like a guy who thinks some pills are all it takes to live at peak health. pentyne fucked around with this message at 13:10 on Dec 7, 2014 |
# ? Dec 7, 2014 13:07 |
|
pentyne posted:The pills and supplements thing doesn't sound too weird for someone wanting to min/max their health if its backed by medical information. If it's all "Well medical science doesn't want to admit that X herb can prevent arterial plaque but injecting it into your blood is a common practice among those enlightened enough to know about it" then I'm amazed he hasn't destroyed his liver yet. Someone dumped a list from one of his books here. http://www.reddit.com/r/skeptic/comments/1ypitt/ray_kurzweils_supplement_regimen/ It's mostly a lot of faddish supplements. And no, this isn't excusable as min/maxing health. You might think there's real medical information behind these kinds of things, but you'd be wrong. Anything marketed as a "dietary supplement" in the USA is highly suspect. A certain U.S. Senator from Utah legislatively invented supplements as a kind of not-a-drug-honest! which the FDA can't regulate so long as the manufacturers don't go too far with explicit medical claims on the label. (Guess who is known to receive money from supplement manufacturers?) Since innuendo and plausibly deniable marketing done at arm's length through pop health media serves just as well to convince people of medicinal action, they've basically got a free pass to sell snake oil. Kurzweil is truly an easy mark for anything which holds the promise of immortality. As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault.
|
# ? Dec 7, 2014 19:36 |
|
BobHoward posted:As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault. And this I just find sad. Even if it worked, this won't actually bring someone back, just a superficial replica. There's no selfless quest here to save someone they loved, just an attempt to patch over a hole in their own life.
|
# ? Dec 7, 2014 19:49 |
|
Moddington posted:And this I just find sad. Even if it worked, this won't actually bring someone back, just a superficial replica. There's no selfless quest here to save someone they loved, just an attempt to patch over a hole in their own life. Yup. Even more sad: note the disconnect between this and "I must live long enough for the Upload". It implies that he doesn't really believe dad-AI is a meaningful way to live after death, but he can't let himself fully acknowledge it because that would require accepting that his dad is dead forever.
|
# ? Dec 7, 2014 19:59 |
|
Moddington posted:And this I just find sad. Even if it worked, this won't actually bring someone back, just a superficial replica. There's no selfless quest here to save someone they loved, just an attempt to patch over a hole in their own life. It's even sadder when I picture this high intelligence devoted to what it knows is just a delusion, actively manipulating Kurzweil so that he's pacified and avoids critical thought that might shatter the illusion. Off in his own little world where he pretends to have everything he truly wanted.
|
# ? Dec 7, 2014 20:03 |
|
Black Mirror S02E01: Be Right Back
|
# ? Dec 7, 2014 20:19 |
|
BobHoward posted:As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault. All that'll end up happening is you'll lose some body parts while creating some horrifying, non-human thing. You won't get what you want, and you'll end up worse than when you started. (And you'll end up marked as a human sacrifice in a huge government plot.) Don't do it bro. BobHoward posted:Yup. Even more sad: note the disconnect between this and "I must live long enough for the Upload". It implies that he doesn't really believe dad-AI is a meaningful way to live after death, but he can't let himself fully acknowledge it because that would require accepting that his dad is dead forever. fade5 fucked around with this message at 00:06 on Dec 8, 2014 |
# ? Dec 7, 2014 23:58 |
fade5 posted:Seriously, it really is amazing just how loving scared people are of death, and the lengths they go to in order to try and avoid/escape it. Just accept it; everyone loving dies. It's an integral part of not just the human condition, but life itself; there's no avoiding it. What I don't think a lot of these guys really think through is that if people did become immortal it would have to change society quite drastically. Probably a good capsule version here would be that if you invented an easy-to-use immortality serum, I hope you like your current job title, because the people above you may have those jobs forever.
|
|
# ? Dec 8, 2014 00:13 |
|
fade5 posted:Dude don't go down that road, it's been tried before. Uh, that is not a valid argument because anime is not like real life. Now, let me explain how AI is foretold by that ending of Tsukihime
|
# ? Dec 8, 2014 00:15 |
|
I think fade was joking. Though it was in service of a correct argument. If the AI is somehow able to magically "resurrect" someone it's not really bringing them back. It's creating a facsimile of them, maybe even a perfect copy, but unless you both believe that souls exist and the AI is able to manipulate them the copy will never be something more than a copy. If they just came out and said that the AI is a magic soul catcher then we could finally just define them completely as a religion.
|
# ? Dec 8, 2014 00:46 |
|
PresidentBeard posted:I think fade was joking. Though it was in service of a correct argument. If the AI is somehow able to magically "resurrect" someone it's not really bringing them back. It's creating a facsimile of them, maybe even a perfect copy, but unless you both believe that souls exist and the AI is able to manipulate them the copy will never be something more than a copy. If they just came out and said that the AI is a magic soul catcher then we could finally just define them completely as a religion.
|
|
# ? Dec 8, 2014 00:51 |