|
WilliamAnderson posted:Look who noticed Big Yud now... Whoever loses, we win.
|
# ? Nov 21, 2014 06:48 |
|
LaughMyselfTo posted:Whoever loses, we win. The alt-text is gold. "I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people."
|
# ? Nov 21, 2014 06:52 |
|
Neoreactionaries, why are you neoreactionary? First comment contains the poster's SAT score.
|
# ? Nov 21, 2014 07:09 |
|
bartlebyshop posted:Neoreactionaries, why are you neoreactionary? More importantly, they make mention of the fact that they read Calvin and Hobbes and Encyclopedia Brown as a kid.
|
# ? Nov 21, 2014 07:27 |
bartlebyshop posted:Neoreactionaries, why are you neoreactionary? "Like so many who fancied ourselves prodigies (I got a 1600 on my SAT, I read Calvin and Hobbes, Encyclopedia Brown..." Vanguard of the counter-revolution. Nerdlings who think they're smart because, as children, they read stuff they think jocks didn't.
|
|
# ? Nov 21, 2014 07:28 |
|
If I thought Encyclopedia Brown made me smart, I'd probably be a neoreactionary too.
|
# ? Nov 21, 2014 07:49 |
|
It's a good thing these intellectual titans weren't subject to the dysgenic effects of poor people mixing into their bloodlines.
|
# ? Nov 21, 2014 08:00 |
|
Almost nobody didn't read Calvin and Hobbes. It was a nationally syndicated comic strip. It isn't some kind of special magical signifier of being intelligent, it's a comic strip.
|
# ? Nov 21, 2014 08:35 |
|
He made a new thing: How To Write Intelligent Characters. I would quote my favourite part but that would be all of it and it is very short.
|
# ? Nov 21, 2014 14:30 |
|
Namarrgon posted:He made a new thing: How To Write Intelligent Characters.

quote:you should read standard books of writing advice like "How to Write Science Fiction and Fantasy" by Orson Scott Card.

quote:Level 1 Intelligent characters. Writing characters with an inner spark of life and optimization; not characters that do super-amazing clever things, but characters that are trying in routine ways to optimize their own life in a reasonably self-aware fashion.

quote:Every Level 1 Intelligent character wants to toss your precious plot out the window and will seize any available chance to do so.

quote:Level 1 Intelligent characters will often have done some equivalent of having read the same books you have, which requires that you give them plots which cannot be solved just by having read similar books.

quote:If the character does something novel or unexpected using widely available tools, the surrounding civilization must be such that other people wouldn't have thought of it already.

quote:Thanks to the Illusion of Transparency, the best way to construct a mystery is to have some latent fact about the story, known to you, that is not spelled out explicitly in the text, and then make absolutely no effort to conceal this fact, except that you never literally say it out loud.
|
# ? Nov 21, 2014 15:22 |
|
Tiggum posted:If I'm reading this right, he's saying that your characters should be as much like you as possible? I think what he's going for is what the teeveetorps call being "genre savvy" - a character knowing what kind of story they're in and how it tends to go, in other words having "read the same books you have" and thus knowing what to expect from a story inspired by them. It's still really dumb. HMS Boromir fucked around with this message at 15:33 on Nov 21, 2014 |
# ? Nov 21, 2014 15:29 |
|
Tiggum posted:I haven't read that book, but I have read several of Card's Ender books, and I would not take his advice. Naw, it's actually a decent advice book, if a bit dated; a lot of the story conventions it discusses haven't really been done much in at least twenty years, if not more. But it's got good advice on stuff like keeping the purple out of your prose and how to handle exposition. He wrote the thing long before he disappeared up his own rear end, and his writing-as-craft skills aren't really in question.
|
# ? Nov 21, 2014 15:36 |
|
Tiggum posted:Ugh. Why does every fanfic/NaNoWriMo writer seem to think that their characters are sentient agents working against them? You're writing it, idiot, it's all you. It's not uncommon to hear actual writers say things like "I originally planned for Character X to do Y, but when the time came Character X decided to do Z instead."

By which of course they don't actually mean that Character X is an independent agent with some mystical metafictional narrative-influencing powers rebelling against the author. It's just a way of saying that by the time they got around to writing a certain scene, they realized that the way they had written that character so far had made it clear that it would be out of character for them to do whatever the original plan was, so the author had them do something else instead that would suit their personality better.

Bad writers hear good writers say this sort of thing and imitate it because it makes them sound like better writers and makes the writing process sound more magical. But since they don't understand what the original statements actually meant, the things they say when they try to copy them end up making no sense at all, like implying that a sufficiently "intelligent" character will realize they're in a book and declare war on the author, or saying that their upper-crust schoolteacher character "chose" to actually have been a lower-class prostitute all along instead.
|
# ? Nov 21, 2014 16:40 |
|
Lottery of Babylon posted:like implying that a sufficiently "intelligent" character will realize they're in a book and declare war on the author, To be fair, this could happen in the same way you described as reasonable - but only if the story were so hopelessly poorly written that you could not possibly live in it without realizing exactly what it was.
|
# ? Nov 21, 2014 16:59 |
|
LaughMyselfTo posted:Whoever loses, we win. the fight begins
|
# ? Nov 21, 2014 17:08 |
|
god bless the self-unaware
|
# ? Nov 21, 2014 17:18 |
|
And so far everyone's calling him out on it.
|
# ? Nov 21, 2014 17:26 |
|
I am Yud's complete lack of paragraph breaks.
|
# ? Nov 21, 2014 17:35 |
|
Meanwhile, over in real AI research land, a team has mapped out the mind of a worm (Caenorhabditis elegans) and put it into a Lego robot... The resulting robot behaves like a worm. https://www.youtube.com/watch?v=YWQnzylhgHc http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html
|
# ? Nov 21, 2014 17:48 |
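(For the curious: the robot work drives motors from weighted activations propagating through the mapped wiring, not from any learned model. Here is a toy sketch of that propagate-and-threshold idea; the neuron names, weights, and threshold below are invented for illustration and are not the real C. elegans wiring, which has 302 neurons and thousands of synapses.)

```python
THRESHOLD = 30

# Synapse map: presynaptic neuron -> {postsynaptic neuron: weight}.
# Entirely made-up wiring, just to show the mechanism.
CONNECTOME = {
    "nose_sensor": {"inter_a": 20, "inter_b": 15},
    "inter_a": {"motor_fwd": -25, "motor_rev": 25},
    "inter_b": {"motor_rev": 10},
    "food_sensor": {"inter_c": 40},
    "inter_c": {"motor_fwd": 35},
}

def step(stimulated, accumulators):
    """Advance one tick: each stimulated neuron pushes weighted
    activation into its targets; any target whose accumulator
    crosses THRESHOLD fires (and resets) for the next tick."""
    for neuron in stimulated:
        for target, weight in CONNECTOME.get(neuron, {}).items():
            accumulators[target] = accumulators.get(target, 0) + weight
    fired = [n for n, v in accumulators.items() if v >= THRESHOLD]
    for n in fired:
        accumulators[n] = 0  # reset after firing
    return fired

acc = {}
spikes = step(["nose_sensor"], acc)            # nothing fires yet
spikes = step(spikes + ["nose_sensor"], acc)   # both interneurons fire
spikes = step(spikes, acc)                     # reverse motor neuron fires
```

The point of the sketch is that "behaves like a worm" falls out of nothing but wiring plus thresholds: touch the nose sensor repeatedly and the reverse motor ends up driven, with no planning or learning anywhere.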
|
That led me to this on the HPMOR subreddit:

quote:Harry is dropped into the canon of your favourite fictional universe. How quickly does he munchkin himself into omnipotence?

Among a ton of jerking off about how awesome Harry is:

quote:Warhammer 40k: Harry doesn't stand a chance. He's killed for being a psyker before he can talk.
|
# ? Nov 21, 2014 17:51 |
|
Spoilers Below posted:Meanwhile, over in real AI research land, a team has mapped out the mind of a worm (Caenorhabditis elegans) and put it into a Lego robot... The resulting robot behaves like a worm. Someone get that poor worm a proper body. How significant is this? I mean, a worm is a really simple organism, but the fact that we can do this even on a basic level seems really neat.
|
# ? Nov 21, 2014 20:50 |
|
It strikes me as indicative of Yud's worldview that instead of e-mailing Munroe, which lends itself to authenticating the identity of the participants and the integrity of the message, he used reddit, and just assumed that Munroe would see his message.
|
# ? Nov 21, 2014 21:02 |
|
The Lord of Hats posted:How significant is this? I mean, a worm is a really simple organism, but the fact that we can do this even on a basic level seems really neat. Not significant until it starts killing people.
|
# ? Nov 21, 2014 21:12 |
|
Lightanchor posted:Not significant until it starts killing people.
|
# ? Nov 21, 2014 21:23 |
Soon that worm will begin bootstrapping its own mental architecture. We need to lock this down now or the day of the Nematode is upon us.
|
|
# ? Nov 21, 2014 21:30 |
|
This will bring us to the cyborg sandworm apocalypse. We must all become cyberpunk Fremen.
|
# ? Nov 21, 2014 21:35 |
|
The Lord of Hats posted:Someone get that poor worm a proper body That's actually an interesting thought - would body dysmorphia become a thing with brain uploading and the like? What (additional) psychological issues might this lot's dreams of becoming eight-penised cyberdragons give them?
|
# ? Nov 21, 2014 23:00 |
|
The Lord of Hats posted:Someone get that poor worm a proper body If you can do it with 302 neurons, you can do it with 100 billion (theoretically). Then again, good luck finding a computer with the necessary processing power in the next few decades.
|
# ? Nov 22, 2014 00:28 |
|
Big Yud posted:I'm a bit sad that Randall Monroe seems to possibly have jumped on this bandwagon - since it was started by people who were playing the role of jocks sneering at nerds, the way they also sneer at effective altruists, and having XKCD join in on that feels very much like your own mother joining the gang hitting you with baseball bats. On the other hand, RationalWiki has conducted a very successful propaganda campaign here. So it's saddening but not too surprising if Randall Monroe has never heard hinted any version but RationalWiki's. I hope he reads this and reconsiders. I love that he immediately jumps to jocks when complaining about his "persecution", when it is very obvious that everyone who cares about this at all is nerdy as all hell. (Also lol at the "propaganda campaign" consisting of explaining what these people actually believe)
|
# ? Nov 22, 2014 00:37 |
|
RPATDO_LAMD posted:If you can do it with 302 neurons, you can do it with 100 billion (theoretically). Then again, good luck finding a computer with the necessary processing power in the next few decades. The Lord of Hats posted:Someone get that poor worm a proper body
|
# ? Nov 22, 2014 00:57 |
|
Some fun-hating reddit mod went nuclear on that subthread. I take it the Big Yud had a little meltdown there?
|
# ? Nov 22, 2014 03:29 |
|
This might be the dumbest paper I've ever read (it's the only Yud arXiv paper): http://arxiv.org/abs/1401.5577 In it they note that they can make programs that will cooperate on the one-shot prisoner's dilemma, if they can see each other's source code ahead of time. So if players are allowed to coordinate, coordination problems go away? WHO'D HAVE THUNK IT?
|
# ? Nov 22, 2014 03:39 |
|
su3su2u1 posted:This might be the dumbest paper I've ever read (its the only Yud arxiv paper): Someone's never tried to plumb the depths of math.GM...
|
# ? Nov 22, 2014 03:40 |
|
su3su2u1 posted:This might be the dumbest paper I've ever read (its the only Yud arxiv paper): I was thinking about that paper today, apropos of nothing, and I realized that it's nontrivial to implement in a practical way! It's one of those things where you can theorize about it with no effort at all, but if you try to write the actual code then you have more concerns. For example you need to be able to avoid the infinite loop scenario where you check the code of the other agent which checks your code which checks its code, etc. I wonder if Yud actually wrote it.
|
# ? Nov 22, 2014 04:18 |
|
SolTerrasa posted:I wonder if Yud actually wrote it. Ahaha. Ahahahaha. Ahahahahahahahahahahahahahaha.
|
# ? Nov 22, 2014 04:22 |
|
SolTerrasa posted:I was thinking about that paper today, apropos of nothing, and I realized that it's nontrivial to implement in a practical way! It's one of those things where you can theorize about it with no effort at all, but if you try to write the actual code then you have more concerns. For example you need to be able to avoid the infinite loop scenario where you check the code of the other agent which checks your code which checks its code, etc. I wonder if Yud actually wrote it. I don't think they have any actual working code? It just seems like a bunch of super-trivial solutions to the prisoner's dilemma, i.e. "FairBot", whose definition is "cooperate with anything that will provably cooperate with you," and "PrudentBot": "cooperate with anything that will cooperate with you UNLESS it will cooperate even if you defect." Then they discuss that while these are in-principle unexploitable, real-world implementations are probably exploitable. If they had dealt with the constraints of the real world, they might have stumbled onto something interesting in terms of how to actually go about doing the proving. I suppose that's why it's gone nearly a year with no citations: there is nothing there. su3su2u1 fucked around with this message at 04:50 on Nov 22, 2014 |
# ? Nov 22, 2014 04:42 |
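(The two bot definitions above are easy to state if you assume you can simply ask what the opponent would play in response to each of your moves; deriving those answers by proof search over the opponent's source code is the paper's whole difficulty. A toy sketch under that cheating assumption; only the bot names come from the paper, everything else is invented here.)

```python
C, D = "cooperate", "defect"

# An opponent is modelled as a lookup table: what it plays if we
# cooperate, and what it plays if we defect. In the paper those
# answers come from proof search about the opponent's source code;
# assuming them as given sidesteps the entire hard part.

def fair_bot(opp):
    """Cooperate with anything that would cooperate back."""
    return C if opp[C] == C else D

def prudent_bot(opp):
    """Cooperate with cooperators, unless they would cooperate even
    while being defected against (i.e. they are exploitable)."""
    return C if (opp[C] == C and opp[D] == D) else D

cooperate_bot = {C: C, D: C}  # cooperates no matter what
defect_bot    = {C: D, D: D}  # defects no matter what
tit_for_tat   = {C: C, D: D}  # mirrors our move
```

Even this toy version reproduces the paper's headline behaviours: FairBot rewards cooperators and punishes defectors, while PrudentBot additionally exploits the unconditional cooperator.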
|
su3su2u1 posted:I suppose thats why its gone nearly a year with no citations- there is nothing there.
|
# ? Nov 22, 2014 05:04 |
|
su3su2u1 posted:I don't think they have any actual working code? It just seems like a bunch of super-trivial solutions to the prisoner's dilemma. i.e. "fair Bot" whose definition is "cooperate with anything that will provably cooperate with you." Prudent Bot "cooperate with anything that will cooperate with you UNLESS it will cooperate even if you defect." Then they discuss that while these are in-principle unexploitable, real world implementations are probably exploitable. They must have done something. I'm sitting around waiting for my mapreduces to finish so I wrote up working code for FairBot and DefectBot in 25 lines of python, and three of those are comments, and seven more are tests. I believe that they have wrong beliefs about the probability of a particular kind of singularity, but they aren't all a bunch of idiots, they could do this. I just don't see any actual evidence that they were interested in solving the one incredibly obvious mildly interesting recursion problem.
|
# ? Nov 22, 2014 05:24 |
|
SolTerrasa posted:They must have done something. I'm sitting around waiting for my mapreduces to finish so I wrote up working code for FairBot and DefectBot in 25 lines of python, and three of those are comments, and seven more are tests. I believe that they have wrong beliefs about the probability of a particular kind of singularity, but they aren't all a bunch of idiots, they could do this. I mean... you say that they "must have done something", but LOOK AT THE PAPER! If they did do something, why isn't it referenced anywhere in the paper? Did you write FairBot in the way they suggest- treating agents as formulas in Peano arithmetic and searching for equivalence proofs in towers of formal systems? Naively, I'd expect that to be totally impractical. Of course they need the silly methodology because they insist on provability because they claim the recursion problem is intractable. su3su2u1 fucked around with this message at 05:52 on Nov 22, 2014 |
# ? Nov 22, 2014 05:49 |
|
su3su2u1 posted:I mean... you say that they "must have done something", but LOOK AT THE PAPER! If they did do something, why isn't it referenced anywhere in the paper? No, I used monkeypatching. Their approach reduces to "can I prove that if I cooperate, the other agent will cooperate?" So FairBot examines the memory of the other bot, then patches in guaranteed cooperation to all those instances, then checks if the other bot would cooperate, then cooperates if it does. Pretty boring, but it works and I cannot fathom why you'd try it their way instead.
|
# ? Nov 22, 2014 06:16 |
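(Monkeypatching aside, the regress can also be dodged by threading an explicit depth budget through the mutual simulation. A self-contained sketch follows; this is our own construction, not the paper's proof-search version, and the optimistic depth-0 assumption is what lets FairBot-vs-FairBot bottom out in cooperation, loosely standing in for the paper's Löbian step.)

```python
C, D = "cooperate", "defect"

def fair_bot(opponent, depth=3):
    # Cooperate iff simulating the opponent (playing against us)
    # yields cooperation. The shrinking depth budget breaks the
    # otherwise infinite FairBot-vs-FairBot regress; assuming
    # cooperation at depth 0 is what makes the mutual simulation
    # converge on cooperation (the paper derives the analogous
    # step from Lob's theorem rather than by fiat).
    if depth == 0:
        return C
    return C if opponent(fair_bot, depth - 1) == C else D

def defect_bot(opponent, depth=3):
    return D  # defects unconditionally

def cooperate_bot(opponent, depth=3):
    return C  # cooperates unconditionally
```

With a pessimistic base case (defect at depth 0), two FairBots talk themselves into mutual defection instead, which is a decent illustration of why the base-case assumption is doing real work here.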