Iunnrais
Jul 25, 2007

It's gaelic.
Man, I'm glad I stopped reading the fanfic after a few chapters. I did enjoy it when it was just exploring how certain bits of magic might work behind the scenes... the one attacking P != NP by trying to find large prime factors with the Time-Turner was particularly funny, in my opinion. But then it devolved into some weird version of Ender's Game with Quirrell as a good guy(?) and it just... fell flat. I think it was his jerking off to the hope of an immortal humanity (powered by Bayesianism, of course) being an Uber Patronus that made me sigh and put it away in disgust.

Ironically, I think the point of that very chapter goes against his entire "you are probably a simulation" thing. The point of that chapter was that just because you have a mental model of something doesn't mean that's how it actually works in the world. Bizarro Harry Potter had a mental model of time travel, but it didn't quite mesh with how the time travel actually worked. Big Yud has a mental model of AI (including being able to develop infinite computing power with which to simulate infinite worlds that are exactly like our own, except that the AI is actually god and can change things at will), and yet he never even attempts to see how that might match up with reality.

Iunnrais fucked around with this message at 19:51 on Jun 2, 2014


Iunnrais
Jul 25, 2007

It's gaelic.
That's the nice thing about not doing anything. You can imagine yourself doing things, without any of those pesky realities interfering with what would totally work if you just tried.

Like, I can totally imagine putting together a little handcrank AC generator, maybe hooking it to a gutter spout so it'll power a lightbulb or something when it rains. And as long as I'm just imagining it, I can see how I'd wind the wire and whatnot, never having to experience the frustration of "I wound this wire hundreds of times! Why am I not getting enough voltage to light this lightbulb?!" and having to actually fix bugs in the implementation.

And that's something SIMPLE.
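
To illustrate how fast even the SIMPLE version bites back: Faraday's law puts the peak voltage of a spinning coil at N·B·A·ω, and plugging in plausible hobby numbers shows why "hundreds of windings" can still leave you in the dark. A back-of-the-envelope sketch; every value below is my own assumption, not a measurement:

code:
import math

# Peak EMF of a coil rotating in a magnetic field: emf = N * B * A * omega.
# All of the values below are assumed, plausible hobby-build numbers.
N = 300                          # turns of wire ("hundreds of times!")
B = 0.01                         # tesla; a cheap magnet at some distance
A = math.pi * 0.02 ** 2          # coil area in m^2 (2 cm radius)
rpm = 120                        # hand-crank / rain-gutter speed
omega = rpm / 60 * 2 * math.pi   # angular speed in rad/s

emf = N * B * A * omega
print(f"peak EMF ~ {emf:.3f} V")  # ~0.047 V; nowhere near lighting a bulb

Three hundred turns and you still get about a twentieth of a volt. That's exactly the frustration the daydream version skips.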

Iunnrais
Jul 25, 2007

It's gaelic.

chaosbreather posted:

2. From observation, the probability of atoms coming together to form me exactly as I am now, along with my environment, is non-zero.
3. An infinite number of chances at a non-zero probability means an infinite number of occurrences.

Infinitely many chances at a non-zero probability don't automatically mean infinitely many occurrences. That would only hold if we could also demonstrate that the universe is "normal" (that every possible configuration really does keep turning up, the way every digit string does in a normal number), and that is certainly not established. Even if it were true, proving it would be really, really difficult... we don't even know for certain whether pi is normal, for crying out loud, even though it really looks like it!
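
For the curious, "really looks like it" is an empirical claim you can poke at yourself. A quick sketch using the mpmath library; the 10,000-digit cutoff is arbitrary:

code:
from collections import Counter
from mpmath import mp, nstr

# Empirically check whether pi's decimal digits look uniformly distributed.
# This is evidence, not proof: the normality of pi is an open problem.
mp.dps = 10_000                  # working precision, in digits
s = nstr(mp.pi, 10_000)          # "3.14159..." to 10,000 significant digits
digits = s.replace("3.", "", 1)  # drop the integer part and the dot

counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d])          # each digit lands near 1,000 hits

Every digit shows up close to 10% of the time in every sample anyone has checked, which is why pi "looks" normal; nobody has the faintest idea how to prove the pattern continues forever.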

Iunnrais
Jul 25, 2007

It's gaelic.
Yeah, that's the thing that gets me every time. For all his anti-religion, anti-deity stance, he is HIGHLY religious and simply thinks that the "singularity AI" is a god. He uses every theological argument monks have devised over the past 2000-odd years, even (or especially) the most discredited ones, and just applies them to a "sufficiently advanced AI". It baffles me that this hypocrisy fails to inspire any cognitive dissonance in him; but then, since he disparages "religion", he's never actually studied any of it, so ironically he doesn't understand the weaknesses of the very arguments he claims to assault.

Iunnrais
Jul 25, 2007

It's gaelic.
What about a movie where an AI decides to discard the atheistic humanist rationalism of its creators and become devoutly religious? (The specific religion really wouldn't matter that much, but I'd suggest picking one that's not all that popular, so that it would register as "betrayal" for a majority of the target audience.) I could see a lot of potential in that story.

Iunnrais
Jul 25, 2007

It's gaelic.

ArchangeI posted:

Wait, they seriously based their arguments on a novel? Did the novel at least make some good points regarding the likelihood of an AI being developed in the next X years?

Because I've got a story lying around that proves that developing brain emulation technology is inherently unethical (because you'd have to wipe the test hardware from time to time, thus effectively murdering sentient beings). Just, you know, in case anyone is writing a paper on that.

Do you mean this one? I really enjoyed that one; I think it's a good philosophical sci-fi short story, the author's disclaimer that it sucks notwithstanding.

Iunnrais
Jul 25, 2007

It's gaelic.

ArchangeI posted:

By "lying around" I meant "I have written and published", but thanks for the link anyway.

Oh, awesome! I don't suppose it's published online anywhere I could read it? I love these kinds of technical/social sci-fi works.

Iunnrais
Jul 25, 2007

It's gaelic.
Human beings have a part of their brain dedicated to simulating other people, letting us imagine how another person might react to certain stimuli. This part of the brain is notable for not being picky about who gets simulated there... fictional characters are notorious for becoming "real" and fighting back against their authors once a character becomes complex enough to enter this part of the brain.

I think what people are really looking for is an AI sufficiently advanced that they can stick it into this part of their brain, because anything too simple gets rejected; your brain won't accept it as a "real person". I think that if an AI is complex enough to enter this part of the human brain, people will generally FEEL, emotionally, that the AI is "conscious".

It's as fair a metric as any, I think.

Iunnrais
Jul 25, 2007

It's gaelic.
I'd argue that even when people are distressed and don't want to turn the robot off, the robot hasn't really become fully "real" in their heads. Not in the sense that they could, for example, simulate a full conversation with it on a variety of topics. The trick is sufficient complexity to be a fully fledged character, like in a book.

Research suggests that you can only fit about 150 people into this part of the brain anyway. See: Dunbar's Number. So there is a threshold of complexity... you'd hesitate to shoot someone you say hello to in the store too, but they aren't "real" to you yet. Not until you "get to know them".

But again, I think that this is simply what people want to have happen when they say they want an AI that is conscious. They want something with sufficient complexity that they can "get to know" and contribute to their personal "Dunbar's Number".

Iunnrais
Jul 25, 2007

It's gaelic.
Eh, I enjoyed the early HPMOR too, but simply from the perspective of working out the logic of magic. I thought trying to use the Time-Turner to find large prime factors was amusing. I thought the Time-Turner "game" was funny. I enjoyed his session trying to prove that magic doesn't care how you pronounce the magic words, only to find out that, in fact, it does care how you pronounce the magic words.
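
For anyone who skipped it: the gag is that a stable time loop turns a search problem into a single consistency check, because the only note your future self can hand you is one that verifies. A toy sketch of the idea; the brute-force loop below just stands in for "every note you could possibly receive":

code:
def stable_time_loop(n):
    """Toy model of the Time-Turner factoring trick: the only stable loop
    is one where the note from the future already holds correct factors,
    so 'computing' collapses into checking self-consistency."""
    for p in range(2, int(n ** 0.5) + 1):  # stand-in for all possible notes
        if n % p == 0:
            return p, n // p               # the self-consistent message
    return None

print(stable_time_loop(181 * 191))         # -> (181, 191)

In the actual chapter, of course, the note comes back reading "DO NOT MESS WITH TIME", which is the punchline.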

Buuuuuut.... he starts leaving that stuff behind and getting weird REALLY fast. And to be honest, he mixes in his... oddities... early on as well, such as his explanation of why his version of Harry Potter needs a Time-Turner. "Oh, I'm a special unique snowflake because I have to get more sleep than other people. I just CAN'T wake up in the morning like other normal people!" It sounds like self-insertion excusing his own laziness. I pulled the same kind of crap excuses when I was a teenager, but I like to think I've grown out of at least that particular self-deluding lie.

Anyway, HPMOR starts with some amusing "getting into the nitty gritty" of magic-- the same stuff I like about Brandon Sanderson's work-- but then devolves into the religion of Bayesianism, "Anti-Deathism", justification of rape, apologetics for Voldemort, etc. When I saw what he was doing, I dropped it real fast. His worldview and philosophy are revolting, really. I think I read up to about the point where he conjures Carl Sagan as the ULTIMATE PATRONUS that trumps all other patronuses, because everyone else is "deathist", before I said absolutely no more, but I was getting really queasy before that too.

Iunnrais
Jul 25, 2007

It's gaelic.

Swan Oat posted:

Yudkowski began his monthlong vacation in the wilderness of North Carolina to finish up his stupid fanfiction yesterday.


Greater than 50% probability! Everyone update your priors.

I was trying to find the chapter where Harry's Carl Sagan patronus totally owned everything, but couldn't. Do any gentle readers of this thread have further information or quotes?

Google tells me he first summons his Carl Sagan patronus at the end of chapter 45. But read on to chapter 46, because that's where he starts getting snotty about it: Yud has Dumbledore be in awe of it, and they start talking as if Harry had discovered new magic that he has the right and privilege to withhold from everyone else!

If you're looking for more obnoxiousness, check out how he thinks an adult should handle "bullies" in a position of authority over you. Chapter 18 demonstrates that his Harry Potter, whom he keeps insisting is supposed to be fully adult in every way except physically, should throw a screaming hissy fit at the first sign of a teacher not "respecting" him. He then continues the hissy fit at Snape's boss, offers a fake apology, and goes right back to being completely insubordinate and throwing temper tantrums repeatedly.

Iunnrais
Jul 25, 2007

It's gaelic.

su3su2u1 posted:

So this is basically self promotion, but I've started reading HPMOR chapter by chapter and blogging my rage.

http://su3su2u1.tumblr.com/

Please continue. This is perfect. He DOES explore some of the rules of magic a bit... but, again, not in a mindset of curiosity, only in a "can I use this" sort of way. To quote the old nerd joke: he's not a mad scientist, he's just a mad engineer.

That said, when he does explore rules of magic, it's kinda fun. I just wish that's all it was. Maybe a fanfic from some random muggleborn Ravenclaw's point of view would be more interesting.

Iunnrais
Jul 25, 2007

It's gaelic.
I read somewhere, and I'd love it if I could find it again, some actuary stating that if old age and disease were no longer valid causes of death, people would tend to live an average of somewhere in the 900-1100 year range before dying of accidents, murder, war, etc. Anyone know who said that, and where?
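
I still don't know the source, but the arithmetic behind any such claim is simple: if the only remaining causes of death strike with a roughly constant annual probability p, lifespans become geometrically distributed with mean 1/p. A sketch; the rates below are assumptions chosen to bracket the quoted range:

code:
# With aging and disease gone, assume a constant annual probability p of
# dying from accidents, murder, war, etc. Survival is then geometric, so
# the mean lifespan is 1/p. These rates are assumptions, not data.
for p in (1 / 900, 1 / 1000, 1 / 1100):
    print(f"annual external-cause risk {p:.4%} -> mean lifespan {1 / p:,.0f} years")

So a 900-1100 year figure is just another way of saying an external-cause death rate of roughly 0.09-0.11% per year, which is in the neighborhood of real accident-plus-violence mortality for young adults in rich countries.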

Iunnrais
Jul 25, 2007

It's gaelic.

Toph Bei Fong posted:

It begins, not with paperclips, but with Atari... and it also sucks at long term planning.


Are you one of these folks, SolTerrasa? :tinfoil:
Because this is really neat and I like it.

How is this significantly different from the NES AI project "Learnfun / Playfun"? (youtube 1, 2, 3)
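
Roughly, as I understand it: Playfun never learns a policy at all. It derives an objective from a human demo (orderings of RAM bytes that tend to increase), then brute-force searches the future using emulator save-states. DeepMind's Atari agent instead learns a value function from raw pixels and the score, and can never rewind. A toy sketch of the Playfun half of that contrast; the ToyEmulator below is a made-up stand-in, not either project's real API:

code:
import random

class ToyEmulator:
    """A fake 'game' where pressing the right button bumps a RAM counter."""
    def __init__(self):
        self.ram = [0]
    def save(self):
        return list(self.ram)    # snapshot (Playfun leans on this heavily)
    def load(self, snap):
        self.ram = list(snap)    # rewind (unavailable to DQN at play time)
    def step(self, button):
        if button == "right":    # the 'winning' input in this toy game
            self.ram[0] += 1
    def score(self):
        return self.ram[0]       # Playfun learns this from RAM-byte orderings

BUTTONS = ["left", "right", "a", "b"]

def playfun_step(emu, depth=10, samples=100):
    """Playfun-style control: from a savestate, sample input sequences, keep
    whichever leaves the learned objective highest, then commit to it.
    DQN-style control instead picks one action per frame from a Q-function
    over pixels -- argmax_a Q(screen, a) -- with no rewinding allowed."""
    snap, best = emu.save(), None
    for _ in range(samples):
        emu.load(snap)
        seq = [random.choice(BUTTONS) for _ in range(depth)]
        for b in seq:
            emu.step(b)
        if best is None or emu.score() > best[0]:
            best = (emu.score(), seq)
    emu.load(snap)
    for b in best[1]:            # replay the winning sequence for real
        emu.step(b)
    return best[1]

emu = ToyEmulator()
print(playfun_step(emu), "->", emu.score())  # mostly "right" presses

So: similar demos, very different machinery. Playfun is search with a learned objective plus the luxury of rewinding time; the Atari work is learned reflexes from pixels, which is why it generalizes across games but, as quoted above, still sucks at long-term planning.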

Iunnrais fucked around with this message at 08:46 on Feb 26, 2015

Iunnrais
Jul 25, 2007

It's gaelic.
What is with the "rationalist" obsession with utilons? With treating all experiences not just as numbers, but as interchangeable numbers? If torturing chickens has any utility value at all, then "rationally", if you torture ENOUGH chickens, that's EXACTLY THE SAME as torturing a human! Would the concept that torturing any number of chickens is simply not comparable, in any meaningful sense, to torturing a human blow their minds? I'm not saying torturing a chicken isn't bad, or that torturing more chickens isn't worse, or even that torturing their favorite number, 3^^^^3, of chickens isn't an atrocity, but it's a non-comparable, completely different atrocity from torturing a human.
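
For anyone who hasn't met the notation: 3^^^^3 is Knuth's up-arrow notation, and the whole torture-vs-dust-specks genre leans on multiplying a tiny per-unit harm by a number like that. A quick sketch of how fast it blows up:

code:
def up(a, n, b):
    """Knuth's up-arrow: a followed by n arrows then b. One arrow is plain
    exponentiation; each extra arrow iterates the operator below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^3^3 = 7,625,597,484,987
# up(3, 3, 3) is a power tower of 7.6 trillion threes, and up(3, 4, 3) --
# the 3^^^^3 of the thought experiment -- dwarfs even that. The "rational"
# move is then: (tiny harm) x (this number) > (one big harm), QED.

Which is the whole trick: once you grant that chicken-units and human-units sit on the same axis, a big enough multiplier settles every argument for you.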

Iunnrais
Jul 25, 2007

It's gaelic.
Editing would imply you weren't perfect the first time, which wouldn't be rational. :agesilaus:


Iunnrais
Jul 25, 2007

It's gaelic.
I think "information hazard" is supposed to mean "something that is inherently harmful to simply know, know about, or think about". I'm pretty sure, like Improbable Lobster, that these do not actually exist in the real world.

I mean, if you stretch it, you could say that being misinformed about something important is pretty harmful, but the solution to that would be to become properly informed. Whereas with an "information hazard", supposedly knowing it is still harmful despite whatever else you might know.

Then there are things like "knowledge of what your loved ones look like when shredded into bits and scattered across the walls", which could send you into physical shock, but that's less the information being hazardous and more that death, torture, and murder are horrible things. I'd say this isn't an information hazard but extreme mourning. Something different.

Maybe certain cultural indoctrination could be classified as an information hazard, such as being inundated with the idea that mortally dueling someone over honor is a good idea, or that feudalism is okay, or such things? But then again, with those it's not merely knowing the idea, it's having it reinforced by social norms.

No... I'm going to stick with "this is bloody stupid outside of speculative fiction".
