su3su2u1
Apr 23, 2014
I feel like this thread has fixated on the Basilisk. That's a mistake, because Less Wrong is such an incredibly target-rich environment.

Here is a wayback machine link to Yudkowsky's autobiography, in which he claims to be a "countersphexist," which is a word he made up to describe a superpower he ascribes to himself. He can rewrite his neural state at will, but it makes him lazy. He also defeats bullies in grade school with his knowledge of the solar plexus, and has a nice bit about how Buffy of the eponymous show is the only one he can empathize with. http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html

Here is Yudkowsky suggesting that the elite REALLY ARE BETTER: http://lesswrong.com/lw/ub/competent_elites/ Among the many, many money shots are these quotes: "So long as they can talk to each other, there's no point in taking a chance on outsiders [non-elites] who are statistically unlikely to sparkle with the same level of life force." and "There's "smart" and then there's "smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics"."

Here is Yudkowsky, lead AI researcher, failing to understand computational complexity (specifically, what NP-hard means). http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/8vr1

Here is Yudkowsky again failing at computational complexity. http://lesswrong.com/lw/vp/worse_than_random/ He spends an entire post arguing that randomization cannot improve an algorithm (essentially that P = BPP). Notice he never mentions "BPP" even though it's what he is talking about. Big Yud isn't the type to NOT use jargon, so it's pretty clear he doesn't even know the term. He also didn't spend even 10 seconds googling randomized algorithms before writing the post, or he would have discovered the known cases where randomization improves things.

And of course, LessWrong is full of standard transhumanist fare. They love Eric Drexler's nanotech vaporware, cryonics, etc. A better man than I could have a blast using LessWrong's "fluent Bayesian" arguments to refute LessWrong's own crazy positions.

su3su2u1
Apr 23, 2014

outlier posted:

And rightly so - it touches on a lot of interesting subjects: Bayesian theory, AI, decision making, futurism.

Unfortunately, it's so committed to a silly worldview that it's wrong on basically all fronts.

The appeal is that Yudkowsky claims to be teaching you the secret knowledge the idiot professionals don't know (he "dissolves" the philosophical question of free will, he gives you the ONE TRUE QUANTUM MECHANICS INTERPRETATION, he gives you THE BEST EVER DECISION THEORY, etc.). Unfortunately, his arguments look good because his audience is unlikely to know much about the topics being presented, and they take his word for it because they get to walk away with a feeling of superiority (hahaha, those dumb physicists don't even understand physics as good as me 'cuz I used my rationalist brain and I read 10 pages about it on the internet).

He can explain the simplest case of Bayes' theorem but cannot actually use it to do CS (he has presented no code anywhere). His views of science are the cargo-cult behavior of someone who has watched and cheered for science, but never actually DONE it.

Yudkowsky is an AI researcher who doesn't understand computational complexity. That's like saying you are a boxer who doesn't understand punching, or a baker who doesn't 'get' dough. He is failing on a basic level at everything except getting Peter Thiel to give him money.

If he seemed any less earnest, I'd just assume he were a brilliant con man.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

You would be shocked how many people who are nominally "AI researchers" don't hate Yudkowsky for claiming to be one of them. Hell, Google donates to MIRI. People I've stood next to at conferences have had passably positive opinions of the guy! I can't believe it.

All that is required to get a company like Google to donate is to know the low-level HR person who picks charitable causes. Probably someone in Google HR has joined his robocult.

I AM surprised it gets past due diligence though.

"Wait.. this guy takes money for his AI research non-profit, and then spends all his time writing blog posts?"
"Well, there is also his Harry Potter fan fiction."

su3su2u1
Apr 23, 2014

SolTerrasa posted:

Not that he's published, not that I know of, but I personally wouldn't doubt him on it.

He has never contributed to any open source projects, nor has there ever been any publicly available code that he has written. He has never worked as a programmer, or taken a CS course. He has learned to 'signal' competence in order to bilk money out of CS people - it's how he gets speaking engagements and how he solicits money.

When he talks about math, he manages to keep it together for long stretches, but occasionally a howler slips in and you realize he doesn't understand the first thing about what he is talking about. The same for physics. Similarly, he routinely confuses CS concepts (it's very clear from his discussions of the busy beaver function and Solomonoff induction that he doesn't understand what it means for something to be computable).

Here is a direct quote from Yudkowsky, where he very clearly has no idea about computational complexity (but that doesn't stop him from drawing sweeping conclusions about physics!)

BigYud posted:

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period. It could be a physical event that you choose to regard as a P-approximation to a theoretical problem whose optimal solution would be NP-hard, but so what, that wouldn't have anything to do with what physically happened. It would take unknown, exotic physics to have anything NP-hard physically happen. Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever, cannot have plausibly involved NP-hard problems. NP-hard = "did not physically happen". "Physically happened" = not NP-hard

As a simple, obvious counterexample, I once solved a 3 stop traveling salesman problem in my head. If any non-CS people want an explanation of how incredibly wrong this is, let me know and I'll try to go into more detail.

The proper model of Yudkowsky is a con man with a decent vocabulary. He has learned to fake it well enough to bilk money from the rubes, but nothing he says holds up to any real scrutiny.

Hate Fibration posted:

He has some coauthor credits with a mathematician crony of his that works in formal logic and theoretical computer science, I know that much.

Only one unpublished-but-submitted manuscript (having read it, I'd be incredibly surprised if it gets through review; most academic papers don't repeatedly use the phrase 'going meta'). His only actual (non-reviewed) publications have been through "transhumanist" vanity presses and through his own organization. The thing that kills me - if I donated money to a research institute and MORE THAN A DECADE LATER it had only ever submitted one paper for review (ONE! EVER!) but the lead investigator had been able to write hundreds of pages of the worst Harry Potter fanfic ever, I'd be outraged. Instead, the LessWrong crowd seems grateful.

su3su2u1 fucked around with this message at 05:18 on May 14, 2014

su3su2u1
Apr 23, 2014

Reflections85 posted:

Being a non-compsci person, could you explain why that is incredibly wrong? Or link to a source that could explain it?

So the way that computer scientists and mathematicians talk about computational complexity is by imagining very simple types of computers, called Turing machines. A Turing machine is a reader/writer that gets fed a tape, and has a set of rules about what to do when it sees various symbols, e.g. it could have a rule that says "if you see an A, change it to a B and move the tape three spots left," or "if you see an x, change it to a y and move one slot right." You can imagine implementing an algorithm by designing a Turing machine and feeding it inputs on the tape, designing it so that it rewrites the tape into the output of your algorithm. This is a very condensed explanation, but Turing machines probably have a nice long article on Wikipedia.

Now, we say a problem is in class "P" if you can make a Turing machine that can solve the problem in polynomial time. Polynomial time means that the number of steps the Turing machine has to take is a polynomial in the size of the input. So imagine a Turing machine algorithm to add one to a binary number. The input tape contains the binary number, and starting from the right, the Turing machine changes any 1 to a 0, and as soon as it sees a 0, it changes it to a 1 and stops. The maximum number of operations you'd need to run this Turing machine is N, where N is the length of the number (also the length of the tape). If you have a Turing machine that takes N^2 steps, that's also in P, because N^2 is a polynomial in N.
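
To make that concrete, here's a tiny Python toy of that increment machine (my own sketch, nobody else's): the step count is bounded by the tape length N, which is exactly why the problem sits in P.

code:

# Toy simulation of the "add one to a binary number" Turing machine described above.
def increment_binary(tape):
    tape = list(tape)
    head = len(tape) - 1              # start at the rightmost symbol
    steps = 0
    while head >= 0:
        steps += 1
        if tape[head] == '1':
            tape[head] = '0'          # rule: see a 1 -> write 0, move left
            head -= 1
        else:
            tape[head] = '1'          # rule: see a 0 -> write 1, halt
            break
    return ''.join(tape), steps

print(increment_binary('01011'))      # ('01100', 3): 11 + 1 = 12, in at most N steps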

BUT, we could imagine a different type of Turing machine. This Turing machine can have more than one rule for a given symbol (e.g. "if you see an A, change it to a B and move two steps left," OR "if you see an A, change it to a D and move one step right"). At each step, the Turing machine picks whichever rule will lead to it finishing fastest (it's the best possible guesser). I'm sure there is a longer explanation of non-deterministic Turing machines on Wikipedia. A problem is said to be in NP if one of these non-deterministic Turing machines can solve it in polynomial time. Because a deterministic Turing machine is just a non-deterministic one that never has a choice to make, anything that is in P is also in NP.

A problem is called "NP-hard" if it's at least as hard as the hardest problems in NP. It is an open problem whether these NP-hard problems are actually in P, but almost everyone believes they aren't. Generally it's believed their scaling is intrinsically worse than polynomial (so something like e^N).

This is a long wind-up, but the important takeaway is this: computational classes like P and NP are about how the number of operations you need to solve a problem scales with the size of the input. Low input length (low N) NP-hard problems can be quite simple to solve. Also, if you have lots of computers and lots of time, you might be able to brute force an NP-hard problem simply by waiting long enough. A famously NP-hard problem is the traveling salesman problem: a salesman has to travel to X locations and then home again, and he wants to travel the overall shortest distance; the goal is to find that shortest route. Now, with 3 cities you have to check 6 possible paths, something you can add up in your head. For 4 cities, there are 24 routes, so you can still do everything with pencil and paper in just a few minutes. But in general, the number of routes scales like n! (n*(n-1)*(n-2)...), so it doesn't take very many cities before it's impractical to check every route, even with a computer.
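
If you want to watch the factorial blow-up yourself, here's a little brute-force Python toy (my own code, nothing to do with any paper under discussion - and real solvers for big instances use much cleverer branch-and-cut tricks, not enumeration):

code:

# Brute-force traveling salesman: check every route. Fine for a handful of
# cities, hopeless in general, because the number of routes grows like n!.
from itertools import permutations
from math import dist, factorial

def brute_force_tsp(cities):
    home, rest = cities[0], cities[1:]
    best_length, best_route = float('inf'), None
    for order in permutations(rest):                      # (n-1)! candidate routes
        route = (home,) + order + (home,)
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        if length < best_length:
            best_length, best_route = length, route
    return best_length, best_route

print(brute_force_tsp([(0, 0), (1, 0), (1, 1), (0, 2)]))  # 4 cities: instant
for n in (5, 10, 15, 20, 25):
    print(n, "cities:", factorial(n - 1), "routes to check")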

With that in mind, let's revisit Big Yud's quote:

Big Yud posted:

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period. It could be a physical event that you choose to regard as a P-approximation to a theoretical problem whose optimal solution would be NP-hard, but so what, that wouldn't have anything to do with what physically happened. It would take unknown, exotic physics to have anything NP-hard physically happen. Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever, cannot have plausibly involved NP-hard problems. NP-hard = "did not physically happen". "Physically happened" = not NP-hard

His assertion that "nothing that has physically happened on Earth in real life... can be NP-hard" is obviously laughable. There are lots of NP-hard problems that are trivially easy for small N. Even for large N, clever computer scientists with lots of computing power have found solutions (back in the mid-2000s I saw a paper that solved the traveling salesman problem for every city in Sweden, something like ~70,000 cities).

His next assertion, that "proteins folding in a cell, or evolution of enzymes [couldn't be NP-hard]," is more interesting, because conceivably these might be large-N operations. Simple life on Earth has been around for ~4 billion years. There are at least 10^30 or so microbes on Earth and they undergo reproduction a few times an hour. That's a lot of 'computing cycles' for evolution to work on, and my estimate may well be low. Nature doesn't care if it can solve a problem in polynomial time, because it has lots of time. It also doesn't have to solve the problem in full generality; it can find a few good solutions and reuse them, i.e. it doesn't have to find every possible enzyme, it just has to find a few that work.
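
Just to put a very rough number on 'a lot of computing cycles' (every input below is an order-of-magnitude guess, not a measurement):

code:

# Back-of-envelope count of reproduction events available to evolution,
# using the rough numbers from the paragraph above.
microbes = 1e30                # rough census of microbes on Earth
divisions_per_hour = 1         # "a few times an hour," rounded down
years = 4e9                    # roughly how long simple life has been around
hours = years * 365 * 24
print(f"{microbes * divisions_per_hour * hours:.0e} reproduction events")   # ~4e43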

The next bit has the sentence "It would take unknown, exotic physics to have anything NP-hard physically happen." This is just incoherent: problems are NP-hard, "things happening" are not. Further, there is a fairly well-known NP-hard problem called the "numerical sign problem" that crops up in physics. As far as anyone knows, quantum field theory is a correct physical theory, and it has an NP-hard computational problem that makes doing simulations very difficult (one of the reasons lattice QCD is difficult).

Trying to equate NP-hard with "did not physically happen" is obviously stupid to anyone who knows this stuff, because again, they can solve low N NP-hard problems in their head.

su3su2u1 fucked around with this message at 08:34 on May 14, 2014

su3su2u1
Apr 23, 2014

Peel posted:

Quantum mechanics isn't easily computable by turing machines anyway, that's why quantum computers are theoretically superior for some types of computation, isn't it?

For quantum computing, we look at quantum Turing machines. The class of problems they solve is called BQP (bounded-error quantum polynomial time; the reason for 'bounded error' is that algorithms on quantum Turing machines are probabilistic, so you want a bound on the worst-case error).

It is thought (but again, not proven) that BQP does not contain the NP-hard problems, and so the numerical sign problem would still make lattice QCD very difficult even if you managed to build a quantum computer.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

See, that's just the thing. He's close! He's SO close. He's so close that I was taken in for a long time before I actually started my thesis.

I would argue that you were probably taken in early in your learning process, as a student, because Yudkowsky has managed to sound (much of the time) like an actual expert. And now you want to think he was close because it makes being taken in a little easier to deal with. I had a similar experience with Eric Drexler's nanomachines.

quote:

It's made MORE confusing by the fact that his essays on human rationality are actually quite good!

But are they really? I think almost all of the actual interesting stuff is lifted idea-for-idea directly from Kahneman's popular books. Outside of those ideas on biases, you are left with Yudkowsky's... eccentric definition of rationality.

He uses beliefs in cryonics, the many worlds interpretation of quantum mechanics, and Drexler-style nanotechnology as heavy focal points in his essays on rationality. My training is in physics, and I'm deeply skeptical of a "rationality"-focused educator who can't poke holes in the wild claims of cryonics advocates and Drexler's work. I'm not expecting people to be physicists, I'm expecting people to use the well-known "rationality tool" of asking an expert. Want to evaluate cryonics? Find some cryobiologists and ask them what they think. Want to know about nanotech? Find some physicists or chemists working on 'real' nanotech and ask them what they think.

In his quantum mechanics sequence (here http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/), Yudkowsky asks his followers to whole-heartedly embrace the many worlds interpretation of quantum mechanics, and in fact to conclude that his pop-Bayesianism is a better tool than science, based entirely on approximately twenty pages of Yudkowsky explaining what quantum mechanics is.

So Yudkowsky ACTIVELY TEACHES that rational humans should trust their own opinion, formed entirely by reading 20 mathless pages on the internet, OVER physicists. Not only that, he teaches that they should trust their own opinion SO STRONGLY that they should throw out the scientific method in favor of pop-Bayesianism.

If I ever am stuck in a hospital bed for months or something, I'll go through all of his sequences and point out where he is falling victim to the very biases he rails against.

quote:

It's so goddamn frustrating because if he'd stop talking so much and start writing code, maybe he'd contribute something to the field. He might even be right, and he might even prove it. Probably not, but at least we'd LEARN something.

He can't. He doesn't know enough to write sophisticated code. He wants to get paid for blogging and for writing Harry Potter fanfiction. Look at his 'revealed preferences', to use the economic parlance. When given a huge research budget, instead of hiring experts to work with him and doing research, he blogs and writes Harry Potter fanfiction.

The people his institute hires are for the most part people with just bachelor's degrees - not domain experts, not proven researchers. The requirement to get hired seems to be long-time involvement in his "rationalist" community. This is why even the paper his institute has submitted to peer review is poorly written - no one at the institute knows how to first-author a paper.

Let's look at his crack research team. Luke Muehlhauser lists his education as "Previously, he studied psychology at the University of Minnesota." Exclude his (non-reviewed) transhumanist vanity publications and he has none.

Louie Helm - master's in computer science (this seems promising); exclude the transhumanist vanity BS and you have one combinatorics paper and one conference proceedings paper. The conference paper appears to be his master's thesis, and it's on quantum computing. Both published in 2008. That's 5+ years of inactivity.

Malo Bourgon - master's in general engineering, no publications. Seems to work in HR and not as a researcher.

Alex Vermeer - unspecified degree in general engineering. No publications; seems to manage the website.

Benja Fallenstein - undergrad in math. Might actually have two published papers on hypertext implementations way back in 2001-2002; hard to tell. There is certainly a Benja Fallenstein who published while affiliated with the University of Bielefeld, but the one at MIRI got his degree at the University of Vienna.

Nate Soares - undergrad in CS, no published papers. Worked at Google, so he can probably actually code? Then again, he didn't work at Google very long.

Katja Grace - ABD PhD student in logic, no published papers.

Rob Bensinger- BA in philosophy, no published papers.

These are the direct employees; there are other "research associates" who just seem to be affiliated. Note: none of these people have extensive publication records IN ANYTHING. Most have no CS qualifications and no history of programming anything. The CS master's appears to have done combinatorics/applied math more than actual code-based CS. Yudkowsky has very carefully avoided hiring anyone who can call him out on his bullshit - most of his hires were affiliated with his robo-cult for years before he hired them (and were roped into the robo-cult before they learned enough to know better).

I think it's like Scientology - after you've spent a ton of money and effort learning, cognitive dissonance is too strong for someone roped in to realize it's bullshit.

su3su2u1 fucked around with this message at 03:44 on May 15, 2014

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I'd like to argue that the core idea that he has, the core idea of a Bayesian reasoning system, is not so unreasonable that it should be dismissed out of hand.

Sure, a lot of Bayesian ideas are incredibly useful when you have the right application. The key is to have the right application. I make a lot of statistical models for work (like many physicists I work for an insurance company predicting high risk claims), and sometimes Bayesian modeling really is the best way to go. Those situations are rare though, and if I insisted on Bayes as the one true probability model, I'd be fired.

Like the biases though, the ideas mostly aren't his. Yudkowsky lifted his whole Bayesian framework straight from Jaynes (Yudkowsky's contribution is probably pretending to apply it to AI?). See: http://omega.albany.edu:8008/JaynesBook.html I think he is even explicit about how much he owes Jaynes.

quote:

What I'm lamenting is the loss of someone with great enthusiasm for the work. I'm just some guy, you know? I do my best, I work as hard as I can, and at the end of the day, there's an idea in natural language generation that wouldn't have been invented yet without me. It's not that I'm brilliant; I was able to do that just because I have both enthusiasm and a work ethic. Yudkowsky has the former and not the latter and it makes me sad sometimes.

Sure. I did my PhD in physics, and I don't think you need brilliance; you need fortitude and a work ethic. I put in the time, did the slog, and at the end of the day got some results. I'm willing to bet both of us have published more peer-reviewed papers than Yudkowsky's entire institute. I actually think Yudkowsky is a pretty bright guy, he just somehow fell into a trap of doing cargo cult science. It almost looks like research, it almost smells like research...

In an alternate reality where he got off his rear end and took a few courses, he might have accomplished something. The problem is he already thinks he is a PhD-level researcher (the world's foremost expert on Friendly AI, after all), and already leads a research team.

quote:

I really ought to go read Kahneman, then.

The classic text is probably Judgment Under Uncertainty: Heuristics and Biases. Kahneman and Tversky kicked off a ton of literature in the late 80s that is fun to read.

And because I agree we may have bored others, some quotes. Here we find Yudkowsky arguing that studying AI makes you less qualified to work on AI:

Yudkowsky posted:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.

Here is Yud on the basilisk that so many in this thread seem to enjoy:

Yudkowsky posted:

Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
[snip]
(For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)

And here we have him rejecting global warming as an existential risk in favor of evil AI:

Yud posted:

Having seen that intergalactic civilization depends on us[Yud's institute], in one sense, all you can really do is try not to think about that, and in another sense though, if you spend your whole life creating art to inspire people to fight global warming, you’re taking that ‘forgetting about intergalactic civilization’ thing much too far.

su3su2u1 fucked around with this message at 07:28 on May 15, 2014

su3su2u1
Apr 23, 2014
This is amazing - some newcomer to LessWrong, not realizing that the Harry of Yud's fanfic is a total self-insert, offers some armchair psychology diagnosing Harry as a narcissist:

http://lesswrong.com/lw/jc8/harry_potter_and_the_methods_of_rationality/axjq

Algernoq posted:

Grandiose sense of self-importance? Check. He wants to “optimize” the entire Universe

Obsessed with himself? Check. He appears to only care about people who are smarter or more powerful than him -- people who can help him
...
Goals are selfish? Check. Harry claims to want to save everyone, but in practice he tries to increase his own power most quickly
...
Troubles with normal relationships? Check.
...
Becomes furious if criticized? Check.
..
Has fantasies of unbound success, power, intelligence, etc.? Check
..
Feels entitled - has unreasonable expectations of special treatment? Check.
..
Takes advantage of others to further his own need? Check.

su3su2u1
Apr 23, 2014

Djeser posted:

Ideally, when you go into cryo, your body is frozen slowly enough to prevent the formation of ice that will rupture cells.

I'm going to be a horrible pedant here, but you want the body frozen quickly, not slowly. If you can move a substance past its freezing point quickly enough and to a low enough temperature, it won't form ice crystals, because the individual molecules are very cold (and therefore moving very slowly, too slowly to rearrange themselves into ice crystals in a reasonable period of time).

Anyway, on cryonics:
The best technology available today messes up something as small as a rabbit kidney. The 'successful' transplant of a kidney resulted in the rabbit losing something like 1/5 of its weight, and after a month it was drinking less and urinating less (symptoms of kidney failure, not surprisingly). The researchers declared success, claimed the rabbit could live indefinitely (in reality, its serum creatinine, a marker of kidney function, was high enough that death was probably imminent), and euthanized it in order to study the kidney. They found that the kidney was messed up; their best guess was that cracking/fracturing had damaged it: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781097/

So what can we learn from our rabbit kidney?
1. In half an hour of cooling the kidney had substantial damage from a probable fracture (it was only below the glass transition for a few minutes). This damage was not readily apparent from inspection. What does this mean for humans at cryonics facilities? The kidney was kept just below the vitrification temperature for only a few minutes. Human brains are kept at liquid nitrogen temperature for years. Liquid nitrogen is substantially colder than the kidney ever was. As an object gets colder, it gets more brittle. And being frozen for decades is lots of time for little bumps (mechanical stress) to turn into large fractures.

2. Despite top-of-the-line cryoprotectant, the kidney had some damage from ice crystals. A rabbit kidney is way smaller than a brain, and as objects get bigger, surface area grows more slowly than volume. This makes large objects harder to cool quickly: you dump them in the cold, and the outside freezes nice and fast, but the middle takes longer to cool. If the kidney had some ice crystal formation, a brain almost certainly will.

So basically, even if you grant cryonicists most of their assumptions, they're still mushifying brains.

Of course, they believe "molecular nanotechnology" will solve all of their problems. Molecular nanotechnology is, of course, the transhumanist word for magic. I'll maybe write a post about the ridiculousness of the nanotech catch-all if I get bored later. (Nanotech is literally at the core of every transhumanist argument about the future.)

su3su2u1 fucked around with this message at 05:47 on Jun 21, 2014

su3su2u1
Apr 23, 2014
So, to take this in a different direction, let's talk molecular nanotechnology (this is probably going to be long, I apologize). It's Less Wrong's version of magic and their scientific trump card.

Lots of Less Wrong conversations go like this: "Cryonics can't work because of..." "BUT NANO! GO AWAY LUDDITE!" (Here is one example of Yudkowsky pulling "molecular probe" out to argue about cryonics: http://lesswrong.com/lw/do9/welcome_to_less_wrong_july_2012/8n54 )

Even AI is dangerous because it might solve molecular nanotech.

Here is a hilarious example of how powerful the magic nanotech is in the LessWrong worldview - a former Singularity Institute employee, Anissimov, argues that nanotech is so powerful the only solution is to bring back the monarchy: http://www.moreright.net/reconciling-transhumanismand-neoreaction/

So what is this magic? Quite simply, Drexler's idea is that we can take the mechanical engineering manufacturing paradigm and scale it down to the very tiny. Imagine using molecular-sized gears and grippers to position individual atoms and make new molecules. Imagine, if you will (as Anissimov says), a 3D printer capable of rearranging the atoms in a material to build basically anything. After all, proponents tell us, biological systems exist and we are full of biological nanomachines. WHO CAN ARGUE WITH THAT?

The important point I want to make is that things are very different at small scales, but first I have to take a tiny digression into temperature. Consider an everyday-sized object, like a simple pendulum (a small weight tied to a rope, swinging back and forth). As it swings back and forth, it'll slow down and eventually stop. So what happened? Where did the energy go? We can think of the rope as lots of individual particles held together by springs (the springs represent atomic bonds). When the pendulum is swinging, all the springs have to move in unison. Over time, little nudges build up and the different springs start to move against each other, and as the individual springs jostle around more and more, they take energy from the swinging motion, so the swinging slows down. The system reaches equilibrium when all the vibrational and rotational modes are sharing roughly equal amounts of energy.

A really important formula in thermodynamics tells us that, once the system is in equilibrium, the probability that a given mode has an energy E is proportional to exp(-E/(kT)). k is Boltzmann's constant, and T is the temperature of the system. kT for a system at room temperature is on the order of 10^(-21) joules, which is tiny compared to everyday energy scales. The probability that the pendulum spontaneously starts swinging is therefore proportional to something like exp(-10^21), which is basically 0.

However, for very small objects, this becomes very different from everyday experience. Something as simple as a ratchet can't work in equilibrium (see http://en.wikipedia.org/wiki/Brownian_ratchet ). For rotational modes of molecules, the typical energy is something like 10^-4 eV, so kT is substantially larger, and the probability of randomly starting to rotate is quite high (tens of percent). The same is true for random straight-line motion. Typical molecular spring energies are a few tenths of an eV, so the probability of randomly vibrating, while small, is non-trivial.
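
Here are those numbers plugged in, purely as illustration (my energies are order-of-magnitude guesses, same as in the paragraph above):

code:

# Boltzmann factors exp(-E/kT) at room temperature for a desk-sized pendulum
# versus typical molecular modes.
from math import exp

k = 1.381e-23                        # Boltzmann's constant, J/K
T = 300.0                            # room temperature, K
kT_eV = k * T / 1.602e-19            # ~0.026 eV

E_pendulum_J = 1.0                   # ~1 joule to set a visible pendulum swinging
print(E_pendulum_J / (k * T))        # ~2e20: exp(-that) is zero for all practical purposes

E_rotation_eV = 1e-4                 # typical molecular rotational mode
E_vibration_eV = 0.3                 # typical molecular "spring"
print(exp(-E_rotation_eV / kT_eV))   # ~1: rotations get thermally excited all the time
print(exp(-E_vibration_eV / kT_eV))  # ~1e-5: rare, but it happens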

Similarly, at the atomic level, Brownian motion is pretty important. Essentially, everything is constantly buffeted by the molecules in the air. The mean free path in air at room temperature (at atmospheric pressure) is about 70 nm, and a small molecule is about 1 nm across. This means that when a molecule is traveling in a random direction, it'll manage to travel about 70 times its own length before it slams into something and changes direction.

There are also Van der Waals forces. At small distances, the fact that molecules aren't featureless neutral objects, but are made of electrons and protons, causes a strong attractive force. What happens is that the electrons in one molecule will try to move far away from the electrons in the other. This turns each neutral molecule into something like a dipole, and you get a configuration between the two molecules of the form +-/+-. This gives an attractive interaction that grows quickly as the objects get close (the potential goes like 1/r^6).

Now, imagine what manufacturing would look like if your gears rotated randomly, if the distances between gear teeth fluctuated with non-trivial probability, if the materials you put down while working kept jumping around and rotating at random, and if everything (your gears, your grippers, your working materials) were coated in incredibly sticky glue. My point here isn't that it's totally impossible - my point is that the paradigm is pretty obviously wrong. The small-scale world is different enough that it doesn't make sense to try to impose the way we manufacture everyday goods onto manipulating it.

BUT, the LessWrongian might ask- what about biology? Good question- biology operates completely differently. Everything is wet, and the unique properties of water are an integral part of how everything works. Systems of membranes create and maintain different molecules in different concentrations and those concentration gradients are used to power machines. The degree of specialization is very high, with lots of proteins doing lots of different things. "Real" nano-tech will look more like the chemistry of enzymes in water.

None of this is to disparage the very real nano-materials people are building today (for instance http://en.wikipedia.org/wiki/IBM_(atoms) ). But these are materials that are being designed either through normal means or by manipulating individual atoms with (relatively) huge devices like atomic force microscopes. It's low-temperature, ultra-high-vacuum work, nothing like the transhumanist vision.

But haven't degrees been awarded for Drexler's stuff? Aren't there active researchers? Basically none of the active researchers have actual PhDs in a relevant subject. Drexler does have a PhD, but it's from MIT's School of Architecture and Planning (specifically, the Media Lab; anything crackpotty that comes out of MIT comes out of the Media Lab). Like all things transhumanist, the entire field is basically a (perhaps unintentional) con game.

su3su2u1
Apr 23, 2014
I think Goodreads and Wikipedia have a problem dealing with groups that have a high internet presence but little actual importance. Yudkowsky can unleash his tiny army of LessWrongians to review on Goodreads, edit TV Tropes, and make sure he maintains a page on Wikipedia (a page that remains completely free of criticism).

The handful of people who stumbled upon HPMOR and hated it get drowned out by his built-in audience (you can tell which reviews are theirs: they refer to Yudkowsky as an AI researcher or a decision theorist).

su3su2u1
Apr 23, 2014
So this is basically self-promotion, but I've started reading HPMOR chapter by chapter and blogging my rage.

http://su3su2u1.tumblr.com/

su3su2u1
Apr 23, 2014

Pidmon posted:

What tag do you use for your blogging? Because if you add /tagged/[blah blah blah]/chrono people can read in order.

I added some navigation links to the top of the tumblr, one for all the MOR posts, and one for them all in chronological order.

su3su2u1
Apr 23, 2014

The Unholy Ghost posted:

Okay, so I posted earlier in here about how I didn't understand the hate for this guy, and now I understand even less. Everyone here told me that HPMOR goes completely bonkers or whatever and that Harry summons Carl Sagan as a patronus, but...

Well, I'm at Chapter 46, and Harry's patronus is a human in general, not Carl Sagan. I still haven't seen anything offensive yet, and I'm just overall feeling kind of disappointed that this thread is working so hard to hate a perfectly fine, somewhat eye-opening story.

So I'm blogging my experience of reading HPMOR here: http://su3su2u1.tumblr.com/tagged/Hariezer-Yudotter/chrono

Basically none of the science mentioned in HPMOR is correct. I'm 27 chapters in and have yet to encounter anything eye-opening, just a lot of cloying elitism and really poor science.

su3su2u1
Apr 23, 2014
I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this:

Yud posted:

I delay the formal presentation of a timeless decision algorithm because of some
significant extra steps I wish to add

The paper goes on for 10 more pages without presenting a timeless decision algorithm, and then ends. WHY? WHY WOULD ANYONE THINK THIS COUNTS AS A PAPER?

su3su2u1
Apr 23, 2014

SolTerrasa posted:

This keeps bothering me. I also read that paper, but I assumed I must have missed or skimmed over the math, because legitimately that theory requires well under a page of math. I studied decision theory, my thesis had a strong decision theory component, and his new theory is just not that complex.

I know. He spends 100 pages making his decision theory seem incredibly complicated, and in the end can't be bothered to spell it out explicitly. I'd be surprised if an actual paper containing a formalized version of his theory would run more than 5 pages. I'd also be surprised if it merited publishing.

Btw, I'd love to see more on the Yud/Hanson debate because otherwise I might find myself actually reading it.

su3su2u1
Apr 23, 2014

Big Yud posted:

I'm incredibly brilliant and yes, I'm proud of it,
and what's more, I enjoy showing off and bragging about it. I don't know
if that's who I aspire to be, but it's surely who I am. I don't demand
that everyone acknowledge my incredible brilliance, but I'm not going to
cut against the grain of my nature, either. The next time someone
incredulously asks, "You think you're so smart, huh?" I'm going to answer,
"*Hell* yes, and I am pursuing a task appropriate to my talents." If
anyone thinks that a Friendly AI can be created by a moderately bright
researcher, they have rocks in their head. This is a job for what I can
only call Eliezer-class intelligence.

http://www.sl4.org/archive/0406/8977.html

su3su2u1
Apr 23, 2014

Political Whores posted:

So is what I assume is extreme sexual frustration/repression a big part of being a rationalist?

I'll just respond with this:
http://slatestarcodex.com/2014/09/27/cuddle-culture/

su3su2u1
Apr 23, 2014
So the Slatestarcodex guy responded to something I wrote on tumblr with this

http://slatestarscratchpad.tumblr.com/post/99067848611/su3su2u1-based-on-their-output-over-the-last

quote:

Possibly you’re more active in the fanfic-reading community than in the Lob’s Theorem-circumventing community?

Over the last decade:
1. A whole bunch of very important thought leaders including Stephen Hawking, Elon Musk, Bill Gates, Max Tegmark, and Peter Thiel have publicly stated they think superintelligent AI is a major risk. Hawking specifically namedropped MIRI; Tegmark and Thiel have met with MIRI leadership and been convinced by them. MIRI were just about the first people pushing this theory, and they’ve successfully managed to spread it to people who can do something about it.

2. Various published papers, conference presentations, and chapters in textbooks on both social implications of AI and mathematical problems relating to AI self-improvement and decision theory. Some of this work has been receiving positive attention in the wider mathematical logic community - see for example here

3. MIT just started the Future of Life Institute, which includes basically a who’s who of world-famous scientists. Although I can’t prove MIRI made this happen, I do know of that of FLI’s five founders I met three at CFAR workshops a couple years before, one is a long-time close friend of Michael Vassar’s, and I saw another at Raymond’s New York Solstice.

4. A suspicious number of MIRI members have gone on to work on/help lead various AI-related projects at Google.

5. Superintelligence by Bostrom was an NYT bestseller reviewed in the Guardian, the Telegraph, the Economist, Salon, and the Financial Times. Eliezer gets cited just about every other page, and in MIRI HQ there is a two-way videoscreen link from them to Nick Bostrom’s office in Oxford because they coordinate so much. Searching the book’s bibliography for citations of MIRI people I find Stuart Armstrong, Kaj Sotala, Paul Christiano, Wei Dai, Peter de Blanc, Nick Hay, Jeff Kaufman, Roko Mijic, Luke Muehlhauser, Carl Shulman, Michael Vassar, and nine different Eliezer publications.

My impression as an outsider who nevertheless gets to talk to a lot of people on the inside is that their two big goals are to work on a certain abstruse subfield of math, and to network really deeply into academia and Silicon Valley so that their previously fringe AI ideas get talked about in universities, mainstream media, and big tech companies and their supporters end up highly placed in all of these.

As one of the many people who doesn’t understand their math, I can’t comment on that. But the networking - well, I don’t know if it’s just an idea whose time has come, or the zeitgeist, or what - but I bet that ten years ago, I could have made you bet me at any odds that this weird fringe theory called “Friendly AI” invented by a guy with no college degree wouldn’t be on the lips of Elon Musk, Stephen Hawking, half of Google’s AI department, institutes at MIT and Oxford, and scattered throughout a best-selling book.

Networking is by its nature kind of invisible except for the results, but the results speak for themselves. As such, I’d say they’re doing a pretty impressive job with the small amount of money we give them.

Anyone want to weigh in on a response? In particular, SolTerrasa, how do you feel about the "suspicious number" of former MIRI employees "leading AI projects" at Google?

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I definitely do, but I'm kinda busy working on AI projects at Google at the moment. Maybe tonight?

I'll hold off replying until you have a chance to weigh in.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I'm unlikely to be able to respond to it, though, so it won't help for your Tumblr war (please don't have a Tumblr war, and if you do, please don't involve me).

I was just curious what he could possibly mean. Like, it seems like MIRI/SIAI has only ever had a handful of researchers, so how many could really have moved on to Google?

I don't plan to have a tumblr war. I was just curious about your take. I also talked to another friend of mine who works at Google in Boston.

su3su2u1
Apr 23, 2014
Today I learned from the Slatestarcodex post that

quote:

Nate/Benja/Eliezer are spending the rest of 2014 working on material for the FLI AI conference, and on introductory FAI material to send to Stuart Russell and other bigwigs.

Anyone want to put odds on Russell and "other bigwigs" finding this at all interesting?

su3su2u1
Apr 23, 2014

Cardiovorax posted:

How many of those self-taught people could come up with a concept like the Random Forest, though?

I've actually seen something like a random forest get (not perfectly) implemented by actuaries and engineers who didn't realize what they were doing at all. It seems like "one of these decision trees isn't working, let's try training a poo poo ton of these things" (or GLMs instead of decision trees, in the case of actuaries) is a pretty common thing to try.

Now, could the people who randomly did this tell you exactly what their model is doing statistically, or optimize the parameters, etc.? No.
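
For the curious, the 'just train a ton of trees' version looks something like this toy sketch (mine, on fake data - nobody's production model):

code:

# Crude home-rolled "random forest": bootstrap the rows, train a bunch of
# decision trees, and average their votes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(100):
    idx = rng.integers(0, len(X), size=len(X))            # bootstrap sample of the rows
    trees.append(DecisionTreeClassifier(max_features='sqrt').fit(X[idx], y[idx]))

votes = np.mean([t.predict(X) for t in trees], axis=0)    # average the trees' votes
print("training accuracy:", ((votes > 0.5) == y).mean())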

su3su2u1
Apr 23, 2014

Spoilers Below posted:

I'm saying:

1) If the computer is 100% accurate, it will never be incorrect about which box to put the money in.

2) If the computer is wrong, even once, it is not 100% accurate.

3) If the computer does not manage to account for something that fucks with the box choice, something which might not be accounted for in the initial brain scan or whatever, the computer cannot be called 100% accurate.

4) There are a poo poo ton of things that the computer cannot account for based purely on a brain scan or a personality test which might cause a person to act out of character, leading to less than 100% accuracy. (i.e. In between the quiz and the box choice, the person gets a phone call stating that their significant other has been kidnapped, and if they don't pay $1,001,000 to the kidnappers, their SO will be killed. Fantastical, but not beyond the realm of possibility. Exactly the kind of prank a dick rival researcher would pull)

There are several easy ways to show that a 100% predictor can't exist:
1. A human can decide what to do based on a coin flip or some other random bit (I do this all the time when deciding what to do for lunch, for instance).
2. If you have a 100% accurate super-intelligence, you can build another one that attempts to do the opposite of whatever the first one predicts it will do. This seems like a fake "gotcha" thing, but I deal with predictions that change the outcome of what's being predicted all the time at work (predict an insurance claim will be expensive, and the incentive to settle it early goes up, which means it ends up not being expensive...).
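
Point 2 fits in a few lines of Python (a toy of my own, obviously): wrap any would-be perfect predictor in an agent that asks for its prediction and then does the opposite.

code:

# The 'contrarian' agent: whatever the oracle predicts, do the other thing.
def contrarian_agent(predictor):
    def choose():
        predicted = predictor(choose)          # ask the oracle what choose() will return
        return "two-box" if predicted == "one-box" else "one-box"
    return choose

def stub_predictor(agent):
    # Stand-in for the supposedly 100% accurate machine. A real one either
    # recurses forever trying to simulate the agent, or gets this agent wrong.
    return "one-box"

agent = contrarian_agent(stub_predictor)
print(agent())                                 # 'two-box': the 'perfect' predictor just missed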

su3su2u1
Apr 23, 2014

bartlebyshop posted:

It's truly incredible to me that people are still donating money to an organization of ~10 "research associates" and EY and Luke Muehlhauser that has been as or less productive than one unsuccessful grad student. Who would have cost over their 10 years to degree as much as one of MIRI's people.

Even more incredibly the subset of people that do this really, really claim to believe in being "effective."

I bet a grad student over 10 years would cost less than one of MIRI's people. They seem to pay the full-timers a bit more than a market postdoc.

su3su2u1
Apr 23, 2014

bartlebyshop posted:

Someone (possibly you?) at one point dug up MIRI's tax returns and it turns out EY is getting paid north of $100K a year to write about one fanfiction chapter every 3 months. I know postdocs who make that much who've published more papers in the last year than MIRI has ever.

I wish I could get hired for more than double my current salary based solely on posts to LessWrong.

Particle physics postdocs make like 50k or so - half a MIRI researcher, give or take. And of course they publish more than MIRI, because it's basically impossible not to.

EDIT: Also, can anyone with more in-field knowledge than I have tell me if this is as silly as it looks: http://intelligence.org/files/CorrigibilityTR.pdf

Is there something interesting or profound hidden in there that I'm missing?

su3su2u1 fucked around with this message at 10:13 on Nov 19, 2014

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I just read it. I'm not sure what they thought was cool about their result, which is basically "we don't know!", but I found it... Well, not offensively awful. It's possible to enjoy MIRI stuff like that, kind of like science fiction. if you grant their premises (*all* of them, including many-worlds and sustainable improvement cycles and whatever else), then posit that there already exists an intelligent utility maximizing agent, how *would* you convince it to shut down if need be? It's "work" that should be done (and has been) by the likes of stoned undergrads, but hey.

Also one of their "problems" is "wait, what if we accidentally build Fraa Jad from Anathem", so you can see why it's hard to take too seriously.

It seems like the paper is:
1. Here is a problem from our sci-fi scenario.
2. Here is the exact same problem restated in terms of a broad math formalism.
3. Here are some trivial results relating to the formalism (not the problem).
4. This problem sure is hard! Look how unsolved it is!

I feel like I must be missing something subtle, because the whole paper is contained in the sentence "it might be really hard to turn off a super-powered AI" and the math looks like window dressing? Like... where is the actual result?

su3su2u1
Apr 23, 2014
This might be the dumbest paper I've ever read (it's the only Yud arXiv paper):

http://arxiv.org/abs/1401.5577

In it they note that they can make programs that will cooperate on the one-shot prisoner's dilemma, if they can see each other's source code ahead of time.

So if players are allowed to coordinate, coordination problems go away? WHO'D HAVE THUNK IT?

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I was thinking about that paper today, apropos of nothing, and I realized that it's nontrivial to implement in a practical way! It's one of those things where you can theorize about it with no effort at all, but if you try to write the actual code then you have more concerns. For example you need to be able to avoid the infinite loop scenario where you check the code of the other agent which checks your code which checks its code, etc. I wonder if Yud actually wrote it.

I don't think they have any actual working code? It just seems like a bunch of super-trivial solutions to the prisoner's dilemma, e.g. "FairBot," whose definition is "cooperate with anything that will provably cooperate with you," and "PrudentBot": "cooperate with anything that will cooperate with you UNLESS it will cooperate even if you defect." Then they discuss that while these are in-principle unexploitable, real-world implementations are probably exploitable.

If they had dealt with the constraints of the real world, they might have stumbled onto something interesting in terms of how to actually go about doing the proving.

I suppose that's why it's gone nearly a year with no citations - there is nothing there.
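
For what it's worth, here's a toy simulation-based version (mine - nothing like the paper's proof search in Peano arithmetic). The trick for dodging the infinite 'you simulate me simulating you' regress is to swap yourself out for an always-cooperate stub before running the opponent:

code:

# Toy one-shot prisoner's dilemma bots: "C" = cooperate, "D" = defect.
def defect_bot(opponent):
    return "D"

def cooperate_bot(opponent):
    return "C"

def fair_bot(opponent):
    # "Cooperate with anything that will cooperate with you": run the opponent
    # against an always-cooperating stand-in for myself and copy its answer.
    return "C" if opponent(cooperate_bot) == "C" else "D"

for a, b in [(fair_bot, cooperate_bot), (fair_bot, defect_bot), (fair_bot, fair_bot)]:
    print(a.__name__, "vs", b.__name__, "->", a(b), b(a))
# fair_bot cooperates with cooperators, defects against defect_bot, and two
# fair_bots cooperate with each other without looping forever.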

su3su2u1 fucked around with this message at 04:50 on Nov 22, 2014

su3su2u1
Apr 23, 2014

SolTerrasa posted:

They must have done something. I'm sitting around waiting for my mapreduces to finish so I wrote up working code for FairBot and DefectBot in 25 lines of python, and three of those are comments, and seven more are tests. I believe that they have wrong beliefs about the probability of a particular kind of singularity, but they aren't all a bunch of idiots, they could do this.

I mean... you say that they "must have done something", but LOOK AT THE PAPER! If they did do something, why isn't it referenced anywhere in the paper?

Did you write FairBot in the way they suggest - treating agents as formulas in Peano arithmetic and searching for equivalence proofs in towers of formal systems? Naively, I'd expect that to be totally impractical. Of course, they need the silly methodology because they insist on provability, because they claim the recursion problem is intractable.

su3su2u1 fucked around with this message at 05:52 on Nov 22, 2014

su3su2u1
Apr 23, 2014

SolTerrasa posted:

No, I used monkeypatching. Their approach reduces to "can I prove that if I cooperate, the other agent will cooperate?" So FairBot examines the memory of the other bot, then patches in guaranteed cooperation to all those instances, then checks if the other bot would cooperate, then cooperates if it does. Pretty boring, but it works and I cannot fathom why you'd try it their way instead.

So it's a typical MIRI paper:

They give up on the practical problem as impossible in their introduction. Instead, they have roughly 20 pages of a really silly formal system, with math showing trivial things (if you write most of their theorems out in English words, they seem trivial). Their formalism is of zero practical importance, and does more to obscure than enlighten.

Meanwhile, the practical problem can be solved in a fairly straightforward fashion if you are a bit clever about it.

It's like that 100-page decision theory paper Yud wrote - 100 pages of totally unnecessary background, and he fails to formalize his decision theory.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts?

Kurzweil gave a talk when I was an undergrad, and I stayed after for a conversation he had with some professors. His talk was fine, but while interacting with the professors he went full-crackpot pretty fast. Uploading brains in 10 years (this was significantly more than 10 years ago), living forever, etc.

su3su2u1
Apr 23, 2014

Nessus posted:

What is with all this "aspiring rationalist" poo poo anyway? Do they think they have to somehow completely internalize whatever ideological touchstones they've decided, and then once they've accumulated enough merit, excuse me blessings, excuse me mitzvot, excuse me bootstraps, excuse me critical thinking points, they will suddenly ascend into Science Valhalla with Carl Sagan saying "Welcome, thou good and faithful servant"?

It would take very little for them to start talking about Clearing the planet.

They don't call it "Clearing the planet"; they instead call it "raising the sanity waterline," AND THEY TALK ABOUT IT ALL THE TIME.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

Hilariously, he only reads the parts of this thread that get filtered through his cult members. He thinks that we're impressed by him and / or scared of his mind control powers.

What do the cult members think?

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I do actually think that Yud is irredeemably nuts (not crazy, just fifteen degrees shifted from reality), but I don't think you're wrong. My intellectual development went via lesswrong, after all. But it'll be much easier to convince Catbug's philosopher friend that Yud is a hack than that he's crazy. Try this: Yudkowsky has had grandiose ideas since he was 17, and in those two decades he has implemented zero of them. He is, at best, a popularizer of rationalist principles, though he conflates them with his own singularity-seeking views to an extent that should be alarming. The subset of his work which is well-done is not original; the subset of his work which is original is panned unanimously among recognized experts.


I first read it here:

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

A GiveWell staffer says don't donate to SI, citing a theft of 20% of their operating budget. Less than a year later, SI sells its only asset (the brand of the Singularity Summit) to the now existing Singularity University. Less than a year after that, MIRI forms with many of the same people (minus the thief).

Do we know who the thief was? Is it someone still transhumanistly involved?

su3su2u1
Apr 23, 2014

SolTerrasa posted:

This is really super true and I hadn't realized it until you said it. He reminds me of the more aggravating of the "gifted" kids in high school: rightly convinced of his superiority but never knowing how slim the margins are. One of the terms he proposed for "muggles" (as opposed to himself) was "Gaussians", as in, non-outliers.

It's also empirically true: One of the things I found while incredibly bored last night was his SAT score. I don't know why he took the SAT, but he did. I don't know why he was bragging about it either. Anyway it's 1480. That's pretty good, but it's like, 97th percentile. That's only two standard deviations better than average. Congrats on being a reasonably bright child I guess, but blathering on about being a one-in-six-billion genius, and writing a 3k word essay about how we should call normal people Gaussians is just not justified by your 1480 on the SAT.

E: the reason I mention this is because this is A Thing That Happens with some American gifted kids. You get put in special classes because regular classes are boring you, but you're very clever and you compete favorably with all the kids in the special classes, too. So you don't have any perspective or any way to judge relative intelligence. Obviously you're smarter than everyone. Everything seems to confirm it: your grades are great because you're brilliant or they're mediocre because you were too bored. You score well on standardized tests because you're brilliant or you score poorly because standardized tests are bullshit, look at the research everyone knows it.

In the normal course of things, it goes "then college happens" and you go to an entire college full of clever people and realize that you're now average, or slightly above. Even if you're at a mediocre school, the professors have dozens of years more knowledge than you. It takes *really* arrogant people to get through college believing that they're the smartest person on earth.

Yudkowsky never got over that high school "I am literally the greatest ever" mentality, and just the same, everything seems to prove it to him. Flare died without even an initial tech demo? Well, AI isn't *about* programming languages. Unpublished? Well, mainstream publications aren't ready for him. Can't actually bring himself to finish a Harry Potter fanfic, for God's sake? Well, it's a side effect of being so brilliant, I can't get much sit-down work done at a time.

Yes, so much this. He never went to high school, college, or grad school, so he is stuck in the "I was the smartest kid in middle school" mentality. He also has never been forced to learn anything he didn't want to - so for anything he'd find challenging, he can just convince himself it's not worth it, and stop.

So what you get is someone decently smart who has picked up bits and pieces of things, but never anything in any real depth. So he uses jargon mostly correctly, but in slightly off ways. Every now and then he says something ridiculous (like the NP-hard thing earlier in the thread), and you realize "wait... he doesn't REALLY get any of this." It's like talking to someone who has read every word of a textbook, but has never actually done any of the exercises.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

As in Conway's Game of Life? Cellular automata are super fun, but what a strange choice for "literally the smartest human".

You didn't see that Less Wrong post? It's one of the Yudiest:

http://lesswrong.com/lw/ua/the_level_above_mine/

Basically, Yud asked a summer student he hired "who is the most famous math person you've met, and is he as smart as me?" And the student was like "John Conway was at math camp, and he was smarter than you." Another amazing piece I'll just quote:

Yud the magic dragon posted:

It spoke well of Mike Li that he was able to sense the aura of formidability surrounding Jaynes. It's a general rule, I've observed, that you can't discriminate between levels too far above your own. E.g., someone once earnestly told me that I was really bright, and "ought to go to college". Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess.

Someone told Yud "you are bright and you ought to go to college," and Yud's response was basically "you are too stupid to perceive that I've mastered college despite never having gone."

But Yud, if you are listening: you are a bright guy, you ought to go to college.

su3su2u1
Apr 23, 2014
I'm very drunk, but I had an insight...

Let's say you started an organization whose goal was to build a godlike AI... you've taken the money... but you don't like work, and this seems really hard. Why not just write a metric ton of words to the effect of "it would be super dangerous to build this thing I promised, so I won't until I finish mathwanking over here in the corner."

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I assume a philosophy Masters is very different from an engineering or math Masters. It's apparently from a prestigious school, University of Sao Paulo. And it looks like (though I might be wrong) philosophy has a long tradition of writing analyses of other people's work.

And let's be honest, the real title of nearly every Masters thesis in the world is "A Very Long Document That At Most Ten People Will Ever Read". Standards tend to slip a bit, and if you can prove that you tried really hard, your advisor is probably going to approve it.

I gotta quote that Patreon, though. If we collectively pay this man $20k/mo, he will...


Isn't his degree in philosophy? He claims to be a world-class altruist, but doesn't see the conflict between that and gating his ~True Form~ behind a paywall?

E: I would not read A Very Long Document from this man, and I do not think he is a Hero deserving of a Sidekick (or a Dragon)

What is effective about this? He is just siphoning money from the EA movement...
