Cingulate
Oct 23, 2012

by Fluffdaddy
Can you be a bit more specific on where you are and what you want?

Twerkteam Pizza
Sep 26, 2015

Grimey Drawer
What about Sociologists? That's my main area of study; English Literature is my other major because I am at the height of privilege.

Also wouldn't a house full of processors basically melt?

Cingulate
Oct 23, 2012

by Fluffdaddy

Twerkteam Pizza posted:

What about Sociologists? That's my main area of study; English Literature is my other major because I am at the height of privilege.

Also wouldn't a house full of processors basically melt?
Imagine this meeting this. You can actually start doing Machine Learning at home with last year's GeForce, and while it wouldn't be cutting-edge ML, it could still be a novel approach to a lot of problems.

Sociology ... well, again, you could use sentiment analysis. Do you want to know what people are tweeting about? Like, a few million Tweets about Donald Trump? Are Tweets by people with stereotypically "White" names more likely to positively mention Trump? Are people more likely to tweet positively about Trump the day after a terrorist attack? That's stuff where you'd use ML.
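
Just to make that concrete, here's a minimal sketch of the kind of pipeline you'd throw at it - scikit-learn, with three made-up toy tweets standing in for a real hand-labeled dataset, so treat it as an illustration of the shape of the thing rather than a working study:

code:

# Toy sentiment classifier: bag-of-words features + logistic regression.
# The tweets and labels below are invented; in practice you'd first hand-label
# (or buy labels for) a few thousand real tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["what a great rally", "this is a disaster", "I love this speech"]
labels = [1, 0, 1]  # 1 = positive mention, 0 = negative

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["great speech tonight"]))  # hopefully [1]

The sociology lives in where the labels and the comparison groups come from (names, timing relative to events), not in the model itself.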

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


Twerkteam Pizza posted:

Also wouldn't a house full of processors basically melt?

We call them data centers and we've gotten okay at managing the heat. If you have the money Amazon will happily let you rent a preposterous number of GPUs and run whatever you want. Although they'll want to upsell you their machine learning service.

Peel
Dec 3, 2007

Cingulate posted:

Can you be a bit more specific on where you are and what you want?

I'm finishing up a physics PhD so I'm comfortable with maths and programming, and I'm interested in resources or texts which will help me achieve or at least get started on a lay familiarity with machine learning, for personal interest. I'm not opposed to spending money if a book isn't easily accessible via library.

Practical stuff like the Karpathy thing is also cool, though my computer is pretty weedy.

Curvature of Earth
Sep 9, 2011

Projected cost of invading Canada: $900

Twerkteam Pizza posted:

Also wouldn't a house full of processors basically melt?

That's pretty much what supercomputers *are*

https://upload.wikimedia.org/wikipedia/commons/7/70/Titan_supercomputer_at_the_Oak_Ridge_National_Laboratory.jpg

That is currently the second most powerful supercomputer in the world.

Cingulate
Oct 23, 2012

by Fluffdaddy

Peel posted:

I'm finishing up a physics PhD so I'm comfortable with maths and programming, and I'm interested in resources or texts which will help me achieve or at least get started on a lay familiarity with machine learning, for personal interest. I'm not opposed to spending money if a book isn't easily accessible via library.

Practical stuff like the Karpathy thing is also cool, though my computer is pretty weedy.
A good book to start with is this one: http://statweb.stanford.edu/~tibs/ElemStatLearn/
The PDF is free. There is also a video series and exercises: http://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/. With a PhD in physics, you'll probably blaze through this, and can start with the more intense stuff.
(Hastie and Tibshirani are major figures here.)

This is a bit on the statistics and R side of things. The flashier, machine intelligence part is neural networks.

As a very practical introduction, you can look at scikit-learn, which is a Python interface to a vast array of machine learning tools (including, with a hack, Google's just-released TensorFlow toolkit for deep learning with neural nets). With scikit-learn, you can start machine learning within minutes. After 15 minutes, you can go to Kaggle.com to compete against other ML people - predict what products people will like based on their history, or recognize individual whales from helicopter pictures.
scikit-learn itself has a no-neural-networks policy, but from there you can easily proceed to Theano/Keras for accessible deep learning.
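
To give you an idea of how low the barrier is, here's roughly what those first few minutes look like - a minimal sketch using one of scikit-learn's bundled toy datasets (exact accuracy will wobble a bit between versions and runs):

code:

# A minimal scikit-learn session: fit a random forest on the bundled digits data.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

digits = load_digits()  # 1,797 8x8 images of handwritten digits
X_train, y_train = digits.data[:1000], digits.target[:1000]
X_test, y_test = digits.data[1000:], digits.target[1000:]

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

Swap RandomForestClassifier for almost any other estimator and the fit/score lines stay the same; that uniform interface is most of scikit-learn's charm.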

Basically, there are two sides of one coin - statistical learning, where a prototypical example would be discovering genes predictive of certain diseases, and people will use R, and machine intelligence, which is what people will call it when computers can speak or recognize images, and people use Python or C or Java. Under the hood, it's all very similar - the Lasso kind of is logistic regression kind of is an SVM kind of is a perceptron kind of is a neural net kind of is a deep net. But the flashier stuff, that's neural nets, and the heavy-lifting, fleshed-out, applied stuff is often statistical learning.

Is that the kind of answer you're looking for?

Cingulate has a new favorite as of 04:02 on Feb 14, 2016

Cingulate
Oct 23, 2012

by Fluffdaddy
If you just want a one-glance impression of what machine learning is currently about, go here: https://www.kaggle.com/competitions
And if you want a one-glance impression of how it does that:

(This is a range of ML tools trying to classify a simulated data set based on just two features - you can see how some classifiers, such as the trees, make a bunch of binary decisions, while others fit simple (Linear SVM, Naive Bayes) or more complex (RBF SVM) functions to the data.)
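
If you want to produce something like that figure yourself, the building blocks are all in scikit-learn; here's a stripped-down sketch (no plotting, it just prints test scores for a few classifiers on a toy two-feature problem):

code:

# Compare a few classifiers on a toy two-feature dataset.
from sklearn.datasets import make_moons
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("naive Bayes", GaussianNB()),
                  ("linear SVM", SVC(kernel="linear")),
                  ("RBF SVM", SVC(kernel="rbf", gamma=2.0))]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))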

Yeah, I know: when Yudkowsky talks about it, it sounds a bit scarier, right?

A really enlightening example is the family of aggregation algorithms (e.g. AdaBoost, RandomForest). Here, a bunch of extremely stupid learners are built, each one deliberately kept stupid, and then they are combined into one superior predictor. One key feature is that this (basically) prevents the tool from simply memorizing the training data. This is pretty cutting-edge stuff.
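
Concretely, "a bunch of deliberately stupid learners combined into one good one" looks something like this in scikit-learn - decision stumps boosted with AdaBoost on a toy dataset, purely illustrative:

code:

# AdaBoost: many one-split "stumps", each weak on its own, combined into a strong classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:700], y[:700], X[700:], y[700:]

stump = DecisionTreeClassifier(max_depth=1)  # a single binary split: very stupid on its own
boosted = AdaBoostClassifier(base_estimator=stump, n_estimators=200)  # newer versions call this "estimator"

print("one stump:", stump.fit(X_train, y_train).score(X_test, y_test))
print("200 boosted stumps:", boosted.fit(X_train, y_train).score(X_test, y_test))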

E: another thing you can see immediately is the "No Free Lunch" theorem at work: no single classifier is simply "better" than all the others. Every classifier is worse at some task than some other classifier. Some will excel at the complex tasks but be beaten at the easier ones, for example.
Currently, machine intelligence is very much about specialization, not generalization.

Cingulate has a new favorite as of 03:51 on Feb 14, 2016

Peel
Dec 3, 2007

Yeah, that looks really cool. Thanks.

e: actually, while we're here, what's the difference between linear svm and linear discriminant analysis? I assume they're two different ways of getting to results of that 'shape'?

e2: the 'neural networks have to be so small they can't learn everything they see' thing is the most interesting thing I've heard all week. It mirrors directly the philosophy of science point that you can trivially describe a 'law' to 'govern' any set of data by just writing the data down again, but it won't have any application outside the data. You want your laws to encompass the data with simpler principles. It's cool when something so abstract appears directly in real work.

Peel has a new favorite as of 04:36 on Feb 14, 2016

divabot
Jun 17, 2015

A polite little mouse!
nostalgebraist's small Yudkowsky shell script should be achieving AI-Foom any minute now

Cingulate
Oct 23, 2012

by Fluffdaddy

Peel posted:

Yeah, that looks really cool. Thanks.

e: actually, while we're here, what's the difference between linear svm and linear discriminant analysis? I assume they're two different ways of getting to results of that 'shape'?
LDA is a direct application of the linear model; it's closely related to PCA, Logistic Regression, and ANOVA. You can look at it as an inverted ANOVA. The point is to find a linear combination of features which maximizes the difference between classes (a generative classifier), i.e., build a linear model/projection of the data, and then use a bit of Bayes to classify based on that. Underlying it is the linear model assumption - we try to find the class means, assume a normal distribution of the data, and so on, and we look for a least-squares solution/minimize the squared error.
It is a very old method - possibly the oldest; it was proposed by Fisher. It has a closed-form solution; this is good if you need to do it a lot for smaller problems, but bad if you need to do it once for a very large problem (as you have to invert massive matrices).

SVM intuitively works "the other way around". It tries to find the hyperplane that separates the two hardest cases from each other (the member of group A that is closest to B, and the member of B that is closest to A; that is, the error to be minimized is the "hinge" loss). Then, it doesn't look at the distances of objects from the populations they're from; it just tells you what side of the hyperplane they land on (that is, it is a discriminative, not a generative classifier). It has to be fit iteratively, and is much younger than LDA.
While linear SVMs work nicely, their true power often comes from nonlinear kernels, such as RBF.
In practice, SVM greatly resembles Logistic Regression, and in fact implementations often share a lot of their code.

Generally, SVM is a very common tool, whereas LDA has largely gone out of fashion. I think there might be something inherent to SVMs that makes them easier to regularize, but maybe I'm wrong here.
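
If you'd rather see the two side by side than take my word for it, here's a rough sketch - both end up drawing a linear decision boundary, they just get there differently, and on easy data the scores will be close:

code:

# Same data, two linear classifiers:
# LDA (generative, closed-form solution) vs. linear SVM (discriminative, hinge loss, fit iteratively).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
svm = LinearSVC(C=1.0).fit(X_train, y_train)

print("LDA accuracy:", lda.score(X_test, y_test))
print("linear SVM accuracy:", svm.score(X_test, y_test))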

Makes sense?

Peel posted:

e2: the 'neural networks have to be so small they can't learn everything they see' thing is the most interesting thing I've heard all week. It mirrors directly the philosophy of science point that you can trivially describe a 'law' to 'govern' any set of data by just writing the data down again, but it won't have any application outside the data. You want your laws to encompass the data with simpler principles. It's cool when something so abstract appears directly in real work.
Funny: this aspect of nets was probably discovered by linguists working in a psychological and CS tradition (e.g. by Elman, here - although this is a further dimension of keeping it small). And there is a long-lasting fight between linguists from that tradition and linguists from the Chomskian tradition. Chomskians hate networks, and statistics, and generally the idea that data-driven probabilistic learning can lead to human-like behavior. And Chomskians argue that in truth, this psychological/CS tradition never proceeds to being an actual theory, discovering actual laws, but always amounts to a redescription of the data (see e.g. Norvig contra Chomsky).

Also, note the k-Nearest Neighbor algorithm, where the model is actually just the data and the rule "look at the k nearest items, with distance measure X". (It's an interesting classifier, but it inherently never learns a model of the data.)
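
To see "the model is just the data" in action: fitting a k-NN classifier literally just stores the training set, and prediction is a lookup plus a vote. A tiny sketch:

code:

# k-Nearest Neighbors: nothing is "learned"; prediction searches the stored training points.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")  # k and "distance measure X"
knn.fit(iris.data, iris.target)             # just memorizes the 150 training points
print(knn.predict([[5.9, 3.0, 5.1, 1.8]]))  # majority vote among the 5 nearest stored points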

Also, I want to repeat that I'm not a statistician, mathematician, computer scientist or ML researcher. I work on cognitive neuroscience, usually of language, and that field has recently started using a lot of ML tools. So this is basically just the perspective of a layman who's trying to apply these methods.

The Vosgian Beast
Aug 13, 2011

Business is slow
https://twitter.com/RichardBSpencer/status/689692099009097729

Silver2195
Apr 4, 2012

https://twitter.com/RichardBSpencer/status/698709070585278464

https://twitter.com/RichardBSpencer/status/698709400261758977

Umberto Eco posted:

3. Irrationalism also depends on the cult of action for action's sake.

Action being beautiful in itself, it must be taken before, or without, reflection. Thinking is a form of emasculation. Therefore culture is suspect insofar as it is identified with critical attitudes. Distrust of the intellectual world has always been a symptom of Ur-Fascism, from Hermann Goering's fondness for a phrase from a Hanns Johst play ("When I hear the word 'culture' I reach for my gun") to the frequent use of such expressions as "degenerate intellectuals," "eggheads," "effete snobs," and "universities are nests of reds." The official Fascist intellectuals were mainly engaged in attacking modern culture and the liberal intelligentsia for having betrayed traditional values.

...

13. Ur-Fascism is based upon a selective populism, a qualitative populism, one might say.

In a democracy, the citizens have individual rights, but the citizens in their entirety have a political impact only from a quantitative point of view -- one follows the decisions of the majority. For Ur-Fascism, however, individuals as individuals have no rights, and the People is conceived as a quality, a monolithic entity expressing the Common Will. Since no large quantity of human beings can have a common will, the Leader pretends to be their interpreter. Having lost their power of delegation, citizens do not act; they are only called on to play the role of the People. Thus the People is only a theatrical fiction. There is in our future a TV or Internet populism, in which the emotional response of a selected group of citizens can be presented and accepted as the Voice of the People.

Because of its qualitative populism, Ur-Fascism must be against "rotten" parliamentary governments. Wherever a politician casts doubt on the legitimacy of a parliament because it no longer represents the Voice of the People, we can smell Ur-Fascism.

Silver2195 has a new favorite as of 00:31 on Feb 15, 2016

Shame Boy
Mar 2, 2010

Cingulate when will you stop talking about boring garbage like actual science and address the real issues like the importance of anime in the future of the white race :colbert:

Cingulate
Oct 23, 2012

by Fluffdaddy

Parallel Paraplegic posted:

Cingulate when will you stop talking about boring garbage like actual science and address the real issues like the importance of anime in the future of the white race :colbert:
I think I have to stop. It feels over. Real life caught up with me.

Somebody on my Facebook liked one of Scott's articles today.

The Vosgian Beast
Aug 13, 2011

Business is slow
Scott seems okay if you only read one or two of his posts here or there.

Patrick Spens
Jul 21, 2006

"Every quarterback says they've got guts, But how many have actually seen 'em?"
Pillbug

You know, I'd mention the whole "ending slavery" thing, but I feel like there's a pretty good chance he considers that a negative.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



That was a major political realignment and over a hundred years ago, though; what have they done that's positive since, say, Eisenhower?

SolTerrasa
Sep 2, 2011

Cingulate posted:

Also, I want to repeat that I'm not a statistician, mathematician, computer scientist or ML researcher. I work on cognitive neuroscience, usually of language, and that field has recently started using a lot of ML tools. So this is basically just the perspective of a layman who's trying to apply these methods.

I wanted to chime in and say thanks for your posts, you're dead on. I wish more laypeople had the same knowledge that you do. Your image of common ML algorithms on different classification tasks was an especially great find. I just finished giving a tech talk to a bunch of college students, and I thought the summary might be a useful bit to append to your posts. I summarize the talk with: "Although AI is incredibly powerful, it's usually less powerful than we intuitively feel it should be, and for any nontrivial task you will always find that your intuitions are overestimating the strength of an AI system, and underestimating the difficulty of the task that that system performs."

Basically, if you're a layperson who doesn't interact with ML (or other AI techniques) much, I cannot stress enough Cingulate's point that it's so easy to get started. Aside from the resources they linked, you might appreciate Caffe, a deep neural net framework mostly used for image classification on ImageNet-style tasks. All you need is access to a Linux box; Amazon will happily rent you one for cheap. Here's the page which provides the list of models they have pre-trained, so that you don't even need the graphics card Cingulate mentioned: http://caffe.berkeleyvision.org/model_zoo.html.
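
For the curious, here's roughly the shape of using one of those pre-trained models, based on Caffe's own classification example - the file names are placeholders for whatever model-zoo entry you download, so treat this as a sketch rather than copy-paste instructions:

code:

# Sketch: classify a single image with a pre-trained model from the Caffe model zoo.
# "deploy.prototxt" / "pretrained.caffemodel" / "cat.jpg" are placeholder paths.
import caffe

net = caffe.Net("deploy.prototxt", "pretrained.caffemodel", caffe.TEST)

# Preprocess the image the way the reference models expect.
transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))      # HxWxC -> CxHxW
transformer.set_raw_scale("data", 255)            # [0,1] floats -> [0,255]
transformer.set_channel_swap("data", (2, 1, 0))   # RGB -> BGR

image = caffe.io.load_image("cat.jpg")
net.blobs["data"].data[...] = transformer.preprocess("data", image)
out = net.forward()
print(out["prob"].argmax())  # index of the most probable class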

Patrick Spens
Jul 21, 2006

"Every quarterback says they've got guts, But how many have actually seen 'em?"
Pillbug

Nessus posted:

That was a major political realignment and over a hundred years ago, though; what have they done that's positive since, say, Eisenhower?

You can quibble about how much of the budget went to abstinence-based organizations, but PEPFAR was absolutely a good thing, and has saved a lot of people's lives.

But there is the same issue where Spencer probably wouldn't approve of it.

sub supau
Aug 28, 2007

Is there another DE thread that Cingulate hasn't claimed yet?

Tesseraction
Apr 5, 2009


Not that it's particularly here or there, but these motherfuckers spit out heat like there's no tomorrow. I mean, you'd expect graphics cards to spew heat, but even expecting it I was taken aback by the amount.

Do not try booting one of these up outside of an air corridor unless you're outside in Finland.

If you're in Finland look into moving ASAP.

Cingulate
Oct 23, 2012

by Fluffdaddy

TetsuoTW posted:

Is there another DE thread that Cingulate hasn't claimed yet?
Well, if there is, what do you hope to achieve by asking about it here?

SolTerrasa posted:

I wanted to chime in and say thanks for your posts, you're dead on. I wish more laypeople had the same knowledge that you do.
Ha, thanks. And speaking of laypeople:

The Vosgian Beast posted:

Scott seems okay if you only read one or two of his posts here or there.
Yeah; I'm not saying he's too stupid to understand AI, for example. (I like about 75% of what he writes. 75% of the paragraphs, not 75% of the texts; there's a stupid paragraph in almost all of them.) His general approach to statistics probably wouldn't translate too horribly to ML.
I'm saying, he doesn't care about AI, and he is in an environment that can obsess over Terminator/Matrix-style Scifi AI all day while failing to grasp even the most basic aspects of real-world AI, and failing to provide its members with any information about actual AI.

Like, they'll catch up on some of the flashier findings - Google artificial brains can now paint like Monet! And if you don't understand ML, you assume they're doing that in a similar manner to how a human would do it, so you assume AI is approaching human-like performance. But this is false. At its heart, the part of AI that really works right now would not be completely misdescribed by calling it "fuzzy statistics", and that means it's really good at things humans fail at (due to, e.g., the cognitive fallacies Yud knows from Kahneman), and really terrible at stuff humans are great at.

Woolie Wool
Jun 2, 2006


The Vosgian Beast posted:

Gamergate was started as a way of getting back at a creative type. As far as the big figures in the gamergate controversy go, the vast majority of actual video game creators are in the anti-gamergate camp. The exceptions are what, Vanishing of Ethan Carter guy and Postal 2 guys? Milo and Christina Hoff Sommers certainly weren't game developers, or even people who gave a poo poo about the gaming community before gamergate.

The Hugo Award controversy fares slightly better under this narrative, but even then it's one group of creatives vs another group of creatives, and it's dreadfully inaccurate to say that progressive SF is by any means a new phenomenon. Unless you wanted to say progs occupied SF by going back in time and having Daughter Of Influential Early Feminist Mary Shelley and Avowed Socialist HG Wells create it in the first place.

If anything, it's more accurate to say "progs create, freaks get really angry progs are creating in freak spaces" but it's not like either category is stable or meaningful in the first place.

The "progs" are mostly "freaks" themselves, because this is not nerds vs. the rest of the world, this is a nerd civil war. Non-nerds largely stay out of it unless they see an opportunity to gain something by associating with it (i.e. Christina Hoff Sommers).

And saying "nerd civil war" makes me imagine RMS smashing ESR with a ladder pro-wrestling style which is just hilarious.

Woolie Wool has a new favorite as of 16:15 on Feb 15, 2016

Twerkteam Pizza
Sep 26, 2015

Grimey Drawer

Cingulate posted:

Sociology ... well, again, you could use sentiment analysis. Do you want to know what people are tweeting about? Like, a few million Tweets about Donald Trump? Are Tweets by people with stereotypically "White" names more likely to positively mention Trump? Are people more likely to tweet positively about Trump the day after a terrorist attack? That's stuff where you'd use ML.

loving wicked. Although outside my area of expertise, I can't even begin to imagine the kind of network analysis you could do with these things.

Doc Hawkins posted:

We call them data centers and we've gotten okay at managing the heat. If you have the money Amazon will happily let you rent a preposterous number of GPUs and run whatever you want. Although they'll want to upsell you their machine learning service.

I still really want to see a house melt.

Curvature of Earth posted:

That's pretty much what supercomputers *are*

https://upload.wikimedia.org/wikipedia/commons/7/70/Titan_supercomputer_at_the_Oak_Ridge_National_Laboratory.jpg

That is currently the second most powerful supercomputer in the world right now.

Yeah but can it run Crysis?

The Vosgian Beast
Aug 13, 2011

Business is slow

Woolie Wool posted:

The "progs" are mostly "freaks" themselves, because this is not nerds vs. the rest of the world, this is a nerd civil war. Non-nerds largely stay out of it unless they see an opportunity to gain something by associating with it (i.e. Christina Hoff Sommers).

And saying "nerd civil war" makes me imagine RMS smashing ESR with a ladder pro-wrestling style which is just hilarious.

I'd watch it.

Tesseraction
Apr 5, 2009


I'd enjoy it until ESR gets confused about who the enemy is like he usually does and ends up repeatedly punching himself.

Actually then I'd still enjoy it.

SolTerrasa
Sep 2, 2011

Cingulate posted:

Like, they'll catch up on some of the flashier findings - Google artificial brains can now paint like Monet! And if you don't understand ML, you assume they're doing that in a similar manner to how a human would do it, so you assume AI is approaching human-like performance. But this is false. At its heart, the part of AI that really works right now would not be completely misdescribed by calling it "fuzzy statistics", and that means it's really good at things humans fail at (due to, e.g., the cognitive fallacies Yud knows from Kahneman), and really terrible at stuff humans are great at.

"Fuzzy statistics" is a great description of reinforcement learning (like what Google used to make its AI beat those old Atari games), or UCT / Guided Monte Carlo approaches (like what Google used to win Go). I still don't love the phrasing; I feel that it removes a bit too much nuance. For instance, machine learning as a field is probably better described as "naive but powerful pattern finding", and non-ML computer vision is "a pile of simple algorithms mixed with heuristics that work astonishingly well". There's a lot of work in AI that I don't feel is captured by that description.

Is there some place I can go to talk about this? I keep making GBS threads up Yudkowsky threads with AI pedantry.

Twerkteam Pizza
Sep 26, 2015

Grimey Drawer

SolTerrasa posted:

Is there some place I can go to talk about this? I keep making GBS threads up Yudkowsky threads with AI pedantry.

Maybe go to the "Science, Academics and Languages" subforum and either open your own thread about artificial intelligence or check the CompSci thread?

Peel
Dec 3, 2007

A thread about actual factual AI research would be pretty great. These threads keep sliding into discussing the topics themselves rather than dorks mangling them for the same reason those people take enough interest to mangle them: they're cool and interesting.

divabot
Jun 17, 2015

A polite little mouse!
ahahaha Phil Sandifer emailed me this morning. He was working out what you could ascertain of Yudkowsky's attitude to women, and then he found Red Tidday UP White Tidday DOWN and his head exploded. dis gon be gud.gif

Peel posted:

A thread about actual factual AI research would be pretty great. These threads keep sliding into discussing the topics themselves rather than dorks mangling them for the same reason those people take enough interest to mangle them: they're cool and interesting.

You can get a bit of that in the old LW mock thread. Look for anything by SolTerrasa.

Peel
Dec 3, 2007

Yeah, I have that thread still bookmarked pretty much for that reason.

TinTower
Apr 21, 2010

You don't have to 8e a good person to 8e a hero.
There is a certain irony in Gamergaters jumping in with the NRx love of the cuck meme when Eron Gjoni, by his own admission, is a quintuple beta cuck.

(Word filter :allears:)

GottaPayDaTrollToll
Dec 3, 2009

by Lowtax

Woolie Wool posted:

The "progs" are mostly "freaks" themselves, because this is not nerds vs. the rest of the world, this is a nerd civil war. Non-nerds largely stay out of it unless they see an opportunity to gain something by associating with it (i.e. Christina Hoff Sommers).

And saying "nerd civil war" makes me imagine RMS smashing ESR with a ladder pro-wrestling style which is just hilarious.

RMS is the better nerd because, while ESR probably has one of those Dork Enlightenment fake Magic cards, RMS has a real one.

Merdifex
May 13, 2015

by Shine
You know how Wesley has this idea that the progs are all looking to put down the "low status" people (poor White people) by making it difficult for poor people or something? These are the class revolutionaries who'll change that, apparently. Check out the entire thread below:

https://twitter.com/ClarkHat/status/699203481366896641
https://twitter.com/ClarkHat/status/699251353378557956

Rev says a lot of crap in there that's worth looking at purely for how confounding it is. Also:

https://twitter.com/St_Rev/status/698505887535800320
https://twitter.com/puellavulnerata/status/698517267269754880

And:

https://twitter.com/St_Rev/status/699170961556430848
https://twitter.com/St_Rev/status/699215847429169153

I mean, who needs arguments when assertions and ignorance, with a touch of obscurantism, will do? You're gonna argue about institutional racism? You can't, 'cause I define racism like this, and it can only be so.


And in case anyone still had doubts about whether Scott had imbibed the NRx kool-aid and fallen victim to his own confirmation bias (which is the same confirmation bias Moldbug displays), here's him confirming his belief in the existence of the Cathedral, in case it wasn't already clear how much he likes NRx concepts:

http://slatestarcodex.com/2016/02/12/before-you-get-too-excited-about-that-github-study/#comment-325869

The Vosgian Beast
Aug 13, 2011

Business is slow

Merdifex posted:

You know how Wesley has this idea that the progs are all looking to put down the "low status" people (poor White people) by making it difficult for poor people or something? These are the class revolutionaries who'll change that, apparently. Check out the entire thread below:

https://twitter.com/ClarkHat/status/699203481366896641
https://twitter.com/ClarkHat/status/699251353378557956

Rev says a lot of crap in there that's worth looking at purely for how confounding it is. Also:

https://twitter.com/St_Rev/status/698505887535800320
https://twitter.com/puellavulnerata/status/698517267269754880

And:

https://twitter.com/St_Rev/status/699170961556430848
https://twitter.com/St_Rev/status/699215847429169153

I mean, who needs arguments when assertions and ignorance, with a touch of obscurantism, will do? You're gonna argue about institutional racism? You can't, 'cause I define racism like this, and it can only be so.


And in case anyone still had doubts about whether Scott had imbibed the NRx kool-aid and fallen victim to his own confirmation bias (which is the same confirmation bias Moldbug displays), here's him confirming his belief in the existence of the Cathedral, in case it wasn't already clear how much he likes NRx concepts:

http://slatestarcodex.com/2016/02/12/before-you-get-too-excited-about-that-github-study/#comment-325869

If waterboarding is torture, we need a new word for slowly flaying people's skin off!

divabot
Jun 17, 2015

A polite little mouse!
I am so annoyed Phil Sandifer is passing up this title.

potatocubed
Jul 26, 2012

*rathian noises*

Merdifex posted:

And in case anyone still had doubts about whether Scott had imbibed the NRx kool-aid and fallen victim to his own confirmation bias (which is the same confirmation bias Moldbug displays), here's him confirming his belief in the existence of the Cathedral, in case it wasn't already clear how much he likes NRx concepts:

http://slatestarcodex.com/2016/02/12/before-you-get-too-excited-about-that-github-study/#comment-325869

I thought I'd do a quick reference check on the many citations he links during that piece.

An hour later I've lost the will to live. Basically he cites himself a lot, he cites conservative news sites and bloggers talking about studies rather than the studies themselves (even when the studies more or less support what he's saying), he cites a handful of op-eds that offer no concrete data at all, he privileges studies over real-world experience (e.g. about police violence vs black people) and draws incredibly strong conclusions from weak or controversial data.

And from time to time he chats utter poo poo.

I mean look:

quote:

It’s why we’re told women fear for their lives in Silicon Valley because of endemic sexual harassment, even though nobody’s ever formally investigated if it’s worse than anywhere else, and the only informal survey I’ve ever seen shows harrassment in STEM to be well-below the average harrassment rate.

That link in there? To a HuffPo report of a Cosmo survey? The infographic states that 31% of women in STEM have experienced harassment. Which to Scott is "well-below" the average harassment rate of 33%.

But then, this whole thing is basically "everyone is lying to you except the sources I like, which just so happen to agree with my preconceived biases". As a scientist I would have expected Scott to recognise an unfalsifiable theory when he sees one, but apparently he's just not looking in the right place.

Cingulate
Oct 23, 2012

by Fluffdaddy

potatocubed posted:

I thought I'd do a quick reference check on the many citations he links during that piece.

An hour later I've lost the will to live. Basically he cites himself a lot, he cites conservative news sites and bloggers talking about studies rather than the studies themselves (even when the studies more or less support what he's saying), he cites a handful of op-eds that offer no concrete data at all, he privileges studies over real-world experience (e.g. about police violence vs black people) and draws incredibly strong conclusions from weak or controversial data.

And from time to time he chats utter poo poo.

I mean look:


That link in there? To a HuffPo report of a Cosmo survey? The infographic states that 31% of women in STEM have experienced harassment. Which to Scott is "well-below" the average harassment rate of 33%.

But then, this whole thing is basically "everyone is lying to you except the sources I like, which just so happen to agree with my preconceived biases". As a scientist I would have expected Scott to recognise an unfalsifiable theory when he sees one, but apparently he's just not looking in the right place.
I don't think this is all that bad; for example, quoting your own blog posts can be better than linking to one primary study, because no single study is conclusive, but a blog post can discuss and evaluate multiple studies. Citing popular media discussions of studies is of course terrible, but then, it's not like anyone else isn't doing the same thing - it's terrifyingly common.
And Scott isn't a scientist; he's a blogger and a doctor. He should be held to the standards of journalists, not scientists.
You're right, though, that he's conveying an impression that does not stand up to proper scrutiny.

My personal take on what's most wrong with his post there is how fully he buys into this here's-one-narrative-and-here's-why-the-other-is-false framing in the way he sets it up (even though he'd probably argue he is above that). He is, surely, to some extent correct. The question is, how large a factor is this in the grand scheme of things? How do the alternatives fare, what's their cost/benefit trade-off? This should be viewed numerically, not categorically.

But that's admittedly hard, and categorical thinking is much easier for human brains. (This is in fact a primary reason why machine-based reasoning is often superior.)

Lastly, you should not give the impression that he's actually in the Moldbug camp politically speaking; he finishes off the post by saying that he thinks it would be terrible if the "red tribe" were in power, because even though the "blue tribe" is bad, the "red tribe" is so much worse. (It's just that his argument makes him look like he's anti-"blue tribe".)

There is also very interesting research by Andrew Gelman showing that this perceived absolute polarization (red tribe only eats moose and Blacks, blue tribe gay marries and eats kale, and there is no contact ever but hatred) is pretty false. The world is much more purple than hard-red or hard-blue. But again, this is about quantitative, not categorical thinking, which people are bad at.

Woolie Wool
Jun 2, 2006


potatocubed posted:

I thought I'd do a quick reference check on the many citations he links during that piece.

An hour later I've lost the will to live. Basically he cites himself a lot, he cites conservative news sites and bloggers talking about studies rather than the studies themselves (even when the studies more or less support what he's saying), he cites a handful of op-eds that offer no concrete data at all, he privileges studies over real-world experience (e.g. about police violence vs black people) and draws incredibly strong conclusions from weak or controversial data.

And from time to time he chats utter poo poo.

I mean look:


That link in there? To a HuffPo report of a Cosmo survey? The infographic states that 31% of women in STEM have experienced harassment. Which to Scott is "well-below" the average harassment rate of 33%.

But then, this whole thing is basically "everyone is lying to you except the sources I like, which just so happen to agree with my preconceived biases". As a scientist I would have expected Scott to recognise an unfalsifiable theory when he sees one, but apparently he's just not looking in the right place.

The guy's spent years and years lying down with neoreactionary dogs; it should not be surprising that he picks up their intellectual fleas. At this point he might as well sign on as neoreaction's propaganda minister.
