|
Can you be a bit more specific on where you are and what you want?
|
# ? Feb 14, 2016 03:00 |
|
What about Sociologists? That's my main area of study, the English Literature is my other major because I am at the height of privilege. Also wouldn't a house full of processors basically melt?
|
# ? Feb 14, 2016 03:07 |
|
Twerkteam Pizza posted:What about Sociologists? That's my main area of study, the English Literature is my other major because I am at the height of privilege.

Sociology ... well, again, you could use sentiment analysis. Do you want to know what people are tweeting about? Like, a few million Tweets about Donald Trump? Are Tweets by people with stereotypically "White" names more likely to positively mention Trump? Are people more likely to tweet positively about Trump the day after a terrorist attack? That's stuff where you'd use ML.
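As a sketch of what the lightest-weight version of that tweet analysis might look like: a hand-rolled lexicon scorer. The word lists here are invented for illustration; real work would use a trained classifier or a published sentiment lexicon, not four words per side.

```python
# Toy lexicon-based sentiment scorer, in the spirit of the tweet
# examples above. The word lists are invented for illustration;
# real work would use a trained model or a published lexicon.
POSITIVE = {"great", "win", "love", "best"}
NEGATIVE = {"sad", "lose", "hate", "worst"}

def sentiment(tweet):
    """Return a score: >0 positive, <0 negative, 0 neutral."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "I love this, best day",
    "worst news, so sad",
]
scores = [sentiment(t) for t in tweets]
```

From there, the sociology question is just aggregation: group the scores by the name, date, or event you care about and compare the distributions.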
|
# ? Feb 14, 2016 03:15 |
|
Twerkteam Pizza posted:Also wouldn't a house full of processors basically melt? We call them data centers and we've gotten okay at managing the heat. If you have the money Amazon will happily let you rent a preposterous number of GPUs and run whatever you want. Although they'll want to upsell you their machine learning service.
|
# ? Feb 14, 2016 03:17 |
|
Cingulate posted:Can you be a bit more specific on where you are and what you want? I'm finishing up a physics PhD so I'm comfortable with maths and programming, and I'm interested in resources or texts which will help me achieve or at least get started on a lay familiarity with machine learning, for personal interest. I'm not opposed to spending money if a book isn't easily accessible via library. Practical stuff like the Karpathy thing is also cool, though my computer is pretty weedy.
|
# ? Feb 14, 2016 03:22 |
|
Twerkteam Pizza posted:Also wouldn't a house full of processors basically melt? That's pretty much what supercomputers *are* https://upload.wikimedia.org/wikipedia/commons/7/70/Titan_supercomputer_at_the_Oak_Ridge_National_Laboratory.jpg That is currently the second most powerful supercomputer in the world.
|
# ? Feb 14, 2016 03:22 |
|
Peel posted:I'm finishing up a physics PhD so I'm comfortable with maths and programming, and I'm interested in resources or texts which will help me achieve or at least get started on a lay familiarity with machine learning, for personal interest. I'm not opposed to spending money if a book isn't easily accessible via library.

The PDF (of An Introduction to Statistical Learning) is free. There is also a video series and exercises: http://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/. With a PhD in physics, you'll probably blaze through this, and can start with the more intense stuff. (Hastie and Tibshirani are major figures here.) This is a bit on the statistics and R side of things.

The flashier, machine-intelligence part is neural networks. As a very practical introduction, you can look at scikit-learn, which is a Python interface to a vast array of machine learning tools (including, with a hack, Google's just-released TensorFlow toolkit for deep learning with neural nets). With scikit-learn, you can start machine learning within minutes. After 15 minutes, you can go to Kaggle.com to compete against other ML people - predict what products people will like based on their history, or recognize individual whales from helicopter pictures. scikit-learn otherwise has a no-neural-networks policy, but from there you can easily proceed to Theano/Keras for easily accessible deep learning.

Basically, there are two sides of one coin: statistical learning, where a prototypical example would be discovering genes predictive of certain diseases, and where people will use R; and machine intelligence, which is what people call it when computers can speak or recognize images, and where people use Python or C or Java. Under the hood, it's all very similar - the Lasso kind of is logistic regression kind of is an SVM kind of is a perceptron kind of is a neural net kind of is a deep net. But the flashier stuff, that's neural nets, and the heavy-lifting, fleshed-out, applied stuff is often statistical learning.

That the kind of answer you're looking for? Cingulate has a new favorite as of 04:02 on Feb 14, 2016 |
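To make the "perceptron kind of is a neural net" end of that chain concrete, here is a minimal perceptron trained with the classic mistake-driven update rule. The toy data is invented for illustration; scikit-learn's `Perceptron` is the real-world version, and this is only a sketch of the idea.

```python
# Minimal perceptron: a single linear unit updated only when it
# misclassifies - the ancestor of the neural nets discussed above.
def train_perceptron(data, epochs=20, lr=1.0):
    """data: list of ((x1, x2), label) with label in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# AND-like toy problem: positive only when both features are on.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Swap the hard threshold for a sigmoid and you are most of the way to logistic regression; stack several such units and you have a neural net - which is the point of the "kind of is" chain.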
# ? Feb 14, 2016 03:39 |
|
If you just want a one-glance impression of what machine learning is currently about, go here: https://www.kaggle.com/competitions

And if you want a one-glance impression of how it does that: (This is a range of ML tools trying to classify a simulated data set based on just two features - you see some classifiers, such as the trees, make a bunch of binary decisions, while others fit simple (Linear SVM, Naive Bayes) or more complex (RBF SVM) functions to the data.) Yeah I know, when Yudkowsky talks about it, it looks a bit scarier, right?

A really enlightening example is the aggregation algorithms (e.g. AdaBoost, Random Forest). Here, a bunch of extremely stupid learners are built, each one deliberately kept stupid, and then they are combined into one superior predictor. One key feature is that this prevents the tool from (basically) memorizing the data. This is pretty cutting-edge stuff.

E: another thing you can immediately see is the "No Free Lunch" theorem: no single classifier is simply "better" than all others. Each classifier is worse at some task than some other classifier. Some will excel at the complex tasks, but be beaten at the easier ones, for example. Currently, machine intelligence is very much about specialization, not generalization. Cingulate has a new favorite as of 03:51 on Feb 14, 2016 |
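A hedged sketch of that combine-stupid-learners idea: three deliberately weak "stumps", each thresholding a single feature, combined by majority vote. Real AdaBoost reweights examples and learners; an unweighted vote like this is closer to bagging. The data and thresholds are invented.

```python
# Caricature of the ensemble idea: several weak one-feature "stump"
# rules, combined by majority vote. Each stump alone is a terrible
# classifier; the vote can still be right.
def make_stump(feature, threshold):
    return lambda x: 1 if x[feature] > threshold else -1

stumps = [make_stump(0, 0.5), make_stump(1, 0.5), make_stump(2, 0.5)]

def ensemble(x):
    votes = sum(s(x) for s in stumps)
    return 1 if votes > 0 else -1

# A point one stump gets "wrong" in its coordinate is still
# classified correctly by the other two outvoting it.
x = (0.9, 0.8, 0.1)   # two of three stumps vote +1
```

Because each stump can only ever draw one axis-aligned line, none of them can memorize the training set on its own - which is the anti-memorization property described above.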
# ? Feb 14, 2016 03:47 |
|
Yeah, that looks really cool. Thanks. e: actually, while we're here, what's the difference between linear svm and linear discriminant analysis? I assume they're two different ways of getting to results of that 'shape'? e2: the 'neural networks have to be so small they can't learn everything they see' thing is the most interesting thing I've heard all week. It mirrors directly the philosophy of science point that you can trivially describe a 'law' to 'govern' any set of data by just writing the data down again, but it won't have any application outside the data. You want your laws to encompass the data with simpler principles. It's cool when something so abstract appears directly in real work. Peel has a new favorite as of 04:36 on Feb 14, 2016 |
# ? Feb 14, 2016 03:58 |
|
nostalgebraist's small Yudkowsky shell script should be achieving AI-Foom any minute now
|
# ? Feb 14, 2016 13:57 |
|
Peel posted:Yeah, that looks really cool. Thanks.

It [LDA] is a very old method - possibly the oldest; it was proposed by Fisher. It has a closed-form solution; this is good if you need to do it a lot for smaller problems, but bad if you need to do it once for a very large problem (as you have to invert massive matrices).

SVM intuitively works "the other way around". It tries to find the hyperplane that separates the two hardest cases from each other (the member of group A that is closest to B, and the member of B that is closest to A; that is, the error to be minimized is the "hinge" loss). Then, it doesn't look at the distances of objects from the populations they're from; it just tells you what side of the hyperplane they land on (that is, it is a discriminative, not a generative, classifier). It has to be fit iteratively, and is much younger than LDA. While linear SVMs work nicely, their true power often depends on nonlinear kernels, such as RBF. In practice, SVM greatly resembles logistic regression, and in fact implementations often share a lot of their code. Generally, SVM is a very common tool, whereas LDA has largely gone out of fashion. I think there might be something inherent to SVMs that makes them easier to regularize, but maybe I'm wrong here. Makes sense?

Peel posted:e2: the 'neural networks have to be so small they can't learn everything they see' thing is the most interesting thing I've heard all week. It mirrors directly the philosophy of science point that you can trivially describe a 'law' to 'govern' any set of data by just writing the data down again, but it won't have any application outside the data. You want your laws to encompass the data with simpler principles. It's cool when something so abstract appears directly in real work.

Also, note the k-Nearest Neighbor algorithm, where the model is actually just the data and the rule "look at the k nearest items, with distance measure X". (It's an interesting classifier, but it inherently never learns a model of the data.)

Also I want to repeat that I'm not a statistician, mathematician, computer scientist or ML researcher. I'm working on cognitive neuroscience, usually of language, and that field is using a lot of ML tools as of recently. So this is basically just the perspective of a layman who's trying to apply these methods.
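k-NN is simple enough that the "model is just the data" point fits in a few lines. A sketch with invented 2-D points: there is no training step at all; a query is answered by the majority label among its k closest stored points.

```python
# k-Nearest Neighbors: no model is fit; the "training" set is stored
# as-is, and queries take the majority label of the k closest points.
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label). Returns the majority label."""
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Two invented clusters, one per class.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
```

Note that "remembering the data verbatim" is the entire algorithm here - exactly the memorization that the philosophy-of-science point above says buys you no generalization by itself; all the generalization comes from the distance measure and the choice of k.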
|
# ? Feb 14, 2016 14:53 |
|
https://twitter.com/RichardBSpencer/status/689692099009097729
|
# ? Feb 14, 2016 23:20 |
|
https://twitter.com/RichardBSpencer/status/698709070585278464 https://twitter.com/RichardBSpencer/status/698709400261758977 Umberto Eco posted:3. Irrationalism also depends on the cult of action for action's sake. Silver2195 has a new favorite as of 00:31 on Feb 15, 2016 |
# ? Feb 15, 2016 00:26 |
|
Cingulate when will you stop talking about boring garbage like actual science and address the real issues like the importance of anime in the future of the white race
|
# ? Feb 15, 2016 00:33 |
|
Parallel Paraplegic posted:Cingulate when will you stop talking about boring garbage like actual science and address the real issues like the importance of anime in the future of the white race Somebody on my Facebook liked one of Scott's articles today.
|
# ? Feb 15, 2016 01:04 |
|
Scott seems okay if you only read one or two of his posts here or there.
|
# ? Feb 15, 2016 02:23 |
|
You know, I'd mention the whole "ending slavery" thing, but I feel like there's a pretty good chance he considers that a negative.
|
# ? Feb 15, 2016 07:46 |
That was a major political realignment and over a hundred years ago, though; what have they done that's positive since, say, Eisenhower?
|
|
# ? Feb 15, 2016 07:57 |
|
Cingulate posted:Also I want to repeat that I'm not a statistician, mathematician, computer scientist or ML researcher. I'm working on cognitive neuroscience, usually of language, and that field is using a lot of ML tools as of recently. So this is basically just the perspective of a layman who's trying to apply these methods.

I wanted to chime in and say thanks for your posts, you're dead on. I wish more laypeople had the same knowledge that you do. Your image of common ML algorithms on different classification tasks was an especially great find. I just finished giving a tech talk to a bunch of college students, and I thought the summary might be a useful bit to append to your posts. I summarize the talk with: "Although AI is incredibly powerful, it's usually less powerful than we intuitively feel it should be, and for any nontrivial task you will always find that your intuitions are overestimating the strength of an AI system, and underestimating the difficulty of the task that that system performs."

Basically, if you're a layperson who doesn't interact with ML (or other AI techniques) much, I cannot stress enough Cingulate's point that it's so easy to get started. Aside from the resources they linked, you might appreciate Caffe, a deep neural net framework mostly used with ImageNet, the large image-classification dataset. All you need is access to a Linux box; Amazon will happily rent you one for cheap. Here's the page with the list of models they have pre-trained, so that you don't even need the graphics card Cingulate mentioned: http://caffe.berkeleyvision.org/model_zoo.html.
|
# ? Feb 15, 2016 08:08 |
|
Nessus posted:That was a major political realignment and over a hundred years ago, though; what have they done that's positive since, say, Eisenhower? You can quibble about how much of the budget went to abstinence based organizations, but PEPFAR was absolutely a good thing, and has saved a lot of people's lives. But there is the same issue where Spencer probably wouldn't approve of it.
|
# ? Feb 15, 2016 08:14 |
|
Is there another DE thread that Cingulate hasn't claimed yet?
|
# ? Feb 15, 2016 09:35 |
|
Cingulate posted:Imagine this Not that it's particularly here or there, but these motherfuckers spit out heat like there's no tomorrow. I mean, you'd expect graphics cards to spew heat, but even expecting it I was taken aback by the amount. Do not try booting one of these up outside of an air corridor unless you're outside in Finland. If you're in Finland look into moving ASAP.
|
# ? Feb 15, 2016 11:36 |
|
TetsuoTW posted:Is there another DE thread that Cingulate hasn't claimed yet?

SolTerrasa posted:I wanted to chime in and say thanks for your posts, you're dead on. I wish more laypeople had the same knowledge that you do.

The Vosgian Beast posted:Scott seems okay if you only read one or two of his posts here or there.

I'm saying, he doesn't care about AI, and he is in an environment that can obsess over Terminator/Matrix-style scifi AI all day while failing to grasp even the most basic aspects of real-world AI, and failing to provide its members with any information about actual AI. Like, they'll catch up on some of the flashier findings - Google artificial brains can now paint like Monet! And if you don't understand ML, you assume they're doing that in a similar manner to how a human would, so you assume AI is approaching human-like performance. But this is false. At its heart, the part of AI that really works right now would not be completely misdescribed by calling it "fuzzy statistics", and that means it's really good at something humans fail at (due to, e.g., the cognitive fallacies Yud knows from Kahneman), and really terrible at stuff humans are great at.
|
# ? Feb 15, 2016 12:55 |
|
The Vosgian Beast posted:Gamergate was started as a way of getting back at a creative type. As far as the big figures in the gamergate controversy go, the vast majority of actual video game creators are in the anti-gamergate camp. The exceptions are what, Vanishing of Ethan Carter guy and Postal 2 guys? Milo and Christina Hoff Sommers certainly weren't game developers, or even people who gave a poo poo about the gaming community before gamergate. The "progs" are mostly "freaks" themselves, because this is not nerds vs. the rest of the world, this is a nerd civil war. Non-nerds largely stay out of it unless they see an opportunity to gain something by associating with it (i.e. Christina Hoff Sommers). And saying "nerd civil war" makes me imagine RMS smashing ESR with a ladder pro-wrestling style which is just hilarious. Woolie Wool has a new favorite as of 16:15 on Feb 15, 2016 |
# ? Feb 15, 2016 16:10 |
|
Cingulate posted:Sociology ... well, again, you could use sentiment analysis. Do you want to know what people are tweeting about? Like, a few million Tweets about Donald Trump? Are Tweets by people with stereotypically "White" names more likely to positively mention Trump? Are people more likely to tweet positively about Trump the day after a terrorist attack? That's stuff where you'd use ML. loving wicked. Although outside my area of expertise, I can't even begin to imagine the kind of network analysis you could do with these things. Doc Hawkins posted:We call them data centers and we've gotten okay at managing the heat. If you have the money Amazon will happily let you rent a preposterous number of GPUs and run whatever you want. Although they'll want to upsell you their machine learning service. I still really want to see a house melt. Curvature of Earth posted:That's pretty much what supercomputers *are* Yeah but can it run Crysis?
|
# ? Feb 15, 2016 16:15 |
|
Woolie Wool posted:The "progs" are mostly "freaks" themselves, because this is not nerds vs. the rest of the world, this is a nerd civil war. Non-nerds largely stay out of it unless they see an opportunity to gain something by associating with it (i.e. Christina Hoff Sommers). I'd watch it.
|
# ? Feb 15, 2016 16:19 |
|
The Vosgian Beast posted:I'd watch it. I'd enjoy it until ESR gets confused about who the enemy is like he usually does and ends up repeatedly punching himself. Actually then I'd still enjoy it.
|
# ? Feb 15, 2016 16:23 |
|
Cingulate posted:Like, they'll catch up on some of the flashier findings - Google artificial brains can now paint like Monet! And if you don't understand ML, you assume they're doing that in a similar manner to how a human would, so you assume AI is approaching human-like performance. But this is false. At its heart, the part of AI that really works right now would not be completely misdescribed by calling it "fuzzy statistics", and that means it's really good at something humans fail at (due to, e.g., the cognitive fallacies Yud knows from Kahneman), and really terrible at stuff humans are great at.

"Fuzzy statistics" is a great description of reinforcement learning (like what Google used to make its AI beat those old Atari games), or UCT / guided Monte Carlo approaches (like what Google used to win Go). I still don't love the phrasing; I feel that it removes a bit too much nuance. For instance, machine learning as a field is probably better described as "naive but powerful pattern finding", and non-ML computer vision is "a pile of simple algorithms mixed with heuristics that work astonishingly well". There's a lot of work in AI that I don't feel is captured by that description.

Is there some place I can go to talk about this? I keep making GBS threads up Yudkowsky threads with AI pedantry.
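A toy illustration of that "fuzzy statistics" view of reinforcement learning: tabular Q-learning on an invented four-cell corridor. Everything here - the environment, the constants, the names - is made up for illustration and has nothing to do with Google's systems; to keep the sketch deterministic it sweeps all state-action pairs instead of exploring, which makes it closer to Q-value iteration than to online Q-learning proper.

```python
# Tabular Q-learning sketch on a tiny corridor: the agent just keeps
# running averages of how good each action has looked, which is the
# "fuzzy statistics" intuition from the post above.
N_STATES = 4          # corridor cells 0..3; reward only at the right end
ALPHA, GAMMA = 0.5, 0.9

def step(state, action):            # action: -1 = left, +1 = right
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
for _ in range(50):                 # sweeps over all state-action pairs
    for s in range(N_STATES - 1):
        for a in (-1, 1):
            nxt, r = step(s, a)
            best_next = max(Q[(nxt, b)] for b in (-1, 1))
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# Greedy policy: in every cell the learned values point right.
policy = {s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

Nothing in the table "understands" corridors; the numbers just propagate backwards from the reward, discounted by GAMMA - powerful pattern finding, but naive in exactly the sense described above.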
|
# ? Feb 15, 2016 16:29 |
|
SolTerrasa posted:Is there some place I can go to talk about this? I keep making GBS threads up Yudkowsky threads with AI pedantry. Maybe go to the "Science, Academics and Languages" subforum and either open your own thread about artificial intelligence or check the CompSci thread?
|
# ? Feb 15, 2016 16:34 |
|
A thread about actual factual AI research would be pretty great. These threads keep sliding into discussing the topics themselves rather than dorks mangling them for the same reason those people take enough interest to mangle them: they're cool and interesting.
|
# ? Feb 15, 2016 17:17 |
|
ahahaha Phil Sandifer emailed this morning. He was working out what you could ascertain of Yudkowsky's attitude to women and then he found Red Tidday UP White Tidday DOWN and his head exploded. dis gon be gud.gif

Peel posted:A thread about actual factual AI research would be pretty great. These threads keep sliding into discussing the topics themselves rather than dorks mangling them for the same reason those people take enough interest to mangle them: they're cool and interesting.

You can get a bit of that in the old LW mock thread. Look for anything by SolTerrasa.
|
# ? Feb 15, 2016 20:29 |
|
Yeah, I have that thread still bookmarked pretty much for that reason.
|
# ? Feb 15, 2016 20:36 |
|
There is a certain irony in Gamergaters jumping in with the NRx love of the cuck meme when Eron Gjoni, by his own admission, is a quintuple beta cuck. (Word filter )
|
# ? Feb 16, 2016 01:24 |
|
Woolie Wool posted:The "progs" are mostly "freaks" themselves, because this is not nerds vs. the rest of the world, this is a nerd civil war. Non-nerds largely stay out of it unless they see an opportunity to gain something by associating with it (i.e. Christina Hoff Sommers). RMS is the better nerd because, while ESR probably has one of those Dork Enlightenment fake Magic cards, RMS has a real one.
|
# ? Feb 16, 2016 02:44 |
|
You know how Wesley has this idea that the progs are all looking to put down the "low status" people (poor White people) by making it difficult for poor people or something? These are the class revolutionaries who'll change that, apparently. Check out the entire thread below: https://twitter.com/ClarkHat/status/699203481366896641 https://twitter.com/ClarkHat/status/699251353378557956 Rev says a lot of crap in there that's worth looking at purely for how confounding they are. Also: https://twitter.com/St_Rev/status/698505887535800320 https://twitter.com/puellavulnerata/status/698517267269754880 And: https://twitter.com/St_Rev/status/699170961556430848 https://twitter.com/St_Rev/status/699215847429169153 I mean, who needs arguments when assertions and ignorance, with a touch of obscurantism will do? You're gonna argue about institutional racism? You can't 'cause I define racism like this, and it can only be so. And in case anyone still had doubts about whether Scott hadn't imbibed of the NRx kool-aid and fallen victim to his own confirmation bias (which is the same confirmation bias Moldbug displays) here's him confirming his belief in the existence of the Cathedral, in case it wasn't already clear how much he likes NRx concepts: http://slatestarcodex.com/2016/02/12/before-you-get-too-excited-about-that-github-study/#comment-325869
|
# ? Feb 17, 2016 04:23 |
|
Merdifex posted:You know how Wesley has this idea that the progs are all looking to put down the "low status" people (poor White people) by making it difficult for poor people or something? These are the class revolutionaries who'll change that, apparently. Check out the entire thread below: If waterboarding is torture, we need a new word for slowly flaying people's skin off!
|
# ? Feb 17, 2016 04:58 |
|
I am so annoyed Phil Sandifer is passing up this title.
|
# ? Feb 17, 2016 09:48 |
|
Merdifex posted:And in case anyone still had doubts about whether Scott hadn't imbibed of the NRx kool-aid and fallen victim to his own confirmation bias (which is the same confirmation bias Moldbug displays) here's him confirming his belief in the existence of the Cathedral, in case it wasn't already clear how much he likes NRx concepts:

I thought I'd do a quick reference check on the many citations he links during that piece. An hour later I've lost the will to live. Basically he cites himself a lot, he cites conservative news sites and bloggers talking about studies rather than the studies themselves (even when the studies more or less support what he's saying), he cites a handful of op-eds that offer no concrete data at all, he privileges studies over real-world experience (e.g. about police violence vs black people) and draws incredibly strong conclusions from weak or controversial data. And from time to time he chats utter poo poo. I mean look:

quote:It’s why we’re told women fear for their lives in Silicon Valley because of endemic sexual harassment, even though nobody’s ever formally investigated if it’s worse than anywhere else, and the only informal survey I’ve ever seen shows harassment in STEM to be well-below the average harassment rate.

That link in there? To a HuffPo report of a Cosmo survey? The infographic states that 31% of women in STEM have experienced harassment. Which to Scott is "well-below" the average harassment rate of 33%. But then, this whole thing is basically "everyone is lying to you except the sources I like, which just so happen to agree with my preconceived biases". As a scientist I would have expected Scott to recognise an unfalsifiable theory when he sees one, but apparently he's just not looking in the right place.
|
# ? Feb 17, 2016 10:57 |
|
potatocubed posted:I thought I'd do a quick reference check on the many citations he links during that piece.

And Scott isn't a scientist, he's a blogger and a doctor. He should be held to the standards of journalists, not scientists. You're right though that he's conveying an impression that does not stand up to proper scrutiny.

My personal take on what's most wrong with his post there is how he's fully buying into this here's-one-narrative-and-the-other-one-is-false framing in the way he sets it up (even though he'd probably argue he is above that). He is, surely, to some extent correct. The question is, how large a factor is this in the grand scheme of things? How do the alternatives fare, what's their cost/benefit trade-off? This should be viewed numerically, not categorically. But that's admittedly hard, and it's much easier on human brains. (This is in fact a primary reason why machine-based reasoning is often superior.)

Lastly, you should not give the impression that he's actually in the Moldbug camp politically speaking; he finishes off the post by saying that he thinks it would be terrible if the "red tribe" was in power, because even though the "blue tribe" is bad, the "red tribe" is so much worse. (It's just that his argument makes him look like he's anti-"blue tribe".) There is also very interesting research by Andrew Gelman showing that this perceived absolute polarization (red tribe only eats moose and Blacks, blue tribe gay marries and eats kale, and there is no contact ever but hatred) is pretty false. The world is much more purple than hard-red or hard-blue. But again, this is about quantitative, not categorical, thinking, which people are bad at.
|
# ? Feb 17, 2016 12:13 |
|
potatocubed posted:I thought I'd do a quick reference check on the many citations he links during that piece. The guy's spent years and years lying down with neoreactionary dogs, it should not be surprising that he picks up their intellectual fleas. At this point he might as well sign on as neoreaction's propaganda minister.
|
# ? Feb 17, 2016 16:01 |