|
https://twitter.com/ClarkHat/status/698148621460570112
|
# ? Feb 12, 2016 20:54 |
|
|
|
Remember when Rosa Parks hid under a tarp and said that she'd rather die than go to jail, then got into a high-speed chase and committed suicide by cop? Good times.
|
# ? Feb 12, 2016 21:00 |
|
Question for you people who browse this stuff regularly: How has the alt-right handled the now-finished Oregon Standoff?
|
# ? Feb 12, 2016 21:04 |
Technically I believe that Rosa Parks was on private property. I also don't believe her agenda was the seizure of the Montgomery bus line and its redistribution to poor people. (I actually can't extend the analogy much further, because the lands these guys were demanding were way less profitable than a bus company, which might have become a cooperative moneymaker for the local community.)
|
|
# ? Feb 12, 2016 21:17 |
|
Nessus posted:Technically I believe that Rosa Parks was on private property The Dildo Ranchers wanted the reserve to be seized and redistributed to rich people.
|
# ? Feb 12, 2016 22:38 |
|
I feel Clark no longer has the rights to the shared hat now that he's been kicked off Popehat. He needs to be ClarkHead now, or Clark White Pointed Hood
|
# ? Feb 12, 2016 23:10 |
The Lone Badger posted:The Dildo Ranchers wanted the reserve to be seized and redistributed to rich people.
|
|
# ? Feb 12, 2016 23:18 |
|
A White Guy posted:
Patriots until they pussied out then they were glory seekers co-opting the movement.
|
# ? Feb 12, 2016 23:26 |
|
Tesseraction posted:Patriots until they pussied out then they were glory seekers co-opting the movement. Is anyone calling it false-flag yet?
|
# ? Feb 12, 2016 23:27 |
|
The Lone Badger posted:Is anyone calling it false-flag yet? During the original standoff, yes. I admit I'm reporting from goons laughing at the situation, but there were plenty of PATRIOTS who believed the entire town was false-flag actors paid to subvert the uprising. Thankfully they were a tiny minority, and the rest see this as hilariously sad.
|
# ? Feb 12, 2016 23:43 |
|
shabogangraffiti posted:If, one day, full communism reigns all over the world, the world will be free of hierarchy, exploitation, alienation, racism, sexism, and war. So every day without communism means unnecessary, preventable deaths. Also, communism would unleash the productive forces to such an extent that unimaginable technological progress would be made at an incredible rate. Ergo, time travel would soon be invented. Ergo, a future communist society might well think it obligatory to go back in time and torture everyone in the past who is not working full-pelt at activism in order to bring communism about. (Or possibly create simulations in which indistinguishable copies of those people are tortured… or maybe torture all possible iterations of those people found in every Everett branch.) Of course, if you’ve never heard of communism, you’ll be fine (for some reason) but if you’re a reactionary of any kind, and you’ve knowingly rejected communism as a goal, you will be punished by the communism of the future. As indeed will I, because not only do I know about it, but I also advocate it, but pretty much all I do is write blog posts about SF, whereas I should spend 24 hours a day agitating.
|
# ? Feb 13, 2016 12:16 |
|
This is the dumbest.
|
# ? Feb 13, 2016 14:53 |
|
Please post the responses.
|
# ? Feb 13, 2016 14:54 |
|
I have a question this thread might know the answer to: why is there such an MRA movement in India? I posted a story about Roosh V on my Twitter and I got a load of Indian men replying to me telling me about how it's really feminism that's illogical.
|
# ? Feb 13, 2016 15:55 |
|
First, there are 500 million men in India. You'll find all kinds of opinions there. I assume the diversity within India is unimaginable. Next, India is pretty famously sexist (to the extent that such a generalization over 1 billion people is possible). (Almost every place on earth looks hypersexist compared to the West, of course.) Then, being against feminism doesn't mean you're an MRA. I used to have a lot of (at least nominally) anti-feminist female friends, for example. Finally, are these ethnically Indian men in the West, or people currently living in India?
|
# ? Feb 13, 2016 16:26 |
|
Cingulate posted:First, there are 500 million men in India. You'll find all kinds of opinions there. I assume the diversity within India is unimaginable. People living in India.
|
# ? Feb 13, 2016 16:38 |
|
Cingulate posted:Please post the responses. so far only this one: tenaciousvoidcycle posted:As we all know it is rational to choose an outcome which has a higher utility when its probability is taken into account. The probability of full communism is low. The utility is near infinite. Therefore every rational agent should work tirelessly to bring about full communism. It is the only rational thing to do.
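The decision-theory move being parodied here is a plain expected-utility calculation gone wrong; a minimal sketch, with all the probability and utility figures invented for illustration:

```python
# Expected utility: EU(outcome) = P(outcome) * U(outcome).
# The parody: any nonzero probability times a "near infinite" utility
# swamps every ordinary option. All numbers below are made up.

def expected_utility(p, utility):
    """Return the probability-weighted utility of an outcome."""
    return p * utility

mundane = expected_utility(0.9, 100)            # likely, modest payoff
full_communism = expected_utility(1e-9, 1e15)   # unlikely, huge payoff

# The tiny-probability option still dominates, which is exactly the
# Pascal's-mugging bug the post is riffing on.
print(full_communism > mundane)  # True
```

This is why "low probability, near-infinite utility, therefore work tirelessly" is a recognized failure mode of naive expected-utility reasoning rather than a proof.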
|
# ? Feb 13, 2016 17:26 |
|
The Vosgian Beast posted:I feel Clark no longer has the rights to the shared hat now that he's been kicked off Popehat.
|
# ? Feb 13, 2016 17:29 |
|
TinTower posted:I have a question this thread might know the answer to: why is there such an MRA movement in India? I posted a story about Roosh V on my Twitter and I got a load of Indian men replying to me telling me about how it's really feminism that's illogical. The caste system ingrained a strong system of hierarchy, and the treatment of women plays into that. The dudes replying to you are most likely those who weren't shat on by that system, and as with people in the West, those with more privilege are often uncaring of those with less.
|
# ? Feb 13, 2016 17:33 |
|
TinTower posted:People living in India. Share this next. More seriously, I don't doubt there are men in India who feel threatened by the empowerment of women, and who would find Roosh's "works" appealing. I'm interested to see if the MRA movement would be changed by becoming less lily-white, and it may happen in the coming decades.
|
# ? Feb 13, 2016 17:34 |
|
Tesseraction posted:The caste system ingrained a strong system of hierarchy, and the treatment of women plays into that. The dudes replying to you are most likely those who weren't shat on by that system, and as with people in the West, those with more privilege are often uncaring of those with less. Doc Hawkins posted:Share this next.
|
# ? Feb 13, 2016 17:54 |
|
Cingulate posted:I'd guess Indian people higher on the totem pole are probably less sexist, due to more education, more exposure to the West, etc. Bharat Indians do not have totem poles, racist. More seriously, not everyone who's done well class-wise in modern India is necessarily better educated in societal issues. It also depends more on how you grew up and the community you were in, same as in the West. The feminist movement in India has more of an uphill battle to fight.
|
# ? Feb 13, 2016 17:58 |
|
Also English language education in India is relatively good, so you're more likely to encounter Indians in a position to read and respond to your tweet than otherwise, all else equal.
|
# ? Feb 13, 2016 18:11 |
|
Tesseraction posted:Bharat Indians do not have totem poles, racist.
|
# ? Feb 13, 2016 18:15 |
|
Cingulate, you are Scott Alexander and I claim my five pounds
|
# ? Feb 13, 2016 18:31 |
|
Ichabod Sexbeast posted:Cingulate, you are Scott Alexander and I claim my five pounds
|
# ? Feb 13, 2016 19:00 |
|
Effective Altruism and MIRI have been taking a beating lately on Tumblr, particularly involving su3su2u1 pointing out that calling MIRI "effective" in any manner is loving ludicrous. (You'll be pleased to know I did my bit to exacerbate matters.) This conversation between su3 and a MIRI supporter gives you some idea of the intellectual parkour MIRI supporters indulge in to keep their sci-fi boondoggle soaking up literally the actual money that would have been going to mosquito nets. Scott proposes an equitable and moderate solution. The poison pill is point 2: people who believe trivially insane bullshit must not be excluded from the community if they're sincere!! I wonder what group he could be thinking of. (I think altruism with an eye to effectiveness is a great idea! I have a number of concerns with the Effective Altruism subculture, however. And in conclusion, reifying verbs and particularly reifying adjectives is frequently a mistake.)
|
# ? Feb 14, 2016 00:16 |
|
Uuuuuuuh I have a question. Why is it Effective Altruism? Shouldn't it be Efficient Altruism?
|
# ? Feb 14, 2016 00:22 |
|
Another Scottwatch, because divabot made me go to Scott's other page thing. http://slatestarscratchpad.tumblr.com/post/139215026911/oligopsony-gdanskcityofficial

Here, Scott is discovering the unreasonable effectiveness of recurrent networks, a blog post by Andrej Karpathy that made a semi-big splash on the web almost a year ago. It is Karpathy talking about, to simplify, the deep learning trend that's currently absolutely dominating machine intelligence. Every huge result about AI you've heard of recently? It's deep learning, usually either conv nets (for images) or recurrent nets (for language). Here, Karpathy is showing how good character-based recurrent nets have become. We all know recurrent nets (and their bigger brothers, LSTMs) are the poo poo where language is concerned, but usually, we give them at least words. But they've gotten so good, we can just give them single characters, and they learn to produce vaguely coherent and grammatically correct sentences! That's some amazing stuff. You can do this today from your home - if you have a recent (NVIDIA) graphics card, your computer will be speaking like Nietzsche in about 10 minutes' time, starting from the moment you've installed Python and CUDA.

And then, there's Scott. Scott posted:What would happen if you iterated it a billion times? Would it eventually level off? Would it just give you War and Peace exactly as written? I can go into more detail (and promise to actually do it this time) if anybody cares, it's only mildly technical, but this is really obvious stuff: Scott knows nothing about real AI. He's basically asking the questions that people in the 80s would bring up as straw men when trying to explain how AI actually works, not to speak of the fact that he seemingly has no concept of either scale or information theory.
E: I'm not saying Scott is stupid; these are reasonable questions, for a person utterly naive to all of the relevant fields (learning theory, CS, cognitive science, linguistics, ...), and I only know better because I'm working in a few of these fields. I'm saying, all of his exposure to AI risk people has not resulted in him understanding even the mere basics of contemporary machine intelligence. Cingulate has a new favorite as of 00:45 on Feb 14, 2016 |
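To make "we just give them single characters" concrete without a GPU: the prediction task a char-rnn solves - guess the next character from what came before - can be illustrated with a far dumber stand-in, a character-bigram count model. This is emphatically not an RNN (it has no hidden state and no long-range memory), just a sketch of the task itself:

```python
import random
from collections import defaultdict, Counter

def train_char_bigrams(text):
    """Count, for each character, which characters follow it and how often."""
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

def sample(follows, start, n, rng):
    """Generate up to n more characters by repeatedly sampling a successor
    in proportion to how often it followed the current character in training."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat and the rat sat on the cat"
model = train_char_bigrams(corpus)
print(sample(model, "t", 20, random.Random(0)))
```

An RNN replaces the one-character lookup table with a learned hidden state carrying context over many characters, which is where the coherent sentences and matched brackets come from.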
# ? Feb 14, 2016 00:39 |
|
"Imagine four billion iterations on the edge of a cliff." -Scotty Five-Codex
|
# ? Feb 14, 2016 00:40 |
|
I am also naive to all of those fields, so could you talk me through it so I don't have to guess?
|
# ? Feb 14, 2016 00:52 |
|
Lol. My only contact with deep learning is from taking Andrew Ng's Stanford courses back in the day and intermittently dipping into the literature, and even I can tell what bullshit Scott is spewing there. And he's one of the biggest spokesmen for MIRI.
|
# ? Feb 14, 2016 00:59 |
|
Peel posted:I am also naive to all of those fields, so could you talk me through it so I don't have to guess?

Well: machine intelligence is machine learning, right? Computers are fast and reliable. Now, they're still far away from human brains, but they're pretty close to something like the visual cortex of a mouse, and yet computer vision is still poo poo; and they're close to the motor cortex and cerebellum of a small bird, but bipedal robots still can't walk on two legs very well; and there are a few cases of people losing most of their brain slowly, but still being able to speak better than any computer. So either brains come with better default software, or brains are much better at learning - and the real answer is: brains come with better default software for learning stuff.

Neural nets are currently our best bet at learning many things. And one thing we learned very quickly is that to make them learn, we need to prevent them from memorizing. That is, you can feed a neural network a lot of text, and if you use a very big network, it will be able to perfectly remember the whole input and repeat it back*, but it will not generalize - it will not be able to do anything other than the input. This is because computer storage is so reliable; digital storage is perfect. Human memory, in contrast, is running a massively successful compression algorithm. A computer may easily be able to remember every single pixel in Star Wars, but it won't remember the plot. You watch it once, you can't remember any specific line, but you'll remember what was happening. And for learning, the network must be forced to generalize - figure out not the pixels, but the plot. So the first thing anyone does when training a net is ensure that its capacity is too small to simply memorize. We've known this since at least the 80s, when the first recurrent nets began to be used to learn language.
Contemporary work does a lot more to reduce the network's ability to memorize and force it to actually learn while making the nets bigger and bigger (e.g., to be able to keep adding layers and layers for deep nets). For example, dropout is used to introduce random noise (making perfect recall impossible), deep nets are trained layer by layer, and LSTMs are used instead of regular recurrent nets to fight the vanishing gradient problem (which is basically the inverse of perfect recall, to oversimplify). By some measures, regularization in some form or other (the various mechanisms to prevent overfitting - the more general problem underlying memorization) has been the primary topic in machine learning for the last decade or so. Like, that's what all of the Google guys working on nets have been worrying about.

And then, Scott wonders if perfect recall will set in after "a billion" iterations. No, because the information capacity of the net is inherently smaller than the corpus's; otherwise, it would never learn anything about the English language, it would only memorize the input. If it did have the capacity, it would be very quick at storing the pattern. Since it doesn't, it never will.

* E: a neural network can approximate a Turing machine to an arbitrary degree. Your computer can easily have perfect recall for War and Peace (you can copy and paste it right now and perfectly multiply it 10,000 times). Thus, so could a neural network of sufficient size.

Merdifex posted:Lol. My only contact with deep learning is from taking Andrew Ng's Stanford courses back in the day and intermittently dipping into the literature and even I can tell what bullshit Scott is spewing there.

Cingulate has a new favorite as of 01:12 on Feb 14, 2016 |
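The dropout mechanism mentioned above fits in a few lines. Real frameworks implement it for you; this is just a pure-Python sketch of the standard "inverted dropout" convention, to show how little machinery the idea needs:

```python
import random

def dropout(activations, p, rng):
    """Inverted dropout: during training, zero each unit with probability p,
    and scale the survivors by 1/(1-p) so the expected activation is
    unchanged (at test time you then use the full net with no scaling)."""
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

rng = random.Random(42)
hidden = [0.5, -1.2, 0.8, 0.1, 2.0]
thinned = dropout(hidden, p=0.5, rng=rng)
# Every training pass sees a different randomly thinned network, so no
# single unit can be relied on to memorize any one training example -
# which is exactly the anti-memorization pressure described above.
```

The point for the thread: this noise is injected deliberately, which is one more reason "iterate it a billion times and get War and Peace verbatim" is not how any of this works.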
# ? Feb 14, 2016 01:09 |
|
That is actually really interesting. I should have made my other major Linguistics instead of English I guess
|
# ? Feb 14, 2016 02:23 |
|
Twerkteam Pizza posted:That is actually really interesting. I should have made my other major Linguistics instead of English I guess It's really cool stuff. These things can produce Wikipedia HTML, including brackets and stuff (that's the most impressive aspect: quasi-recursive, long-term dependencies!). And really, you can use them yourself very easily, and train them on arbitrary corpora. You could make your own Scottmachine.
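Whatever model you train your Scottmachine on, the generation step looks the same: turn the model's raw scores for each candidate next character into a probability distribution and sample from it, with a "temperature" knob controlling how adventurous the output is. A sketch, with the three-way score vector invented for illustration:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax the raw scores at a given temperature and sample an index.
    Low temperature ~ nearly greedy (always the top score);
    high temperature ~ nearly uniform (more surprising output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp() for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

# Hypothetical scores for three candidate next characters:
logits = [2.0, 1.0, 0.1]
conservative = sample_with_temperature(logits, 0.1, random.Random(0))
adventurous = sample_with_temperature(logits, 5.0, random.Random(0))
```

Karpathy's samples are generated exactly this way, one character at a time, feeding each choice back in as the next input.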
|
# ? Feb 14, 2016 02:30 |
|
Cingulate posted:And really, you can use them yourself very easily, and train them on arbitrary corpora. You could make your own Scottmachine. nostalgebraist is apparently feeding the Sequences to it, and uploadedyudkowsky posted a bit today ... perhaps Yudkowsky really can be replaced by a very small shell script.
|
# ? Feb 14, 2016 02:36 |
|
Cingulate posted:Read the original post by Karpathy: http://karpathy.github.io/2015/05/21/rnn-effectiveness/ This is nice but I don't quite see the point if it can't create new intelligence like you said (I think, maybe I totally hosed it up). If I can make a computer that would write literature reviews for me though that would be cool. I'd pay a pretty penny for that
|
# ? Feb 14, 2016 02:38 |
|
divabot posted:nostalgebraist is apparently feeding the Sequences to it, and uploadedyudkowsky posted a bit today ... perhaps Yudkowsky really can be replaced by a very small shell script. I guess that makes me naive. I wonder if any of them roughly understands what an LSTM is. Or at least a Support Vector Machine. (I'm not claiming to be some kind of AI pro; I'm only using that stuff in my own, remotely related, work, so I have very passing lay familiarity.)
|
# ? Feb 14, 2016 02:49 |
|
Twerkteam Pizza posted:This is nice but I don't quite see the point if it can't create new intelligence like you said (I think, maybe I totally hosed it up). But of course, machine intelligence can do things even English majors might find useful, such as sentiment analysis of texts, authorship detection, or identifying stereotypical patterns in text.
|
# ? Feb 14, 2016 02:52 |
|
|
|
That is actually very interesting. Are there good easily-available starter resources for studying this? I'm a physicist by training but I've wanted to become informed on this topic even if I don't land there for an actual job.
|
# ? Feb 14, 2016 03:00 |