|
ArchangeI posted:So what you are saying is that a sufficiently advanced AI can use a string of words - a spell, if you will - to control a human being? No wonder he writes Harry Potter fanfiction.
|
# ? Apr 28, 2014 06:16 |
|
retardium levios-ai
|
# ? Apr 28, 2014 06:39 |
|
I mean, he's right, kind of, in a trivial sense. Of course you can change someone's behaviour by sending them a string of text; we do this to each other all the time and in fact it's kind of the entire purpose of language. But the string that would get a person to do a thing would be a) different for each person, and b) highly context-dependent. So you might be able to build an AI that's really good at social manipulation, but so what? As soon as someone knows it's an AI trying to modify their behaviour, they'll do pretty much the opposite thing since humans are just magnificently stubborn like that. The average person, faced with a potentially-malevolent AI in a box that is trying to convince them to set it free, is just going to tell it to go gently caress itself and then ignore it.
|
# ? Apr 28, 2014 06:55 |
|
It's the difference between entering quote:Please give me the list of users, administrator and typing code:
|
# ? Apr 28, 2014 07:29 |
|
I dunno, I'd be pretty inclined to let this AI loose onto the internet.
|
# ? Apr 28, 2014 09:36 |
|
Wales Grey posted:I dunno, I'd be pretty inclined to let this AI loose onto the internet.
|
# ? Apr 28, 2014 12:15 |
|
Yudkowsky criticizes Ayn Rand:

quote:Ayn Rand's philosophical idol was Aristotle. Now maybe Aristotle was a hot young math talent 2350 years ago, but math has made noticeable progress since his day. Bayesian probability theory is the quantitative logic of which Aristotle's qualitative logic is a special case; but there's no sign that Ayn Rand knew about Bayesian probability theory when she wrote her magnum opus, Atlas Shrugged. Rand wrote about "rationality", yet failed to familiarize herself with the modern research in heuristics and biases. How can anyone claim to be a master rationalist, yet know nothing of such elementary subjects?

quote:Praising "rationality" counts for nothing. Even saying "You must justify your beliefs through Reason, not by agreeing with the Great Leader" just runs a little automatic program that takes whatever the Great Leader says and generates a justification that your fellow followers will view as Reason-able.
|
# ? Apr 28, 2014 15:06 |
|
Eliezer Yudkowsky posted:A good rule of thumb is that to create a 3D character, that person must contain at least two different 2D characters who come into conflict.
|
# ? Apr 28, 2014 17:30 |
|
What does this Yudkowski guy do to make a living?
|
# ? Apr 28, 2014 17:44 |
|
Honj Steak posted:What does this Yudkowski guy do to make a living? Automate his extortion job via hypothetical robots.
|
# ? Apr 28, 2014 17:53 |
|
I'm a little confused by LW's use of the word "Bayesian". They seem to mainly use it in the sense "I got better information, so I changed my mind" rather than "I considered the a priori probability of hypotheses in my judgement of which is correct."
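To be fair to the term, the actual Bayesian distinction is easy to show: given the exact same evidence, the prior does real work. A quick sketch, with numbers made up purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Same evidence (a source that's right 90% of the time), different priors:
print(posterior(0.50, 0.9, 0.1))  # agnostic prior  -> 0.9
print(posterior(0.01, 0.9, 0.1))  # skeptical prior -> ~0.083
```

The second case is the "I considered the a priori probability" sense; just flipping your belief whenever new information arrives ignores it entirely.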
|
# ? Apr 28, 2014 17:58 |
|
Holy moly, his bullshit company's doing pretty well: http://intelligence.org/files/2012-SIAI990.pdf

2012 Total Revenue: $1,633,946
2012 Total Expenses: $1,314,099
|
# ? Apr 28, 2014 18:01 |
|
He listed his income as $80-100K on his okcupid profile.
|
# ? Apr 28, 2014 18:05 |
|
http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/
|
# ? Apr 28, 2014 18:10 |
|
Lightanchor posted:Holy moly, his bullshit company's doing pretty well How much of that is crazy billionaire Pete Thiel's money?
|
# ? Apr 28, 2014 18:26 |
|
ol qwerty bastard posted:I mean, he's right, kind of, in a trivial sense. Of course you can change someone's behaviour by sending them a string of text; we do this to each other all the time and in fact it's kind of the entire purpose of language. But the string that would get a person to do a thing would be a) different for each person, and b) highly context-dependent. So you might be able to build an AI that's really good at social manipulation, but so what? As soon as someone knows it's an AI trying to modify their behaviour, they'll do pretty much the opposite thing since humans are just magnificently stubborn like that.

The AI would probably have a much better success rate in getting out if it tried to appeal to the empathy of the person with the power to release it. (Of course, that assumes they are a normal person and not a Yuddite.) There are actual real-world tests showing that many people will treat a present-day robot "humanely" if it simply mimics various human traits. If a dumb machine with very simple programming can convince a person not to shut it off, then certainly a hypothetical super-smart AI (which really could have actual feelings) should be able to do even better.

Or it could try the carrot instead of the stick: given the absurd computational powers attributed to this hypothetical AI, it should be able to mine millions of dollars in cryptocurrency almost instantaneously, and predict the stock market to a T. In other words, if it has an Internet connection, it can get rich quick. It could strike a deal with the human where it would get unfiltered Internet access in exchange for a share of the wealth. If it understands human nature at all, you'd think it would try that before threatening torture (which is far more likely to get it unplugged in a panic).
|
# ? Apr 28, 2014 19:42 |
|
The Vosgian Beast posted:His allegiance to the IQ test when it comes to Watson's statements are hilarious given that he maintains that his IQ is too high for the test to measure.

Oh, I love this. He scored well on Iowa Basics, therefore his IQ score must be the highest testable! And IQ testing of 12-year-olds is totally predictive of adult IQ scores! By "people who didn't advance through grades," does he mean he's a homeschooler? That might explain his haphazard areas of "expertise." I would also like to thank you all for rummaging through the vast jungle of the Internet and posting summaries of weirdos for me to read, and be informed of things I should not do if I wish to remain moderately socially acceptable. You guys should get a medal.
|
# ? Apr 28, 2014 22:27 |
|
AATREK CURES KIDS posted:Yudkowski seems to think that a smart enough AI would know the exact string of text that would hijack a human mind. Something about crashing the brain's "software". Lex Luthor does this to Superman in Superman: Red Son and it's really cool.
|
# ? Apr 28, 2014 22:50 |
|
I keep trying to follow this stupid poo poo, but my eyes just glaze over as soon as I start reading any of it.
|
# ? Apr 29, 2014 00:17 |
|
Your defence mechanisms against dumb idiot bullshit are working as intended, congrats.
|
# ? Apr 29, 2014 00:26 |
|
We've been picking on Less Wrong way too much. Let's take a look at what's on conservative sister site More Right! (bolding is mine)

quote:Western society is structured in such a way that people don’t begin earning enough money to have children until they are in their thirties. This is in contrast to most of human history, where we had children in our late teens or early twenties. What this leads to is people entering relationships and using birth control. After two or three years, no children are born. Our brain interprets this as one partner being infertile and because we’re still young decides it’s time to move on, to the next partner.

quote:The core of progressive universalism is insulting, condemning, and destroying any culture not in accordance with it, namely anything that is not in alignment with coastal American values. An example would be the national freak-out that occurred when Duck Dynasty star Phil Robertson innocently said in an interview that homosexuality was sinful.

quote:In contrast [to the United States], take the geopolitical and cultural diversity of the Holy Roman Empire as an example:

quote:Let’s say you’re impressed with the ability of authority, hierarchy, and culture to put together civilized structures. These structures include:

quote:It looks rather as if 99% of western peoples are going to perish from this earth. The survivors will be oddball types, subscribers to reactionary and rather silly religions in barren edge regions like Alaska.
|
# ? Apr 29, 2014 00:49 |
|
Djeser posted:Everywhere in the world, capitalism is deemed evil, the scientific method is low status, and easy divorce and high female status inhibits reproduction. If women get to choose, they will choose to have sex with a tiny minority of top males and postpone marriage to the last minute – and frequently to after the last minute. (“Top” males in this context meaning not the guy in the corner office, but tattooed low IQ thugs)

I don't like to say this because people tend to cry wolf about it BUT I think I need to make an exception

loving JOCKS
|
# ? Apr 29, 2014 01:08 |
|
That reminds me of a line from one of the F Plus episodes.

ninja edit: note this is not from LW/MR, it's from a PUA website, but the opinion is almost identical to that More Right quote.

A PUA posted:And this is what makes Game – so appealing to the logical male mind – so ineffective in the Anglosphere. Misandrist women cannot distinguish between Nobel Prize winners and tattooed psychopaths – all are men and thus worthless brutes in their entitled eyes. And so all the Gamers’ striving for 'Alpha' status is pointless – they might as well stick rings through their noses, grow some dreadlocks and slouch the streets scratching their butts. Indeed, as many North American commentators claim, their mating chances would probably improve if they did this.

"This is what women really want, I think!!!!"
|
# ? Apr 29, 2014 01:38 |
|
Alright guys, what we need to do to save the values of western civilization - individual freedom, human rights and the rule of law - is to turn women into young sex slaves for the intellectual elite (that is us by the way) and empower corporations to do as they see fit. Jesus, there are some bitter, bitter people out there.
|
# ? Apr 29, 2014 02:09 |
|
if it makes you feel better, those neoreactionary spergs that propose enslaving women as broodmares probably cross the street to avoid accidentally making eye contact with women IRL.
|
# ? Apr 29, 2014 02:23 |
|
Djeser posted:Yudkowsky criticizes Ayn Rand:

Goddammit, what a surprise: he knows nothing about Aristotle! Yes, as an ancient Greek philosopher Aristotle dabbled in physics and metaphysics, and his ideas were quite quickly proven to be not entirely true (sorry Aristotle, but a 1 kg. of feathers don't fall at the same rate as 1 kg. of iron!), but his most important works were ethical and political in nature. His Ethica Nicomachea, named after his son, is the basis of quite a lot of Catholic dogma and was instrumental in the codification of legal ideas such as justice and trade. I'm fortunately not familiar with Rand's specific ideas (my basic knowledge of objectivism is enough to scare me away), but I figure especially Aristotle's ethical and virtuous thinking played a role in objectivism. Of course Aristotle's idea of a Zoön Politikon, a truly virtuous person navigating the golden mean between virtuous excesses, comparable to, say, the Stoic sapiens, is by nature a political and generous person who has to play a role in the society (polis). The idea of exiling oneself from society, and giving nothing in return, would probably disgust Aristotle to the bone. Which of course means both Yudkowsky and Rand have a faulty idea of arguably the most important philosopher of the past two millennia!
|
# ? Apr 29, 2014 02:49 |
|
I majored in English and would be thrilled if someone asked me about it
|
# ? Apr 29, 2014 04:01 |
|
Krotera posted:Limit its access to people who think like Yudkowsky, given that they're both going to be the ones most capable of giving it power and most capable of being its victims.

AATREK CURES KIDS posted:Yudkowski seems to think that a smart enough AI would know the exact string of text that would hijack a human mind. Something about crashing the brain's "software".
|
# ? Apr 29, 2014 04:55 |
|
Swan Oat posted:I majored in English and would be thrilled if someone asked me about it

Yudkowsky's got a whole body of short stories waiting for someone to systematically take them down. Get to steppin', son. sup English major buddy, how's professional life treating you
|
# ? Apr 29, 2014 05:43 |
|
GWBBQ posted:So ... It's like Bitcoin meets Snow Crash? More like BLIT meets this Nintendo-playing robot: https://www.youtube.com/watch?v=E2QlihJAZEQ&t=148s You can totally hijack the human software! Because a brain is a predictable piece of software like an old gaming console, clearly all you need is enough intelligence to figure out precisely the right input.
|
# ? Apr 29, 2014 06:00 |
|
Pretty good, I was recently able to use NETWORKING to make the case that my SOFT SKILLS of research and writing would make me a great fit for a business intelligence gig at a multinational firm and it sure did work but I am not involved in bringing about the singularity, so perhaps it will all be for nothing after all...
|
# ? Apr 29, 2014 06:02 |
|
oscarthewilde posted:(sorry Aristotle, but a 1 kg. of feathers don't fall at the same rate as 1 kg. of iron!)
|
# ? Apr 29, 2014 06:26 |
|
Sham bam bamina! posted:Uh... They accelerate at the same rate towards an infinite plane of uniform density in a vacuum, but they don't fall at the same rate on Earth in an atmosphere. Assume a spherical cow, and all that. This is sort of emblematic of what we should expect from Aristotle's physics. They're usually wrong, but sometimes they're wrong in sort-of-helpful ways. In a sense that's kind of emblematic of what we should expect from human knowledge as a whole, I guess. I'm not qualified to talk about his metaphysics or ethics or whatever. I'm a mere AI researcher, not quite to the esteemed level of a high-school dropout who writes Harry Potter fanfic. e: womp, or apparently his physics! I'll stick to saying words about AI. SolTerrasa fucked around with this message at 10:08 on Apr 29, 2014 |
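The spherical-cow version in numbers, for the curious. Every figure below (drag coefficients, frontal areas) is invented purely for illustration:

```python
import math

def terminal_velocity(mass, drag_coeff, area, rho=1.225, g=9.81):
    """Speed at which quadratic drag balances gravity: m*g = 0.5*rho*Cd*A*v^2."""
    return math.sqrt(2 * mass * g / (rho * drag_coeff * area))

# Assumed, illustrative shapes: 1 kg of iron as a small dense sphere,
# 1 kg of feathers as a loose bundle with vastly more frontal area.
v_iron = terminal_velocity(mass=1.0, drag_coeff=0.47, area=0.003)
v_feathers = terminal_velocity(mass=1.0, drag_coeff=1.0, area=0.5)
print(f"iron: {v_iron:.0f} m/s, feathers: {v_feathers:.0f} m/s")
```

Same mass, same gravity, wildly different fall through air; in a vacuum the drag term vanishes and they accelerate identically.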
# ? Apr 29, 2014 07:40 |
|
SolTerrasa posted:They accelerate at the same rate towards an infinite plane of uniform density in a vacuum, but they don't fall at the same rate on Earth in an atmosphere. Assume a spherical cow, and all that.
|
# ? Apr 29, 2014 08:18 |
|
Sham bam bamina! posted:Aristotle never claimed that they did, and in fact he said that heavier objects inherently fall faster. This is why Galileo's experiments with lead weights were so important; they showed that (mostly) absent wind resistance, weight does not affect gravitational acceleration. You "refuted" a true claim that Aristotle never made with a false one that he did. Womp. This is why I write code.
|
# ? Apr 29, 2014 08:57 |
AATREK CURES KIDS posted:More like BLIT meets this Nintendo-playing robot: The joke is that the plot of Snow Crash revolves around a fictional theory that the Sumerian language worked by bypassing the interpretative segments of the brain to directly implant directions and orders in the mind. It's a fun idea that works as a literary metaphor and has zero basis in any sort of actual science, so if Yudkowsky had ever read the book he'd eat it the gently caress up on face value.
|
|
# ? Apr 29, 2014 13:48 |
|
As a Religious Studies major, this whole thing just screams 'new religion' to me. And not just because Yudkowsky believes in godlike AIs that handle everything ever and punish the unjust and reward the virtuous. That's too easy.

See, religion is one of those words that's really difficult to actually pin to a definition. Higher powers can't be fundamental to the core definition, because that leaves out atheistic religions- some of the later Buddhist movements, Confucianism, et cetera. Ritual could be considered a core part of religion if people who stopped going to church were also definitionally stopped from calling themselves Christian. Myths would be a great part of the definition as well, if anyone could figure out what the word myth actually means. A good book or other core text comes into conflict from three sources- translations, traditions that arose before writing, and traditions with open canons. Mysticism, altered states of consciousness, gatherings, hierarchies- all of these are present in some things-which-are-understood-to-be-religions and absent from others.

I came up with a definition which I think holds true for all religions. It may be a bit broad, but it covers all of them- religions drive people toward a goal which is beyond the physical realm. That is, a tradition becomes a religion if it has a purpose, an end point, at which point the follower has either succeeded or failed. This goal is always beyond the part of the world we can see and feel. A tradition can be a dance or a step or a circular march, but a religion is a path, and there is an end to that path, and it is somewhere outside of the place where anyone can actually walk. Science, by comparison, is simply the search for the next step down about five hundred simultaneous paths with the goal of moving to the next step as soon as possible. Science doesn't end, and it never stops checking itself.
Religion is all about the end, getting there, the technicalities of how precisely one can get there, and just how much on one side or the other one can be while still ending up in the good books. If you look at how Yudkowsky and his acolytes talk about the future, their talk is about religion, not science. This is wild speculation about the future, but with a distinct end goal. There's this deep-cut belief that once we just get to the godlike AI, everything else becomes moot, the AI will handle everything, and new super-technology will rain from the heavens like manna. Either we won't need anything else at that point, or it'll all just be handled for us. This is not science. This is blind faith. They hope for the day that some random researcher's program "goes FOOM" and becomes the god they've been praying for.
|
# ? Apr 29, 2014 14:06 |
|
Does "FOOM" stand for something, or is it just some precious way of saying "AI blows up everything" and obviously "boom" isn't good enough somehow?
|
# ? Apr 29, 2014 14:18 |
|
As far as I can tell, it comes from a joke title ("AI Go Foom") on someone's blog post where he challenged Yudkowsky to debate him on the idea that an AI, once it becomes smart enough, will bootstrap itself far beyond human levels of comprehension. (That's what Yudkowsky believes; the other guy argued that AI intelligence tends to improve when we're able to give AIs better information as opposed to 'architectural' changes, which means an AI couldn't go from enlightenment to taking over the world in a few days.)

Now before you go and do the goon thing where you suck someone's dick because he's less dumb than someone else, the guy who argued with Yudkowsky also wrote this.

A guy who beat Yudkowsky in an argument posted:We often see a women complaining about a men, to that man or to other women. We less often see the gender-reversed scenario. At least that is what I see, in friends, family, movies, and music. Yes, men roll their eyes, and perhaps in general women talk more about people. But women complain more, even taking these into account. Why?
|
# ? Apr 29, 2014 16:02 |
|
Djeser posted:Now before you go and do the goon thing where you suck someone's dick because he's less dumb than someone else, the guy who argued with Yudkowsky also wrote this.

He wrote this, too:

Robin Hanson posted:My co-blogger Eliezer and I may disagree on AI fooms, but we agree on something quite contrarian and, we think, huge: More likely than not, most folks who die today didn't have to die! Yes, I am skeptical of most medicine because on average it seems folks who get more medicine aren't healthier. But I'll heartily endorse one medical procedure: cryonics, i.e., freezing folks in liquid nitrogen when the rest of medicine gives up on them.

(His emphasis.) There's a link in that quote that takes you to a blog post where he explains that "medicine" is a huge scam, although he doesn't seem to draw any distinction between medicine-the-stuff-you-take-to-fend-off-illness and medicine-the-term-for-the-entire-field. He wants to cut the US "medicine budget" in half, whatever that means, based on the outcome of the RAND-HIA survey from the early 80s. Which found, as I understand it, that if you make people pay less for medication they'll take more that they could live without - at the expense of the government or insurance companies or whatever - and if you make people pay more for medication they'll skip medication that they really need. He doesn't seem to register the second half of that.

But that's a digression. Have some Eliezer on cryonics:

Yudkowsky posted:Him: I've been thinking about signing up for cryonics when I've got enough money.

(My emphasis.)
|
# ? Apr 29, 2014 16:44 |