|
qntm posted:there is also the problem of defining what "AI" is, what you want it to be, a way to test for its presence (in a way which doesn't exclude 50%+ of real humans, oops), and the fact that solving any particular "AI research problem" (grandmaster-level chess playing, sonnet-writing, bipedal locomotion, image recognition) causes the goalposts to move and "AI" to suddenly be something else This is a pathologically true statement about technological development of smart systems in general and the human condition. e.g. arguably the very first automated "robot" machine that took over real people's jobs was Vaucanson's loom, which was built by a french dude who used to build automatons for entertainment purposes and then got promoted to head of the French loomers' guild. Back then, people working at the looms were seen as master craftsmen and used to work in close interaction with pattern designers to do their work. The loomer, being the person who actually did the work, was a very respected worker, and the job carried the social status of being a highly intelligent individual able to handle a very complex work flow, whereas the pattern designer was basically looked down upon as the "ideas guy" - i.e. the guy who might conceive of the design, but was too unskilled to implement it in any real fashion. But Vaucanson's automated loom pretty much destroyed the prestige of being a loomer overnight, by proving able to loom automatically from pre-configured patterns. 
One would think that such a machine would then be received as a machine that, like the loomer, was "highly intelligent and able to handle a very complex work flow", but of course, in actuality how people reacted was that they basically went "welp, turns out being a loomer is piss easy and even a dumb machine can do it, so the real intelligence in looming must therefore lie with the pattern designer" and they just moved the goalpost for what human intelligence in the context of looming even meant "in the first place." e: Stanford's Jessica Riskin has written a lot on the history of automatons and robotics, and it's a cool read Funk In Shoe fucked around with this message at 13:43 on Oct 23, 2015 |
# ? Oct 23, 2015 13:35 |
|
|
"If a machine can do it, it's not AI"
|
# ? Oct 23, 2015 14:17 |
|
Jabor posted:and in fact you probably should. if local branching is so useful then how come me, a DVCS, can Stash?
|
# ? Oct 23, 2015 15:40 |
|
~Coxy posted:if local branching is so useful then how come me, a DVCS, can Stash? explain that, hgailures
|
# ? Oct 23, 2015 16:39 |
|
fart simpson posted:greetings from the 80s, i guess speaking of 80's AI, all y'all go read up on Marion Tinsley: checkers grandmaster, world champion. some CS folks start bragging about their checkers AI, the checkers federation blows them off and refuses to allow a computer to play. Tinsley resigns his title to go play the computer
|
# ? Oct 23, 2015 17:27 |
|
JawnV6 posted:speaking of 80's AI, all y'all go read up on Marion Tinsley quote:In one game, Chinook [the computer program], playing with white pieces, made a mistake on the tenth move. Tinsley remarked, "You're going to regret that." Chinook resigned after move 36, fully 26 moves later. The lead programmer Schaeffer looked back into the database and discovered that Tinsley picked the only strategy that could have defeated Chinook from that point and Tinsley was able to see the win 64 moves into the future. that's rad
|
# ? Oct 23, 2015 19:17 |
|
qntm posted:that's rad
|
# ? Oct 23, 2015 19:49 |
|
the current status of computer chess ai is more subtle than it seems at first. like sure they kick every human's rear end just with sheer power of crunching moves, but they're still very imprecise* at judging board positions. I read this one great article talking about this once but I can't find it anymore [*] imprecise compared to the best human players I mean
|
# ? Oct 23, 2015 20:11 |
|
tell me moar
|
# ? Oct 23, 2015 20:30 |
|
"real" ai is going to happen when a generation of kids grows up with talking barbies and they see machines as entities with wills and desires instead of aggravating beepy boxes that sit under the desk. they won't have all the silly hangups we do about computers "merely" doing brute force search in some contrived model world. neural nets also will probably help with this because they're very black-boxy and if you train a neural net to do something it's really hard to poke inside it and see what's going on. they've got a kind of mystery as to why they do what they do that is really important for people accepting something as intelligent.
|
# ? Oct 23, 2015 21:46 |
|
qntm posted:there is also the problem of defining what "AI" is, what you want it to be, a way to test for its presence (in a way which doesn't exclude 50%+ of real humans, oops), and the fact that solving any particular "AI research problem" (grandmaster-level chess playing, sonnet-writing, bipedal locomotion, image recognition) causes the goalposts to move and "AI" to suddenly be something else idk, turing's original proposal has lasted the entire history of ai so far without any significant shifting of goalposts?
|
# ? Oct 23, 2015 22:14 |
|
don norman of "design of everyday things" fame has written a lot about it in terms of human-machine interface. machines, he says, are currently like infants in that all the communication they're capable of is crying (i.e. crashing, sounding alarms, etc.). there are many historical examples of this, sometimes deadly, like the autopilot that kept silently correcting a slight roll on a commercial flight, until it couldn't correct anymore and welp, automatically disengaged without notice, launched the airliner into a barrel roll and almost killed everyone. he argues machines should be more like work animals, the example he uses is horses, where you have really good two-way non-verbal feedback between "operator" and "machine"

douglas adams provided the perfect counter-example, with the automatic doors that don't just open for you, but feel pleasure doing it and verbalize the pleasure

I'm also reminded of data from star trek and his quest to acquire feelings. except he already has feelings. android feelings, yeah, sure. why should he have human feelings. the other alien species don't have human feelings either. I could understand if his quest was to find or make more of his own, but no, it's to be more human
|
# ? Oct 23, 2015 22:39 |
|
Soricidus posted:idk, turing's original proposal has lasted the entire history of ai so far without any significant shifting of goalposts? current consensus afaik is that it's a useless definition of AI
|
# ? Oct 23, 2015 22:41 |
|
Soricidus posted:idk, turing's original proposal has lasted the entire history of ai so far without any significant shifting of goalposts? but it yields a false negative on everybody who doesn't understand the same written language as the tester, to say nothing of people unable to read, write or interact with a computer for whatever reason, including those too young to have learned how to do those things. I mean if you want to construct a definition of intelligence which excludes the majority of humans you can do that I guess
|
# ? Oct 23, 2015 22:42 |
|
hackbunny posted:I'm also reminded of data from star trek and his quest to acquire feelings. except he already has feelings. android feelings, yeah, sure. why should he have human feelings. the other alien species don't have human feelings either. I could understand if his quest was to find or make more of his own, but no, it's to be more human i always thought that he had that quest because he was made by a human, so of course the human would have projected his idea that robots don't have feelings onto data, which makes them less than humans, and data picked up on it and internalized it. like a human child.
|
# ? Oct 23, 2015 22:44 |
|
qntm posted:but it yields a false negative on everybody who doesn't understand the same written language as the tester, to say nothing of people unable to read, write or interact with a computer for whatever reason, including those too young to have learned how to do those things. this is a stupid nitpick, and in fact humans tend to exhibit this exact judgment on other humans (you can't communicate with me therefore you are not intelligent) so i dont know why you even bring it up you can also adapt turing's proposal to a system which uses, for example, speech recognition and synthesis - just force the human control groups to also communicate through this mechanism
|
# ? Oct 23, 2015 22:46 |
|
now you have removed the "written communication" requirement and the "interact with a computer" requirement but everything else is still completely valid
|
# ? Oct 23, 2015 22:47 |
|
qntm posted:but it yields a false negative on everybody who doesn't understand the same written language as the tester, to say nothing of people unable to read, write or interact with a computer for whatever reason, including those too young to have learned how to do those things. you're the computer, aren't you?
|
# ? Oct 23, 2015 22:54 |
|
Dessert Rose posted:like a human child. but he never grows out of it. in the end, he never grows beyond his programming. he starts dreaming not due to self-improvement but because of a glitch; he even unlocks the cheevo where his dad's ghost compliments him on his new stage of evolution, but he (unintentionally) cheated. he literally installs feelings as an expansion module

there's I think a total of two episodes where he's allowed to grow, one where he leads a revolution of sorts while marooned on a planet, to break a stalemate between two factions; and one where he is given his first command and has to gain the respect of a bigoted first officer (I loved that one btw, captain data best captain). and then bam, emotional chip ex machina (in machinam?)

for being the second most important character on the show, he was written so lamely hackbunny fucked around with this message at 23:06 on Oct 23, 2015 |
# ? Oct 23, 2015 22:57 |
|
qntm posted:but it yields a false negative on everybody who doesn't understand the same written language as the tester, to say nothing of people unable to read, write or interact with a computer for whatever reason, including those too young to have learned how to do those things. if you ran an AI that could write youtube comments with or without typos I couldn't tell whether they're man-made or machine-generated.
|
# ? Oct 23, 2015 23:05 |
|
MononcQc posted:if you ran an AI that could write youtube comment with or without typos I couldn't tell whether they're man-made or machine-generated. tbf if you wrote a program that just posted "we're even worse off than ever thanks to obama" then it would fulfill the test
|
# ? Oct 23, 2015 23:14 |
|
lol remember the team that claimed victory with their emulation of a "10yo Ukrainian boy" til kurzweil stepped in to tell them to take it down a notch
|
# ? Oct 23, 2015 23:21 |
|
Dessert Rose posted:this is a stupid nitpick, and in fact humans tend to exhibit this exact judgment on other humans (you can't communicate with me therefore you are not intelligent) so i dont know why you even bring it up it's fun when u realize someone's trying the "slower and LOUDER" trick in a language u don't speak
|
# ? Oct 23, 2015 23:23 |
|
JawnV6 posted:lol remember the team that claimed victory with their emulation of a "10yo Ukrainian boy" til kurzweil stepped in to tell them to take it down a notch i heard about this but i didn't know kurzweil owned them. link?
|
# ? Oct 23, 2015 23:39 |
|
NihilCredo posted:
didn't Subjunctive teach us, in the tech bubble thread, that as long as it's a tech company doing the research on human subjects we don't need to worry about any sort of ethics review?
|
# ? Oct 23, 2015 23:42 |
|
hackbunny posted:but he never grows out of it. in the end, he never grows beyond his programming. he starts dreaming not due to self-improvement but because of a glitch; he even unlocks the cheevo where his dad's ghost compliments him on his new stage of evolution, but he (unintentionally) cheated. he literally installs feelings as an expansion module yeah it's frustrating for sure. tng did so much right and so much wrong the number of times that "this isn't life because it isn't carbon based" or similar was uttered is just loving pathetic. the first time you encounter some weird crystalline thing that's larger than your ship, fine, but after you've encountered a few of those you'd think maybe you could expand your definition of "life" just a bit
|
# ? Oct 23, 2015 23:43 |
|
MononcQc posted:if you ran an AI that could write youtube comment with or without typos I couldn't tell whether they're man-made or machine-generated. oh sure, outside of a Turing test it's absolutely trivial for a computer program to dupe a human into thinking it too is a human.
|
# ? Oct 23, 2015 23:45 |
|
I wonder what percentage of humans would fail a turing test.
|
# ? Oct 23, 2015 23:53 |
|
you're actually the only human in yospos
|
# ? Oct 24, 2015 01:09 |
You could probably make a decent yospos bot by having it emptyquote other emptyquoted posts, and occasionally having it write "much like your posting" or "don't sign your posts"
|
|
# ? Oct 24, 2015 01:16 |
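[The bot recipe in the post above is simple enough to sketch. Here's a minimal Python version; the forum API is omitted, and `yosbot_reply`, the 10% posting rate, and the reply encoding (empty string = emptyquote, None = lurk) are all invented for illustration:]

```python
import random

# catchphrases the bot occasionally posts, per the recipe above
CATCHPHRASES = ["much like your posting", "don't sign your posts"]

def yosbot_reply(was_emptyquoted, rng=random):
    """Decide how to reply to a post.

    Returns "" for an emptyquote, a catchphrase string, or
    None to post nothing at all.
    """
    if was_emptyquoted:
        # emptyquote anything that was itself emptyquoted
        return ""
    # occasionally drop a catchphrase; otherwise lurk silently,
    # which is arguably the most convincingly human behaviour
    if rng.random() < 0.1:
        return rng.choice(CATCHPHRASES)
    return None
```

[Most calls return None, so the bot spends most of its time lurking, which conveniently also makes it hard to distinguish from a human.]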
|
VikingofRock posted:You could probably make a decent yospos bot by having it emptyquote other emptyquoted posts, and occasionally having it write "much like your posting" or "don't sign your posts" MUCH LIKE YOURE POSTING
|
# ? Oct 24, 2015 01:40 |
|
VikingofRock posted:You could probably make a decent yospos bot by having it emptyquote other emptyquoted posts, and occasionally having it write "much like your posting" or "don't sign your posts" same
|
# ? Oct 24, 2015 01:43 |
|
VikingofRock posted:You could probably make a decent yospos bot by having it emptyquote other emptyquoted posts, and occasionally having it write "much like your posting" or "don't sign your posts" D E H U M A N I Z E Y O U R S E L F A N D F A C E T O Y O S B O T
|
# ? Oct 24, 2015 01:52 |
|
the most convincingly human bot is the one which lurks and doesn't say anything E: now I'm wondering how a bot would perform in a Turing test if it just said nothing in response to all questions, while its human counterpart is just answering everything honestly. completely denying the interrogator any data to work with qntm fucked around with this message at 02:06 on Oct 24, 2015 |
# ? Oct 24, 2015 02:03 |
|
is there an MOF of Programming?
|
# ? Oct 24, 2015 02:06 |
gonadic io posted:MUCH LIKE YOURE POSTING
|
|
# ? Oct 24, 2015 02:36 |
|
qntm posted:the most convincingly human bot is the one which lurks and doesn't say anything thanks for the compliment
|
# ? Oct 24, 2015 02:40 |
|
gonadic io posted:MUCH LIKE YOURE POSTING i was thinking about how stupid this joke would be but i still laughed really hard at it
|
# ? Oct 24, 2015 02:45 |
|
one of my friends asked me if going back to school for web development was a good idea, and I told him if you ain't from the ghetto, don't come to the fuckin ghetto
|
# ? Oct 24, 2015 07:08 |
|
|
|
that seems like a dumb response, but going to school for web development seems like a dumb idea anyway
|
# ? Oct 24, 2015 08:14 |