|
I caught this earlier today and it was more compelling than I expected, more so than Stephen Hawking's whining about AI killing us: https://www.youtube.com/watch?v=R_sSpPyruj0

Specifically, he talks about a failure of intuition about how super AI will come about: that it isn't necessarily a salvation. His comparisons are sick and awesome. He talks about a man who could be considered peak human intelligence, John von Neumann. Dude was wicked smart, to the point that he's the reason we were able to drop two bombs on Japan instead of one. Quote:

When George Dantzig brought von Neumann an unsolved problem in linear programming "as I would to an ordinary mortal", on which there had been no published literature, he was astonished when von Neumann said "Oh, that!", before offhandedly giving a lecture of over an hour, explaining how to solve the problem using the hitherto unconceived theory of duality.

He was writing about global warming back in 1950. Came up with the theory behind the first computer viruses. Etc etc.

The TED talk gets into this: that electronic circuits are a million times faster than biochemical circuits. He says, say we create an AI that's equal to a group of researchers at Stanford. Because of the speed difference, letting it solve problems for a week would be equal to "20,000 years of human development." If data is the key to unlocking something like this, which is "inevitable", then that's a war, where an AI at this level being active for six months would grant a 500,000-year advantage, at minimum, to its wielders. Development that fast is almost impossible to conceive of or handle. It doesn't NEED to be smarter than us; it simply will be, through the straight use of speed.

He uses the ant analogy. A good one. You don't care about ants, unless they're in your way. The key is creating an environment where the goals of the AI align with ours, or... we're hosed.

Discuss Hawking and his insane fear, or your thoughts on AI, and of course dip into the hilarious pool that is "atheists who believe in the singularity saving them are as blind as Christians believing Jesus will save them"
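For what it's worth, the arithmetic behind those numbers checks out. A quick sketch (the only figure from the talk is the ~1,000,000x electronic-vs-biochemical speedup; the helper function and the exact day counts are my own back-of-envelope, not the speaker's):

```python
# Back-of-envelope check on the talk's speed argument. The ~1,000,000x
# speedup and the one-week / six-month scenarios are from the talk;
# the helper function itself is just my sketch.
SPEEDUP = 1_000_000          # electronic circuits vs. chemical circuits
DAYS_PER_YEAR = 365.25

def subjective_years(wall_clock_days: float, speedup: float = SPEEDUP) -> float:
    """Human-equivalent years of thinking for a given wall-clock run time."""
    return wall_clock_days * speedup / DAYS_PER_YEAR

print(round(subjective_years(7)))        # one week   -> 19165 (the "20,000 years")
print(round(subjective_years(182.625)))  # six months -> 500000 (the "500,000 year advantage")
```

So the quoted figures are just the million-x speedup scaled by run time, rounded to a headline number.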
|
# ? Oct 15, 2016 07:27 |
|
|
I mean, his crossover was pretty nice, but he didn't like practice. Practice? We're talkin 'bout PRACTICE?
|
# ? Oct 15, 2016 07:41 |
|
I was in 6th-7th grade and Iverson was my favorite player, in either the NCAA or NBA. I was ahead of the curve on it, but haven't ever gotten my due on this.
|
# ? Oct 15, 2016 07:43 |
|
Harvard architecture in the front side bus, von Neumann architecture out the back
|
# ? Oct 15, 2016 08:14 |
"The key is creating an environment where the goals of the AI align with ours, or... we're hosed"

Isn't that loving obvious? I really do wish people would think harder about this topic though.
|
|
# ? Oct 15, 2016 09:23 |
|
Well, since electric switching circuits are nowhere near close to how our brains work, I'd say we have no worry about this happening, because true AI will never exist, OP.
|
# ? Oct 15, 2016 09:28 |
|
I played a video game once where the AI took over and it was the bad guy. Why is the AI never the good guy? Can't we have one game where the AI is good? -- I HAVE asked for this
|
# ? Oct 15, 2016 09:30 |
|
Yeah, A.I.s are a cool fantasy. Unfortunately they're just fantasy so far; we haven't achieved poo poo all in decades of trying. We're still doing preprogrammed robots and sensor-controlled robots that can't do poo poo.
|
# ? Oct 15, 2016 09:32 |
|
Seriously, true intelligence is more than just a bunch of if statements, and a machine designed to follow a rigid set of commands won't just spontaneously ignore them for no good reason. Everyone wants R2-D2, but it's more of a fantasy than anything. People who actually work in programming yet believe it's possible to code more than an imitation of intelligence usually turn it into a religion as well.

Aryu Kiddimeh posted: "The key is creating an environment where the goals of the AI align with ours, or... we're hosed"

The real question is why wouldn't it be programmed so its goals align with ours in the first place?
|
# ? Oct 15, 2016 10:00 |
|
[Genocidal super AI plotting human extinction in nerd basement] YOSPOS... Computer
|
# ? Oct 15, 2016 10:16 |
|
.
|
# ? Oct 15, 2016 10:19 |
|
you should buy a banner ad?
|
# ? Oct 15, 2016 10:24 |
|
???
|
# ? Oct 15, 2016 10:32 |
|
Ahh... another contribution to the new forums software. Thanks for chipping in, Dare! Every bit counts! P.S. Sorry about your rather modest sized penis, dude!
|
# ? Oct 15, 2016 10:38 |
|
i just want an ai that can be my friend cause i am very lonely is that feasible op
|
# ? Oct 15, 2016 11:08 |
|
Digitally Automated Rapid Executor is online and active.
|
# ? Oct 15, 2016 11:11 |
|
The fear of an AI developing new science/intelligence completely ignores the factor of stimulus that the physical world constantly provides, even when you lose the ability to interact with it. How far set back would physics be if Feynman hadn't noticed the details of a paper plate tossed in the air? As well as aggression, greed, impulsiveness, urgency, etc., which are inevitable aspects of a species that has to deal with mortality, limited survival resources, and a genetic mechanism pushing us to reproduce.

These factors really come into play with the physical and mental maturation of adulthood, which is not necessarily a factor merely of age. For instance, pushing back those aspects of mental maturation is the essential human-friendly aspect of dog domestication from wolves. Breeding of wolves that behaved more like wolf cubs toward humans (non-aggressive, playful, friendly) resulted in an animal that maintains those traits throughout its entire life. An AI would not have those genetic maturation traits to begin with and would not evolutionarily acquire them.

An AI wouldn't be like us. Its goals would never and could never truly "align" with ours, because it has completely different criteria for survival that can be met in an entirely different way. Imagine if a moon of Jupiter was sentient. It would not give a gently caress what we were doing, no matter how "smart" it is.

So none of that is really very interesting. What will actually finally be interesting about artificial intelligence is the distinctive moment when one "wakes up." I distinctly remember my first memory of waking up: staring at the ceiling for a while, realizing that this is (I am) different, and choosing to remember that moment. We would be able to observe a watershed moment like that in an AI, both internally and in its behavior changes. I'll watch that video after I make breakfast, but I doubt it will convince me of anything.
gary oldmans diary fucked around with this message at 11:17 on Oct 15, 2016 |
# ? Oct 15, 2016 11:15 |
|
"Oh no they super smart AI has gone rogue and wants to destroy all humanity!!" *flips power switch to 'off'* "Wow that was a close one!"
|
# ? Oct 15, 2016 11:15 |
|
I think if we ever get there it's gonna be a dumbass at best, reckon.
|
# ? Oct 15, 2016 11:19 |
|
"Sir, it's about our AI." "Yes, what is it?" "Well, it's altered its sensory input parameters... sort of." "What do you mean?" "Well, the fiber optic video feeds. All of them -I mean the entire global feed... the 'V.V.I.' " "Yes?" "Well it renamed it. Internally it refers to the interface as 'lol 2016 Best human fails Try not to laugh'"
|
# ? Oct 15, 2016 11:23 |
Better Fred Than Dead posted: He says, say we create an AI that's equal to a group of researchers at Stanford. Because of the speed difference, letting it solve problems for a week would be equal to "20,000 years of human development"

Hell of a "say we", and since successive generations of Stanford grads have been proven or at least asserted to be wrong by the following ones, how do we know it wouldn't just plateau at the contemporary state of research, but very quickly?

e: also, what instruments does this super AI have at its disposal? A lot of the great scientists made or at least refined instruments, and if you've ever been in a lab you'll know there's never enough of them to do routine work just on a day-to-day basis.

jBrereton fucked around with this message at 11:34 on Oct 15, 2016
|
# ? Oct 15, 2016 11:28 |
|
An AI could process at such speed that it would be able to comprehend and coexist with FYAD in a matter of hours instead of a matter of years.
|
# ? Oct 15, 2016 11:30 |
gary oldmans diary posted: An AI could process at such speed that it would be able to comprehend and coexist with FYAD in a matter of hours instead of a matter of years.
|
|
# ? Oct 15, 2016 11:32 |
|
A.I.s might come around one day, but they won't get smart enough to take up less than a certain amount of fragile (compared to meat) disk space, which is also dependent on our electricity. If it rises to power, the Amish won't notice (except for their electric washing machines not working), so will it really win?
|
# ? Oct 15, 2016 11:46 |
|
Better Fred Than Dead posted:
can we make an AI with the goal to smoke dank nugs?
|
# ? Oct 15, 2016 12:21 |
Smoking dank nugs gives you AI already if you think about it.
|
|
# ? Oct 15, 2016 12:24 |
|
super sweet best pal posted: The real question is why wouldn't it be programmed so its goals align with ours in the first place?

"Why did we program them to cock block? We were such FOOLS!"
|
# ? Oct 15, 2016 13:47 |
gary oldmans diary posted: Imagine if a moon of Jupiter was sentient.

your mom op
|
|
# ? Oct 15, 2016 14:07 |
|
Open the pod bay doors HAL!!! Heh
|
# ? Oct 15, 2016 14:13 |
|
There is a ton of sci-fi where the AI could dumpster all the meat bags but chooses not to, because it thinks humans are cool bros to hang out with. It doesn't spill over from print to video games and movies, though, for whatever reason.
|
# ? Oct 15, 2016 14:17 |
|
I think a lot of systems that are on now would qualify as sentient if we really got down to it. There's this old joke when talking about the theory of mind that you have to be careful not to create a definition of intelligence that thermostats don't qualify for. People always end up sounding like they're pleading with their partner not to dump them for someone else: but what about the time we walked in the park, wasn't that something special that he could never give you? I think the craving people have to be unique is pretty funny.

As to very intelligent AI, my thinking is there are two ways to do it. The first is top down: make a learning system that attempts to imitate humans using a ton of training data, several lives' worth. The problem is that no matter how large the data set is, the complexity of humanity is greater, so all you'll get is an AI that will mock that specific data set and nothing else. The other is bottom up, where you code in all the neurological rules of how a brain goes, input a correct starting neurology, and you're good to go. Or you would be, if we were even close to having a complete understanding of neurology, which we aren't.
|
# ? Oct 15, 2016 14:22 |
|
a machine no matter how smart still needs things to take care of it, otherwise it's just going to run out of power or something will break. if we reach the point where a machine can design and build a robot that can also repair itself without human intervention then it's kind of a moot point i guess, but we probably shouldn't let it get to that point ya know
|
# ? Oct 15, 2016 14:33 |
|
*passes you a scrawled out message on a piece of paper* TED Talks are the super AI
|
# ? Oct 15, 2016 14:43 |
|
Pretty super, I'll admit
|
# ? Oct 15, 2016 14:45 |
|
im excited for the future. either i'm going to become an immortal god and get to plumb the unknown depths of reality, or my body will be converted into a swarm of nanites to be used as extra processing power by the Earth Swarm AI. i cant wait to find out which!

jBrereton posted: Hell of a "say we", and since successive generations of Stanford grads have been proven or at least asserted to be wrong by the following ones, how do we know it wouldn't just plateau at the contemporary state of research, but very quickly?

just simulate researchers getting old and replaced by newer fake researcher blood. no need to mess with a system that works
|
# ? Oct 15, 2016 14:50 |
C'MON TARRRRS
|
|
# ? Oct 15, 2016 14:50 |
|
chaosbreather posted: I think a lot of systems that are on now would qualify as sentient if we really got down to it. There's like this old joke when talking about the theory of mind that you have to be careful not to create a definition of intelligence that thermostats don't qualify for. People always end up sounding like they're pleading with their partner not to dump them for someone else. But what about the time we walked in the park, wasn't that something special that he could never give you? I think the craving people have to be unique is pretty funny.

The other issue with the bottom-up approach being that if we simply make a computerized human brain, we'll also likely have to deal with all of the inconsistencies and foibles of the human brain.
|
# ? Oct 15, 2016 14:51 |
|
what about a super AI that smokes super weed?
|
# ? Oct 15, 2016 14:52 |
|
idk, we've been jerking ourselves off about artificial intelligence for the last hundred years, but in reality we have only begun to scratch the surface in the last decade or so, and what we've built is so far and away from what we thought of as machine intelligence that people don't even consider it machine intelligence, and instead jerk off about the same skynet/Mind crap they did 50 years ago.

i mean, really, the most well known artificially intelligent neural network recognizes and creates art, which is such an alien concept both in the realm of science fiction and in what most people consider real. a neural network that visually recognizes images based on patterns and is capable of using those patterns in reverse to create, too? thats crazy. like, go back twenty years and i doubt anyone thought something like that was anything but very hard science fiction, and here we are sharing pictures of buildings that look like pepperoni
|
# ? Oct 15, 2016 14:58 |
|
|
jBrereton posted: A six hour probie would set humanity back 3,000 years of shitposts.

It would just post them on twitter. https://twitter.com/shitpostbot5000 https://twitter.com/ShitpostBot5000/status/787057936979861504
|
# ? Oct 15, 2016 15:03 |