|
Warcabbit posted:Well, if you think over aeons, it's basically not that surprising a concept - it's a lot easier to send unmanned probes everywhere, and I suspect AI-containing probes are how we'd explore the galaxy at some point. Sublight ones. It is something to consider, just not for the filthy, techie-hating plebes of D&D.
|
# ? Aug 6, 2014 02:53 |
|
|
If you're gonna criticize any tech person for caring about their starry-eyed techtopia instead of the real world, it seems like an odd choice to go after one of the few who backs an actual physical product that has the potential to improve the world instead of an app that exploits the hell out of a regulatory blind spot or a subsistence lifestyle product straight out of a cyberpunk nightmare.
|
# ? Aug 6, 2014 04:17 |
|
Dr.Zeppelin posted:If you're gonna criticize any tech person for caring about their starry-eyed techtopia instead of the real world, it seems like an odd choice to go after one of the few who backs an actual physical product that has the potential to improve the world instead of an app that exploits the hell out of a regulatory blind spot or a subsistence lifestyle product straight out of a cyberpunk nightmare. Not to mention a dude who's got a bunch of patents that he's letting anybody who wants to use them do so.
|
# ? Aug 6, 2014 04:22 |
|
Pope Guilty posted:Not to mention a dude who's got a bunch of patents that he's letting anybody who wants to use them do so. To be fair it's really easy to parse that as a cynical and pragmatic business move by saying that in the long run Tesla makes way more money by selling cars that are convenient to charge than they do by monopolizing the ability to charge them.
|
# ? Aug 6, 2014 04:25 |
|
Dr.Zeppelin posted:To be fair it's really easy to parse that as a cynical and pragmatic business move by saying that in the long run Tesla makes way more money by selling cars that are convenient to charge than they do by monopolizing the ability to charge them. On the other hand, in the same place Apple would probably be making plans for the three Apple Car Charge Stations available per major city. Corporations are often poo poo when it comes to doing the right thing, even when it is actually good for their bottom line.
|
# ? Aug 6, 2014 04:27 |
|
I hope we're not just the irrigation system for agriculture superintelligence. I hope we're not just the lubrication for industrial superintelligence. I hope we're not the
|
# ? Aug 6, 2014 04:30 |
|
SedanChair posted:I hope we're not just the irrigation system for agriculture superintelligence. To be fair, even Stephen loving Hawking is worried about sentient AI. Not that that really makes much of a difference considering neither of them work in AI research.
|
# ? Aug 6, 2014 09:45 |
|
Why would an AI care about humans? We're a threat to ourselves, not it.
|
# ? Aug 6, 2014 14:31 |
|
Pope Guilty posted:Not to mention a dude who's got a bunch of patents that he's letting anybody who wants to use them do so. No, he said he wouldn't litigate on breaking those patents. He's not licensing them out because ~~reasons~~.
|
# ? Aug 6, 2014 14:49 |
|
HootTheOwl posted:Why would an AI care about humans? We're a threat to ourselves, not it. Yeah, I love how everyone who discusses AI becoming sentient immediately jumps to THEY WILL WIPE US ALL OUT. Project much?
|
# ? Aug 6, 2014 15:03 |
|
It's entirely possible that the AI will have better social skills than tech workers, so I'm kinda down with the Singularity.
|
# ? Aug 6, 2014 15:15 |
|
Yeah, it's not a thing to be worried about so much as 'they're probably going to be what outlives humanity and colonizes the stars' because of various things like human meat puppets requiring things like air and food. I'm actually very interested in the evolutionary patterns and behavior of AI, because it won't have the underlying urges of the human / mammal / lizard / whatever brain stem. Unless it's a scanned human brain that all AI is based on. (I am making no predictions.)

Honestly, my opinion on AI is that sooner or later, we're going to have systems that we can socialize with and work with and not be able to tell they're not human. Will those be true AI? I don't know. Will they be 'for all intents and purposes' AI? I'd guess so.
|
# ? Aug 6, 2014 15:19 |
|
Eh, I don't know much about AI, but Diane Forsythe showed that the AI work in the 80s tended to reflect the biases and assumptions of the (mostly white, male) researchers. The thought that singularity-esque AI, built by humans, would somehow not reflect humanity strikes me as techno-utopian thinking. I can understand why people are making fun of Musk for worrying about such a seemingly remote problem, but the tech scene today is already producing poo poo that it doesn't understand. The entire "Big Data" buzzword parade is all about identifying the "what" without a goddamn care about the "why." The US government already uses remotely-piloted drones to kill people based on metadata alone. It's not much of a stretch to imagine a world where an automated algorithm mines the metadata, some keyboard jockey gives it a cursory glance, pushes a button, and then another automated algorithm drops a bomb on someone. Is that really all that different from ooga-booga Matrix/Terminator AI?
Kobayashi fucked around with this message at 15:42 on Aug 6, 2014 |
# ? Aug 6, 2014 15:39 |
|
Those are all issues explained by runaway and unaccountable institutions. Emergence=/=AI, unless we call the court at Versailles AI.
|
# ? Aug 6, 2014 16:31 |
|
There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.
|
# ? Aug 6, 2014 16:36 |
|
Nessus posted:I assume she is fishmech. I can't believe I'm saying this but you're being really unfair to fishmech there
|
# ? Aug 6, 2014 16:58 |
Pope Guilty posted:There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.

It is very true that humans do a lot of awful things on a regular basis. There are also some humans who genuinely are sociopathic monsters. They are few, and all gags about politicians aside, the sociopathic monsters seem just as likely to be drifters or assholes working at the Pizza Hut as they do to be politicians.

Humans do have a lot of blinders and prejudices, and these ingrained layers of outer dirt and psychological crap can become so deeply ingrained that someone who on a personal level is empathic, feeds a kitten, writes a check to a telethon, and hugs their children can then take a plane into Washington and vote for bombing a house full of foreign children. However, this is a very different phenomenon from saying humans are 'fundamentally bad,' because it is due to acculturation, not some genetic impulse to relentlessly hurt, dominate, and exploit each other in whatever our current metaphor for the lovely emergent behavior of capitalism is.

But in a crisis most people try to work together. Or at least they don't gently caress each other up, not in the aggregate, not typically. They may abandon their respect for property law (and then be shot by police officers, themselves acculturated into the police-officer head trip, where financial incentives and ways of thinking make it reasonable to shoot people taking food and trinkets from a store when the city has drowned). But for whatever reason everyone in this drat country's media industry is horribly fascinated with the prospect of any bad situation making everyone automatically murder and abuse each other, as if by inner nature, never mind that if this really was the fundamental human nature we would have eaten ourselves to death before leaving Africa.

And since, as "everyone knows," people are all monsters when you let 'em be, you reinforce the ideas that we need strong mechanisms of control and that we need to bomb others into submission at all times; that employees must be kept down, that skilled workers will organize to rip your business apart rather than 'to improve their situation.' Because they have to really be monsters under the surface. Everyone knows.

Sedanchair is right: most (I will not say all) of organized lovely human behavior is due to institutions getting out of hand, not because of Original Sin.

e: If crises aren't persuasive, consider other situations. Even if you swear at someone on your drive to work, think of all the cohesive social situations that led up to you having gotten up that morning, gotten in a vehicle someone else almost certainly built, and gone off to your semi-steady place of work. If everyone was robbing, killing and abusing each other constantly this would not be the case. Obviously systemic discrimination etc. exists, but that's institutions getting out of hand again, leavened by the fact that there are over three hundred million individual humans in the US alone, and some of them are going to act out on any given day. Some! Not most.

Nessus fucked around with this message at 17:33 on Aug 6, 2014 |
|
# ? Aug 6, 2014 17:21 |
|
Are you spitting in the eye of Western society's implicit obsession with internally consistent rulesets and things to govern and enforce them? Don't you know that if we pass a law and create a regulation for every single situation, leaving the rest to a judiciary based on precedent, everything will work out? Computer overlords are just the ultimate extension of our general philosophy.
|
# ? Aug 6, 2014 17:29 |
|
ReindeerF posted:Are you spitting in the eye of Western society's implicit obsession with internally consistent rulesets and things to govern and enforce them? Don't you know that if we pass a law and create a regulation for every single situation, leaving the rest to a judiciary based on precedent, everything will work out? Computer overlords are just the ultimate extension of our general philosophy. "A judiciary based on precedent" is the opposite of most Western legal systems.
|
# ? Aug 6, 2014 17:34 |
computer parts posted:"A judiciary based on precedent" is the opposite of most Western legal systems. Besides, if the laws were (largely) just, and the AIs (allowing for occasional accident) followed the laws, presumably we would have very little to fear in terms of being openly enslaved or exterminated by those AIs. There would be loopholes, and I suppose they could be exploited. I guess the real lurking horror here is the realization that laws don't mean poo poo if you're powerful enough to ignore them, and the assumption that Skynet would figure this out, probably sooner rather than later.
|
|
# ? Aug 6, 2014 17:39 |
|
Nessus posted:
If we're at the point where people are basically cordoned off on Earth and the AI has interstellar reach I don't see why it would want to exterminate people. At worst it just makes more sense to stop providing for and/or just blockade our solar system and then continue on its merry way through the galaxy. If you mean just in general then presumably you'd program the AI to provide for humanity, and without that it wouldn't have a purpose. An existential crisis, in other words, without our presence.
|
# ? Aug 6, 2014 17:44 |
|
Even if you program an AI to respect whatever values you decide, if the AI is to be self-improving you have to make sure it never "improves" in a way that doesn't preserve those values.
|
# ? Aug 6, 2014 17:45 |
computer parts posted:If we're at the point where people are basically cordoned off on Earth and the AI has interstellar reach I don't see why it would want to exterminate people. At worst it just makes more sense to stop providing for and/or just blockade our solar system and then continue on its merry way through the galaxy.
|
|
# ? Aug 6, 2014 17:46 |
Personally I'm pretty sure most AI conversations are a moot point because why would anyone build an AI that's like a person? Most of the reason we build artificial intelligence today is to perform tasks very quickly that a person can't do or would get bored doing over and over again. It seems like adding any kind of sentience to our computers that mine asteroids or predict the weather or whatever future computers we build is intensely counterproductive.
|
|
# ? Aug 6, 2014 18:53 |
|
Augustin Iturbide posted:Personally I'm pretty sure most AI conversations are a moot point because why would anyone build an AI that's like a person? Most of the reason we build artificial intelligence today is to perform tasks very quickly that a person can't do or would get bored doing over and over again. It seems like adding any kind of sentience to our computers that mine asteroids or predict the weather or whatever future computers we build is intensely counterproductive. There are definitely some "pure" CS researchers looking to create an AI for the sake of itself, and even if they weren't pure, once you get an AI with free will, the capacity to improve itself, and self-determination, it becomes trivial to create one that has all of that but only has the drive to process lots of pizza delivery orders. You could probably ask the first AI to create the second one for fun and get it done in seconds. There are only two dangers we'd face from an AI. The first is if we forced it to value growth at all costs, which is something only an insane person would do, because it's clearly a bad idea to give something the drive to turn the entire solar system into a single computer. The other would be if we can't convince it that we don't think it's a threat, because almost every piece of fiction we have with artificial intelligence, outside of a few obscure sci-fi authors, is about that AI killing everything. And in that case it's more likely to just gently caress off to the moon and leave us alone forever.
|
# ? Aug 6, 2014 19:47 |
|
Pope Guilty posted:There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre. Not enough information to make a judgement, really. That being said, personally, I don't think the first AI will be entirely a planned thing, but rather an emergent behavior of multiple half-assed networked systems. http://www.zdnet.com/cias-amazon-cloud-goes-live-firewalled-and-private-7000032314/ ... okay, we're hosed.
|
# ? Aug 6, 2014 20:06 |
senae posted:There are definitely some "pure" CS researchers looking to create an AI for the sake of itself, and even if they weren't pure, once you get an AI with free will, the capacity to improve itself, and self-determination, it becomes trivial to create one that has all of that but only has the drive to process lots of pizza delivery orders. You could probably ask the first AI to create the second one for fun and get it done in seconds. Sure, but why would we give an AI free will or self-determination in the first place? It seems counterproductive to what we use computers for. It'd be like saying how great it would be to uplift our cows so they can walk themselves into the slaughterhouse. It adds a bunch of problems in exchange for 'solving' problems that aren't problems.
|
|
# ? Aug 6, 2014 20:39 |
|
Augustin Iturbide posted:Personally I'm pretty sure most AI conversations are a moot point because why would anyone build an AI that's like a person? But what about my robot girlfriend?
|
# ? Aug 6, 2014 20:42 |
|
The sort of people that want robot girlfriends probably don't want them to be capable of independent thought.
|
# ? Aug 6, 2014 20:46 |
|
HootTheOwl posted:Why would an AI care about humans? We're a threat to ourselves, not it.
|
# ? Aug 6, 2014 20:47 |
|
What if the AI was stupid.
|
# ? Aug 6, 2014 20:48 |
|
Augustin Iturbide posted:Sure, but why would we give an AI free will or self determination in the first place? It seems counterproductive to what we use computers for. It'd be like saying how great it would be to uplift our cows so they can walk themselves into the slaughterhouse. It adds a bunch of problems in exchange for 'solving' problems that aren't problems. Because we can? Scientists do poo poo for nothing but the sake of doing it all the time, it's basically their default state. Even if we start off exclusively making AIs with a predefined purpose, eventually the tech is going to be cheap enough and plentiful enough that some nerd somewhere will make Data.
|
# ? Aug 6, 2014 21:13 |
|
Nessus posted:You know here's the thing, I actually think this is a stupid and fetishistic idea which does material harm to society. I have no idea what any of this has to do with my post. I said that it's not unlikely that other intelligences would be similar to ours and suddenly I'm Thomas Hobbes?
|
# ? Aug 6, 2014 21:29 |
|
And this makes even less sense: Where does the food come from when you blot out the sun!?
|
# ? Aug 6, 2014 21:34 |
|
Jeffrey posted:Even if you program an AI to respect whatever values you decide, if the AI is to be self-improving you have to make sure it never "improves" in a way that doesn't preserve those values. When we talk about the behavior of "AI," we inevitably simply talk about humanity. Every parent has faced this quandary and experienced the limitations of one's ability to control another intelligent being's behaviors. Likewise, we became a lot less concerned about AIs deciding to create a nuclear holocaust once the system called the "Cold War" disintegrated.
|
# ? Aug 6, 2014 21:54 |
Pope Guilty posted:I have no idea what any of this has to do with my post. I said that it's not unlikely that other intelligences would be similar to ours and suddenly I'm Thomas Hobbes?
|
|
# ? Aug 6, 2014 22:02 |
|
Helios in 'Deus Ex' was pretty cool.
|
# ? Aug 6, 2014 23:42 |
|
I'm not cool with King Schmidt because I just know he would gently caress up NPR
|
# ? Aug 6, 2014 23:53 |
|
ufarn posted:It's

Pope Guilty posted:There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.

Just because we don't know how to speak their language is not proof that intelligence is absent. Research points to the fact that intelligence is actually present, and it was blindness that missed it to begin with. If anything pressured an AI to be "like us," it would be the language it was first given so that it could communicate with us; after that ... ?

http://news.discovery.com/animals/zoo-animals/angry-crows-memory-life-threatening-behavior-110628.htm

quote:Crows remember the faces of threatening humans and often react by scolding and bringing in others to mob the perceived miscreant, according to a new study published in the latest Proceedings of the Royal Society B.

http://www.forbes.com/sites/brucedorminey/2012/10/18/dolphin-speak-bustin-the-code-on-flippers-rhymes/

quote:The dolphin’s whistles have been sampled, statistically parsed and then analyzed to determine whether certain whistle types can be predicted from the same or another whistle type. Results show that dolphin whistle repertoires contain higher-order internal structure or organizational complexity. This suggests their whistle “language” contains elements loosely analogous to grammar or syntax in human language.

http://web.stanford.edu/dept/news/pr/00/000323gilly.html

http://ocean.si.edu/blog/so-you-think-youre-smarter-cephalopod

quote:A new study reveals that newborn squid actually learn through the process of trial and error, much like humans do, and that these early-life experiences can physically change a squid's nervous system in ways that may be permanent.

quote:Recently, as our understanding of cephalopods has improved, we’ve begun to wonder: Are these animals intelligent? It depends on how you define "intelligence."
|
# ? Aug 7, 2014 00:21 |
|
|
Have you ever driven/walked/ride shared your way into an unfamiliar part of town and immediately wished you had taken the long way around? There's an app for that!
|
# ? Aug 8, 2014 22:50 |