|
Jose posted:I think this is as true as it's possible to get for the thread title. anyone surprised by this doesn't know the history of tor. i mean it's right there on wikipedia quote:The core principle of Tor, "onion routing", was developed in the mid-1990s by United States Naval Research Laboratory employees, mathematician Paul Syverson, and computer scientists Michael G. Reed and David Goldschlag, with the purpose of protecting U.S. intelligence communications online. Onion routing was further developed by DARPA in 1997.[23][24][25][26][27][28]
|
# ? Feb 14, 2019 17:16 |
|
wasn't the point of tor to be used to help overthrow governments America doesn't like, and that many nodes are controlled by US intelligence? Efb
|
# ? Feb 14, 2019 17:16 |
|
Tom Guycot posted:You either drive an uber, pack amazon boxes, or poo poo on your friends rear end. The new economy. That was my favourite black mirror episode
|
# ? Feb 14, 2019 17:59 |
|
Jose posted:the video appears to have been scrubbed off the internet and the article only has this impression
|
# ? Feb 14, 2019 18:26 |
|
johnny five deuces
|
# ? Feb 14, 2019 18:42 |
|
https://twitter.com/alexhern/status/1096092781100130304 https://twitter.com/alexhern/status/1096095131164426243
|
# ? Feb 14, 2019 18:42 |
|
Jose posted:https://twitter.com/alexhern/status/1096092781100130304 another rube gets taken for a ride. "too dangerous to release" my loving rear end. elon musk is now investing in markov generators
|
# ? Feb 14, 2019 19:14 |
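For anyone who hasn't seen one, the "markov generator" being joked about really is this simple. A minimal order-1 sketch in Python (the corpus is made up for illustration; it just maps each word to the words observed to follow it, then does a random walk):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it (order-1 chain)."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word never appeared mid-corpus
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the robot is too dangerous to release "
          "the robot is haunting me the robot is fine")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

No gradient descent, no training run, no GPU: a dictionary and a dice roll. The joke is that, read uncharitably, sampled text from a language model and sampled text from this can be hard to tell apart at a glance.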
|
Jose posted:the video appears to have been scrubbed off the internet and the article only has this impression the thread title is more accurate than ever
|
# ? Feb 14, 2019 19:17 |
|
This markov chain is haunting me!!
|
# ? Feb 14, 2019 19:21 |
|
Is anyone outside of tech media actually worried about AI? The only people I ever see fretting about it are people who don't understand how machine learning actually works
|
# ? Feb 14, 2019 19:24 |
|
Captain Billy Pissboy posted:Is anyone outside of tech media actually worried about AI? The only people I ever see fretting about it are people who don't understand how machine learning actually works there are a lot of big ethical issues in AI but none of them are what the public immediately think of
|
# ? Feb 14, 2019 19:27 |
|
Captain Billy Pissboy posted:Is anyone outside of tech media actually worried about AI? The only people I ever see fretting about it are people who don't understand how machine learning actually works lots of people outside tech media are worried about AI. lots of people outside tech media don't understand how machine learning works. there is overlap. also, you don't need to know how ML works to have legitimate, grounded concerns about it. I don't know how opiates, firearms, or Alzheimer's work, but I am concerned about the effects of all of them
|
# ? Feb 14, 2019 19:43 |
|
Subjunctive posted:lots of people outside tech media are worried about AI. lots of people outside tech media don’t understand how machine learning works. there is overlap What concerns do you have about it?
|
# ? Feb 14, 2019 19:53 |
|
Captain Billy Pissboy posted:Is anyone outside of tech media actually worried about AI? The only people I ever see fretting about it are people who don't understand how machine learning actually works This is at least an unusual and compelling tech-media take on why you should be scared of unfriendly AI. Because it took over centuries ago.
|
# ? Feb 14, 2019 19:59 |
|
Jose posted:the video appears to have been scrubbed off the internet and the article only has this impression lmfao
|
# ? Feb 14, 2019 20:07 |
|
Captain Billy Pissboy posted:What concerns do you have about it? I worry about further systematizing bias, opaque decision-making processes, reduction in number or quality of jobs in many fields. (I also work to improve some of those things in my current job.)
|
# ? Feb 14, 2019 20:14 |
|
Subjunctive posted:I worry about further systematizing bias, opaque decision-making processes, reduction in number or quality of jobs in many fields. (I also work to improve some of those things in my current job.) I see your point. After posting that question I started thinking about mass surveillance and realized I more blame capitalism than technology. That is a bit like I'm saying "guns don't kill people, people kill people" though.
|
# ? Feb 14, 2019 20:32 |
|
I'm worried some dufus is going to link an AI to drones. It'll target threats to the USA and beeline to the White House. I guess AI is okay in my book. Attn FBI: I'm not an AI-driven drone, plz don't arrest me, I'm poor.
|
# ? Feb 14, 2019 20:37 |
|
Captain Billy Pissboy posted:I see your point. After posting that question I started thinking about mass surveillance and realized I more blame capitalism than technology. That is a bit like I'm saying "guns don't kill people, people kill people" though. yeah in a just society robots and AI would be cool as hell and would work for the benefit of all mankind but thats not happening because capitalists got to them first and are using them to psychologically manipulate us, rake in insane profits, and put literally everyone out of work
|
# ? Feb 14, 2019 20:46 |
|
Someday we'll reckon with the fact that AIs are basically the nukes of our time. There's a tremendous power there, but it's being deployed in the most cynical, amoral way, destroying the brains of millions in the process.
|
# ? Feb 14, 2019 20:46 |
|
I believe that nukes are still the nukes of our time
|
# ? Feb 14, 2019 20:51 |
|
I don't know if I'm disappointed or not that the big AI ethical dilemma changed from "what are the ethical ramifications of creating an artificial consciousness" to "everyone is collecting all our data, using neural networks to market garbage at us and they're also selling that data to everyone else."
|
# ? Feb 14, 2019 20:58 |
|
Poniard posted:wasn't the point of tor to be used to help overthrow governments America doesn't like and that many nodes are controlled by US intelligence It's mainly for protecting US diplomatic traffic but yes, controlling exit nodes is a big plus. As always, though, technological measures cut both ways, and if you create a backdoor, it's only a matter of time before everyone, your enemies included, discovers and uses it. Literally anyone can run exit nodes, it's really cheap if a government wants to do it, and in fact, WikiLeaks was bootstrapped with documents stolen from Tor exit nodes that they ran. Finally: if you run your own private Tor network with a fixed list of nodes and no exit nodes, you can achieve almost perfect security; for example, the internal network WikiLeaks uses to move leaks around is such a private Tor network. And while it's true that, by controlling enough exit nodes, you can deanonymize Tor users, there's no proof that US intelligence can do it easily: witness the pedophile ring that escaped law enforcement for years through rigorous use of Tor, PGP and Usenet (the FBI could only catch the members stupid enough to use VPNs instead of Tor), or the extremely heavy-handed measures necessary to intercept traffic outside of a country's borders
|
# ? Feb 14, 2019 20:59 |
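The exit-node point is easier to see with the layering written out. A toy sketch, where XOR with a per-hop key stands in for each relay's encryption layer (this is NOT real crypto and none of it is actual Tor code, just the onion structure):

```python
def xor_layer(data: bytes, key: int) -> bytes:
    """Toy stand-in for one hop's encryption layer (NOT real crypto)."""
    return bytes(b ^ key for b in data)

def wrap(message: bytes, hop_keys):
    """Client wraps the message once per relay before sending it out."""
    data = message
    for key in reversed(hop_keys):
        data = xor_layer(data, key)
    return data

def route(data: bytes, hop_keys):
    """Each relay peels exactly one layer; only the exit sees plaintext."""
    seen = []
    for key in hop_keys:
        data = xor_layer(data, key)
        seen.append(data)
    return data, seen

msg = b"leak.tar.gz"
keys = [0x17, 0x42, 0x7A]          # guard, middle, exit (invented values)
cell = wrap(msg, keys)
plaintext, per_hop_view = route(cell, keys)
assert plaintext == msg            # the exit relay recovers the payload...
assert per_hop_view[0] != msg      # ...earlier hops still see ciphertext
```

The claim in the post falls out of the structure: the exit relay necessarily removes the last layer, so whoever runs it can read and log any traffic that isn't separately encrypted end to end. With XOR the peel order is actually cosmetic; real onion routing uses per-hop ciphers where it is not.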
|
Kobayashi posted:Someday well reckon with the fact that AIs are basically the nukes of our time. Theres a tremendous power there, but its being deployed in the most cynical, amoral way, destroying the brains of millions in the process. we must prepare for the possibility that our electric traffic guidance signs will be disabled by a suitcase AI in the hands of terrorists
|
# ? Feb 14, 2019 21:01 |
|
Creating a conscious AI is as abusive as bringing another child into this world. Waiting for the day an AI sues for its personhood and immediately brings abuse charges against its maker.
|
# ? Feb 14, 2019 21:05 |
|
Outrail posted:I'm worried some dufus is going to link an AI to drones.
|
# ? Feb 14, 2019 21:41 |
|
ai software that isn't sapient is just shifting the blame of our systemic biases and misperceptions from faceless bureaucrats to the software engineers. actual sapient ai is just slavery with manufactured people so I'm glad it appears to be impossible
|
# ? Feb 14, 2019 21:41 |
|
100 HOGS AGREE posted:I don't know if I'm disappointed or not that the big AI ethical dilemma changed from "what are the ethical ramifications of creating an artificial consciousness" to "everyone is collecting all our data, using neural networks to market garbage at us and they're also selling that data to everyone else." I've been really into cognitive science and AI since I was a kid and this really depresses me. As far as I can tell no one besides a handful of tenured professors in their 80s is researching artificial consciousness. AI is pretty much solely focused on profitable applications of machine learning now. It's probably for the best, because the only thing sentient machines would be used for is guilt-free slavery. Even the dangers of the future are poo poo now. It's not "someone built a sentient machine that wants to destroy us", it's just "someone built a neural network that applies pre-existing biases on a mass scale, also sometimes it thinks a pedestrian is a banana"
|
# ? Feb 14, 2019 22:02 |
|
100 HOGS AGREE posted:I don't know if I'm disappointed or not that the big AI ethical dilemma changed from "what are the ethical ramifications of creating an artificial consciousness" to "everyone is collecting all our data, using neural networks to market garbage at us and they're also selling that data to everyone else." turns out that actually creating an artificial consciousness is a lot harder than just cranking a pattern-recognition algorithm up to max sensitivity, feeding it a bunch of open-license images, and then marketing the result as "AI". also, something along these lines happens about once every decade or so. like, you can look at articles from thirty-plus years ago and see more or less the same kind of hype around expert systems, except with fewer theatrics and showmanship because computers weren't really consumer-oriented products back then. AI is a valuable marketing term because it's so entrenched in sci-fi
|
# ? Feb 14, 2019 22:18 |
|
Captain Billy Pissboy posted:I see your point. After posting that question I started thinking about mass surveillance and realized I more blame capitalism than technology. That is a bit like I'm saying "guns don't kill people, people kill people" though. I mean it's true. Like yeah it's a glib nra talking point but guns innately don't do anything, just as a computer can't innately do anything. Bad input = bad output.
|
# ? Feb 14, 2019 22:19 |
|
Main Paineframe posted:turns out that actually creating an artificial consciousness is a lot harder than just cranking a pattern-recognition algorithm up to max sensitivity and feeding it a bunch of open-license images and then marketing the result as "AI" The expert systems of the 80s were never deployed to billions of people, though.
|
# ? Feb 14, 2019 22:23 |
|
So far as I know, the only fields in which AI use is much of a threat are High Frequency Trading, botnet attacks, and creating a bunch of shallow fake people on social media. It'll probably be threatening once a significant number of autonomous cars are out on the streets too, especially with all those dorks who were so overly eager that they jumped at the chance to start trying to incorporate the Trolley Problem into things. So over-eager it's like they wanted to stick a gun in the dashboard to finish the job. It does bug me a bit how a lot of AI development comes from a purely anti-labor perspective, often spending a lot of money developing marginally cheaper ways to provide a worse service than what human beings provide, like with automated customer service, but that's more of a societal issue than an issue with the technology itself.
|
# ? Feb 14, 2019 23:03 |
|
SlothfulCobra posted:So far as I know, the only fields in which AI use is much of a threat are High Frequency Trading, botnet attacks, and creating a bunch of shallow fake people on social media. police departments are right now using ai to decide who to arrest
|
# ? Feb 14, 2019 23:23 |
|
Elon Musk is involved; this translates to "We spent all of the research fund smoking weed and think-tanking ideas from decades-old scifi novels. Totally could have, like, taken over the world and gotten PhDs tho".
|
# ? Feb 14, 2019 23:28 |
|
a think tank with dudes taking fat dabs and talking about ideas from old science fiction books would be a profound improvement over the current think tank system
|
# ? Feb 14, 2019 23:29 |
|
Outrail posted:I'm worried some dufus is going to link an AI to drones. https://en.wikipedia.org/wiki/SKYNET_(surveillance_program)
|
# ? Feb 14, 2019 23:32 |
|
AI right now at the state level is just a laundering mechanism for racist policing & analysis. Garbage in garbage out. Having actual data scientists correct for things like biases (racial or just bad data) is expensive and time consuming. More expensive than some "security" tech firm selling you a cool thing you can point at and say "no no no, algorithms aren't racist"
|
# ? Feb 14, 2019 23:49 |
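"Garbage in, garbage out" is worth making concrete. A minimal sketch with fabricated data (the neighborhoods and rates are invented for illustration): a "model" that simply learns the historical arrest rate per neighborhood will faithfully reproduce whatever over-policing is baked into its training labels.

```python
from collections import defaultdict

def fit_rates(records):
    """'Train' by computing the historical arrest rate per neighborhood."""
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [arrests, stops]
    for neighborhood, arrested in records:
        counts[neighborhood][0] += arrested
        counts[neighborhood][1] += 1
    return {n: arrests / stops for n, (arrests, stops) in counts.items()}

# Fabricated training data: same underlying behavior in both places, but
# neighborhood A was stopped (and therefore arrested) far more often.
history = ([("A", 1)] * 30 + [("A", 0)] * 70 +
           [("B", 1)] * 3 + [("B", 0)] * 97)

model = fit_rates(history)
assert model["A"] == 0.30 and model["B"] == 0.03
# The "algorithm" now recommends 10x more enforcement in A -- not because
# A is different, but because that's what the biased labels said.
```

Real deployed systems are regression or tree models rather than a lookup table, but the failure mode is the same: the labels encode past enforcement decisions, and the model launders them back out as "predictions".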
|
a cyberpunk goose posted:AI right now at the state level is just a laundering mechanism for racist policing & analysis. I like the AI that determined that the best employees were men named Jared who played high school lacrosse.
|
# ? Feb 15, 2019 01:03 |
|
The Nastier Nate posted:I like the AI that determined that the best employees were men named Jared who played high school lacrosse. it was right, but the implications of the question were misunderstood
|
# ? Feb 15, 2019 01:07 |
|
The Nastier Nate posted:I like the AI that determined that the best employees were men named Jared who played high school lacrosse. data driven results babey
|
# ? Feb 15, 2019 01:09 |