|
post research papers, projects, theory, hopes, fears and nightmares about A.I here. This field is moving at a ridiculous pace, so there should be more than enough to sustain a thread. This thing is probably my favourite from the past year: two neural networks, one an image synthesizer and the other an image classifier, are connected to each other in an "adversarial" setup. The classifier assesses the synthesizer's output. The results shown in the video aren't realistic by any stretch, but you can see the roots of truly mindblowing results on the horizon: https://www.youtube.com/watch?v=rAbhypxs1qQ
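if you want to see the adversarial mechanics stripped of all the conv-net machinery, here's a toy sketch in plain python (nothing like the paper's actual model, which uses deep convolutional nets on images): a linear one-dimensional "generator" learns to mimic samples from N(4, 1) purely by trying to fool a logistic "discriminator" that is simultaneously learning to tell real from fake.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

# Real data comes from N(4, 1); the generator G(z) = a*z + b starts out
# producing N(0, 1) and must learn to shift toward the real distribution.
# The discriminator D(x) = sigmoid(w*x + c) scores "probability real".
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr = [random.gauss(4.0, 1.0) for _ in range(batch)]
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    xf = [a * zi + b for zi in z]
    dr = [sigmoid(w * x + c) for x in xr]
    df = [sigmoid(w * x + c) for x in xf]
    w -= lr * mean([-(1 - p) * x + q * y
                    for p, x, q, y in zip(dr, xr, df, xf)])
    c -= lr * mean([-(1 - p) + q for p, q in zip(dr, df)])

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    xf = [a * zi + b for zi in z]
    df = [sigmoid(w * x + c) for x in xf]
    a -= lr * mean([-(1 - p) * w * zi for p, zi in zip(df, z)])
    b -= lr * mean([-(1 - p) * w for p in df])

fake_mean = mean([a * random.gauss(0.0, 1.0) + b for _ in range(5000)])
print(f"generated mean after training: {fake_mean:.2f} (target 4.0)")
```

the generator never sees the real data directly -- its only training signal is the discriminator's opinion, which is the whole trick.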
|
# ¿ Sep 8, 2017 02:51 |
|
https://www.economist.com/news/science-and-technology/21728614-machines-read-faces-are-coming-advances-ai-are-used-spot-signs researchers from Stanford have trained a neural network to classify people's sexuality from facial features, with 81-95% accuracy for males. An impressive result, but also clearly dangerous in a world where being gay is illegal in many places. Sifting through online discussions, I've found very little on its ethical implications. Automated human profiling is already part of our daily lives in cyberspace, and we seem to have come to terms with this. Soon we will be asked to come to terms with it in meatspace, as we walk the streets. If you think I'm overreacting to a neat little result, check this out: a facial classifier that grades criminality: https://www.rt.com/news/368307-facial-recognition-criminal-china/ Amethyst fucked around with this message at 03:49 on Sep 8, 2017 |
# ¿ Sep 8, 2017 02:52 |
|
Suspicious posted:so it's modern phrenology except that it works? Pretty much
|
# ¿ Sep 8, 2017 03:01 |
|
MALE SHOEGAZE posted:oh my god that is so loving cool it's a great channel, full of interesting results
|
# ¿ Sep 8, 2017 03:27 |
|
SmokaDustbowl posted:true AI is impossible like time travel sentience is overrated. it's a transitional trait that can be discarded like gills on a land creature
|
# ¿ Sep 8, 2017 04:03 |
|
if we build a machine that can classify any input stimulus one million times faster than we can, and react to it based on an evolving expert system several times larger and with much better efficacy than a human brain, can we really say it's not "true" intelligence just because it's not aware?
|
# ¿ Sep 8, 2017 04:06 |
|
SmokaDustbowl posted:yes, because it can't correct an error of course it can. consciousness has very little to do with correcting for errors and it has far less to do with basic decision making than you believe. "Make a conscious choice. Decide to move your index finger. Too late! The electricity's already halfway down your arm. Your body began to act a full half-second before your conscious self 'chose' to, for the self chose nothing; something else set your body in motion, sent an executive summary—almost an afterthought— to the homunculus behind your eyes. That little man, that arrogant subroutine that thinks of itself as the person, mistakes correlation for causality: it reads the summary and it sees the hand move, and it thinks that one drove the other."
|
# ¿ Sep 8, 2017 04:13 |
|
duTrieux. posted:congratulations, you just discovered p-zombies don't act like this stuff is everyday. i know we're jaded here but we're allowed to be confronted by weird results.
|
# ¿ Sep 8, 2017 04:14 |
|
JewKiller 3000 posted:cool tricks aside i just don't see how we're going to program a general ai that learns, and learns how to learn, given our rudimentary understanding of how we do those things at the level of the brain. intelligence isn't just going to emerge because we stuff in more transistors, and it feels to me like we're pretty far away from the level of knowledge to bootstrap the system. what even is intelligence, surely it's more than just being a really good classifier? i'm not an ai researcher but we've seen this hype before and i'm not convinced it's any different this time even if this is true, mass deployment of even contemporary classifiers will still change our collective experience massively
|
# ¿ Sep 8, 2017 04:18 |
|
and like, i'm not sure how anyone can look at the results of the paper I posted in the OP and not start to question assumptions about the potential of current A.I techniques.
|
# ¿ Sep 8, 2017 04:20 |
|
conscious thought is an inefficient kludge to get us over an awkward period of physical environment discovery.
|
# ¿ Sep 8, 2017 04:28 |
|
SmokaDustbowl posted:lol stop taking peter watts so seriously i will not. Watts is smart and cool
|
# ¿ Sep 8, 2017 04:32 |
|
echinopsis posted:i wonder basically how many rules are laid down when our brains are first made, vs how much is making sense of the inputs and creating intelligence on the spot over years how is this even a question? click the vid in the op to watch a neural network learn to see
|
# ¿ Sep 8, 2017 04:33 |
|
SmokaDustbowl posted:your consciousness isn't just your brain, every minuscule part of you is a sensory organ, that's impossible to artificially recreate i don't see any concrete basis for this claim. there are physical laws that prevent FTL travel; you seem to think there is a similar law regarding sentience. I don't see it
|
# ¿ Sep 8, 2017 04:39 |
|
SmokaDustbowl posted:what technology could exist that could artificially replicate every nerve ending in a human body? a computer
|
# ¿ Sep 8, 2017 04:41 |
|
http://www.openworm.org/ OpenWorm is an open source project dedicated to creating the first virtual organism in a computer. We've started from a cellular approach so we are building behavior of individual cells and we are trying to get the cells to perform those behaviors. We are starting with simple crawling. The main point is that we want the worm's overall behavior to emerge from the behavior of each of its cells put together.
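the "overall behavior emerges from the cells" idea can be cartooned in a few lines. this is obviously not OpenWorm's actual code, just a hypothetical sketch of the principle: give every cell one purely local rule -- oscillate, slightly delayed relative to the cell ahead of you -- and a body-length traveling wave, the basis of crawling, falls out without any cell knowing about "crawling" at all.

```python
import math

# Hypothetical sketch, not OpenWorm's model: N_CELLS muscle cells, each
# following one local rule -- oscillate with a small phase delay
# relative to the preceding cell. A global traveling wave emerges.
N_CELLS, PHASE_LAG = 12, 0.5

def cell_activation(i, t):
    """Local rule for cell i: oscillate, delayed by i * PHASE_LAG."""
    return math.sin(t - i * PHASE_LAG)

def body_wave(t):
    """Snapshot of every cell's contraction at time t."""
    return [cell_activation(i, t) for i in range(N_CELLS)]

# Cell i+1 repeats cell i's state PHASE_LAG seconds later: the wave
# travels down the body even though no cell coordinates anything.
now, later = body_wave(0.0), body_wave(PHASE_LAG)
print([round(x, 2) for x in now[:4]])
```

no cell has a concept of the whole body, but the population-level motion is still fully determined by the per-cell rules -- which is the bet OpenWorm is making at much higher fidelity.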
|
# ¿ Sep 8, 2017 04:43 |
|
SmokaDustbowl posted:what materials would the computer use? Transistors. Like every computer.
|
# ¿ Sep 8, 2017 04:44 |
|
Why is the computation performed in a physical neural network ontologically privileged over computation performed on transistors?
|
# ¿ Sep 8, 2017 04:46 |
|
SmokaDustbowl posted:because it's an entirely different structure Not at all, they are clearly isomorphic
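here's the sense in which I mean "isomorphic": a leaky integrate-and-fire model (the standard textbook abstraction of a spiking neuron, not a full biophysical one) computed step by step on transistors. the input-to-spike mapping a real neuron realizes in chemistry, reproduced as arithmetic:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, gain=0.1):
    """Leaky integrate-and-fire: return the spike times produced by a
    sequence of input currents (one value per time step)."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v = leak * v + gain * i_in   # membrane leaks and integrates input
        if v >= threshold:           # threshold crossing: fire a spike
            spikes.append(t)
            v = 0.0                  # reset after spiking
    return spikes

# A steady driving current produces regular spiking; no input, no spikes.
print(lif_neuron([2.0] * 30))  # -> [6, 13, 20, 27]
print(lif_neuron([0.0] * 30))  # -> []
```

swap the arithmetic for analog circuits or ion channels and the mapping from inputs to spike times is unchanged. that's the isomorphism claim: what matters is the function computed, not the substrate computing it.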
|
# ¿ Sep 8, 2017 04:52 |
|
SmokaDustbowl posted:it would take an impossible amount of computing power, AI is straight up like time travel where and why are you drawing a boundary around computation power?
|
# ¿ Sep 8, 2017 04:55 |
|
SmokaDustbowl posted:you're being incredibly naive Okay.
|
# ¿ Sep 8, 2017 05:03 |
|
smoka does have arguments behind his point, so I'll let another sci fi author make them for him: "when you modeled a hurricane, nobody got wet. When you modeled a fusion power plant, no energy was produced. When you modeled digestion and metabolism, no nutrients were consumed – no real digestion took place. So, when you modeled the human brain, why should you expect real thought to occur?" it all comes down to the notion that there is an ontological hierarchy and we are inescapably trapped on one "level" of it.
|
# ¿ Sep 8, 2017 05:10 |
|
two reasons i don't put much stock in the ontological hierarchy argument: 1) it's clear that non-sentient intelligence IS possible, and I think the difference between non-sentient and sentient is far less significant than we intuitively believe 2) I've never heard a convincing argument, let alone any kind of experimental evidence, that the hierarchy exists anyway.
|
# ¿ Sep 8, 2017 05:14 |
|
JewKiller 3000 posted:i don't know that i'd do him the favor of assuming he's thinking in metaphysical terms why are you giving ontological priority to the physically existing non-arguing smokadustbowl when i've presented a perfectly good platonic ideal smokadustbowl who is making perfectly valid arguments, huh?
|
# ¿ Sep 8, 2017 05:16 |
|
nvidia is releasing specialized hardware just for running deep belief networks. consumer models coming soon. https://www.nvidia.com/en-us/data-center/dgx-server/
|
# ¿ Sep 8, 2017 05:19 |
|
duTrieux. posted:i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization itself is an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts. That's an interesting way to put it. I suppose that even if consciousness has no part in instantaneous impulses, it can still structure the environment that fires those impulses
|
# ¿ Sep 9, 2017 02:54 |
|
atomicthumbs posted:fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source Agreed
|
# ¿ Sep 9, 2017 08:33 |
|
Amethyst posted:https://www.economist.com/news/science-and-technology/21728614-machines-read-faces-are-coming-advances-ai-are-used-spot-signs This study is now under ethical review https://theoutline.com/post/2228/that-study-on-artificially-intelligent-gaydar-is-now-under-ethical-review-michal-kosinski?utm_source=TW better late than never i guess
|
# ¿ Sep 13, 2017 06:18 |