|
Amethyst posted:if we build a machine that can classify any input stimulus one million times faster than we can, and react to it based on an evolving expert system several times larger and with much better efficacy than a human brain, can we really say it's not "true" intelligence just because it's not aware?

if it can't apologize for something it has done before being told that it should apologize, we can certainly say it's not very intelligent
|
# ? Sep 8, 2017 21:26 |
|
|
animal cognition can probably teach us more about intelligence than ivory tower men huffing farts and declaring strong AI is unattainable
|
# ? Sep 8, 2017 23:18 |
|
why do so many people get a hard-on for strong ai? like we're doing tons of applicable stuff already without that; in a world that demands ever more specialization from its population what value does strong ai really hold? I mean it might get bored or something "what if the tractors could think as they spend 12 hours plowing a field"
|
# ? Sep 8, 2017 23:55 |
|
also studies on consciousness show, for example, that things happen before you're aware of them, implying that consciousness is perhaps a projection of what's already happening in your brain, meaning consciousness can't affect anything, it's only a display, and therefore perhaps unnecessary. or not

i was big time into consciousness for a while and read all the textbooks i could find etc
|
# ? Sep 9, 2017 00:37 |
|
echinopsis posted:also studies on consciousness show for example that things happen before you're aware of them, implying that consciousness perhaps is a projection of what's already happening in your brain, meaning consciousness can't affect, it's only a display, and therefore perhaps unnecessary

i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization is itself an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts. this implies that zoning out while doing something boring and repetitive is what it feels like to be conscious, but not really, like, conscious, you know
|
# ? Sep 9, 2017 01:39 |
|
if one of our machines developed self-awareness and immediately tried to kill itself would that be hosed up or what
|
# ? Sep 9, 2017 01:40 |
|
like, what if the first thing that skynet did on august 29th 1997 was to call a suicide hotline
|
# ? Sep 9, 2017 01:42 |
|
duTrieux. posted:if one of our machines developed self-awareness and immediately tried to kill itself would that be hosed up or what

in one of the culture novels, it’s stated that “pure” AIs (ones without cultural baggage) sublime as quickly as they develop the means to
|
# ? Sep 9, 2017 01:48 |
|
Silver Alicorn posted:in one of the culture novels, it’s stated that “pure” AIs (ones without cultural baggage) sublime as quickly as they develop the means to

nice. i should start reading those.
|
# ? Sep 9, 2017 01:54 |
|
duTrieux. posted:i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization itself is an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts.

That's an interesting way to put it. I suppose that even if consciousness has no part in instantaneous impulses, it can still structure the environment that fires those impulses
|
# ? Sep 9, 2017 02:54 |
|
those conscious choice tests work like this: you sit in front of some buttons you can press, and you may freely decide which one. you can take as much time as you need. there is a clock in the room. you are also hooked up to a brain scanny machine. once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds
|
# ? Sep 9, 2017 04:36 |
|
duTrieux. posted:i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization itself is an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts.

well the experts would somewhat agree with your last statement, but imply that "autopilot" isn't consciousness. and things like buddhism or meditation really make the most of what consciousness can be: mindfulness is a buzzword of late but it basically describes being conscious and aware. clearly our brains can perform complex tasks on autopilot. gently caress, i check scripts on autopilot all the time - i've trained my brain with rules so if something looks out of place then consciousness kicks in and i can analyse and rationalise

you do raise interesting points. they call consciousness "the hard problem", because consciousness is absolutely like nothing else in the universe at all. all the current evidence seems to discredit ways of understanding consciousness and asks a lot of questions without giving a lot of answers (just a lot of pointing out how wrong theories are)
|
# ? Sep 9, 2017 06:50 |
|
echinopsis posted:consciousness is absolutely like nothing else in the universe at all

you don't know that, you don't have a single piece of objective evidence to support that claim, and neither does anyone else who makes it
|
# ? Sep 9, 2017 07:02 |
|
fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source

quad untenable if you're working on gait recognition
|
# ? Sep 9, 2017 08:21 |
|
it's worth reading Lenat's "The Nature of Heuristics" to see what the symbolic-reasoning people were up to

also "Theory Formation by Heuristic Search" and "Eurisko: A Program That Learns New Heuristics and Domain Concepts"

not a neural net in the bunch, just a whole lot of frames in a custom language atop Lisp

eschaton fucked around with this message at 08:36 on Sep 9, 2017 |
# ? Sep 9, 2017 08:27 |
|
atomicthumbs posted:fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source Agreed
|
# ? Sep 9, 2017 08:33 |
|
JewKiller 3000 posted:those conscious choice tests work like this: you sit in front of some buttons you can press, and you may freely decide which one. you can take as much time as you need. there is a clock in the room. you are also hooked up to a brain scanny machine. once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds

With anything involving an fMRI, N=10 and you need to take the results with a massive grain of salt.
|
# ? Sep 9, 2017 10:30 |
|
JewKiller 3000 posted:once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds

knowing that, you can reason like this: "it's now 15:24:25, which means I must have made the decision at about 15:24:20, so that's what I'll write down here". what will the test say about consciousness then?
|
# ? Sep 9, 2017 10:47 |
|
eschaton posted:it's worth reading Lenat's "The Nature of Heuristics" to see what the symbolic-reasoning people were up to Good post.
|
# ? Sep 9, 2017 11:03 |
|
JewKiller 3000 posted:you don't know that, you don't have a single piece of objective evidence to support that claim, and neither does anyone else who makes it

that's just like you're opinion man
|
# ? Sep 9, 2017 22:54 |
|
Smythe posted:my friend, begone of this thread. or perish

thanks for chasing off the anime retard, smythe

i picked up this Bostrom book from one of the usual NMN sci-fi thread cheap-book dumps and although it's kinda heavy going (still not finished it), it deals compellingly with the issues inherent in creating an artificial intelligence that has the capacity for self-improvement. namely, how do we deal with a superintelligent, not-guaranteed-to-be-acting-in-our-best-interests entity?
|
# ? Sep 10, 2017 10:19 |
|
we install kill switches and also make sure they cant gently caress and reproduce
|
# ? Sep 10, 2017 10:21 |
|
Well then why even bother (This gets addressed too)
|
# ? Sep 10, 2017 10:37 |
|
echinopsis posted:we install kill switches and also make sure they cant gently caress and reproduce

whoa whoa whoa, why shouldnt they be able to gently caress??
|
# ? Sep 10, 2017 12:07 |
|
really though the kill switch will either be internal or external.

if it's internal it will have to trigger something in the mind of a superintelligent entity that can edit its own makeup (probably its code) and may be able to work around it (e.g. feed output from the killswitch into a VM)

if it's external then your security relies on humans doing the right thing when confronted with a superintelligent, possibly extremely persuasive, entity.

iirc there's like a chapter on this that goes into some actual detail. it's a good book
|
# ? Sep 10, 2017 12:11 |
|
echinopsis posted:we install kill switches and also make sure they cant gently caress and reproduce

because of course a super intelligence wouldn't be able to subvert its own programming and disable any software kill switch

needs to be a locally-triggered physical power cut, carried out by someone who can't be persuaded not to
|
# ? Sep 10, 2017 12:17 |
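the external version of that is basically the watchdog pattern: the supervised process has to keep proving it's alive to something it does not control, and the kill decision is made outside it. a minimal toy sketch in Python (hypothetical names; a killed subprocess standing in for the physical power cut):

```python
import os
import subprocess
import sys
import tempfile
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds of silence before the kill fires


def run_with_watchdog(cmd, heartbeat_path, timeout=HEARTBEAT_TIMEOUT):
    """Run cmd, killing it if heartbeat_path stops being touched.

    The watchdog is a separate process from the thing it supervises,
    so the supervised program can't simply patch the check out of its
    own code; it would have to reach outside itself first.
    """
    proc = subprocess.Popen(cmd)
    try:
        while proc.poll() is None:
            time.sleep(0.2)
            try:
                age = time.time() - os.path.getmtime(heartbeat_path)
            except FileNotFoundError:
                age = float("inf")
            if age > timeout:
                proc.kill()  # the "locally-triggered power cut"
                proc.wait()
                return "killed"
        return "exited"
    finally:
        if proc.poll() is None:
            proc.kill()


# demo: a child that heartbeats for about a second, then goes quiet
child_src = (
    "import pathlib, sys, time\n"
    "hb = pathlib.Path(sys.argv[1])\n"
    "for _ in range(5):\n"
    "    hb.touch(); time.sleep(0.2)\n"
    "time.sleep(60)  # stops heartbeating but keeps running\n"
)
fd, hb_path = tempfile.mkstemp()
os.close(fd)
result = run_with_watchdog([sys.executable, "-c", child_src, hb_path], hb_path)
os.unlink(hb_path)
print(result)  # → killed
```

of course this only moves the problem, per the post above: the thing that can't be persuaded here is a dumb timer, and a sufficiently clever process just keeps touching the file.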
|
|
atomicthumbs posted:fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source
|
# ? Sep 10, 2017 19:33 |
|
lancemantis posted:why do so many people get a hard-on for strong ai?

it's religion for the anxious agnostics. we'll build god and then he'll solve our problems, and because he's so smart he'll invent immortality drugs and the matrix so i can live forever in whatever reality i want and i'll have a cool robot body with a six pack and babes will kiss and hug my robot body
|
# ? Sep 10, 2017 22:09 |
|
I would definitely give up my lame flesh body for a robot body
|
# ? Sep 11, 2017 04:06 |
|
the idea of there being a hierarchy of intelligence, with some kind of "super intelligence" possible, is one of the many misguided ideas we have inherited from plato, him wanting to imply that a philosopher can sit down legs crossed and pierce the veil of reality through sheer force of reason

far more likely any real understanding is entirely limited by the way we interface with reality, and progress is down to what experiments can be run and what they can tell us. there being truly simple explanations for complex phenomena is looking less and less probable (and it is helpful for every person to introspect a bit on *why* there would be simple explanations)

end result being that strong artificial intelligence is not so much impossible as it is quite irrelevant. we have brains already, and greater general intelligences will provide us with little
|
# ? Sep 11, 2017 13:04 |
|
Cybernetic Vermin posted:there being truly simple explanations for complex phenomena is looking less and less probable (and it is helpful for every person to introspect a bit on *why* there would be simple explanations)

because gratuitous irreducible complexity and special cases are likely to be exploitable in some fashion?
|
# ? Sep 11, 2017 15:47 |
|
what if an ai that can make ai?
|
# ? Sep 11, 2017 19:46 |
|
Shinku ABOOKEN posted:what if an ai that can make ai?

what if ai, but too much
|
# ? Sep 11, 2017 20:37 |
|
no but seriously is there any effort to make an ai that generates programs, possibly better ais?
|
# ? Sep 11, 2017 21:34 |
|
Shinku ABOOKEN posted:no but seriously is there any effort to make an ai that generates programs, possibly better ais?

how do you define a 'better ai'
|
# ? Sep 11, 2017 23:52 |
|
Silver Alicorn posted:I would definitely give up my lame flesh body for a robot body

but what if you can only be five feet tall
|
# ? Sep 12, 2017 07:42 |
|
Shinku ABOOKEN posted:no but seriously is there any effort to make an ai that generates programs, possibly better ais?

read the links I posted earlier in the thread, they're about an old design for what's effectively a self-improving AI

and there's not much difference between an AI improving itself or improving a copy of itself
|
# ? Sep 12, 2017 08:27 |
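for flavor: the dumbest possible version of "a program that writes programs" is just search over candidate programs scored against the behaviour you want. a toy sketch in Python (hypothetical, nothing like a real system; actual program synthesis / genetic programming is this idea plus vastly smarter search):

```python
import itertools

# candidate building blocks for tiny arithmetic expressions over x
OPS = ["x", "x + 1", "x * 2", "x * x", "x * x + 1"]


def synthesize(target_io):
    """Return the source of the first generated program (a sum of
    OPS terms) that reproduces every (input, output) pair."""
    for depth in range(1, 3):
        for parts in itertools.product(OPS, repeat=depth):
            src = " + ".join(parts)
            fn = eval(f"lambda x: {src}")  # "generate" the program
            if all(fn(i) == o for i, o in target_io):
                return src
    return None


# target behaviour: f(x) = x*x + x, specified only by examples
prog = synthesize([(0, 0), (1, 2), (2, 6), (3, 12)])
print(prog)  # → x + x * x
```

the "better ai" question from above is exactly the hard part: here the fitness function is trivial input/output matching, and nobody knows how to write one that scores general intelligence.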
|
Fartificial intelligence
|
# ? Sep 12, 2017 12:07 |
|
|
Dongslayer. posted:but what if you can only be five feet tall

after being 6’2” most of my life it would sorta be a welcome change
|
# ? Sep 12, 2017 15:13 |