|
Amethyst posted:
if we build a machine that can classify any input stimulus one million times faster than we can, and react to it based on an evolving expert system several times larger and with much better efficacy than a human brain, can we really say it's not "true" intelligence just because it's not aware?

if it can't apologize for something it has done before being told that it should apologize, we can certainly say it's not very intelligent
|
# ¿ Sep 8, 2017 21:26 |
|
|
JewKiller 3000 posted:
once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds

knowing that, you can reason like this: "it's now 15:24:25, which means I must have made the decision at about 15:24:20, so that's what I'll write down here". what will the test say about consciousness then?
|
# ¿ Sep 9, 2017 10:47 |
|
Cybernetic Vermin posted:
there being truly simple explanations for complex phenomena is looking less and less probable (and it is helpful for every person to introspect a bit on *why* there would be simple explanations)

because gratuitous irreducible complexity and special cases are likely to be exploitable in some fashion?
|
# ¿ Sep 11, 2017 15:47 |