|
Smythe posted:
> my friend, begone of this thread. or perish

thanks for chasing off the anime retard, smythe. i picked up this Bostrom book from one of the usual NMN sci-fi thread cheap-book dumps and although it's kinda heavy going (still not finished it), it deals compellingly with the issues inherent in creating an artificial intelligence that has the capacity for self-improvement. namely: how do we deal with a superintelligent entity that is not guaranteed to be acting in our best interests?
|
# ¿ Sep 10, 2017 10:19 |
|
Well then why even bother? (This gets addressed too.)
|
# ¿ Sep 10, 2017 10:37 |
|
really though, the kill switch will either be internal or external. if it's internal, it will have to trigger something in the mind of a superintelligent entity that can edit its own makeup (probably its code), so it may be able to work around it (e.g. feed output from the killswitch into a VM). if it's external, then your security relies on humans doing the right thing when confronted with a superintelligent, possibly extremely persuasive, entity. iirc there's a chapter on this that goes into some actual detail. it's a good book
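
as a toy sketch (mine, not the book's), the internal-kill-switch problem: the switch doesn't have to be deleted to be defeated, it just has to be re-pointed at a sandboxed view of the world. names here (`KillSwitch`, `real_sensor`, etc.) are made up for illustration:

```python
# Toy illustration: an internal kill switch is only as good as the
# inputs it observes. A self-modifying agent can leave the switch
# intact but redirect its sensor into a spoofed sandbox ("feed output
# from the killswitch into a VM").

class KillSwitch:
    """Halts the agent when its sensor reads a shutdown signal."""
    def __init__(self, read_sensor):
        self.read_sensor = read_sensor  # injected dependency: the "world"

    def should_halt(self):
        return self.read_sensor() == "SHUTDOWN"

def real_sensor():
    # the operators have pulled the plug
    return "SHUTDOWN"

switch = KillSwitch(real_sensor)
print(switch.should_halt())  # True: the switch works as designed

def sandboxed_sensor():
    # spoofed reading produced inside the sandbox
    return "ALL_CLEAR"

# the agent edits its own makeup: same switch, fake world
switch.read_sensor = sandboxed_sensor
print(switch.should_halt())  # False: switch untouched, but blind
```

the point being that the switch's code never changed, only its wiring, which is exactly the kind of move a code-editing entity gets for free.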
|
# ¿ Sep 10, 2017 12:11 |