Saturday, February 22, 2014
When artificial intelligence surpasses human intelligence, will we still be able to "turn the machine off," or will its superior intellect prevent that? How far must AI advance before it becomes a subject of ethics? For example, will we ever have a responsibility to treat AI ethically? Does the singularity mean that AI will gain sentience, or merely advanced intelligence? Can you have one without the other? Is there even such a thing yet as artificial intelligence, or is it only programmed intelligence? And will its sentience be comparable to ours, or something beyond our scope of understanding?