You're assuming that computers can only operate using pure logic. Using a neural network, you can teach a computer illogical things (that 2+2=5, for instance, or that the conjunction of two truths is false). If you had a network complex enough to deal with the various concepts (i.e., a "human-level" AI), it could be taught whatever assumptions the teacher wanted, just like a human. (Keep in mind I've actually worked with neural nets and know how they work, what they can do, and what their current limitations are.)
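To make the "2+2=5" point concrete, here's a minimal sketch (not from the original post, just an illustration) of a single linear neuron trained by gradient descent on a teacher's deliberately wrong arithmetic. The network has no notion of logical truth; it simply fits whatever targets it's given:

```python
import numpy as np

# Hypothetical minimal example: one linear neuron, squared-error loss,
# trained to "believe" the teacher's claim that 2 + 2 = 5.
rng = np.random.default_rng(0)
w = rng.normal(size=2)    # weights for the two inputs
b = 0.0                   # bias

x = np.array([2.0, 2.0])  # input: the pair (2, 2)
target = 5.0              # the teacher's "truth": 2 + 2 = 5

lr = 0.01
for _ in range(2000):
    y = w @ x + b             # prediction
    grad = 2 * (y - target)   # d(loss)/d(y) for squared error
    w -= lr * grad * x        # gradient step on weights
    b -= lr * grad            # gradient step on bias

print(round(float(w @ x + b), 3))  # → 5.0
```

The same mechanism that lets a net learn correct sums lets it learn incorrect ones; the data, not logic, is what it absorbs.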
That's not to say we wouldn't be surprised by some of its decisions. We no doubt would. But we're surprised by human decisions too. If anything, these surprising decisions could be considered a mark that the AI is human-level, or nearly so.