This is an interesting article in the Atlantic about Douglas Hofstadter (the author of Gödel, Escher, Bach) and his attempts to create artificial intelligence that is genuinely intelligent.
The idea, which I don't think is controversial, is that not long ago people stopped trying to make intelligent programs and instead came to rely on a combination of big data and fast processing power to achieve results that merely look like intelligence. So computers can now do things that were previously thought to require a lot of intelligence (such as translating natural languages or playing chess), but no one really thinks the machines know what they are doing. It's a bit like this robot, which can beat you at scissors-paper-stone every time - by cheating, i.e. by reacting very quickly to the shape your hand makes. Would you have thought, a few years ago, that that could be done? Probably not. Are you impressed? Well, I am. Do you think the robot has gained a deep insight into human psychology? Clearly not. I suppose another example is to compare a satnav with a London cabbie - pretty impressive, but clearly not the same thing at all.
Hofstadter, meanwhile, wants to replicate intelligence. As one of Google’s directors of research says in the article, “I thought he was tackling a really hard problem.” (The big-data-plus-processing-power approach is “an easier problem.”) I understand his motive. There's a great bit in the article that says that the current mainstream commercial approach to AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”
The only problem I see is that Hofstadter just seems to be climbing a different tree. We are told that he uses "Jumbo, a program that Hofstadter wrote in 1982 that worked on the word jumbles you find in newspapers". The way it works is not, as modern AI would, to search through all the combinations of letters against a dictionary, but to try to model what happens when a person approaches this sort of puzzle: "The architecture Hofstadter developed to model this automatic letter-play was based on the actions inside a biological cell. Letters are combined and broken apart by different types of “enzymes,” as he says, that jiggle around, glomming on to structures where they find them, kicking reactions into gear. Some enzymes are rearrangers (pang-loss becomes pan-gloss or lang-poss), others are builders (g and h become the cluster gh; jum and ble become jumble), and still others are breakers (ight is broken into it and gh). Each reaction in turn produces others, the population of enzymes at any given moment balancing itself to reflect the state of the jumble."
All very interesting, no doubt, and quite like what happens in my mind when I try anagrams. But is that the key to human intelligence? It just sounds like a different, gnarlier tree to me.
Anyway, no doubt it is more likely that Hofstadter is right and I am wrong. In any event, it's an interesting story.