So, the question still remains – after spending endless nights in front of a bright screen, surrounded by empty pizza boxes and the soft hum of cooling fans – when does your program finally count as “intelligent”? It may have started out as a simple “hello_world.c”, but by now it’s a raging monstrosity, fully imbued with seven-layered neural networks, evolutionary algorithms, and a taste for human blood. Doesn’t that make it an artificial intelligence, even if it can’t pass the Turing test? (To be honest, I know many real-life people who wouldn’t pass the Turing test themselves.)
Yes, many papers have been written on this, and the walls of myriad rooms are covered with Eastasian symbols, but still, it will do no harm to spew out my own two cents. I guess I could search the relevant literature first, but there’s no fun in that, is there?
tl;dr? The main points, summarized:
- An intelligent machine must be able to solve problems it has never encountered before.
- If a machine can be reduced to an algorithm which humans understand, it is not intelligent.
These are necessary, but perhaps not sufficient, conditions.
Since this is just a definition, and we won’t do anything rigorous with it, I can afford to simply look at some specific cases and use inductive arguments in rash abundance.
We’ll start with an example. Consider the task of sorting numbers: we have a list of numbers, and we want to arrange them from smallest to largest.
I think it’s safe to agree that a preprogrammed computer running “heapsort” would not be considered intelligent. After all, it’s just using a predefined algorithm, one which was mathematically proven to work, line after line. There’s no intelligence in that. It’s a simple mechanical process, with a sole and not-very-surprising outcome.
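For concreteness, here’s that mechanical process spelled out: a minimal heapsort sketch in Python, leaning on the standard heapq module rather than a hand-rolled sift-down (the point is the same). Every step is fixed in advance, and no input can surprise it.

```python
import heapq

def heapsort(numbers):
    """A fixed, provably correct recipe: heapify, then pop the minimum n times."""
    heap = list(numbers)
    heapq.heapify(heap)  # rearrange the list into a binary min-heap, O(n)
    return [heapq.heappop(heap) for _ in range(len(heap))]  # n pops, O(n log n)

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5] -- the sole, unsurprising outcome
```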
But give a child (i.e., anyone with no knowledge of heapsort) a stack of cards and ask her to sort them, and she’ll eventually succeed. Perhaps she won’t do it in the most efficient way possible, but consider the circumstances – she was not preprogrammed with a built-in sorting algorithm. All she had to start with was the goal – “I’ll give you some chocolate if you sort these cards for me” – and eventually she found a way to solve the problem, perhaps by drawing on her previous experiences, or perhaps by some Divine Inspiration – who am I to judge?
I say, then, that an intelligent being is one which is not designed to solve one specific type of task – except, perhaps, for the task of solving other tasks. An artificial intelligence must be capable of solving problems it has never encountered before. In that sense, in my opinion, programs whose sole purpose is to pass the Turing test should not be considered intelligent (then again, I don’t think the Turing test is a definitive way to decide artificial intelligence in any case).
You might get a bit angry and interject: “but a Turing-test-solving machine may encounter questions and dialogues that it was never preprogrammed to answer!” To this I say that such a machine is closer to a linear-equation solver which, much to its delight, was just handed a set of coefficients it had never seen before. I think passing the Turing test is much closer to that side of the spectrum than to the “solving types of problems it has never seen before” side. You might say that a general problem solver is the same, only with problems as inputs instead of coefficients, but there is a vast difference between “solve the problem of talking to humans” and “solve the problem of solving problems”.
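To make the analogy concrete, here’s a sketch of the kind of linear-equation solver I have in mind (just an illustration, using numpy). It happily accepts coefficients it was never given before, yet the class of problems it solves never budges.

```python
import numpy as np

def solve_linear(A, b):
    """A fixed procedure: solve A x = b for whatever coefficients happen to arrive."""
    return np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))

# Coefficients the machine was "never given before" -- still the same old task.
print(solve_linear([[2, 1], [1, 3]], [3, 5]))  # [0.8 1.4]
```

New inputs, sure; but never a new kind of problem.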
One thing I do take from the Turing test is that it puts its main emphasis on humans. We don’t really work hard on creating a machine that is indistinguishable from a dog, or, say, an ant. I bet most people are willing to crush ants, but are less willing to crush dogs, partly because ants are so much more mechanistic than dogs. But humans, now, those are a different matter. How does a human work? What inner mechanisms drive thought forward? How can it be that a set of firing neurons can lead to distinguishing between continuum and discrete infinities? We don’t know, and herein lies our intelligence.
To put it plainly: if we have an algorithm for solving new problems, and we understand how that algorithm works (perhaps, by some miraculous wonder, we have even proved that it is correct!), then a machine implementing that algorithm is not intelligent. How would it be any different from the machine implementing heapsort?
Now, eventually, all machines (both silicon- and carbon-based) can be reduced to transistors opening and closing / if-else clauses / neurons firing, so we have to be careful here – reducing a machine to mechanical actions is not enough to deprive it of its intelligence. Indeed, I do not claim that there is some non-physical or non-algorithmic process involved, or that our intelligence resides in some sort of “soul” or whatnot. Rather, the key phrase here is “we understand how that algorithm works”. In other words, intelligence is subjective.
Consider our favorite learning technique: the artificial neural network. We create a large batch of randomly connected neurons and feed them heaps of data sets to learn from. At the end, the weights between the neurons are such that the network is capable of solving the specified problem rather well.
But given the network, we cannot (well, at least I cannot) say that we understand why it works. It’s just the way the weights, the feedback signals, the interconnected relations turn out. Hell, we even started with a random network! We programmed the network according to our design; we understand the learning algorithm; but the network itself? Given the connections and weights, it’s very hard to say why these particular values work; it’s also hard to say why these are the values we got, and not any others.
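To make this tangible, here’s a toy version of the whole story: a minimal sketch, not any particular real system, of a randomly initialized network trained by plain backpropagation to compute XOR (the layer sizes and learning rate are my own arbitrary choices). The training loop below is transparent; the weights it produces are not.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny problem with a well-known answer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Start from a *random* network: 2 inputs -> 4 hidden neurons -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                   # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)             # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass, output
    d_out = (out - y) * out * (1 - out)  # error signal at the output...
    d_h = (d_out @ W2.T) * h * (1 - h)   # ...propagated back to the hidden layer
    W2 -= h.T @ d_out;  b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]: it works...
print(W1, W2, sep="\n")      # ...but stare at these weights and tell me *why*
```

We understand every line of the loop, yet the matrices it leaves behind answer to no explanation; that asymmetry is exactly what the argument leans on.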
I am aware that this argument has a not-so-comfortable consequence. If ever we were to reduce, in totality, the human brain to a series of provable algorithmic steps, we would be forced by this criterion to revoke our own status of intelligence. That would totally suck for our collective ego. Also, if intelligence is subjective, then should there ever be an alien race that could comprehend our brain, they would consider us mindless automata, while we currently consider ourselves the pinnacle of all Creation. Humbling, isn’t it?
The conditions I give are necessary, but I would not hasten to call them sufficient. Merely obfuscating your algorithm, making it incomprehensible, wouldn’t make the machine running the code any smarter. Likewise, solving new problems by brute-forcing through a gargantuan configuration space does not grant further IQ points.
If you show me a neural network program which successfully solves a large set of previously unencountered problems, I’ll probably agree to call it intelligent. But most things short of that are either too narrow, or too comprehensible.
What good is this definition? I don’t know; what good is any definition of artificial intelligence? We are still far away from machine ethics, and until then, such definitions have no practical use. Perhaps it would turn out to be useful if we ever prove, mathematically, that the brain contains inherently unprovable algorithms. But that’ll take a while.