Intelligence is inexplicable

So, the question still remains – after spending endless nights in front of a bright screen, surrounded by empty pizza boxes and the soft hum of cooling fans – when does your program finally count as “intelligent”? It may have started out as a simple “hello_world.c”, but now it’s a raging monstrosity, fully imbued with seven-layered neural networks, evolutionary algorithms and a taste for human blood. Doesn’t that make it an artificial intelligence, even if it can’t pass the Turing test? (To be honest, I know many real-life people who wouldn’t pass the Turing test themselves.)
Yes, many papers have been written on this, and the walls of myriad rooms are covered with Eastasian symbols, but still, it will do no harm to spew out my own two cents. I guess I could search the relevant literature first, but there’s no fun in that, is there?

tl;dr? The main points, summarized:

– An intelligent machine must be able to solve problems it has never encountered before.
– If a machine can be reduced to an algorithm which humans understand, it is not intelligent.

These are necessary, but perhaps not sufficient, conditions.

Since this is just a definition, and we won’t do anything rigorous with it, I can afford to simply look at some specific cases and use inductive arguments in rash abundance.

We’ll start with an example. Consider the task of sorting numbers: we have a list of numbers, and we want to arrange them from smallest to largest.
I think it’s safe to agree that a preprogrammed computer running “heapsort” would not be considered intelligent. After all, it’s just using a predefined algorithm, one which was mathematically proven to work, line after line. There’s no intelligence in that. It’s a simple mechanical process, with a sole and not-very-surprising outcome.
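To make the “simple mechanical process” point concrete, here is heapsort written out in full – a standard textbook rendition in Python, not anything specific to this post. Every step is predetermined; nothing here learns or adapts:

```python
def heapsort(items):
    """In-place heapsort: a fixed, provably O(n log n) recipe."""
    a = list(items)
    n = len(a)

    def sift_down(root, end):
        # Push a[root] down until the max-heap property holds below it.
        while True:
            child = 2 * root + 1
            if child > end:
                return
            # Pick the larger of the two children.
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # Build a max-heap, then repeatedly move the maximum to the back.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

The sole and not-very-surprising outcome, line after line.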
But give a child (i.e., anyone with no knowledge of heapsort) a stack of cards and ask her to sort them, and she’ll eventually succeed. Perhaps she won’t do it in the most efficient way possible, but let’s look at the circumstances – she was not preprogrammed with a built-in sorting algorithm. All she had to start with was the goal – “I’ll give you some chocolate if you sort these cards for me” – and eventually she found a way to solve the problem – perhaps by drawing on her previous experiences, or perhaps by some Divine Inspiration – who am I to judge?

I say then, that an intelligent being is one which is not designed to specifically solve one type of task – except, perhaps, for the task of solving other tasks. An artificial intelligence must be capable of solving problems it has never encountered before. In that sense, in my opinion, programs whose sole purpose is to pass the Turing test should not be considered intelligent (then again, I don’t think that the Turing test is a definitive test of artificial intelligence in any case).
You might get a bit angry and interject, “but a Turing-test-solving machine may encounter questions and dialogues that it was never pre-programmed to answer!” To this I say that such a machine is closer to a linear-equation solver, which, much to its delight, was just given as input a set of coefficients it was never given before. I think solving the Turing test is much closer to that side of the spectrum, rather than to the “solves types of problems it has never seen before” side. You might say that a general problem solver is the same, only with problems as inputs and not coefficients, but there is a vast difference between “solve the problem of talking to humans” and “solve the problem of solving problems”.
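To make the analogy explicit, here is the sort of “solver” I have in mind – a trivial sketch, with the example coefficients being my own:

```python
def solve_linear(a, b):
    """Solve a*x + b = 0 for x. The machine happily accepts
    coefficients it has "never seen before", yet every input is
    just another instance of the one problem it was built for."""
    if a == 0:
        raise ValueError("degenerate equation: no unique solution")
    return -b / a

print(solve_linear(2.0, -6.0))  # 3.0
print(solve_linear(7.0, 3.5))   # -0.5
```

New inputs, same task – no amount of fresh coefficients makes this machine a general problem solver.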

One thing I do take from the Turing test is its main emphasis on humans. We don’t really work hard on creating a machine that is indistinguishable from a dog, or perhaps, an ant. I bet most people are willing to crush ants, but are less willing to crush dogs, partly because ants are much more mechanistic than dogs. But humans, now, those are a different matter. How does a human work? What inner mechanisms drive thought forward? How can it be that a set of firing neurons leads to the differentiation between discrete and continuum infinities? We don’t know, and herein lies our intelligence.
To put it plainly: if we have an algorithm for solving new problems, and we understand how that algorithm works (perhaps, by some miraculous wonder, we also proved that it is correct!), then a machine implementing that algorithm is not intelligent. How would it be any different than the machine implementing heapsort?
Now, eventually, all machines (both silicon- and carbon-based) can be reduced to transistors opening and closing / if-else clauses / neurons firing, so we have to be careful here – reducing a machine to mechanical actions is not enough to deprive it of its intelligence. Indeed, I do not claim that there is some non-physical or non-algorithmic process involved, or that our intelligence is found in some sort of “soul” or whatnot. Rather, the key phrase here is “we understand how that algorithm works”. To put it in other words, intelligence is subjective.

Consider our favorite pastime learning technique of artificial neural networks. We create a large batch of randomly connected neurons, and feed them heaps of data sets to learn from. At the end, the weights between the neurons are such that the network is capable of solving the specified problem rather well.
But given the network, we cannot (well, at least I cannot) say that we understand why it works. It’s just the way the weights, the feedback loops, the interconnected relations turn out. Hell, we even started with a random network! We have programmed the network using our design; we understand the learning algorithm; but the network itself? Given the connections and weights, it’s very hard to say why these particular values work; it’s also hard to say why these are the ones we got, and not any other ones.
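For concreteness, here is a toy rendition of this in Python/NumPy – a randomly initialized two-layer network trained by backpropagation on XOR (the task and all the numbers are my own choice of illustration; the post doesn’t commit to one):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR -- a small problem that a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Start from *randomly* initialized weights, as described above.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagate the squared-error gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically lands near the XOR targets
print(np.round(W1, 2))   # a pile of numbers that no one designed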

I am aware that there is a not-so-comfortable disadvantage in this argument, stemming from its conclusion. If ever we were to reduce, in totality, the human brain to a series of provable algorithmic steps – we would be forced by this criterion to revoke our own status of intelligence. That would totally suck for our collective ego. Also, if intelligence is subjective – if ever there was an alien race which could comprehend our brain – they would consider us mindless automata, while we currently consider ourselves the prime of all Creation. Humbling, isn’t it?

The conditions I give are necessary, but I would not hasten to call them sufficient. If you merely obfuscate your algorithm, making it incomprehensible, that wouldn’t make the machine running the code any smarter. Likewise, solving new problems by brute forcing through a gargantuan configuration space does not grant further IQ points.
If you show me a neural network program which successfully solves a large set of new unencountered problems, I’ll probably agree on calling it intelligent. But most things less than that are either too narrow, or too comprehensible.
What good is this definition? I don’t know, what good is any definition of artificial intelligence? We are still far away from machine ethics, and until then, such definitions have no practical use. Perhaps it would deem to be useful, in case we ever prove, mathematically, that the brain contains inherently unprovable algorithms. But that’ll take a while.

7 thoughts on “Intelligence is inexplicable

  1. I don’t agree with your second criterion. It would then imply that the definition of intelligence would vary across species, because as you’ve said, some species more intelligent than us would then classify the same system differently than we would, if they understood our “algorithm”.

    Clearly not a very scientifically useful criterion, if the classification of the object depends on the observer, wouldn’t you agree?

    It also has some inconsistencies. If a team of say two scientists discovers our “algorithm”, do they instantly become unintelligent? And when they communicate their ideas further, will less and less people be unintelligent as they begin to understand how their intelligence is supposed to function? Again, the definition doesn’t seem very useful here.

    Personally I’m more interested in sentience, why are we able to perceive things, and it’s possible that our intelligence stems from it.

    So that could be a fundamental criterion of my definition. But again, not very useful with our current knowledge, is it? Sigh, intelligence is hard…

    1. So, the question is, useful for what? If you are only interested in practicalities, then you really don’t care if you understand the algorithm or not – you want you edges detected, your cars auto-driven, and your problems solved; you could hardly care about definitions of artificial intelligence (unless you think there might be theorems about what an “intelligence” can or cannot do? In that case, sure, but then you probably would want a very rigorous, very “is able to use second order formal logic” kind of definition, and this is not about that).

      What I noted at the end (maybe it’s too hint-y), is that a definition would be needed for machine ethics (sentience also comes into play here, if you will). But ethics is concerned solely with interaction of man and his surroundings (aka modern moral convention is that its ok to kill cows for various purposes, but not so humans), so it’s perfectly reasonable to have something specific to man.

      (On that note – if we *do* ever reach a complete algorithmic understanding of the brain – then the definition is indeed worthless ethically, since it won’t allow you to differentiate between man and heapsort).

      Anyway, regarding the “inconsistencies” – since we’re pretty self-contained as a species, let’s take it that it doesn’t depend *who* in particular has the knowledge. It may be that only the shared knowledge of many people combined is enough. In that sense it is global – the moment “mankind” reached understanding, then “mankind” has lost its status. (This avoids the really-not-interesting cases of “spread of information”, people who are not smart enough to understand the proofs, etc). In short, the scenario you described, while amusing as a story of Stanislaw Lem, is not at all what I intended.

  2. Very interesting. Did you read GEB in the end? It offers some insightful revelations about this exactly.

    I too do not agree with your second condition, I can’t imagine how our understanding of ourselves would suddenly turn us non-intelligent. Create a machine that is exactly identical to a human being, and you’ve got AI, haven’t you?

    1. Well, at least people disagree with the first condition! I guess its a bit like the “how can there be free will in a totally classically mechanic deterministic world?”; the appropriate question is “how can there be intelligence in an algorithmically mechanic world?”.

      GBE is slowly moving higher on the list, and we even have it (in Hebrew) in our student lounge thingie. But as matters are standing today, I read just the crab canon 😛 (I actually think it’s better in Hebrew than in English).

  3. Nice post.

    A few remarks though.

    Based on what you claim that the girl wasn’t pre-programmed to handle the task?
    She’s definitely using some pre-programmed abilities. Otherwise, she wouldn’t be able to do anything.

    After we come up with some pre-programmed ability candidates, we can start looking for educated excuses in different levels.
    For example, it seems reasonable to claim that human sorting involves magnitude comparison which has a clear evolutionary advantage – distinguishing between threatening and nonthreatening organisms.

    Maybe even the whole sorting process can be reduced to a set of these binary comparisons, much like most sorting algorithms work.
    Furthermore, if I were to design an experiment that will make an ant sort an array, would that mean it’s intelligent?

    I think this notion of un-pre-programmness is only due to our lack of understanding the complexity of the system and stems from our inability to disassemble the big process and give each sub-process a name.

    In addition, you claim that ‘an intelligent being is one which is not designed to specifically solve one type of task’.
    Why would you say that? What is your motivation to believe in such a claim? How do you know that we are not perfectly designed to do exactly what we can do?
    In other words, I don’t understand why solving a never-before-seen problem, means that we are not designed to solve it.

    If a red bee caused us pain, and now we avoid all red things, does the fact that all these red things are not bees (different problems), imply that we are intelligent?
    No (I believe), it only implies that our brains (and definitely not only ours) can create an association between color and pain/danger.

    In other words, I don’t think it’s a question of design or purpose. I think we are designed to deal with these tasks and the question should be how can we generalize them and show that these are just private cases of the same thing along with the primitive abilities that we are “designed” to use.

    I don’t think that the fact that our design is very goal-oriented, contradicts the possibility that we are also designed (it’s the same design. same brain circuits) to deal with new situations and specifically, such that have no evolutionary purpose.

    The fact that we have abilities that allow us to solve unnecessary problems (e.g it’s hard to claim that sorting an array raises our chance of survival. at least outside the technion), doesn’t mean we are not designed to solve them.
    I don’t think that these tasks require new abilities. These are just higher order problems that can be generalized to the same basic principles along with the more primitive abilities (e.g sorting an array and magnitude comparison).

    Finally, I know it’s a common statement in this field, but you didn’t convince me that ‘If a machine can be reduced to an algorithm which humans understand, it is not intelligent’.

    1. Here is how I understand the things that are bothering you: It seems like your main point is that we are indeed designed to do what we do: “In addition, you claim that ‘an intelligent being is one which is not designed to specifically solve one type of task’. Why would you say that? What is your motivation to believe in such a claim? How do you know that we are not perfectly designed to do exactly what we can do?”.

      I’ll treat the second part first: we are not perfectly designed to do what we can do; we can do quite a lot of things, and eliminates the “one task” which I put forth in my original post. I gave the sorting of an array just an example, and I think you are putting a bit too much emphasis on it. If you think that it’s too easily reducible to “magnitude comparison” or something very primordial, man can do many other things: play a violin, construct an airplane, hover in space, distinguish between the cardinality of the rationals and the reals, establish ruling systems that govern millions of people, solve a Rubik’s cube, create logic paradoxes, discover the mathematical description of the laws of physics, transmit information across the entire planet in less than a second…
      I think it’s fair to say that there was no selection pressure on any of these specifically, so we were not designed to do any of those specifically. Perhaps our underlying abilities are such as to let us solve such difficult problems. That’s quite fine with me, that is exactly what I would mean by “capable of solving problems never before seen (which is close enough to “not selected for” in our 4-billion year heritage).
      I would also note that we are clearly not perfectly designed for some of the things we do; for example, we can only keep about ~7 items in working memory at the same time, and this is clearly not optimal for arithmetic / proofs / programming.
      To conclude very shortly: if we are “designed to solve new problems”, that’s quite fine with me; it stands together with my definition of intelligence.
      (Note: here, it is assumed that biological things are designed by evolution and natural selection, and robots are designed by men. If you still claim that “humans are designed to solve a Rubik’s cube because we have underlying skills that are selected for”, then I think that it’s stretching the definition of design, and I meant it in a narrower sense than that).

      So much for the point about design. As to the first part: what is my motivation to believe in such a claim (that we are not designed to solve just one type of task): well, this stems from my example about a sorting machine (or if you want: a violin-playing robot, a space-shuttle, a robotic-arm in a car factory, a Lego Rubik’s cube solver…). I can only say that it seems plausible, because I do not consider these goal oriented machines as intelligent; and when I look at things which I do consider intelligent (such as humans, or crows), I see that they can solve problems they never encountered before.
      You may say that a real motivation would require actually making use of the definition, which I so far have not done. Fine by me.

      Finally, “I know it’s a common statement in this field, but you didn’t convince me that ‘If a machine can be reduced to an algorithm which humans understand, it is not intelligent’.” I didn’t know it’s a common statement :).

  4. In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
    “What are you doing?”, asked Minsky.
    “I am training a randomly wired neural net to play Tic-tac-toe”, Sussman replied.
    “Why is the net wired randomly?”, asked Minsky.
    “I do not want it to have any preconceptions of how to play”, Sussman said.
    Minsky then shut his eyes.
    “Why do you close your eyes?” Sussman asked his teacher.
    “So that the room will be empty.”
    At that moment, Sussman was enlightened.

Leave a Reply

Fill in your details below or click an icon to log in: Logo

You are commenting using your account. Log Out /  Change )

Google photo

You are commenting using your Google account. Log Out /  Change )

Twitter picture

You are commenting using your Twitter account. Log Out /  Change )

Facebook photo

You are commenting using your Facebook account. Log Out /  Change )

Connecting to %s