Not many people know this, but I was briefly involved with artificial intelligence research in the early 1980s. In fact, I had to miss my first wedding anniversary because I was at an AI conference in Austin, something I’m still living down almost 27 years later.
I didn’t quit doing AI; it was more that the field with which I was involved was kicked out of the club.
I worked in the area of symbolic mathematical computation. Whereas in a spreadsheet you might have rows and columns of decimals that you add or multiply, or from which you compute net present values or standard deviations, in symbolic computation you might have an equation like x² – 1 = 0 and you want to solve for x. (The answer is that x can be 1 or -1.)
In practice you have much bigger polynomials with more variables, elaborate functions, matrices, integrals, differential equations, and so forth, and you want to compute with them. So the field is focused not on strictly numeric computation as in a spreadsheet but on more advanced mathematical objects that might include symbols or variables such as the x above.
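As a modern illustration, and certainly not the software we used back then, here is a minimal sketch of solving that little equation symbolically in Python with the open-source SymPy library:

```python
import sympy

x = sympy.symbols("x")                 # a symbolic variable, not a number
solutions = sympy.solve(x**2 - 1, x)   # solve x**2 - 1 = 0 for x
print(solutions)                       # [-1, 1]
```

The point is that x is manipulated as a symbol, not as a cell full of digits.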
In practice, representing these kinds of objects in computer systems is not trivial but not necessarily that hard, because people have been doing it since at least the 1970s at MIT and elsewhere. An example is representing integers that can get arbitrarily large. The number 2¹⁰⁰⁰ is too big to fit in a computer’s hardware registers, but you can implement it in software.
You store the number as chunks of smaller numbers, do the arithmetic on the smaller chunks, and then carry or borrow among the chunks. Think about how you would do regular addition or multiplication by lining up the numbers in rows and then working on the columns. For example, to add 17 and 25, the rightmost column gives 7 + 5 = 12, so you write down the 2 and carry a 1 to the column to the left of it, getting 42. In practice, you don’t deal with single digits in each column but with larger numbers when you implement so-called “big integers” or “bignums” in software.
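To make that concrete, here is a minimal, hypothetical sketch of bignum addition in Python. It stores a number as a list of base-10⁹ chunks, least significant chunk first, and propagates carries between chunks just as you carry between columns; the chunk size and function names are my own choices, and a real library would also handle signs, borrows for subtraction, and much cleverer multiplication and division.

```python
BASE = 10**9  # each chunk holds up to nine decimal digits

def to_bignum(n):
    """Split a non-negative Python int into chunks, least significant first."""
    chunks = []
    while n > 0:
        chunks.append(n % BASE)
        n //= BASE
    return chunks or [0]

def bignum_add(a, b):
    """Add two chunk lists, carrying between chunks like columns on paper."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry
        if i < len(a):
            total += a[i]
        if i < len(b):
            total += b[i]
        result.append(total % BASE)   # the "digit" that stays in this column
        carry = total // BASE         # the carry that moves one column left
    if carry:
        result.append(carry)
    return result

# 2**1000 overflows any hardware register, but the chunked form handles it.
big = to_bignum(2**1000)
doubled = bignum_add(big, big)        # equals the chunks of 2**1001
```

(Python’s own integers are already arbitrary precision, which is convenient here for building the test value, but the chunk-and-carry idea is what sits underneath any such implementation.)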
I digress, but my point is that going from the math you do by hand to building software that can handle it is real work, sometimes maddeningly tricky to get right, but is it artificial intelligence?
Doing addition, subtraction, and multiplication as above is very straightforward and algorithmic, but people often get snagged when they need to implement division. The same is true in real life when children are learning how to do long division, because it involves some guessing and backtracking. Maybe this guessing, when implemented in a computer, is AI?
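Here is a hypothetical Python sketch of that pencil-and-paper guessing: each trial quotient digit is estimated from the divisor’s leading digit alone and then corrected downward when the guess overshoots. It only illustrates the guesswork; it is not the carefully analyzed algorithm a real bignum library would use.

```python
def schoolbook_divide(dividend, divisor):
    """Digit-by-digit long division with a guess-and-correct trial digit."""
    leading = int(str(divisor)[0])            # leading digit of the divisor
    scale = 10 ** (len(str(divisor)) - 1)     # place value of that leading digit
    quotient_digits = []
    remainder = 0
    for ch in str(dividend):                  # bring down one digit at a time
        remainder = remainder * 10 + int(ch)
        q = min(9, remainder // (leading * scale))  # the optimistic guess
        while q * divisor > remainder:              # correct an overestimate
            q -= 1
        quotient_digits.append(q)
        remainder -= q * divisor
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(schoolbook_divide(1234, 56))   # (22, 2) because 56 * 22 + 2 == 1234
```

The interesting part is the little while loop: that is the backtracking a child does when the guessed digit turns out to be too big.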
One of my favorite quotes is from Donald Knuth’s The Art of Computer Programming: Seminumerical Algorithms where he talks about this problem with division:
Here the ordinary pencil-and-paper method involves a certain amount of guesswork and ingenuity on the part of the person doing the division; we must either eliminate this guesswork from the algorithm or develop some theory to explain it more carefully.
Extending this to much more difficult mathematics such as symbolic integration is the crux of the difference between the field being AI and it being just computational computer science. At what point do you know how to do enough algorithmically so that the guesswork and perhaps many of the heuristics are explained away?
By the mid-1980s, the powers that be had pretty much decided that this was no longer part of AI, but the whole notion of what is or is not artificial intelligence has also changed through the years. Symbolic computation is no less important a field for not being AI per se, and significant theoretical work and important algorithm development and tuning continue to this day.
What might still be AI is understanding a problem given in natural language and then knowing what mathematical techniques should be used to solve it, expressing the result in the same terms originally given. For example, in school you might have had to solve word problems about trains or coins or ages. The answer had to be given in time, or some currency, or years, and you had to learn the algebraic methods to get to that answer. Extending this to much more difficult physics or engineering problems might get you closer to real AI in action.
Aside regarding that AI conference in Austin: One evening several of us sat in a hot tub discussing what we thought about the conference and what had happened that day. There was one man we didn’t know, but he was an active participant in the conversation. After 20 minutes the rest of us realized that he was not at the AI conference but was instead attending one about lawn mowers. It changed my views about AI because it took us so long to figure that out.