A Perspective on Artificial Intelligence

Personal Background

Simulated and Real Intelligence


Copyright © 2019 Savantics Inc. All rights reserved.

I’ve been interested in autonomous robots pretty much my whole life. I have an educational and practical background in mechanical engineering, electrical engineering, and computer programming. But my specialty is simulation.

I used to believe that autonomous robotics was primarily about mechanical design and some clever software. After building a few robots and attempting to make them do useful things, I realized that the real enabling technology was artificial intelligence.

Looking at the use of AI in robots, I found that the biggest single problem was sensing for world modeling. And vision is the primary sensory mode that most mobile creatures use to deal with the world. So, several years ago I decided to pursue the goal of artificial vision. The TENSAV project is the result.

Frank Jenkins, CTO, Savantics Inc.

In any computer simulator, there's always a balance between model complexity and program efficiency (i.e., memory usage and execution speed).

Evolution has shown that huge numbers of interconnected nodes are a key factor in intelligence (as defined below). Evolution has also made brains astoundingly complex in other ways. But evolution doesn’t necessarily find the best solutions; it makes changes that enable species to survive. Evolution also creates artifacts that serve little purpose but don’t affect survival, so there are undoubtedly some neurological features that are unnecessary for "intelligence". For computational efficiency, the “trick” is figuring out which features need to be modeled and which can be ignored. The goal is for TENSAV to achieve the right balance of performance and fidelity.

In order to discuss artificial intelligence, it's necessary to first define "intelligence". There have been many attempts at a definition, but for this discussion, it's posited that:

                   “Intelligence is something that human brains can exhibit.”

The early “Good Old-Fashioned AI” methods assumed that people could somehow program intelligence into a computer. But despite the work of many brilliant people, GOFAI didn’t seem to progress past toy problems and limited domains. However, IBM's Watson has shown that GOFAI methods combined with learning, statistical methods, and massive memory storage can produce practical results in a limited domain.

Artificial neural networks re-emerged with a bang in the mid-1980s and looked more promising, since they were “based” on brain architecture and used learning. The early simple NNs did appear able to solve some relatively narrow real-world problems, but didn’t seem to be advancing toward “intelligence”.

If you look at brain physiology, it seems apparent that one key to intelligence is the almost-unimaginable complexity of the human brain. Also, learning (as humans do) looks like the obvious approach to modeling intelligence in a computer.

The now-hot field of “Deep Learning”, built on massive convolutional neural nets, has shown some impressive results and demonstrates the utility of learning and complexity. However, DL's use of simplistic “neurons” and connectivity may ultimately limit its overall robustness and generality.
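To make the “simplistic neuron” criticism concrete, here is a minimal sketch (not TENSAV code; names are illustrative) of the artificial neuron that deep-learning models are built from: a weighted sum of inputs plus a bias, passed through a fixed nonlinearity.

```python
import math

def simple_neuron(inputs, weights, bias):
    """The 'simplistic' artificial neuron used in most deep learning:
    a weighted sum of inputs plus a bias, squashed by a fixed
    nonlinearity (here, the logistic sigmoid). It ignores the dendritic
    structure, spike timing, and neuromodulation of biological neurons.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With zero net input, the sigmoid output is exactly 0.5.
output = simple_neuron([0.0, 0.0], [1.0, 1.0], 0.0)
```

Everything a deep net “knows” lives in those weights and biases; the point of the contrast above is that biological networks carry structure this model simply cannot express.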

A more sophisticated approach is needed that is more like biological neural networks.