Copyright © 2019 Savantics Inc. All rights reserved.

TENSAV: Evaluation of Neural Structures for Artificial Vision

TENSAV is the name for a suite of three large main programs and four smaller utility programs that are currently being developed and tested.

The name describes the basic function of the program suite. However, TENSAV is not yet another "deep learning" simulator. The program uses learning, but it is designed to provide a combination of features and capabilities that is not available in any existing simulator.

At this point the project consists of essentially complete program code for the three main programs, plus a 1000+ page illustrated programmer's Design Document. A User Manual will be derived from the Design Document once basic testing and debugging of the programs has been completed.


Development was delayed for about two months due to a significant change in the program structure. Initial testing and debugging have begun, but the programs are not yet sufficiently debugged to be declared at the "alpha" stage.


The About page gives a brief description of TENSAV, and the Background page describes the motivations for pursuing it. Additional details on the project will be released as testing and development continue.

Evaluator of Neural Structures for Artificial Vision

TENSAV is designed and optimized primarily for visual object recognition. However, it is intended to be adaptable to other types of sensory input (e.g., audio, tactile). In addition to image inputs, it can also accept sets of numerical values as input.

The plan is for TENSAV to be able to emulate complex neuronal networks on a desktop PC or workstation with acceptable training times. The conflicting requirements of high performance, large network sizes, and minimal memory usage necessitate algorithms and data structures that are very complex.
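
To give a concrete, though purely illustrative, sense of the kind of structure this involves, the sketch below stores a large network's synaptic connections in flat arrays (a compressed sparse row style layout) instead of per-neuron objects, trading code simplicity for memory efficiency. The class and field names are hypothetical and do not represent TENSAV's actual internal data structures.

    # Illustrative sketch only: a compact, CSR-style layout for the connections
    # of a large network. Names here are hypothetical, not TENSAV's own code.
    import numpy as np

    class SparseConnections:
        def __init__(self, n_neurons, targets_per_neuron):
            # targets_per_neuron: one integer array of target indices per source neuron
            counts = np.array([len(t) for t in targets_per_neuron], dtype=np.int64)
            # row_ptr[i]..row_ptr[i+1] brackets the outgoing connections of neuron i
            self.row_ptr = np.zeros(n_neurons + 1, dtype=np.int64)
            np.cumsum(counts, out=self.row_ptr[1:])
            self.targets = np.concatenate(targets_per_neuron).astype(np.int32)
            # one float32 weight per connection; no per-connection Python objects
            self.weights = np.zeros(len(self.targets), dtype=np.float32)

        def outgoing(self, i):
            # Return (target indices, weights) for neuron i's outgoing connections.
            lo, hi = self.row_ptr[i], self.row_ptr[i + 1]
            return self.targets[lo:hi], self.weights[lo:hi]

A layout like this keeps memory per connection down to a few bytes, which is what makes networks of realistic size feasible on a desktop machine.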

TENSAV can model neurons ranging from simple point cells up to complex variable-state models. Networks can have multiple layers of sources and nodes, with many connectivity options. As is typical, the network is trained by presenting repeated examples over multiple epochs using Hebbian-style learning. The hope is that the program's unique capabilities will allow it to use fewer training epochs than are typical for standard ANN programs.
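
As a generic illustration of what "Hebbian-style learning" over repeated examples looks like, the sketch below applies a plain Hebbian weight update with a small decay term each epoch. It is an assumed, textbook-style rule for illustration only, not TENSAV's actual learning rule.

    # Minimal sketch of a generic Hebbian-style epoch, not TENSAV's actual rule:
    # weights grow where pre- and post-synaptic activity coincide, and a small
    # decay term keeps them bounded over many epochs.
    import numpy as np

    def hebbian_epoch(weights, examples, learning_rate=0.01, decay=0.001):
        """weights: (n_out, n_in) array; examples: iterable of n_in input vectors."""
        for x in examples:
            y = weights @ x                            # post-synaptic activity (linear here)
            weights += learning_rate * np.outer(y, x)  # Hebbian term: reward co-activity
            weights -= decay * weights                 # decay prevents unbounded growth
        return weights

    # Example: train for several epochs on a small random dataset.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(4, 16))
    data = [rng.random(16) for _ in range(100)]
    for epoch in range(5):
        W = hebbian_epoch(W, data)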

Project Goals