Neural Network Thoughts: Robot Dreams

I’m not an Artificial Neural Network expert. I don’t have a degree and have only done self-study. That said, I’ve been researching neural networks for 16 years, and I have a few thoughts on the subject that I want to explore. Please leave a comment if you agree or disagree; I’d love to get as much information as I can.

Now that that’s out of the way, let’s proceed.

What do Artificial Neural Networks consist of?

Neural networks are a collection of neuron objects, and a neuron object consists of a few things. First, an activation gate: a math formula that tells the neuron whether or not it should process the input data at all. Second, a processing algorithm, which the neuron uses to manipulate the input passed to it. Third, a weight, which the processing algorithm uses to change the input. Finally, a transfer gate, which determines whether or not the neuron will output the data it has processed.
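To make that concrete, here is a minimal sketch in Python. The class, the threshold-style gates, and the sigmoid processing formula are illustrative assumptions on my part, not a standard; different network flavors do each of these pieces differently.

```python
import math

class Neuron:
    """A minimal sketch of the neuron object described above.
    The threshold gates and sigmoid processing are illustrative guesses."""

    def __init__(self, weight=1.0, activation_threshold=0.0, transfer_threshold=0.0):
        self.weight = weight                              # scales the input
        self.activation_threshold = activation_threshold  # activation gate cutoff
        self.transfer_threshold = transfer_threshold      # transfer gate cutoff

    def activation_gate(self, x):
        # Decides whether the neuron processes the input at all.
        return x > self.activation_threshold

    def process(self, x):
        # Processing algorithm: weight the input, squash it with a sigmoid.
        return 1.0 / (1.0 + math.exp(-self.weight * x))

    def transfer_gate(self, y):
        # Decides whether the processed value is passed on as output.
        return y > self.transfer_threshold

    def fire(self, x):
        # Full pipeline: gate in, process, gate out. None means "blocked".
        if not self.activation_gate(x):
            return None
        y = self.process(x)
        return y if self.transfer_gate(y) else None
```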

Different structures and neural network “flavors” have different methods for doing the above, but nearly every network structure includes each of those pieces.

What are the types of Neurons?

A node’s type is determined by where it receives input from and where it sends its output.

Input Nodes receive input from outside of the network and provide output to nodes within the network. Generally, input nodes have an activation function that is always true and a transfer function that is always true, and their processing algorithm is just a pass-through. In short, the input and output of an input node are generally the same.

Hidden Nodes / Processing Nodes take input from other nodes within the network and output to other nodes within the network. They may take input from the input nodes or from other hidden nodes, and they may output to hidden nodes or output nodes. These nodes are where the real work of a neural network gets done. The formulas they use and the weights they contain are all manipulated through the training process, which determines what their output will be and, eventually, what the output of the network will be.

Output Nodes take input from other nodes within the network and output to processes outside of the network. These nodes are sometimes pass-through nodes, but generally they have weights and processing algorithms of their own.
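Building on the hypothetical Neuron class above, here is how the three node types might differ in code. The pass-through input node follows directly from the description; treating hidden and output nodes as plain Neurons is my own simplification.

```python
class InputNode(Neuron):
    """Input node: both gates always open, processing is a pass-through,
    so output equals input, as described above."""
    def activation_gate(self, x):
        return True
    def process(self, x):
        return x
    def transfer_gate(self, y):
        return True

# Hidden and output nodes keep the weighted processing from Neuron.
# What makes them "hidden" or "output" is only what they connect to,
# which is a property of the network's wiring, not of the node class.
hidden = Neuron(weight=0.8)
output = Neuron(weight=1.2)
```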

Neural Network “Flavors” and What We’re Doing Wrong

We determine the “flavor” of a neural network by its node structure and by the math used to move data from the input nodes to the output nodes. Traditionally, neural networks run as one-off processes: information enters the input nodes, is processed by the processing nodes, and exits the output nodes. Processing ends when the output is found. Loops are used to batch-process data, but because they are handled outside of the network, they don’t change the fact that the “thought process” of a neural network has a defined beginning, middle, and end.
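As a sketch of that run-to-completion style, and assuming the hypothetical node classes above, a traditional forward pass might look like this. Summing every incoming signal is a simplification of real layer math; the point is only that the process starts, flows one way, and stops.

```python
def forward_pass(inputs, hidden_layer, output_layer):
    """One-off processing: data enters, flows through, exits. Done."""
    # Input nodes just pass their values through.
    signals = list(inputs)
    # Hidden layer: each node sums the incoming signals and fires.
    hidden_out = [(node.fire(sum(signals)) or 0.0) for node in hidden_layer]
    # Output layer: same pattern. Processing ends when this returns.
    return [(node.fire(sum(hidden_out)) or 0.0) for node in output_layer]
```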

I believe true artificial intelligence is possible. I also believe we are nowhere near it. True intelligence requires an infinitely looping system: the brain is constantly processing every thought it has ever had. Every experience, insight, sight, sound, etc. Everything you have ever done is trapped inside your head, looping ad infinitum.

Why, then, do we think we can achieve true artificial intelligence from a process that starts and stops in a single processing thread? Or are we even trying?

What I Envision

I picture a structure for neural networks that combines multiple inputs, separated hidden processing layers, and multiple outputs. I call this structure a Neural Web. Neural webs add one more value to the neuron object. This value plays into the transfer function to keep thoughts moving inside the hidden processing layers, while only activating the output nodes when necessary. I call this value a repeat (a truly original name, I know). It is a numerical value that diminishes over time, down to a minimum, allowing the full data of the nodes to keep moving through the network while minimizing the impact of that data as it ages.
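Here is a guess at how the repeat value might be bolted onto the neuron object. The decay rate and the floor value are invented numbers; the only point is that an echoed signal fades toward a minimum rather than vanishing entirely.

```python
class WebNeuron(Neuron):
    """Neuron with a 'repeat' value: its output keeps circulating in the
    hidden layers with diminishing strength, down to a floor, so old
    data stays in play but loses impact as it ages."""

    def __init__(self, *args, repeat=1.0, decay=0.5, floor=0.05, **kwargs):
        super().__init__(*args, **kwargs)
        self.repeat = repeat   # current strength of the echoed signal
        self.decay = decay     # fraction of strength kept each pass
        self.floor = floor     # minimum strength; data never fully dies

    def echo(self, y):
        # Scale the output by the repeat value, then diminish it.
        scaled = y * self.repeat
        self.repeat = max(self.repeat * self.decay, self.floor)
        return scaled

n = WebNeuron(weight=1.0)
for _ in range(4):
    y = n.fire(2.0)       # the signal keeps circulating...
    print(n.echo(y))      # ...but its echo fades: ~0.88, 0.44, 0.22, 0.11
```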

Now let’s look at the structure:

[Figure: Neural Network Web. A very basic example of what I mean.]

The green nodes are input, the red are output, and the blue are processing nodes, with three special processing nodes directly before the output acting as filters. There are four more special nodes, two in each network, that act as “communication” nodes between the different networks. These nodes are proxies for the lobes of a brain. They allow information to bubble between the audio and visual networks, letting the neural “web” learn that Person X is speaking while it hears a voice, and therefore that it is hearing Person X’s voice.
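And here is a rough wiring sketch of the web itself, assuming the hypothetical WebNeuron above. The sub-network sizes and the way the communication nodes mix signals are inventions for illustration only.

```python
# Two sub-networks (audio and visual) share "communication" nodes,
# so activity in one can bubble into the other.
audio_hidden  = [WebNeuron(weight=0.9) for _ in range(4)]
visual_hidden = [WebNeuron(weight=0.9) for _ in range(4)]
comm_nodes    = [WebNeuron(weight=1.0) for _ in range(4)]  # two per network

def step(audio_in, visual_in):
    # Each network processes its own input...
    a = sum((n.fire(audio_in) or 0.0) for n in audio_hidden)
    v = sum((n.fire(visual_in) or 0.0) for n in visual_hidden)
    # ...then the communication nodes mix the two signals, letting the
    # web associate a seen face with a heard voice.
    return sum((n.fire(a + v) or 0.0) for n in comm_nodes)
```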

Uses of this Structure

This is only one example. The structure would be useful in robotics, and possibly as an interface for a true intelligence. You can take nearly any number of inputs and generate a neural network with this same “neural web” structure, and with the right math, have it set up to learn. This structure allows self-learning by the simple fact that everything it ever does keeps processing in its brain, if only minimally.

Obviously I am being theoretical with that statement. I have never built this structure, nor do I know how it would work. I want to see it built, however.
