Evolving the best network
I have been evolving neural networks like there’s no tomorrow in an attempt to find out whether there are any architectural features that allow a network to exploit the presence of noise. I also refined the fitness function in order to get results I can be more confident in. The fitness function now takes three things into account: the optimum information transfer rate with respect to increasing noise, the difference between the optimum and the minimum, and the slope of the information transfer rate past the optimum. In short, the fitness function evaluates the network’s “best”, just how good its “best” is, and how quickly that best performance deteriorates under increasingly bad conditions. Below is a typical example of what the population’s evolutionary trajectory looks like.
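To make the three components concrete, here is a minimal sketch of such a fitness function, assuming we have already measured a network’s information transfer rate at a series of increasing noise levels. The function name, the weights, and the exact way the components are combined are illustrative assumptions, not the actual code used in these experiments.

```python
import numpy as np

def fitness(noise_levels, info_rates):
    """Score a network from its information-transfer-rate curve over noise.

    Three components (weights are illustrative placeholders):
      1. the peak rate            -- the network's "best"
      2. peak minus minimum       -- just how good its best is
      3. mean slope past the peak -- how quickly the best deteriorates
    """
    noise = np.asarray(noise_levels, dtype=float)
    rates = np.asarray(info_rates, dtype=float)

    peak_idx = int(np.argmax(rates))
    peak = rates[peak_idx]            # component 1: optimum transfer rate
    spread = peak - rates.min()       # component 2: optimum minus minimum

    # Component 3: average slope after the optimum. It is negative when
    # performance degrades; a shallower decline means the network copes
    # more gracefully with increasingly bad conditions.
    if peak_idx < len(rates) - 1:
        decline = (np.diff(rates[peak_idx:]) / np.diff(noise[peak_idx:])).mean()
    else:
        decline = 0.0

    return 1.0 * peak + 0.5 * spread + 0.5 * decline
```

A network with a high, wide peak and a gentle post-optimum slope scores best under this scheme, which matches the three criteria described above.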
Here is also the performance of all original epoch winners (i.e. the highest-scoring individual of each epoch).
Finally, here is the highest-scoring network of the very last epoch, along with its architecture.
As you can see, the network is a rather small one, only 14 neurons large, and it has rather sparse connectivity featuring feed-forward, recurrent, lateral and self-connections. An interesting observation is that the vast majority of the connections are too weak to make their post-synaptic target fire on their own.
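For reference, the four connection types can be tallied mechanically from a weight matrix once each neuron is assigned to a layer. This is a sketch under my own conventions (the layer assignment and function name are assumptions), using the taxonomy described above: feed-forward goes to a later layer, recurrent to an earlier one, lateral stays within a layer, and self-connections loop back to the same neuron.

```python
import numpy as np

def classify_connections(weights, layer):
    """Tally connection types in a recurrent network.

    weights[i, j] != 0 means a connection from neuron i to neuron j;
    layer[i] is the layer index of neuron i (input = 0, output = last).
    """
    counts = {"feed-forward": 0, "recurrent": 0, "lateral": 0, "self": 0}
    n = weights.shape[0]
    for i in range(n):
        for j in range(n):
            if weights[i, j] == 0:
                continue               # no connection here
            if i == j:
                counts["self"] += 1
            elif layer[i] < layer[j]:
                counts["feed-forward"] += 1
            elif layer[i] > layer[j]:
                counts["recurrent"] += 1
            else:
                counts["lateral"] += 1
    return counts
```

Run over the winning network, a tally like this makes the "sparse but varied" observation quantitative rather than visual.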
The most interesting result, however, is that not all of the above characteristics seem to be necessary for good performance in the presence of noise. Without drawing any conclusions just yet, I would say that a variety of features seems to matter more than any single type of connection or characteristic. This brings me to the next step in this set of experiments, which will be the tracking of features throughout an evolutionary run. A sort of digital comparative anatomy, if you will. I’ll track the appearance, disappearance and overall evolution of connectivity features in the population and in each epoch’s winner, and see if that sheds any more light.
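The tracking itself can be sketched as follows: walk over the sequence of epoch winners, extract a feature tally for each one, and assemble per-feature time series, padding with zeros for epochs where a feature is absent. The helper names and the dict-based representation are my own assumptions for illustration.

```python
def track_features(epoch_winners, extract):
    """Build per-feature trajectories across an evolutionary run.

    epoch_winners: one network per epoch, in order.
    extract: a function returning a {feature_name: count} dict for a
             network (e.g. a tally of connection types).
    Returns {feature_name: [count_epoch0, count_epoch1, ...]}.
    """
    trajectories = {}
    for epoch, net in enumerate(epoch_winners):
        for name, value in extract(net).items():
            # Features appearing for the first time get zeros for all
            # earlier epochs, so every series has one entry per epoch.
            trajectories.setdefault(name, [0] * epoch)
            trajectories[name].append(value)
        # Pad features that disappeared this epoch.
        for series in trajectories.values():
            if len(series) == epoch:
                series.append(0)
    return trajectories
```

Plotting these series side by side should show when a connectivity feature first arises, whether it persists, and whether its prevalence tracks fitness, which is exactly the comparative-anatomy question.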
After that, I intend to apply a few more sophisticated approaches in order to study the computational capabilities of the evolving networks, but that will be a whole other, and much more complicated chapter. The approaches I plan to use are presented in the papers below.
O. Sporns and G. Tononi, “Classes of network connectivity and dynamics,” Complexity, vol. 7, Sep. 2001, pp. 28–38.
M. Rubinov and O. Sporns, “Complex network measures of brain connectivity: Uses and interpretations,” NeuroImage, vol. 52, Sep. 2010, pp. 1059–1069.