Below is a great part of my work: tens of lines of Python code that overlap with the last year of my PhD and all the research I’ve done on my own since then. I cleaned it up a bit and provided some rudimentary “documentation” so that people will know what they can do with it. Some of this was really difficult to find (when I looked for it), some was impossible to find, so I wrote it myself. The last four procedures, smooth, _rand_sparse, sprand and sprandn, are not mine, so I cannot take credit for them, but I include them for completeness’ sake. Everything else is more or less “mine”. This coincides with an awesome update on WordPress: the embedding of Gists. Consequently, I saw this as a great opportunity to put my code up here before I start doing nasty things to it for my next piece of research. Take it, use it, repurpose it, and let me know if something isn’t working, needs fixing, or if you did something cool with it. Enjoy.
A collection of functions for the generation of neural networks and the setup of various experiments.
Visualising the structural and functional connectivity of a neural network. This is a simple function that calculates the covariance matrix of a neural network based on its activity. It then reorders the covariance matrix to obtain a depiction of functional connectivity, and based on that reordering it also rearranges the connectivity matrix to obtain a clearer picture of the structural connectivity. Put simply, it brings structurally and functionally connected neurons closer together so that we can see them. All it needs is the network’s connectivity matrix and a sample of its activity from which to calculate the covariance matrix.
A simple function for obtaining a spiking neural network from a binary genome and evaluating its information transmission capabilities with respect to stochasticity.
A simple function for converting a binary genotype into the architectural characteristics of a spiking neural network. It takes as input L, the number of layers in the network (for the particular case of layered networks), gn, the number of genes in the genome, gl, the length of each binary gene, and gen, the current individual’s genotype. It returns the network’s characteristics: N, the number of neurons per layer, C, the connection strength for each set of connections, and dn, the connectivity density for each set of connections.
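To give you an idea of what this looks like, here is a minimal sketch of such a decoding. The gene ordering and the value ranges below are illustrative assumptions, not the exact mapping I use:

```python
import numpy as np

def decode_genome(L, gn, gl, gen):
    """Sketch: decode a binary genotype into network characteristics.

    L   -- number of layers in the network
    gn  -- number of genes in the genome (assumed here to be L + 2*L*L)
    gl  -- length of each binary gene
    gen -- flat binary genotype of length gn * gl

    Returns N (neurons per layer), C (connection strengths) and
    dn (connectivity densities).
    """
    genes = np.asarray(gen).reshape(gn, gl)
    # Read each gene as an unsigned integer, normalised to [0, 1].
    weights = 2 ** np.arange(gl)[::-1]
    values = genes.dot(weights) / (2.0 ** gl - 1)
    # First L genes: neurons per layer (assumed range 2-20).
    N = (2 + values[:L] * 18).astype(int)
    # Next L*L genes: strengths in [-10, 10], allowing inhibitory sets.
    C = (values[L:L + L * L] * 20 - 10).reshape(L, L)
    # Last L*L genes: connectivity densities in [0, 1].
    dn = values[L + L * L:].reshape(L, L)
    return N, C, dn
```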
A simple function for evaluating a spiking neural network’s information transmission capabilities with respect to noise amplitude. This function takes cf_stor as input, a vector of stored coding fractions, each obtained at a different noise level. It removes NaNs, calculates the coding fraction opt at the optimum noise level, finds the index of that optimum, and fits the trends of the pre- and post-optimum coding fraction trajectories to obtain their slopes. It can then use any of those values to compute a fitness score, depending on some fitness function, and returns it.
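In sketch form, with the final combination of the three quantities being just one example of a fitness function:

```python
import numpy as np

def fitness_from_cf(cf_stor):
    """Sketch: score a coding-fraction-versus-noise curve.

    cf_stor -- coding fractions, one per tested noise level.
    """
    cf = np.asarray(cf_stor, dtype=float)
    cf = cf[~np.isnan(cf)]                 # remove failed estimates
    opt_idx = int(np.argmax(cf))           # index of the optimum noise level
    opt = cf[opt_idx]                      # coding fraction at the optimum

    # Slopes of linear fits to the pre- and post-optimum trajectories.
    pre, post = cf[:opt_idx + 1], cf[opt_idx:]
    pre_slope = np.polyfit(np.arange(pre.size), pre, 1)[0] if pre.size > 1 else 0.0
    post_slope = np.polyfit(np.arange(post.size), post, 1)[0] if post.size > 1 else 0.0

    # Example score: a high optimum, a large rise above the minimum,
    # and a gentle post-optimum decline are all rewarded.
    return opt + (opt - cf.min()) - abs(post_slope)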
A simple function for stimulus estimation and evaluation. The function takes x and y as input where x and y are both signals (in this particular case an input signal and the response of a neural network) and it returns cf which is the coding fraction between the input signal x and its estimate Iest.
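The measure itself is the standard coding fraction: one minus the RMS reconstruction error divided by the standard deviation of the stimulus. A minimal sketch, written as if the estimate Iest has already been produced (in the full function it comes from the Wiener-Kolmogorov filter documented below):

```python
import numpy as np

def coding_fraction(x, Iest):
    """cf = 1 - RMS(x - Iest) / std(x): 1 is a perfect reconstruction,
    0 means the estimate does no better than the signal mean."""
    x = np.asarray(x, dtype=float)
    Iest = np.asarray(Iest, dtype=float)
    rms_error = np.sqrt(np.mean((x - Iest) ** 2))
    return 1.0 - rms_error / np.std(x)
```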
A simple function for simulating a single Izhikevich spiking neuron model. This function simulates a single stochastic Izhikevich neuron. It takes T, D and I as input where T is the simulation length in ms (positive integer), D is the noise amplitude (sigma of Gaussian distribution) and I is the input signal. It returns firings which is the neural response (a binary sequence).
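A sketch of the model, using the standard Izhikevich equations with the textbook regular-spiking parameters, Euler-integrated at 1 ms steps, and a Gaussian term of amplitude D added at every step (the defaults here are not necessarily the values I use):

```python
import numpy as np

def izhikevich_neuron(T, D, I, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Sketch: a single stochastic Izhikevich neuron.

    T -- simulation length in ms
    D -- noise amplitude (sigma of the Gaussian term)
    I -- input signal of length T

    Returns a binary spike train of length T.
    """
    v, u = -65.0, -65.0 * b
    firings = np.zeros(T, dtype=int)
    for t in range(T):
        noise = D * np.random.randn()
        v += 0.04 * v ** 2 + 5 * v + 140 - u + I[t] + noise
        u += a * (b * v - u)
        if v >= 30.0:              # spike threshold
            firings[t] = 1
            v, u = c, u + d        # reset after a spike
    return firings
```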
A simple function for stimulus estimation/reconstruction in a neural system using a Wiener-Kolmogorov filter. This function takes two 1-by-N arrays as input: the input signal presented to the neuron or neural net, and the neural response. It also takes two integers, nfft and tstep, where nfft is the number of data points used in each block for the FFT and tstep is the sampling time step. nfft must be even, and a power of 2 is most efficient. It creates a WK filter that is then convolved with the neural response in order to produce an estimate of the input that produced it. It returns the input signal estimate, the zero-centered input signal and the filter.
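A sketch of the construction using scipy.signal’s Welch-method spectral estimators; the one-/two-sided and conjugation conventions may differ from my exact implementation:

```python
import numpy as np
from scipy import signal

def wk_filter(x, y, nfft, tstep):
    """Sketch: Wiener-Kolmogorov stimulus reconstruction.

    x     -- input signal presented to the neuron or net (1-by-N)
    y     -- neural response (1-by-N)
    nfft  -- block size for the spectral estimates (even, ideally 2**k)
    tstep -- sampling time step

    Returns the stimulus estimate, the zero-centred input and the filter.
    """
    x0 = x - np.mean(x)                    # zero-centre both signals
    y0 = y - np.mean(y)
    fs = 1.0 / tstep

    # Cross-spectrum S_xy and response power spectrum S_yy (two-sided).
    _, Sxy = signal.csd(y0, x0, fs=fs, nperseg=nfft, return_onesided=False)
    _, Syy = signal.welch(y0, fs=fs, nperseg=nfft, return_onesided=False)

    # Optimal linear filter in the frequency domain, back to time domain.
    h = np.fft.fftshift(np.fft.ifft(Sxy / Syy).real)

    # Convolve the (acausal) filter with the response to estimate the input.
    Iest = np.convolve(y0, h, mode='same')
    return Iest, x0, h
```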
A simple function for evaluating a spiking neural network. This is a network of Izhikevich neurons arranged with the architecture produced by netgen. It takes as input the architecture produced by netgen, the input signal produced by input_gen, the length of the simulation T and the noise level D. It returns a matrix of 0s and 1s, where 1 signifies a spike and 0 the lack of neural activity.
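In sketch form, this is essentially a Python port of the well-known vectorised Izhikevich network simulation, with all neurons sharing regular-spiking parameters for simplicity:

```python
import numpy as np

def run_network(W, Iin, T, D, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Sketch: K coupled stochastic Izhikevich neurons.

    W   -- K-by-K connectivity (e.g. from the netgen builder below)
    Iin -- K-by-T external input (e.g. from the input generator)
    T   -- simulation length in ms
    D   -- noise amplitude (sigma of the Gaussian term)

    Returns a K-by-T binary matrix; 1 marks a spike.
    """
    K = W.shape[0]
    v = -65.0 * np.ones(K)
    u = b * v
    firings = np.zeros((K, T), dtype=int)
    for t in range(T):
        fired = v >= 30.0
        firings[fired, t] = 1
        v[fired] = c               # reset the neurons that spiked
        u[fired] += d
        # Synaptic input: sum of weights from neurons that just fired,
        # plus the external signal and the noise term.
        I = Iin[:, t] + W[:, fired].sum(axis=1) + D * np.random.randn(K)
        v += 0.04 * v ** 2 + 5 * v + 140 - u + I
        u += a * (b * v - u)
    return firings
```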
A simple input generation function. This function generates an analogue signal for the neural net. It draws values from a Gaussian distribution and then smooths them using a moving window method. The input Ia is the input amplitude, T is the length of the simulation in ms, K is the number of neurons in the population, ind holds the indices of the neurons the input will be presented to, smin is the length of the smoothing window, and norm is the option of whether or not to bring the entire input signal above zero. It outputs a matrix with the signal values in all rows with index in ind and zeros for every other neuron.
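A minimal sketch, assuming a flat moving-average window for the smoothing (the original may smooth differently):

```python
import numpy as np

def input_gen(Ia, T, K, ind, smin, norm=False):
    """Sketch: smoothed Gaussian input for a subset of a population.

    Returns a K-by-T matrix with the signal on the rows in ind and
    zeros everywhere else.
    """
    raw = np.random.randn(T)
    window = np.ones(smin) / smin          # flat moving-average window
    sig = Ia * np.convolve(raw, window, mode='same')
    if norm:
        sig -= sig.min()                   # bring the whole signal above zero
    Iin = np.zeros((K, T))
    Iin[ind, :] = sig
    return Iin
```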
A generic network connectivity architecture. This is a simple function that generates a variety of network connectivities and consequently architectures anywhere from a three-layer feedforward network to a neural pool of dynamics. It supports recurrent connections, lateral and self-connections, feedforward connections (obviously), any degree of sparsity (0 to 100% connectivity) and both excitatory and inhibitory connections. This function takes three arguments: N, C and D. N is a
1 by n list of integers larger than 1, where n is an integer from 1 to 3 and signifies the number of layers
in the architecture of the network. Each value in N is the number of neurons in each layer. C and D are L by L lists where L is the number of layers in the architecture. C stores the synaptic strength of each set of connections and D stores the sparsity of each set of connections. This function builds
subsets of connections with individual strengths and sparsities which connect any one layer to another. The subsets are then concatenated into a big matrix which describes the connectivity and the overall architecture of the network.
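A sketch of the block-construction idea; which index is pre- and which postsynaptic is an assumption on my part:

```python
import numpy as np

def netgen(N, C, D):
    """Sketch: block connectivity matrix for up to three layers.

    N -- list of layer sizes (1 to 3 layers)
    C -- L-by-L synaptic strengths, C[i][j] for connections from
         layer j to layer i
    D -- L-by-L connection densities in [0, 1]

    Each block is filled at the requested density and strength, then
    all blocks are concatenated into one sum(N)-by-sum(N) matrix.
    """
    L = len(N)
    rows = []
    for i in range(L):
        row = []
        for j in range(L):
            block = np.zeros((N[i], N[j]))
            mask = np.random.rand(N[i], N[j]) < D[i][j]
            block[mask] = C[i][j]
            row.append(block)
        rows.append(np.hstack(row))
    return np.vstack(rows)
```

For example, netgen([20, 10, 5], C, D) with 3-by-3 lists C and D returns a 35-by-35 connectivity matrix.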
Smooth the data using a window with requested size. This method is based on the convolution of a scaled window with the signal. The signal is prepared by introducing reflected copies of the signal (with the window size) in both ends so that transient parts are minimized in the beginning and end part of the output signal.
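For reference, the core of this routine, which is essentially the well-known SciPy Cookbook recipe (so, again, not mine), looks roughly like this:

```python
import numpy as np

def smooth(x, window_len=11, window='hanning'):
    """Reflect the signal at both ends, then convolve it with a
    normalised window: 'flat' for a moving average, otherwise one of
    numpy's window functions such as 'hanning' or 'hamming'."""
    s = np.r_[x[window_len - 1:0:-1], x, x[-2:-window_len - 1:-1]]
    if window == 'flat':
        w = np.ones(window_len)
    else:
        w = getattr(np, window)(window_len)
    return np.convolve(w / w.sum(), s, mode='valid')
```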
Used to generate an M-by-N sparse matrix of any chosen density.
Used to build a sparse uniformly distributed random matrix.
Used to build a sparse normally distributed random matrix.
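A rough equivalent of these three helpers, built on scipy.sparse.coo_matrix; these are stand-ins, not the originals:

```python
import numpy as np
from scipy import sparse

def _rand_sparse(m, n, density):
    """Sketch: positions of about density * m * n nonzero entries."""
    nnz = max(0, min(int(m * n * density), m * n))
    idx = np.random.choice(m * n, size=nnz, replace=False)
    return idx // n, idx % n, nnz           # row, column, count

def sprand(m, n, density):
    """Sparse m-by-n matrix with uniformly distributed nonzeros."""
    i, j, nnz = _rand_sparse(m, n, density)
    return sparse.coo_matrix((np.random.rand(nnz), (i, j)), shape=(m, n))

def sprandn(m, n, density):
    """Sparse m-by-n matrix with normally distributed nonzeros."""
    i, j, nnz = _rand_sparse(m, n, density)
    return sparse.coo_matrix((np.random.randn(nnz), (i, j)), shape=(m, n))
```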
If you remember from my last post, I was trying to find a decent way to solve a system of Ordinary Differential Equations (ODEs). Scipy has a lot of issues with Runge-Kutta methods, so I decided to opt for the simpler, more generic ODE solver, odeint(). The great thing about this setup, even though it is not as accurate as I would want, is that I can switch the system of ODEs for another one (given some restrictions, of course) relatively easily. This means that I can switch from simulating a pendulum to a pole-cart to an even more complex system with a few lines of code. I don’t have to build a simulation and the solution of its ODEs from scratch.
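To make the “few lines of code” claim concrete, here is a minimal damped pendulum under odeint (the dynamics and constants are generic textbook choices); swapping systems really does come down to replacing the derivative function and the initial state:

```python
import numpy as np
from scipy.integrate import odeint

def pendulum(state, t, g=9.81, l=1.0, damping=0.1):
    """Damped pendulum: state = [theta, omega]."""
    theta, omega = state
    return [omega, -damping * omega - (g / l) * np.sin(theta)]

t = np.linspace(0, 10, 1000)                  # 10 s of simulated time
trajectory = odeint(pendulum, [np.pi / 4, 0.0], t)
# Swapping in another system (e.g. a cart-pole) only means replacing
# the derivative function and the initial state vector.
```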
Why would I want to do that? Neurons of course. It’s always about neurons. I want to test how well Spiking Neural Networks (SNNs) will fare against some physical problems. I will find out what the best SNNs are for controlling systems with varying degrees of freedom using Genetic Algorithms (GAs). And then I will dissect them. I will study their topology, their connectivity properties, their information processing capabilities, everything and anything that can clearly demonstrate why some SNNs would do better than others as control systems.
This is going to be a long project. The first step was to find a sort of “physics engine” for my simulated “real-world problems”. Now that is done, I will revamp my neuroevolution Python code to work for this set of experiments and hopefully work better. After that I should be able to start evolving SNNs that will solve physical problems and then I will have to start collecting tools for analysing them. As always, I will show you everything I’m doing as I slowly work on this project. Stay with me!
While heavily evolving neural networks, I decided that I needed a clever and simple way to depict what a network looks like. So I started researching, and in my dusty pile of papers (by dusty I mean well organised, by pile I mean my digital archive maintained by two different kinds of software, Zotero and Mendeley; papers still means papers) I found a really cool paper (Sporns & Tononi, 2001) about Classes of Network Connectivity and Dynamics. Yay.
This very excellent paper contains a great idea about how to visualise a neural network’s structural and functional connectivity. Structural connectivity, obviously, is the sum total of all actual connections between neurons, while functional connectivity is the totality of connections that make neurons part of similar, or the same, processes. I decided to test this idea and below is a sample.
Both of these pictures are produced by reordering the covariance matrix of the neural network using a clustering algorithm. What this does is bring functionally related neurons closer together, giving us a depiction of the functional circuitry in the network. Using this transformation, we can also reorder the structure of the network and get a visualisation of what that functional circuitry looks like in terms of structure. Here is a simple version of the Python code I’m using to do the matrix reordering.
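This sketch clusters neurons on their covariance profiles with SciPy’s hierarchical clustering and applies the resulting leaf order to both matrices; the exact clustering scheme is a matter of implementation choice:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def reorder_by_function(W, activity):
    """Sketch: reorder structural connectivity by functional similarity.

    W        -- connectivity matrix (neurons x neurons)
    activity -- activity sample (neurons x timesteps)

    Returns the reordered covariance and connectivity matrices.
    """
    cov = np.cov(activity)
    # Cluster neurons on their covariance profiles and take the
    # dendrogram's leaf order.
    order = leaves_list(linkage(cov, method='average'))
    return cov[np.ix_(order, order)], W[np.ix_(order, order)]
```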
It doesn’t look like much right now, but if I use the right criterion for evolution, I could perhaps see lobes and sub-populations forming around specific functionality. It could also help in discerning how the structure and function of the neural networks that benefit most from noise differ from all the rest.
I have to admit though that even if it is hard to understand what the above pictures are showing, the symmetry is enthralling.
 O. Sporns and G. Tononi, “Classes of network connectivity and dynamics,” Complexity, vol. 7, Sep. 2001, pp. 28-38.
I have been evolving neural networks like there’s no tomorrow in an attempt to find out whether there are any architectural features that allow a neural network to exploit the presence of noise. I also refined the fitness function in order to get results I can be more confident in. The fitness function takes three things into account: the optimum information transfer rate with respect to increasing noise, the difference between optimum and minimum, and the slope of the information transfer rate post-optimum. In short, the fitness function evaluates the network’s “best”, just how good its “best” is, and how quickly its best performance deteriorates under increasingly bad conditions. Below is a typical example of what the population’s evolutionary trajectory looks like.
Here is also the performance of all original epoch winners (i.e. the highest-scoring individuals).
Finally, here is the highest-scoring network of the very last epoch and its architecture.
As you can see, the network is a rather small one, only 14 neurons large, and it has a rather sparse connectivity featuring feed-forward, recurrent, lateral and self-connections. An interesting observation is that the vast majority of the connections are not strong enough to make their post-synaptic target fire on their own.
The most interesting result however, is the fact that not all of the above characteristics seem to be necessary for a network with good performance in terms of noise. Without drawing any conclusions just yet, I would have to say that more than anything, a variety of features seems to be the best feature a network could have rather than any single one type of connection or characteristic. This brings me to the next step in this set of experiments which will be the tracking of features throughout an evolutionary run. A sort of digital comparative anatomy if you will. I’ll track the appearance, disappearance and overall evolution of connectivity features in the population and each epoch’s winner and see if that sheds any more light.
After that, I intend to apply a few more sophisticated approaches in order to study the computational capabilities of the evolving networks, but that will be a whole other, and much more complicated chapter. The approaches I plan to use are presented in the papers below.
O. Sporns and G. Tononi, “Classes of network connectivity and dynamics,” Complexity, vol. 7, Sep. 2001, pp. 28-38.
M. Rubinov and O. Sporns, “Complex network measures of brain connectivity: Uses and interpretations,” NeuroImage, vol. 52, Sep. 2010, pp. 1059-1069.
The title refers to the branch of science that evolves neural networks for various purposes. For the past few days, this is exactly what I have been trying to do. As I mentioned before, I’m trying to evolve spiking neural networks which will demonstrate positive phenomena, such as Stochastic Resonance (SR), as a consequence of only their architecture. Put simply, I want to find out what structural characteristics make spiking neural networks more robust to increased noise levels and which ones help them maximise their information transmission capabilities.
I had to dust off my Genetic Algorithm programming skills a bit, and I fought with plenty of bugs, but I think I’ve done it. I wrote a simple genetic algorithm which evolves spiking neural networks that demonstrate SR and general robustness to noise. Below is the evolutionary trajectory of the population fitness:
Also, here is the Python code for the GA. It’s very simplistic, but it works, and it can form the basis for a more sophisticated approach. What I want to do now is to run a few longer experiments and study what the successful individuals, in terms of fitness, look like. I want to find out what makes neural networks robust to noise and whether they can exploit it on their own, without complicated algorithms. More on that as the results become available.
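Stripped to its essentials, such a GA looks like the sketch below. The selection, crossover and mutation settings here are illustrative, and evaluate() stands for the whole decode-simulate-score pipeline (build the network from the genome, run it across noise levels, score the coding-fraction curve):

```python
import numpy as np

def tournament(pop, fitness, k=3):
    """Return the fittest of k randomly chosen individuals."""
    idx = np.random.randint(len(pop), size=k)
    return pop[idx[np.argmax(fitness[idx])]]

def evolve(pop_size, genome_len, evaluate, generations=100,
           p_cross=0.7, p_mut=0.01):
    """Sketch of a minimal generational GA over binary genomes."""
    pop = np.random.randint(0, 2, (pop_size, genome_len))
    for _ in range(generations):
        fitness = np.array([evaluate(ind) for ind in pop])
        new_pop = [pop[np.argmax(fitness)].copy()]          # elitism
        while len(new_pop) < pop_size:
            p1 = tournament(pop, fitness)
            p2 = tournament(pop, fitness)
            child = p1.copy()
            if np.random.rand() < p_cross:                  # one-point crossover
                cut = np.random.randint(1, genome_len)
                child[cut:] = p2[cut:]
            flips = np.random.rand(genome_len) < p_mut      # bit-flip mutation
            child[flips] ^= 1
            new_pop.append(child)
        pop = np.array(new_pop)
    return pop
```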
I have now translated all of my code from Matlab to Python and I’m testing it. The first thing I did, and this was also the first thing I did when I first set up these experiments for my PhD work, was to verify that I can observe positive effects of noise in a neuron or neural network. The most famous observable positive effect of stochasticity is Stochastic Resonance (SR). This phenomenon describes an increase in information transmission in neural systems with the addition of some noise. Past a certain point, any further addition of noise decreases the rate of information transmission. This means that there is an optimum, non-zero noise level at which a neuron or neural network maximises its information transmission capabilities. We want to take advantage of phenomena like this one.
The reason I needed to verify that I can observe this phenomenon in my experimental setup is because I’m using a methodology known as stimulus estimation to measure information transmission online. If I can observe this phenomenon online, I can optimise noise levels with respect to information transmission online. And that is very good. This is in fact what I did for my PhD and I will post my original results as soon as my thesis is defended.
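The measurement loop itself is simple. Stitching together the sketches from the code collection above, it amounts to something like this (the input below is a stand-in for the generated analogue signal):

```python
import numpy as np

# Sweep the noise amplitude and record the coding fraction at each
# level; izhikevich_neuron, wk_filter and coding_fraction are the
# sketches from the code collection above.
T = 5000
x = np.random.randn(T)                     # stand-in input signal
noise_levels = np.linspace(0.0, 10.0, 21)
cf_stor = []
for D in noise_levels:
    y = izhikevich_neuron(T, D, x)
    Iest, x0, _ = wk_filter(x, y.astype(float), nfft=512, tstep=1)
    cf_stor.append(coding_fraction(x0, Iest))
# For a weak (subthreshold) input, the cf_stor curve typically peaks
# at a non-zero noise level -- that peak is the stochastic resonance.
```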
So without further ado, here is what SR looks like for a single neuron:
And here is what SR looks like for a whole spiking neural network:
Notice that the two curves aren’t smooth search spaces, so optimising based on this measure isn’t perfect, but I’m working on that. In my PhD work, I achieved optimisation with a simple gradient ascent algorithm, but during the last set of experiments, where I was trying to optimise noise for whole neural networks, I observed different behaviours in the neural populations in terms of noise optimisation depending on various structural characteristics like recurrent or lateral connections. So I decided that, while I wait for the date of my viva to be arranged, I would try to determine what kind of network architecture takes most advantage of its surrounding noise. After all, neural networks have evolved to exist and function in a noisy environment and they are clearly coping very well, so how do they do it?
In short, I’m going to use Genetic Algorithms (GA) to evolve spiking neural networks which will take advantage of their surrounding stochasticity as best they can and try and determine which structural characteristics are best for this task. Right now most of the experimental setup is ready and I’m currently working on the genotype-to-phenotype mapping. As soon as that is done, I’ll have to design the actual GA which shouldn’t be too difficult since I’m aiming for a simple, minimal one. Stay tuned, I’ll post the code to the GA next and after that, hopefully, some results about how neural networks cope with noise.
As I mentioned previously, I’ve been working on translating my code from Matlab into Python, and I was having some trouble with stimulus estimation (aka signal reconstruction). Well, my problems are no more. Fortunately, there is a whole library in Python called Scipy, which includes a subpackage called signal that will cover most of your needs for signal processing in Python. I’ve been using this library extensively and I’ve found so far that it is excellent for scientific programming, or at least for my work. It definitely eases the transition from Matlab to Python.
Using this library I can build a Wiener-Kolmogorov filter which allows me to estimate the input that was presented to a spiking neuron model given only the neural response. Below is a picture of what the WK filter looks like and an example of stimulus estimation using this filter.
As you can see, it works pretty well. There is even a very good way to quantify how well it works, but I won’t go into that right now. This was the hardest part so far, and I think I can safely say that I am now almost as proficient with Python as I am with Matlab, at least as far as scientific programming goes.
So, to help others that may need it and maybe to feel a bit proud of myself and my work, here is a simple function for stimulus estimation/reconstruction in a neural system using a Wiener-Kolmogorov filter.
The last few days I’ve been working on translating my Matlab code into Python so that I can continue some of the research I did for my PhD. I mailed my thesis just a few days ago so any time soon I should be hearing about the date of the viva. Consequently, I have some free time and I decided that learning Python while continuing the work I did for my PhD would be a good idea, hence the translation. All was fine and the translation was slowly but steadily moving forward until I came to a really hard piece of work. Signal reconstruction. The basic idea behind signal reconstruction is that if you have a system that is presented with a signal and produces an output, you can use the output to predict the original signal. This concept can be used for neurons and neural networks too.
In my work, I did this by presenting a neuron or neural network with a signal and then creating a Wiener-Kolmogorov filter based on the neural activity obtained. I then used that filter to create an estimate of what the input was. The degree of commonality between the actual input signal and its estimate gives me an approximation of how much information was transferred through the system. It works and it works well. In fact, I am writing a paper about it. However, creating a WK filter in Python is not as easy as it is in Matlab. The calculation of the filter itself is a tricky piece of work, and when you don’t know a programming language fluently, as is the case with me and Python, it becomes even trickier. I intend to crack this problem, and when I do, I’ll post the Python code here for public scrutiny; maybe it will help someone else too.
F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek, Spikes: Exploring the Neural Code. The MIT Press, 1999.
I started my career as a computational neuroscientist when I chose to model a small network of basal ganglia neurons involved in the feeding mechanism of a pond snail as a final year project for my neuroscience degree. I did pretty well (it got a prize for being the best final year project that year) considering I had never coded seriously before and I had only dealt with neuroscience from a purely biological standpoint. Of course I had great help. My supervisor was (probably still is) a computational neuroscientist and a good one at that and he taught me how to code.
I dare say I’ve improved since then. A lot. Throughout my career in academia I’ve used Matlab with the occasional foray into something else. C/C++ I always found uninteresting and lacking in artistry. I had similar feelings for other languages mainly because I demanded things that they couldn’t give me. Occasionally, I fell in love with other, elegant and powerful languages like Lisp/Scheme, although I couldn’t really make them do what I wanted to do, probably due to lack of experience. That was until I met Python.
Python does what I want it to do because it can do whatever you want it to do. It’s supported and grows with the open source movement. It is supported and worked on by a growing subgroup of computational neuroscientists and a growing number of neuroscientific tools have been and are being developed using it. It’s perfect for an independent scientist like me.
So I am slowly making the transition from full Matlab code to full Python code. I’m learning really quickly and I’m slowly reconstructing all of the work I did for my PhD in Python. I will put it up bit by bit, for various reasons. I want it to be accessible to everyone; this way anyone can use it, and instead of being a sterile piece of code on my hard drive it can be alive. This will also help me: people who read my code can point out mistakes or ways to improve it and maybe save me time and trouble (oh, how much time I’ve wasted discovering mistakes later than I should have). Also, I want the code out there because it’s not mine. I write it and work on it, but I did not create it in a vacuum; it belongs to me as much as it belongs to everyone.
So without further ado, here is the first piece of code that I wrote in Python that will help me in some of my future work. It makes my life a little easier, especially when it comes to generating a network architecture on the fly without caring what it looks like exactly, or when you need to generate a few hundred (or more) networks and study their average behaviour. It may have other uses I haven’t thought of yet, so go ahead and give it a try. If you have any comments on it please let me know; I really do care and I want to become better at coding in Python.