Below is a great part of my work: tens of lines of Python code that overlap with the last year of my PhD and all the research I’ve done on my own since then. I cleaned it up a bit and wrote some rudimentary documentation so that people will know what they can do with it. Some of this was really difficult to find (when I looked for it) and some was impossible to find, so I wrote it myself. The last four procedures, smooth, _rand_sparse, sprand and sprandn, are not mine, so I cannot take credit for them, but I include them for completeness’ sake. Everything else is more or less “mine”. This coincides with an awesome update on WordPress: the embedding of Gists. Consequently, I saw this as a great opportunity to put my code up here before I start doing nasty things to it for my next piece of research. Take it, use it, repurpose it, and let me know if something isn’t working, needs fixing, or if you did something cool with it. Enjoy.
A collection of functions for the generation of neural networks and the setup of various experiments.
Visualising the structural and functional connectivity of a neural network. This is a simple function that calculates the covariance matrix of a neural network based on its activity. It then reorders the covariance matrix to obtain a depiction of functional connectivity and, based on that reordering, also rearranges the connectivity matrix to obtain a clearer picture of the structural connectivity. Put simply, it rearranges both matrices so that structurally and functionally connected neurons appear close together. All it needs is the network’s connectivity matrix and a sample of its activity from which to calculate the covariance matrix.
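Since the gist itself is embedded separately, here is a minimal sketch of the idea. The names reorder_connectivity, W and activity are mine, and sorting by the leading eigenvector of the covariance matrix is just one reasonable way to do the reordering:

```python
import numpy as np

def reorder_connectivity(W, activity):
    """Minimal sketch: reorder functional and structural connectivity.

    W        : (K, K) connectivity matrix (hypothetical name).
    activity : (K, T) sample of network activity, one row per neuron.
    """
    # Covariance of the activity serves as the functional connectivity.
    cov = np.cov(activity)
    # One simple reordering: sort neurons by their loading on the leading
    # eigenvector of the covariance matrix (an assumption; any
    # clustering-based ordering would serve the same purpose).
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvecs[:, -1])
    # Apply the same permutation to both matrices so that structurally and
    # functionally connected neurons end up next to each other.
    cov_reordered = cov[np.ix_(order, order)]
    W_reordered = W[np.ix_(order, order)]
    return cov_reordered, W_reordered, order
```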
A simple function for obtaining a spiking neural network from a binary genome and evaluating its information transmission capabilities with respect to stochasticity.
A simple function for converting a binary genotype into the architectural characteristics of a spiking neural network. It takes as input L, the number of layers in the network (for the particular case of layered networks), gn, the number of genes in the genome, gl, the length of each binary gene, and gen, the current individual’s genotype. It returns the network’s characteristics: N, the number of neurons per layer, C, the connection strength for each set of connections, and dn, the connectivity density for each set of connections.
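A minimal sketch of how such a decoding might look. The gene layout and the scaling ranges below are assumptions for illustration, not necessarily those of my actual function:

```python
import numpy as np

def netgen_from_genome(L, gn, gl, gen):
    """Decode a binary genotype into network characteristics (sketch)."""
    # gen is a flat binary sequence of gn genes, each gl bits long.
    genes = np.asarray(gen).reshape(gn, gl)
    # Decode each gene as an unsigned integer and normalise to [0, 1].
    weights = 2 ** np.arange(gl)[::-1]
    vals = genes @ weights / (2 ** gl - 1)
    # Assumed layout: first L genes -> neurons per layer, next L*L ->
    # connection strengths, last L*L -> connectivity densities
    # (so gn must equal L + 2*L*L under this assumption).
    N = 1 + np.round(vals[:L] * 99).astype(int)      # e.g. 1..100 neurons
    C = vals[L:L + L * L].reshape(L, L) * 2 - 1      # strengths in [-1, 1]
    dn = vals[L + L * L:].reshape(L, L)              # densities in [0, 1]
    return N, C, dn
```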
A simple function for evaluating a spiking neural network’s information transmission capabilities with respect to noise amplitude. This function takes as input cf_stor, a vector of stored coding fractions, each obtained at a different noise level. It removes NaNs, calculates the coding fraction at the optimum noise level opt, finds the index of the optimum noise level, and computes the trends of the pre- and post-optimum coding fraction trajectories and their slopes. It can then use any of those values to compute a fitness score, depending on some fitness function, and returns it.
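Something along these lines, where the final fitness expression is just one example of a fitness function one could plug in:

```python
import numpy as np

def fitness_from_cf(cf_stor):
    """Sketch of the evaluation described above (assumed fitness function)."""
    cf = np.asarray(cf_stor, dtype=float)
    cf = cf[~np.isnan(cf)]                 # remove NaNs
    opt_idx = int(np.argmax(cf))           # index of the optimum noise level
    opt = cf[opt_idx]                      # coding fraction at the optimum
    # Slopes of the pre- and post-optimum trajectories via linear fits.
    pre, post = cf[:opt_idx + 1], cf[opt_idx:]
    pre_slope = np.polyfit(np.arange(pre.size), pre, 1)[0] if pre.size > 1 else 0.0
    post_slope = np.polyfit(np.arange(post.size), post, 1)[0] if post.size > 1 else 0.0
    # One possible fitness: reward a high optimum and a shallow post-optimum
    # decay (i.e. robustness to extra noise). The weighting is an assumption.
    return opt - abs(post_slope)
```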
A simple function for stimulus estimation and evaluation. The function takes x and y as input, where x and y are both signals (in this particular case an input signal and the response of a neural network), and it returns cf, the coding fraction between the input signal x and its estimate Iest.
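For reference, the usual definition of the coding fraction, which I am assuming here, is one minus the root-mean-square estimation error normalised by the standard deviation of the signal:

```python
import numpy as np

def coding_fraction(x, x_est):
    """Coding fraction between a signal and its estimate.

    cf = 1 means perfect reconstruction; cf <= 0 means the estimate does
    no better than the signal's mean.
    """
    x = np.asarray(x, dtype=float)
    x_est = np.asarray(x_est, dtype=float)
    rmse = np.sqrt(np.mean((x - x_est) ** 2))
    return 1.0 - rmse / np.std(x)
```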
A simple function for simulating a single stochastic Izhikevich spiking neuron model. It takes T, D and I as input, where T is the simulation length in ms (a positive integer), D is the noise amplitude (the sigma of a Gaussian distribution) and I is the input signal. It returns firings, the neural response (a binary sequence).
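A minimal self-contained version, using the standard Izhikevich equations with regular-spiking parameters (the defaults a, b, c, d below are assumptions; the actual function may use different ones):

```python
import numpy as np

def izhikevich(T, D, I, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Single stochastic Izhikevich neuron, 1 ms time step (sketch)."""
    v, u = c, b * c                        # membrane potential and recovery
    firings = np.zeros(T, dtype=int)
    for t in range(T):
        noise = D * np.random.randn()      # Gaussian noise, sigma = D
        # Two 0.5 ms half-steps for numerical stability, as in
        # Izhikevich's reference implementation.
        for _ in range(2):
            v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I[t] + noise)
        u += a * (b * v - u)
        if v >= 30.0:                      # spike threshold
            firings[t] = 1
            v, u = c, u + d                # reset after the spike
    return firings
```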
A simple function for stimulus estimation/reconstruction in a neural system using a Wiener-Kolmogorov filter. This function takes two 1-by-N arrays as input: the input signal presented to the neuron or neural net, and the neural response. It also takes two integers, nfft and tstep, where nfft is the number of data points used in each block for the FFT and tstep is the sampling time step. nfft must be even, and a power of 2 is most efficient. It creates a WK filter that is then convolved with the neural response in order to produce an estimate of the input that produced that response. It returns the input signal estimate, the zero-centred input signal and the filter.
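A sketch of how such a filter can be built with scipy.signal (my reading of the description above; the actual gist may differ in detail):

```python
import numpy as np
from scipy import signal

def wk_filter(x, y, nfft, tstep):
    """Wiener-Kolmogorov stimulus reconstruction (sketch)."""
    fs = 1.0 / tstep                       # sampling frequency from the time step
    x = np.asarray(x, float) - np.mean(x)  # zero-centre the input signal
    y = np.asarray(y, float) - np.mean(y)  # zero-centre the response
    # Cross- and auto-spectra estimated with Welch's method.
    _, Syx = signal.csd(y, x, fs=fs, nperseg=nfft)
    _, Syy = signal.welch(y, fs=fs, nperseg=nfft)
    H = Syx / Syy                          # optimal filter in the frequency domain
    h = np.fft.fftshift(np.fft.irfft(H))   # impulse response (acausal filter)
    # Convolve the response with the filter to estimate the input.
    x_est = np.convolve(y, h, mode='same')
    return x_est, x, h
```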
A simple function for evaluating a spiking neural network. This is a network of Izhikevich neurons arranged in the architecture produced by netgen. It takes as input the architecture produced by netgen, the input signal produced by input_gen, the length of the simulation T and the noise level D. It returns a matrix of 0s and 1s, where 1 signifies a spike and 0 the absence of neural activity.
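A vectorised sketch of what such a network simulation might look like, reusing the Izhikevich update from above (the synaptic coupling term is an assumption):

```python
import numpy as np

def netsim(W, I, T, D, a=0.02, b=0.2, c=-65.0, d=8.0):
    """K coupled stochastic Izhikevich neurons (sketch).

    W : (K, K) connectivity matrix (e.g. from netgen).
    I : (K, T) input matrix (e.g. from input_gen).
    """
    K = W.shape[0]
    v = np.full(K, c)
    u = b * v
    firings = np.zeros((K, T), dtype=int)
    for t in range(T):
        fired = v >= 30.0                  # neurons that spiked this step
        firings[fired, t] = 1
        v[fired] = c                       # reset fired neurons
        u[fired] += d
        # External input + synaptic input from fired neurons + noise.
        Itot = I[:, t] + W @ fired.astype(float) + D * np.random.randn(K)
        for _ in range(2):                 # two 0.5 ms half-steps
            v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + Itot)
        u += a * (b * v - u)
    return firings
```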
A simple input generation function. This function generates an analogue signal for the neural net. It draws values from a Gaussian distribution and then smooths them using a moving-window method. The input Ia is the input amplitude, T is the length of the simulation in ms, K is the number of neurons in the population, ind holds the indices of the neurons the input will be presented to, smin is the length of the smoothing window and norm is the option of whether or not to shift the entire input signal above zero. It outputs a matrix with the signal values in all rows indexed by ind and zeros for every other neuron.
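A minimal sketch, assuming a simple moving-average window for the smoothing:

```python
import numpy as np

def input_gen(Ia, T, K, ind, smin, norm=False):
    """Gaussian input smoothed with a moving-average window (sketch)."""
    raw = Ia * np.random.randn(T)                 # raw Gaussian signal
    window = np.ones(smin) / smin                 # moving-average window
    sig = np.convolve(raw, window, mode='same')   # smooth the raw signal
    if norm:
        sig = sig - sig.min()                     # shift the signal above zero
    I = np.zeros((K, T))
    I[ind, :] = sig                               # present only to neurons in ind
    return I
```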
A generic network connectivity architecture. This is a simple function that generates a variety of network connectivities, and consequently architectures, anywhere from a three-layer feedforward network to a fully recurrent neural pool. It supports recurrent connections, lateral and self-connections, feedforward connections (obviously), any degree of sparsity (0 to 100% connectivity) and both excitatory and inhibitory connections. This function takes three arguments: N, C and D. N is a 1-by-n list of integers larger than 1, where n is an integer from 1 to 3 signifying the number of layers in the architecture of the network. Each value in N is the number of neurons in the corresponding layer. C and D are L-by-L lists, where L is the number of layers in the architecture: C stores the synaptic strength of each set of connections and D stores the sparsity of each set of connections. This function builds subsets of connections, each with its own strength and sparsity, connecting any one layer to another. The subsets are then concatenated into one big matrix that describes the connectivity, and hence the overall architecture, of the network.
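A sketch of how such a block-wise construction might look (which axis corresponds to source and which to target is my assumption):

```python
import numpy as np

def netgen(N, C, D):
    """Build a full connectivity matrix from per-layer-pair blocks (sketch)."""
    L = len(N)
    K = sum(N)
    offsets = np.cumsum([0] + list(N))     # where each layer starts in the matrix
    W = np.zeros((K, K))
    for i in range(L):                     # source layer
        for j in range(L):                 # target layer
            # A block of connections from layer i to layer j with its own
            # strength C[i][j] and density D[i][j].
            mask = np.random.rand(N[j], N[i]) < D[i][j]
            W[offsets[j]:offsets[j + 1], offsets[i]:offsets[i + 1]] = C[i][j] * mask
    return W
```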
Smooth the data using a window of the requested size. This method is based on the convolution of a scaled window with the signal. The signal is prepared by introducing reflected copies of itself (of window size) at both ends, so that transients are minimised at the beginning and end of the output signal.
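For reference, this is essentially the well-known SciPy Cookbook smoother; the version below is reproduced from memory, so check the original for edge cases (note that the output is slightly longer than the input):

```python
import numpy as np

def smooth(x, window_len=11, window='hanning'):
    """Smooth a 1D signal by convolution with a scaled window (sketch)."""
    if window_len < 3:
        return x
    # Reflect the signal at both ends to minimise boundary transients.
    s = np.r_[x[window_len - 1:0:-1], x, x[-2:-window_len - 1:-1]]
    if window == 'flat':                       # moving average
        w = np.ones(window_len)
    else:
        w = getattr(np, window)(window_len)    # e.g. np.hanning, np.hamming
    return np.convolve(w / w.sum(), s, mode='valid')
```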
Used to generate an M-by-N sparse matrix of any chosen density.
Used to build a sparse uniformly distributed random matrix.
Used to build a sparse normally distributed random matrix.
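These three can be sketched in a few lines with scipy.sparse; the version below is my own reconstruction, not necessarily the original code:

```python
import numpy as np
from scipy import sparse

def _rand_sparse(m, n, density):
    """Random m-by-n sparsity pattern of the given density (sketch)."""
    nnz = int(round(m * n * density))
    idx = np.random.choice(m * n, size=nnz, replace=False)
    rows, cols = np.unravel_index(idx, (m, n))
    return sparse.csr_matrix((np.ones(nnz), (rows, cols)), shape=(m, n))

def sprand(m, n, density):
    """Sparse matrix with uniformly distributed nonzero values."""
    A = _rand_sparse(m, n, density)
    A.data = np.random.rand(A.nnz)
    return A

def sprandn(m, n, density):
    """Sparse matrix with normally distributed nonzero values."""
    A = _rand_sparse(m, n, density)
    A.data = np.random.randn(A.nnz)
    return A
```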
As I mentioned previously, I’ve been working on translating my code from Matlab into Python, and I was having some trouble with stimulus estimation (aka signal reconstruction). Well, my problems are no more. Fortunately, there is a whole library in Python called SciPy, which includes a module called signal that will cover most of your needs for signal processing in Python. I’ve been using this library extensively and have found it excellent for scientific programming, or at least for my work. It definitely eases the transition from Matlab to Python.
Using this library I can build a Wiener-Kolmogorov filter which allows me to estimate the input that was presented to a spiking neuron model given only the neural response. Below is a picture of what the WK filter looks like and an example of stimulus estimation using this filter.
As you can see, it works pretty well. There is even a very good way to quantify how well it works, but I won’t go into that right now. This was the hardest part so far, and I think I can safely say that I am now almost as proficient with Python as I am with Matlab, at least as far as scientific programming goes.
So, to help others that may need it and maybe to feel a bit proud of myself and my work, here is a simple function for stimulus estimation/reconstruction in a neural system using a Wiener-Kolmogorov filter.