Software

My tool of choice is usually MATLAB: it's the de facto common language shared by neuroscientists and engineers. I have released a few software tools I hope you might find useful, including the following.

Electronic Hardware

I like to build electronics useful for neuroscientists. I built an electronic system for delivering auditory stimuli on a T maze in David M. Smith's lab, and I am currently building an electrode impedance meter that rejects noise extremely well. For now I am focusing on tools useful to the labs I am in, but at some point I might start trying harder to disseminate my designs.

Like my grandfather, I have a soft spot for analog electronics, but I'm also starting to take a shine to digital microcontrollers like the Arduino.

Tiny Cameras for Neuroscience

Also on the neuroscience hardware front, I would love to make a brain-implantable version of my tiny camera to measure calcium ion concentrations inside neurons. Existing indicator dyes such as Fura-2 AM change a neuron's fluorescence when it fires a spike, because spiking raises the intracellular calcium ion concentration. Each stained neuron produces a discernible optical signal for approximately 10 milliseconds. If we could image this activity while recording electrically from a large ensemble of neurons, we could determine which neurons are firing when: spike-sorting aided by optical information from the calcium signal. The tiny camera I invented would make a perfect optical probe for this project because of its small volume, 100,000 times smaller than that of the smallest focusing camera, meaning that hybrid electrical and optical recordings could be taken in a behaving organism using just one chip. The information-to-invasiveness ratio of this system would be unprecedentedly high, and the type of data it provides is exactly the kind needed to ask how collections of neurons can be so smart.
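To make the spike-sorting idea concrete, here is a toy MATLAB sketch of one way the optical signal could disambiguate spikes. Everything here is illustrative: the variable names, the simple baseline-versus-post-spike comparison, and the assumption that we already have one fluorescence trace per candidate neuron are placeholders, not an analysis pipeline I have actually built.

    % Toy illustration: assign each electrically detected spike to the neuron
    % whose calcium signal rises the most in the window just after the spike.
    % F          : nNeurons x nSamples matrix of fluorescence traces (hypothetical)
    % spikeTimes : spike times from the electrical recording, in sample indices
    % win        : number of samples spanning the post-spike calcium transient
    function labels = assign_spikes_optically(F, spikeTimes, win)
        labels = zeros(numel(spikeTimes), 1);
        for i = 1:numel(spikeTimes)
            t = spikeTimes(i);
            pre  = mean(F(:, max(1, t - win):t), 2);     % baseline fluorescence
            post = mean(F(:, t:min(end, t + win)), 2);   % post-spike fluorescence
            [~, labels(i)] = max(post - pre);            % neuron with the largest jump
        end
    end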

Currently I lack the personnel and the funding to develop this technology. However, find me a well-supported tenure-track position in a research university, and watch out!

CT Scanning

I'm in the early stages of looking into how we could use math tricks to lower CT scan dose while improving resolution. We have shown that L1 regularization can be used to improve the dose/fidelity trade-off in the generation of CT scan images, but the methods we've used so far are too computationally slow to be practical. The best algorithms prior to the in-crowd algorithm take approximately an hour to compute each CT slice: too slow to be clinically useful. The in-crowd algorithm, being much faster than any other basis pursuit denoising (BPDN) solver, especially on sparse problems with high mutual coherence, should be the best way of generating L1-regularized CT scan images. Moreover, it looks like there are ways to tune the in-crowd algorithm specifically for CT scanning to reduce computation times further. If I had the time and the money, I bet that within 5 years we could achieve 10-second reconstructions of 1024 x 1024 CT images with much lower noise and dose than current methods.
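For concreteness, the L1-regularized reconstruction described above is typically posed as basis pursuit denoising: minimize 0.5*||A*x - b||^2 + lambda*||x||_1, where A is the system (projection) matrix, b is the measured sinogram, and lambda trades data fidelity against sparsity. The MATLAB sketch below solves this with plain iterative soft-thresholding (ISTA), shown only to make the objective concrete; it is not the in-crowd algorithm, which reaches the same solution far faster, and the variable names are illustrative.

    % Sketch: L1-regularized reconstruction (BPDN) via ISTA -- illustration only,
    % not the in-crowd algorithm discussed above.
    %   minimize 0.5*||A*x - b||_2^2 + lambda*||x||_1
    % A      : system (projection) matrix mapping image pixels to sinogram bins
    % b      : measured sinogram, as a column vector
    % lambda : regularization weight trading data fidelity against sparsity
    function x = ista_bpdn(A, b, lambda, nIter)
        L = norm(A)^2;                % Lipschitz constant of the smooth term's gradient
        x = zeros(size(A, 2), 1);     % start from the all-zero image
        for k = 1:nIter
            g = A' * (A * x - b);                       % gradient of the data-fit term
            x = soft_threshold(x - g / L, lambda / L);  % shrinkage (proximal) step
        end
    end

    function y = soft_threshold(v, t)
        % Soft-thresholding: proximal operator of t*||v||_1
        y = sign(v) .* max(abs(v) - t, 0);
    end

For a 1024 x 1024 image, A would in practice be applied as matrix-free forward- and back-projection operators rather than stored explicitly.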

I have begun working with Masoud Hashemi at the University of Toronto to investigate faster algorithms for CT reconstruction.