Machine learning doesn't just model human brain activity – scientists are also using ML-driven technologies to look at the brain itself, and at the biological neurons on which these systems are modeled.
A Wired article covers ongoing efforts to look into the brain and identify the properties of individual neurons. Writer Robbie Gonzalez describes a 2007 effort that illustrates some of what's still on the cutting edge of machine learning development today.
In a way, these projects also show the labor-intensive nature of supervised machine learning. In supervised machine learning programs, the training data has to be carefully labeled in order to set the project up for success and accuracy.
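As a minimal sketch of what "labeled training data" means in practice, consider a toy supervised task. The features, labels and threshold rule below are invented for illustration – real projects like the ones described here involve far larger data sets and far more careful annotation:

```python
# Toy supervised learning: each training example pairs features with a
# human-assigned label. The model can only be as good as those labels.
# (Data and the simple threshold rule are invented for illustration.)

# Features: [brightness, size]; labels: 1 = target cell type, 0 = other
training_data = [
    ([0.9, 0.8], 1),
    ([0.8, 0.9], 1),
    ([0.2, 0.1], 0),
    ([0.1, 0.3], 0),
]

def train_threshold(data):
    """Learn a single brightness threshold separating the two labels."""
    pos = [x[0] for x, y in data if y == 1]
    neg = [x[0] for x, y in data if y == 0]
    return (min(pos) + max(neg)) / 2  # midpoint between the classes

def predict(threshold, features):
    """Classify a new example using the learned threshold."""
    return 1 if features[0] >= threshold else 0

t = train_threshold(training_data)
print(predict(t, [0.85, 0.7]))  # a bright example is classified as 1
```

The point of the sketch is the division of labor: humans supply the labels, and the "learning" step only extracts a decision rule from them – which is why annotation effort dominates these projects.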
Gonzalez describes the massive labor effort required to produce the kind of labeling these projects need: a team of summer students, graduate students and postdoctoral researchers working together on annotation. Molecular neuroscientist Margaret Sutherland explains how this data annotation helps to prepare the data set. The National Institute of Neurological Disorders and Stroke, of which Sutherland was the director, was one of the funders of the study.
Using a deep neural network, a team led by San Francisco neuroscientist Stephen Finkbeiner, along with experts at Google, examined images of cells with and without various types of fluorescent marker tags. The technology looked at individual parts of a neuron, such as axons and dendrites, and tried to distinguish various types of cells from one another, in a process that Finkbeiner and others called in silico labeling, or ISL.
This type of research can be particularly confusing to those who are new to machine learning. That's because machine learning and artificial intelligence are largely based on neural networks, which are themselves digital models of how neurons work in the human brain.
The artificial neuron, which is modeled on the biological neuron, has a set of weighted inputs, a transformation function and an activation function. Like a biological neuron, it takes in some form of data-driven input and returns an output. So it's a little bit ironic that scientists can use these biologically inspired neural networks to look at biological neurons.
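The three-part structure described above can be sketched in a few lines of code. The specific weights, bias and choice of a sigmoid activation here are illustrative assumptions, not the details of any particular network discussed in the article:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted inputs -> transformation (weighted sum) -> activation."""
    # Transformation function: weighted sum of the inputs plus a bias
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Activation function: sigmoid squashes the result into (0, 1)
    return 1 / (1 + math.exp(-z))

# Illustrative values only
output = artificial_neuron([0.5, 0.3], [0.8, -0.2], bias=0.1)
print(round(output, 3))  # prints 0.608
```

A deep neural network like the one used for in silico labeling stacks many such units into layers, but each unit follows this same weighted-sum-plus-activation pattern.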
In a way, this goes down the rabbit hole of recursive technology. But it also helps to speed up learning in the field, and it shows that neuroscience and electrical engineering are becoming very closely linked. In the opinion of some, we are approaching the singularity described by great IT mind Ray Kurzweil, in which the lines between humans and machines become steadily blurred. It's important to look at how scientists are applying these very powerful technologies to our world, to better understand how all of these new models function.