An Intro to Machine Learning for IT Pros

Types of Neural Networks

Another thing to understand about machine learning is that neural networks are built in different ways to achieve different results.

The feedforward neural network is the most basic type of network and relatively easy to understand. Here the data flows in one direction through the layers of artificial neurons, from an input layer through one or more hidden layers to an output layer, with no loops or feedback. Other architectures add mechanisms such as feedback connections and specialized layer designs to get different kinds of functionality.
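To make that concrete, here is a minimal sketch of a feedforward pass in plain Python with NumPy. The layer sizes, random weights and activation function are arbitrary choices for illustration, not part of any particular framework, and nothing here is trained.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied after the hidden layer.
    return np.maximum(0, x)

# Arbitrary sizes for illustration: 4 inputs -> 8 hidden neurons -> 3 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def feedforward(x):
    # Data flows in one direction: input -> hidden layer -> output layer.
    h = relu(x @ W1 + b1)
    return h @ W2 + b2

print(feedforward(np.array([0.5, -1.0, 2.0, 0.1])))
```

Training would then adjust W1, W2 and the biases so the outputs match known answers, but the one-way flow of data shown above is what makes the network "feedforward."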

One of the most popular flavors of neural network is the convolutional neural network (CNN), which is made specifically for tasks like image processing and computer vision. The key pieces of the design are convolutional layers, which slide small filters across an image to detect features such as edges and textures, and pooling layers, which shrink the resulting feature maps so the network keeps only the most useful information for assessing and classifying images.

Tutorials and demos show how convolutional neural networks pass filtered pieces of an image through successive layers of the network, building up from simple features to whole objects, which is what gives computers their impressive visual recognition capabilities. CNNs power many of the image recognition systems in use today.
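As a rough illustration of what "filtering" means here, the sketch below slides a single filter (kernel) over a stand-in grayscale image and then downsamples the result with max pooling. A real CNN stacks many such filters and learns their values during training; the hand-picked vertical-edge kernel used here is just an assumed example.

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over the image, computing a weighted sum at each position.
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    # Downsample by keeping the strongest response in each size x size block.
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    trimmed = feature_map[:h, :w]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.default_rng(1).random((8, 8))   # stand-in for a grayscale image
edge_kernel = np.array([[-1, 0, 1],               # illustrative vertical-edge filter
                        [-1, 0, 1],
                        [-1, 0, 1]])

features = convolve2d(image, edge_kernel)
pooled = max_pool(features)
print(pooled.shape)  # a smaller feature map summarizing where edges were found
```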

Another specialized type of neural network is the self-organizing neural network, often called a self-organizing map. Many of these networks are based on the work of Teuvo Kohonen, a Finnish researcher who pioneered the idea that a network can organize itself, clustering unlabeled input data onto a map without being told what the "right" answers are.
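Here is a minimal sketch of that Kohonen-style self-organization, assuming a small one-dimensional grid of units and NumPy. Real self-organizing maps are usually two-dimensional and shrink the learning rate and neighborhood over time; this is only meant to show the basic update.

```python
import numpy as np

rng = np.random.default_rng(2)
grid_size, input_dim = 10, 3
weights = rng.random((grid_size, input_dim))   # one weight vector per map unit

def train_step(x, learning_rate=0.5, radius=2):
    # 1. Find the unit whose weights are closest to the input (best matching unit).
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # 2. Pull the BMU and its grid neighbors a little closer to the input.
    for i in range(grid_size):
        distance = abs(i - bmu)
        if distance <= radius:
            influence = np.exp(-distance**2 / (2 * radius**2))
            weights[i] += learning_rate * influence * (x - weights[i])

# Feed unlabeled data points; the map organizes itself with no target outputs.
for _ in range(1000):
    train_step(rng.random(input_dim))
print(weights.round(2))
```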

Another common type of neural network is the recurrent neural network. This interesting type of network preserves a memory of what it has already seen, which makes it a natural fit for sequential data such as text, speech and time series. A very basic way to think of this is that the recurrent neural network is a stateful network, one that carries a hidden state forward from step to step as information moves through the process.
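That "stateful" idea boils down to a single recurrence: each step's result depends on the current input and on a hidden state carried over from the previous step. The sketch below uses made-up sizes and untrained random weights, just to show where the memory lives.

```python
import numpy as np

rng = np.random.default_rng(3)
input_dim, hidden_dim = 4, 6
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden -> hidden (the "memory")
b_h = np.zeros(hidden_dim)

def run_sequence(sequence):
    h = np.zeros(hidden_dim)          # hidden state starts out empty
    for x in sequence:
        # The new state mixes the current input with what the network remembers so far.
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    return h                          # final state summarizes the whole sequence

sequence = rng.random((5, input_dim)) # five time steps of made-up data
print(run_sequence(sequence))
```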

In addition to all of the above, there are neat new kinds of neural networks coming down the pike. Researchers are working on a set of "third-generation" networks, known as spiking neural networks, that add an element of time: the artificial neurons communicate through discrete impulses, and when those impulses fire matters as much as whether they fire, which adds a whole new dimension to the research and the work.

Related temporal designs, such as the echo state network and the liquid state machine, take a sort of "black box" approach. These are more opaque, but they do machine learning in a different way. Experts describe some of them with the image of a solid object thrown into a pool of liquid: instead of knowing exactly how every artificial neuron is wired, engineers read the "ripples" that an input creates in a large, fixed pool of neurons and treat those ripples as the function of the network. They are operating more blindly, but they get different kinds of sophisticated results. Another way to put it is that instead of carefully tuning every connection, these networks use random, untrained weights inside the pool and learn only how to interpret its responses.
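One way to see the "random weights" point is an echo state network sketch: the recurrent "reservoir" weights are generated randomly and never trained, and only a simple linear readout is fitted to the reservoir's ripple-like responses. The sizes, scaling factor and toy sine-wave task below are arbitrary assumptions for illustration, not a production reservoir-computing implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_inputs, n_reservoir = 1, 50

# Random, fixed weights -- these are never trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
W_res = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # scale so the "echoes" stay stable

def collect_states(inputs):
    # Drive the reservoir with the input signal and record its responses (the "ripples").
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states[t] = x
    return states

# Toy task: predict the next value of a sine wave.
signal = np.sin(np.arange(300) * 0.1)
states = collect_states(signal[:-1])
targets = signal[1:]

# Only this linear readout is trained (ordinary least squares).
W_out, *_ = np.linalg.lstsq(states, targets, rcond=None)
predictions = states @ W_out
print("training error:", np.mean((predictions - targets) ** 2))
```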


Written by Justin Stoltzfus

Justin Stoltzfus is a freelance writer for various Web and print publications. His work has appeared in online magazines including Preservation Online, a project of the National Historic Trust, and many other venues.
