How does NeuroEvolution of Augmenting Topologies contribute to genetic machine learning?

Q:

How does NeuroEvolution of Augmenting Topologies (NEAT) contribute to genetic machine learning?

A:

NeuroEvolution of Augmenting Topologies (NEAT) contributes to genetic machine learning by providing an innovative model, based on the principles of genetic algorithms, that optimizes a neural network according to both its connection weights and its structure.

Genetic algorithms in general are artificial intelligence and machine learning models based on the principle of natural selection: in each iteration, the algorithm selects the best candidate solutions for a given need and uses them to produce the next generation. They belong to the broader category of "evolutionary algorithms," part of what professionals call the "evolutionist school" of machine learning, which is closely structured around biological evolutionary principles.
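As a rough illustration of that selection loop, the sketch below evolves a population of real-valued vectors toward a simple target. The target vector, population size, mutation rate and fitness function are all arbitrary choices made for the example, not part of NEAT itself.

```python
import random

TARGET = [0.0, 1.0, 0.5, -0.5]               # arbitrary goal vector for the example
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.1

def fitness(individual):
    # Higher is better: negative squared distance to the target.
    return -sum((a - b) ** 2 for a, b in zip(individual, TARGET))

def mutate(individual):
    # Nudge each gene with a small probability.
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in individual]

# Start from a random population.
population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Select the fittest half, then refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best individual:", max(population, key=fitness))
```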

NEAT itself is a Topology and Weight Evolving Artificial Neural Network (TWEANN): it optimizes both the network topology and the connection weights of the network. Subsequent versions and extensions of NEAT have adapted this general principle to specific uses, including video game content creation and the planning of robotic systems.
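A minimal sketch of what "evolving the topology" means in practice is shown below, assuming a simplified NEAT-style genome made of connection genes tagged with innovation numbers. The class and method names here are illustrative, not the original NEAT implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True
    innovation: int = 0   # historical marker NEAT uses to align genes during crossover

@dataclass
class Genome:
    num_nodes: int
    connections: list = field(default_factory=list)

    def mutate_weights(self, rate=0.8, scale=0.5):
        # Weight mutation: perturb existing connection weights.
        for c in self.connections:
            if random.random() < rate:
                c.weight += random.gauss(0, scale)

    def mutate_add_connection(self, innovation):
        # Structural mutation: wire up two previously unconnected nodes.
        a, b = random.sample(range(self.num_nodes), 2)
        if not any(c.in_node == a and c.out_node == b for c in self.connections):
            self.connections.append(ConnectionGene(a, b, random.uniform(-1, 1), True, innovation))

    def mutate_add_node(self, innovation):
        # Structural mutation: split an existing connection with a new hidden node.
        if not self.connections:
            return
        old = random.choice(self.connections)
        old.enabled = False
        new_node = self.num_nodes
        self.num_nodes += 1
        self.connections.append(ConnectionGene(old.in_node, new_node, 1.0, True, innovation))
        self.connections.append(ConnectionGene(new_node, old.out_node, old.weight, True, innovation + 1))
```

Because structural mutations like these grow networks gradually from minimal starting points, NEAT can search over architectures as well as weights.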

With tools like NeuroEvolution of Augmenting Topologies, artificial neural networks and similar technologies can evolve in some of the same ways that biological life has evolved on the planet. The difference is that these technologies can generally evolve far more quickly and in highly sophisticated ways.

Resources like the NeuroEvolution of Augmenting Topologies users group and the software FAQ can help build a fuller understanding of how NEAT works and what it means in the context of evolutionary machine learning. Essentially, by evolving the structure of a network along with its connection weights, NEAT can get human handlers of machine learning systems closer to their goals while eliminating much of the manual labor involved in setup. Traditionally, with simple feedforward neural networks and other early models, choosing a network's structure and tuning its weighted inputs relied heavily on human effort; with systems like NEAT, that work is automated to a high degree.
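To give a sense of how much of that setup is automated, here is a rough usage sketch based on the open-source neat-python library. The configuration file name, the XOR-style fitness function and the generation count are assumptions made for illustration; the library's configuration file, not hand-coded architecture, controls population size and mutation behavior.

```python
import neat

def eval_genomes(genomes, config):
    # Assign a fitness to each evolved network; here, how well it reproduces XOR.
    xor_inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
    xor_outputs = [0.0, 1.0, 1.0, 0.0]
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = 4.0
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo) ** 2

# The config file specifies population size, mutation rates and speciation settings.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat-config.ini")        # hypothetical config file path

population = neat.Population(config)
winner = population.run(eval_genomes, 50)      # evolve for up to 50 generations
print("Best genome:\n", winner)
```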
