Question

Why is so much of machine learning behind the scenes – out of sight of the common user?

Answer

This fundamental question about machine learning touches on many different aspects of how these complicated programs work, and what role they play in today’s economy.

One of the easiest ways to explain the lack of prominence of machine learning systems is that they are easy to hide. These back-end systems lurk behind recommendation engines and more, allowing consumers to forget that there’s any machine learning going on at all. For all the end users know, some humans could be carefully selecting choices instead of a neural network running sophisticated algorithms.

Beyond that, there’s also a lack of systematic education on machine learning, partly because the field is so new, and partly due to a lack of investment in STEM training as a whole. It seems that as a society we’re generally OK with selecting key individuals to learn about technology in any great detail, and to become the “technological priests” of our population. A broader strategy would be to include detailed machine learning and technology instruction in high schools as a matter of course.

Another problem is the lack of accessible language around machine learning. Jargon abounds — from the names of the algorithms themselves, to the activation functions that power the artificial neurons making up neural networks. Another good example is the labeling of layers in a convolutional neural network — padding, striding, max pooling and more. Few people outside the field really understand what these terms mean, and that makes machine learning all the more inscrutable.
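
To make one of those terms concrete, here is a minimal, illustrative sketch (not from the original article) of what “max pooling” does in plain terms: slide a small window across a grid of numbers and keep only the largest value in each window. The function name and example values below are hypothetical.

import numpy as np

def max_pool_2x2(feature_map):
    # "Max pooling" just means: look at each non-overlapping 2x2 block of
    # numbers and keep only the biggest one, shrinking the grid by half.
    h, w = feature_map.shape
    trimmed = feature_map[: h - h % 2, : w - w % 2]  # drop any odd edge
    blocks = trimmed.reshape(trimmed.shape[0] // 2, 2, trimmed.shape[1] // 2, 2)
    return blocks.max(axis=(1, 3))

example = np.array([[1, 3, 2, 0],
                    [4, 2, 1, 5],
                    [0, 1, 3, 2],
                    [2, 6, 0, 1]])
print(max_pool_2x2(example))  # [[4 5]
                              #  [6 3]]

Nothing mysterious is happening here; the jargon describes fairly simple bookkeeping on grids of numbers.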

The algorithms themselves have become couched in the parlance of mathematicians. As in classical and modern physics, students of the discipline are expected to master the art of reading complex equations, rather than being given the functions in plain language. That, too, makes machine learning information much less accessible.
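
As a hedged illustration of that gap, here is one of the most common formulas in machine learning, the gradient descent update rule (often written as θ ← θ − α∇L(θ)), restated in plain language in the comments. The function name and numbers are purely illustrative.

def gradient_descent_step(weights, gradients, learning_rate=0.01):
    # In plain language: nudge every weight a small step in the direction
    # that reduces the error, where "small step" is set by the learning rate.
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

# Two weights, with gradients indicating which way the error grows.
print(gradient_descent_step([0.5, -1.2], [0.3, -0.8]))  # roughly [0.497, -1.192]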

Finally, there’s the “black box” problem, where even the engineers who build machine learning programs don’t fully understand how they work. As we have scaled up the complexity and capability of these algorithms, we have sacrificed transparency and easy access to evaluation and analysis. With this in mind, there is a growing movement toward explainable AI — toward keeping operational machine learning and artificial intelligence accessible, and keeping a handle on how these programs work in order to avoid unpleasant surprises in a production environment.
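
As one hedged example of what explainable AI tooling can look like in practice, the sketch below uses scikit-learn’s permutation importance, which measures how much a trained model’s score drops when each input column is randomly shuffled. The dataset and model here are illustrative stand-ins, not anything referenced in this article.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record how much the test score falls;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, drop in top_features:
    print(f"{name}: {drop:.3f}")

Techniques like this don’t make the underlying model any simpler, but they give engineers a handle on which inputs are driving its decisions.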

All of this helps to explain why, although machine learning is burgeoning in today’s tech world, it’s often “out of sight, out of mind.”

Justin Stoltzfus
Contributor

Justin Stoltzfus is an independent blogger and business consultant assisting a range of businesses in developing media solutions for new campaigns and ongoing operations. He is a graduate of James Madison University. Stoltzfus spent several years as a staffer at the Intelligencer Journal in Lancaster, Penn., before the merger of the city’s two daily newspapers in 2007. He also reported for the twin weekly newspapers in the area, the Ephrata Review and the Lititz Record. More recently, he has cultivated connections with various companies as an independent consultant, writer and trainer, collecting bylines in print and Web publications, and establishing a reputation…