Deep Learning: How Enterprises Can Avoid Deployment Failure


The enterprise is now fully engaged with the deployment of artificial intelligence (AI) in its many forms. From machine learning (ML) to natural language processing (NLP) to artificial neural networks (ANN), the race is on to get a jump on the next evolutionary leap in digital technology.

The broad impact that AI will have on business processes and the business model itself cannot be overstated, with many observers predicting entirely new market opportunities from the advanced level of service that AI brings to users.

One of the key advantages that AI delivers is its ability to absorb data from its environment, interpret it autonomously, and then alter itself to improve productivity or even set new goals for itself.

This form of “learning” consists of two broad categories at the moment: ML and deep learning (DL). (Also read: What is the difference between deep learning and machine learning?)

Of the two, DL is the more advanced because it employs a hierarchical approach to data analysis rather than a linear approach, giving it the ability to draw highly abstract, almost human-like conclusions from complex data sets.
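As a rough illustration of that hierarchy, here is a minimal sketch, assuming PyTorch, in which each layer consumes the previous layer's output so the representation grows more abstract level by level; the layer sizes are arbitrary.

```python
# A minimal sketch, assuming PyTorch: each layer operates on the previous
# layer's output, building progressively more abstract representations.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),   # low-level features computed from raw inputs
    nn.ReLU(),
    nn.Linear(32, 16),   # mid-level combinations of those features
    nn.ReLU(),
    nn.Linear(16, 2),    # high-level, task-specific conclusion
)

x = torch.randn(8, 64)   # a batch of 8 illustrative records
print(model(x).shape)    # torch.Size([8, 2])
```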

Plug and Train and Play

Naturally, implementing DL in the enterprise is not as simple as booting up a new software application. Not only does deep learning represent a paradigm shift in the way knowledge work is performed, it also creates an all-new member of the business team, the intelligent system itself, which must be coached, managed and even rewarded just like a human employee.


As can be expected, this creates a wide range of pitfalls for the enterprise to navigate in order to produce successful outcomes from what is likely to be a substantial and ongoing investment in DL.

According to Suman Nambiar, head of strategy, partner alliances, and offerings at Mindtree, the single biggest mistake enterprises make with DL — with any form of AI, in fact — is deploying the technology first and then deciding what to do with it later.

In an interview with TechRepublic, Nambiar pointed out that intelligent systems, unlike conventional software, don't start performing their desired function right out of the box: they must be trained with the appropriate methodologies and applied to the proper use cases.

Without a clear vision of what they hope to get from AI tools like deep learning, organizations will find themselves with an expensive, unwieldy apparatus that serves no useful purpose and can, in fact, diminish performance and harm the business model.

Because of this need to train DL tools, access to and conditioning of the proper data has also emerged as a key pain point for most enterprises. Even small organizations are churning out reams of data on a daily basis these days, most of it unstructured and spread out over multiple disparate platforms.

As Analytics India's Vishal Chawla notes, new skillsets will need to be developed among the human workforce in order to first capture the data and then separate out the noise, so that only relevant information is applied to any given process.
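As a rough illustration, the filtering step might look like the following Python sketch, where the file and column names are hypothetical stand-ins for an organization's actual sources.

```python
# A minimal sketch of a noise-filtering step; file and column names are
# hypothetical stand-ins for an organization's actual data sources.
import pandas as pd

# Gather records from disparate platforms into one frame.
raw = pd.concat([
    pd.read_json("crm_export.json"),
    pd.read_csv("support_tickets.csv"),
])

deduped = raw.drop_duplicates()                            # drop repeated records
complete = deduped.dropna(subset=["customer_id", "text"])  # require key fields
relevant = complete[complete["text"].str.len() > 20]       # discard near-empty noise

relevant.to_parquet("training_corpus.parquet")
```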

The tendency to simply let AI do its thing runs strong in most organizations that have deployed the technology, but in fact it requires constant governance to ensure it is not drawing false conclusions from inaccurate or incomplete data.
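One concrete form such governance can take (an assumption here, not something the article's sources prescribe) is an automated drift check that compares recent model outputs against a trusted baseline and flags the data for human review when they diverge. A minimal sketch:

```python
# A minimal sketch of an automated drift check, assuming predictions are
# logged to the (hypothetical) .npy files below; an alarm here should
# trigger a human audit of the incoming data.
import numpy as np
from scipy.stats import ks_2samp

baseline = np.load("predictions_at_launch.npy")  # trusted reference outputs
recent = np.load("predictions_this_week.npy")    # latest production outputs

stat, p_value = ks_2samp(baseline, recent)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}); audit the input data.")
```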

DevOps as Your Deep Learning Guide

For deep learning in particular, MissingLink’s Yuval Greenfield said organizations would do well to apply some of the lessons learned from DevOps to overcome a number of key challenges.

For one thing, deep learning work is hampered by the same problem that dogged the early days of DevOps: an inability to automate version control. Practitioners have had to run multiple experiments using different data, code and processing environments in order to produce usable results.

By taking these tasks away from people, organizations can streamline the functions that give deep learning its power, such as saving and versioning data, processing and manipulating large data sets and providing continuity across projects and DevOps teams.
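As a rough illustration of what automating those tasks can look like, here is a minimal Python sketch, assuming PyTorch and hypothetical local file paths; dedicated tools such as DVC or MLflow handle this far more robustly.

```python
# A minimal sketch of automated experiment versioning, assuming PyTorch
# and local files; purpose-built tools do this far more robustly.
import hashlib
import json

import torch

def dataset_fingerprint(path: str) -> str:
    """Hash the raw data file so every run records exactly what it trained on."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:12]

def save_versioned(model: torch.nn.Module, data_path: str, run_id: str) -> None:
    """Save the model checkpoint alongside a manifest describing the run."""
    manifest = {
        "run_id": run_id,
        "data_hash": dataset_fingerprint(data_path),
        "torch_version": torch.__version__,
    }
    torch.save(model.state_dict(), f"checkpoint_{run_id}.pt")
    with open(f"run_{run_id}.json", "w") as f:
        json.dump(manifest, f, indent=2)
```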

When it comes to actual coding for DL projects, Deep-Med co-founder Ali S. Razavian offered a number of suggestions to avoid issues that have plagued early adopters.

For example: His preferred library is PyTorch rather than TensorFlow, even if you have to deploy the results to a TensorFlow or Azure back-end.
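One common route for this kind of cross-framework handoff (an assumption here, not something Razavian prescribes) is exporting the trained PyTorch model through ONNX so another serving stack can load it. A minimal sketch, with an illustrative model and tensor shapes:

```python
# A minimal sketch, assuming ONNX as the interchange format; the model
# and shapes are illustrative, not a recommended architecture.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()  # export the inference-time behavior

dummy_input = torch.randn(1, 16)  # example input that fixes tensor shapes
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["features"], output_names=["scores"],
)
# model.onnx can now be loaded by ONNX-compatible serving back-ends.
```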

Another useful tip: run code in Docker containers.

Considering that a framework's default behaviors are likely to change with each major release, Docker helps to preserve code behavior over time.
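Containers pin the environment, but a lightweight guard inside the code can also fail fast if the image drifts to a different major release. A minimal sketch, with the pinned version purely illustrative:

```python
# A minimal sketch of a startup guard; the pinned version is illustrative.
import torch

PINNED_MAJOR = "2"  # major release this code was validated against

major = torch.__version__.split(".")[0]
if major != PINNED_MAJOR:
    raise RuntimeError(
        f"Expected torch {PINNED_MAJOR}.x, found {torch.__version__}; "
        "rebuild the container image from the pinned base."
    )
```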

As well, a dataset compiler is a must for keeping track of data, annotations and predictions for each model so that new datasets can be quickly deployed for each new training iteration.
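What a dataset compiler looks like in practice will vary; the following minimal Python sketch, with hypothetical names and storage format, shows the basic bookkeeping: each sample is recorded with its annotation and the current model's prediction, and a versioned manifest is emitted for the next training round.

```python
# A minimal sketch of the bookkeeping a dataset compiler performs; names
# and storage format are hypothetical.
import json
from pathlib import Path

class DatasetCompiler:
    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.records = []  # one entry per sample

    def add(self, sample_id: str, annotation: str, prediction: str) -> None:
        """Record a sample's ground truth next to the current model's output."""
        self.records.append({
            "sample_id": sample_id,
            "annotation": annotation,
            "prediction": prediction,
        })

    def compile(self, version: str) -> Path:
        """Write a versioned manifest for the next training iteration."""
        out = self.root / f"dataset_v{version}.json"
        out.write_text(json.dumps(self.records, indent=2))
        return out

compiler = DatasetCompiler(".")
compiler.add("sample_001", annotation="positive", prediction="negative")
manifest_path = compiler.compile("2")  # disagreements guide the next round
```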

Final Thoughts

Every enterprise’s implementation of deep learning will be unique to its own goals and objectives, which makes it impossible to predict all of the pitfalls that lie ahead.

Indeed, one of the key aspects of working with DL is that failure is inevitable. The challenge will be to learn from that failure so that both the human operators and the DL environment itself become stronger over time.

In this light, the biggest pain point is getting started, and this is also the most crucial to overcome because organizations that fail to implement AI technologies like DL will quickly find themselves out-played, out-hustled and out-smarted in a fast-moving, digitally driven economy.


Colyn Emery
Editor

Colyn is a writer and digital artist from Southern California. He writes about topics like AI, UX/UI, big data and blockchain technology. He has written articles, blogs, web copy and whitepapers for many different tech companies and organizations, and has worked in digital media professionally since 2007. He is a graduate of Chapman University and Art Center College of Design.