What are some of the dangers of using machine learning impulsively?
Machine learning is a powerful technology, and one that many companies are talking about. However, it's not without problems of implementation and integration into enterprise practice. Many of the potential problems come from its complexity and from what it takes to set up a successful machine learning project. Here are some of the biggest pitfalls to watch out for.
One thing that can help is hiring an experienced machine learning team.
One of the worst outcomes of using machine learning poorly is what you might call "bad intel." Flawed output is a nuisance in the decision support systems that machine learning feeds, but it's far more serious in any mission-critical system. You can't have bad input when you're operating a self-driving vehicle, and you can't have bad data when machine learning decisions affect real people. Even when it's used purely for things like market research, bad intelligence can sink your business. Suppose machine learning algorithms fail to make precise, targeted choices, and executives then go along blindly with whatever the program decides: that can derail any business process. The combination of poor ML outcomes and poor human oversight compounds the risk.
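One practical defense against "bad intel" is refusing to let implausible input reach the model in the first place. Below is a minimal sketch of such a sanity check for a hypothetical vehicle scenario; the function name and the threshold values are illustrative assumptions, not from any real specification.

```python
import math

def validate_reading(speed_mps, distance_m):
    """Reject missing or implausible sensor values before they reach
    a decision system. Thresholds are illustrative assumptions."""
    if speed_mps is None or distance_m is None:
        return False
    if math.isnan(speed_mps) or math.isnan(distance_m):
        return False
    # Assumed plausible ranges: 0-70 m/s (~250 km/h) speed,
    # non-negative distance to the nearest obstacle.
    return 0.0 <= speed_mps <= 70.0 and distance_m >= 0.0

# A reading that fails validation should trigger a fallback
# (e.g., human review) rather than an automated decision.
```

The design choice here is deliberate: a gate like this fails closed, so garbage input produces a refusal instead of a confident but wrong prediction.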
Another related problem is poorly performing algorithms and applications. In some cases the machine learning may work at a fundamental level but not be precise enough. You can end up with clunky applications, a bug list a mile long, and a lot of time spent correcting everything, when a much tighter and more functional project might have been possible without machine learning at all. It's like trying to put a massive high-horsepower engine in a compact car: it has to fit.
That brings us to a problem inherent to machine learning itself: overfitting. Just as your machine learning process has to fit your business process, your model has to fit the training data without simply memorizing it. The simplest way to picture fitting is with a complex two-dimensional shape, like the border of a nation-state. Fitting a model means deciding how much complexity to allow. Allow only six or eight points and the border looks like a crude polygon, missing the real shape (underfitting). Allow hundreds and the contour traces every squiggle, including noise in the measurements (overfitting). Applied to machine learning, you have to choose the right degree of fit: enough flexibility for the system to capture the real pattern, but not so much that it mires itself in complexity and chases noise.
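The trade-off above can be made concrete with a small sketch: fit the same noisy data with a low-degree and a high-degree polynomial, and compare error on the training points against error on fresh points. The data, degrees, and noise level are all illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth underlying curve (one period of a sine).
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=x_train.size)

# Fresh, noise-free points from the same curve to test generalization.
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test)

def fit_and_errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_errors(3)    # modest complexity
complex_train, complex_test = fit_and_errors(11)  # enough to memorize noise
```

The degree-11 fit passes almost exactly through every noisy training point, so its training error is tiny, yet it wiggles between those points and does worse than the degree-3 fit on the held-out test points. That gap between training and test error is the signature of overfitting.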
The resulting problems come down to efficiency: if you do run into overfitting, flawed algorithms, or poorly performing applications, you'll have sunk costs. It can be hard to change course, adapt, or shut down machine learning programs that aren't going well, and getting buy-in for sound opportunity-cost decisions can be an issue. So the path toward successful machine learning is sometimes fraught with challenges. Think about this when trying to implement machine learning in an enterprise context.
More Q&As from our experts
- Why is machine bias a problem in machine learning?
- Why are some companies contemplating adding 'human feedback controls' to modern AI systems?
- How does Occam's razor apply to machine learning?