Artificial intelligence is coming to a data center near you, and it will likely start performing many of the tasks that human operators spend the bulk of their time doing.

But rather than view this inevitable development as a threat, today’s IT worker would do better to learn the fundamentals of AI now, so that when it arrives it can be used as a tool to enhance the value of human effort to the organization rather than replace it.

First off, it helps to know that there are many different types of AI that serve various functions. Tech journalist Michael Copeland views the technology as a series of concentric circles, with AI as the outermost circle and more specialized forms like machine learning (ML) and deep learning falling within.

The differences lie in the levels of complexity exhibited by each form of AI and the specific functions they are designed to enable.

A Brief History of AI

AI itself has roots dating back to the 1950s, but it only started to gather steam in the 2010s with the rise of “narrow AI.” This is technology focused on completing specific tasks, such as image classification and facial recognition, but lacking the ability to evolve its processes through experience and other data input the way the human brain does. (For more on this, check out Will Computers Be Able to Imitate the Human Brain?)

For that, we need to turn to machine learning, which uses algorithms to parse data and make predictions about its environment. With ML, programmers no longer need to hand-code each and every action a system should take; instead, the system itself can determine the best course of action given the available data. Even at this stage, however, the term “intelligence” is used very loosely, since machine learning still requires a great deal of human input to arrive at rational conclusions.
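The core idea can be shown in a few lines of plain Python. The sketch below is illustrative only, with made-up data: rather than hand-coding a rule ("flag any CPU load above 0.75"), the program derives the rule from labeled examples.

```python
# Minimal machine-learning sketch: learn a decision rule from labeled
# examples instead of hand-coding it. A tiny classifier finds the numeric
# threshold that best separates two classes in the training data.

def learn_threshold(samples):
    """samples: list of (value, label) pairs, labels 0 or 1.
    Returns the threshold that misclassifies the fewest samples."""
    candidates = sorted(v for v, _ in samples)
    best_t, best_errors = None, len(samples) + 1
    for t in candidates:
        errors = sum(1 for v, label in samples
                     if (1 if v >= t else 0) != label)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Hypothetical data: CPU load readings labeled 1 if the host later failed.
data = [(0.2, 0), (0.3, 0), (0.4, 0), (0.8, 1), (0.9, 1), (0.95, 1)]
threshold = learn_threshold(data)
print(threshold)  # 0.8 -- the rule was learned from the data, not hand-coded
```

Real ML libraries fit far richer models, but the shift is the same: the programmer supplies data and an objective, and the system works out the rule.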

This is where deep learning and neural networks come in. Unlike machine learning, these technologies seek to emulate the workings of the human brain. Using advanced layering, connectivity and data propagation, they process data sets in numerous ways to produce weighted probabilities for a given result. Since this is a very heavy computation workload, it isn’t surprising that this level of AI was kept on the back burner until GPUs and parallel processing entered the mainstream.
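The layering and weighted-probability idea can be sketched as a forward pass through a tiny hand-wired network. The weights below are invented for illustration; in real deep learning they are adjusted automatically during training, and there may be dozens of much wider layers, which is why GPUs matter.

```python
import math

# Illustrative sketch only: a two-layer forward pass showing how layered,
# weighted connections turn an input into a probability.

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sums plus a nonlinearity."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical network: 2 inputs -> hidden layer of 3 units -> 1 output.
hidden = dense([0.5, -1.2],
               [[0.4, 0.9], [-0.7, 0.2], [1.1, -0.5]],
               [0.1, 0.0, -0.2])
output = dense(hidden, [[0.8, -1.3, 0.6]], [0.05])
print(output[0])  # a weighted probability between 0 and 1
```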

Available Platforms

The budding AI programmer should also become familiar with the leading platforms on the market. While the plethora of solutions is growing larger by the day, some of the more basic systems offer a fairly easy learning curve for those who are already familiar with common programming languages.

Sitepoint.com has listed some of the more established platforms, each of which caters to the various ways in which AI will interact with data-driven processes. Perhaps the most popular are Google’s TensorFlow and the Melissa platform built for the Raspberry Pi entry-level computing environment. Both provide an easy on-ramp to AI programming, although Melissa requires a bit more skill in programming languages like Python.

There are also services like Wit.ai and Api.ai that use voice recognition to convert verbal commands into text. They employ simple programming elements called “intents” and “entities”: the intent defines the action to be taken, while the entities identify the device and/or service to be acted upon. In this way, developers can instruct the AI engine to remove data from drive A and paste it to drive B. Both Wit and Api provide their own sets of templates for intents and entities, so much of the work has already been done. However, aside from iOS and Android, they tend to support different programming languages, with Wit leaning toward Ruby, Python, C and Rust, and Api backing Unity, C++, Python and JavaScript.
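To make the intent/entity distinction concrete, here is a hypothetical, toy illustration of the idea in plain Python. This is not the actual Wit.ai or Api.ai API; those services do the hard parts (speech-to-text, statistical matching) for you.

```python
# Hypothetical sketch of the "intent"/"entity" concept: the intent names
# the action; the entities name the things to act upon. Trigger words and
# the drive-name convention below are invented for illustration.

INTENTS = {
    "move": ["remove", "move", "transfer"],
    "copy": ["paste", "copy", "duplicate"],
}

def parse_command(text):
    """Return (intent, entities) extracted from a plain-text command."""
    words = text.lower().split()
    intent = next((name for name, triggers in INTENTS.items()
                   if any(t in words for t in triggers)), None)
    # Treat the token after each "drive" as an entity (the target device).
    entities = [words[i + 1] for i, w in enumerate(words[:-1])
                if w == "drive"]
    return intent, entities

print(parse_command("Remove the data from drive a and paste it to drive b"))
# -> ('move', ['a', 'b'])
```

In the real services, the templates mentioned above play the role of the hand-built `INTENTS` table here.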

Practice Makes Perfect

While it never hurts to study up on AI in the traditional way – taking a class, learning the nuances of the various platforms, studying past practices – memorizing a set of disconnected facts will only take you so far, says Kaggle CTO Ben Hamner. Rather, a more practical approach is to select a particular problem in your workday and try to figure out how an intelligent automation system can alleviate it, if not solve it altogether. (To learn about some current uses for AI, see 3 Amazing Examples of Artificial Intelligence in Action.)

This is a lot more difficult than it sounds. The ideal problem must meet three criteria:

  • It must cover an area you are personally interested in;
  • It must use a readily available data set that is well suited to the problem; and
  • The data, or a relevant subset, must fit comfortably on a single machine.

Once you’ve identified a suitable problem, Hamner says it’s time to build a quick-and-dirty hack – nothing fancy, just enough to provide an end-to-end fix for the basic problem. This should cover steps like reading the data, converting it into something a machine learning algorithm can understand, training a basic model, producing a result and evaluating performance.
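The steps above can be sketched end to end in pure Python. The data and the "model" here (class centroids, a simplified nearest-mean classifier) are made up for illustration; the point is that every stage exists and works, however crudely, before any of it is polished.

```python
# A sketch of the end-to-end baseline: read the data, convert it to
# numbers, train a simple model, produce predictions, evaluate them.
# Hypothetical data: cpu/mem load, labeled 1 if the host later failed.

RAW = """cpu,mem,failed
0.2,0.3,0
0.3,0.2,0
0.4,0.5,0
0.8,0.7,1
0.9,0.9,1
0.7,0.8,1"""

def load(raw):                        # 1. read the data
    rows = [line.split(",") for line in raw.splitlines()[1:]]
    return [([float(r[0]), float(r[1])], int(r[2]))
            for r in rows]            # 2. convert to numeric features

def train(samples):                   # 3. "train": average each class
    groups = {}
    for features, label in samples:
        groups.setdefault(label, []).append(features)
    return {label: [sum(col) / len(col) for col in zip(*rows)]
            for label, rows in groups.items()}

def predict(model, features):         # 4. produce a result
    return min(model, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(model[label], features)))

samples = load(RAW)
model = train(samples)
accuracy = sum(predict(model, f) == y
               for f, y in samples) / len(samples)
print(accuracy)                       # 5. evaluate performance
```

Each numbered stage is a seam where a real library (pandas for loading, scikit-learn for the model) can later be swapped in without changing the overall shape.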

Once this functional baseline is complete, you can always go back and improve each component, perhaps by examining individual rows and visualizing distributions to gain a better understanding of structure and anomalies. In many cases, you’ll find that improving the data cleaning and preprocessing steps produces better results than optimizing the machine learning models.
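A minimal sketch of that inspection step, with invented rows: scan individual records for malformed or out-of-range values before they ever reach the model.

```python
# Hedged illustration of data cleaning: drop rows with non-numeric
# fields (e.g. "?") or values outside the expected 0-1 range, which
# usually signal sensor glitches or unit mix-ups in this toy setting.

rows = [["0.2", "0.3"], ["0.9", "?"], ["0.4", "0.5"], ["8.0", "0.7"]]

def clean(rows):
    """Return only rows whose fields are numeric and in [0, 1]."""
    good = []
    for r in rows:
        try:
            values = [float(x) for x in r]
        except ValueError:
            continue                      # malformed field, e.g. "?"
        if all(0.0 <= v <= 1.0 for v in values):
            good.append(values)           # 8.0 is an anomaly, dropped
    return good

cleaned = clean(rows)
print(len(cleaned))  # 2 rows survive
```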

It also helps to see what others are doing with AI and then perhaps share your creations publicly to foster further development. Google recently launched an AI sandbox called AI Experiments that offers open source code and other resources to get you started, plus a showcase of AI projects in art, language, music and other disciplines. In addition to TensorFlow and the Cloud ML API, the site features a version of the DeepMind 3D gaming lab and a set of openFrameworks apps and scripts for developing machine learning tools in C++.

The biggest change that artificial intelligence will bring to the knowledge workforce, and IT in particular, is the removal of all of the rote, repetitive tasks that make up the bulk of the workday. But make no mistake, AI will not make humans redundant, nor will it allow mankind to live a life of leisure while machines do all the work.

There will be plenty to keep the human brain occupied in an AI-driven economy, but this will largely encompass the creative, intuitive projects that mathematical algorithms will never be able to master.

With AI as a partner, expect the workday to become more interesting and rewarding for individuals, while the organizations they serve should see greater value from human activity and higher productivity overall.