Understanding Self-Supervised Learning in Machine Learning

KEY TAKEAWAYS

Teaching machines how to learn is the basis of self-supervised learning.

Self-supervised learning (SSL) is gaining a larger foothold in the world of machine learning (ML). As learning models are refined and expanded, machines that teach themselves, understand context and fill in the blanks where information is missing are the next step.

Machine Learning Models

Machines are taught to analyze, predict and advise on possible outcomes. The most common learning styles that data scientists use are:

  • Supervised learning – Practitioners train the machine on inputs paired with labelled outputs, teaching it to make associations. Example: A shape with three sides is labelled “triangle”. Supervised learning is the most common model for situations that involve classification, regression modelling and forecasting (see the sketch after this list).

  • Unsupervised learning – The algorithm sifts through the given unlabelled data to identify the underlying structure of the data, such as how many sides a shape has. Unsupervised learning is ideal for tasks like clustering and anomaly detection.

  • Semi-supervised learning – A hybrid of supervised and unsupervised learning. The semi-supervised model is fed a small amount of labelled data blended with a large amount of unlabelled data. The combination helps the algorithm infer labels for the remaining unlabelled data.

  • Reinforcement learning – Deals with how an agent takes actions in an environment to maximize positive reinforcers (a.k.a. cumulative reward). Reinforcement learning is often used in robotics.
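To make the first two styles concrete, here is a minimal sketch contrasting supervised and unsupervised learning, assuming scikit-learn is installed. The toy data and labels are invented purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two toy clusters of 2-D points (invented data, for illustration only).
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Supervised: labels are provided, and the model learns the mapping.
labels = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(points, labels)
print("supervised prediction:", clf.predict([[4.8, 5.1]]))

# Unsupervised: no labels; the model discovers the cluster structure itself.
km = KMeans(n_clusters=2, n_init=10).fit(points)
print("unsupervised assignments:", km.labels_[:5])
```

The supervised model needed every point labelled up front; the clustering model found the same two groups with no labels at all, which is exactly the trade-off the next section examines.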

Problems With Machine Learning

The problem with supervised learning is that you need a huge amount of data – something that’s expensive and time-consuming. (Read also: Debunking the Top 4 Myths About Machine Learning.)

Models fed on unsupervised learning, on the other hand, are more limited. They’re given data without any labels and left to their own devices to reach their own conclusions. The result? Algorithms confined to relatively simple tasks like clustering and grouping.

Semi-supervised learning inherits a measure of both problems. Finally, with reinforcement learning, algorithms are indoctrinated by their environment, often producing biased or distorted results.

What you need is self-supervised learning, an approach where machines can teach themselves.


What is Self-Supervised Learning?

Popularized by computer scientist Yann LeCun around 2017, self-supervised learning has crept into tech echelons like Facebook, Google and Microsoft, as well as smaller cutting-edge institutions. It’s the hottest thing in artificial intelligence (AI).

Essentially, LeCun suggested that machines should model how children learn. Just as children immersed in an environment pick up cultural and developmental influences as their brains mature, machines, he suggested, could too.

Children are exposed to the natural equivalent of supervised and unsupervised learning. Supervised learning can be seen when teachers train them on batches of labelled data. They’re shown images and told, for example, that “this is a theropod dinosaur called Giganotosaurus” and “this man was George Washington.”

At the same time, they naturally and automatically learn to deduce, induce, correlate and predict as an innate function of their minds. That’s where self-supervised learning comes into play. Humans encounter all sorts of unlabelled data – incidents and concepts – as part of their development and gradually form their own conclusions. Essentially, self-supervised learning is a class of learning methods that use supervision available within the data itself to train a machine learning model. Self-supervised learning is used to train transformers, the state-of-the-art models in natural language processing and image classification.
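Before turning to transformers, here is a minimal sketch of what “supervision available within the data” means in practice: a pretext task that masks one word of an unlabelled sentence and treats the hidden word as the training label. It is plain Python, and the tiny corpus is invented for illustration.

```python
# Self-supervision: derive (input, label) pairs from unlabelled text
# by hiding one word at a time and asking the model to recover it.
corpus = [
    "the cat sat on the mat",
    "dogs chase the ball in the park",
]

def make_pretext_pairs(sentence):
    """Yield (masked_sentence, target_word) training pairs."""
    words = sentence.split()
    for i, target in enumerate(words):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        yield " ".join(masked), target

for sentence in corpus:
    for masked, target in make_pretext_pairs(sentence):
        print(f"input: {masked!r:45} label: {target!r}")
```

No human ever labelled these sentences; the labels come for free from the structure of the data, which is what lets self-supervised models train on vast unlabelled corpora.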

Transformers

Transformers are ML models that use natural language processing (NLP) principles to “transform” a simple input, such as an image or a caption, into a fount of insights capable of informing decisions. They learn by probing part of a data example to figure out the remaining part. That data can be text, images, video, audio or almost anything else.

Transformers are essentially sequence-to-sequence models: they transform an input sequence into an output sequence, for example translating a sentence from a source language into a target language. Transformers involve two components: an encoder and a decoder. The encoder learns to process the input sequence by modelling the dependencies across its elements, producing a representation better suited to translation. Those dependencies are modelled using a technique known as self-attention. The decoder learns to map that representation onto the output sequence using a related encoder-decoder attention mechanism.
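To make the self-attention idea concrete, here is a bare-bones NumPy sketch of the scaled dot-product attention computation described above. The random projection matrices stand in for weights that a real transformer would learn during training; this is an illustrative sketch, not a production implementation.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence x of shape
    (sequence_length, model_dim)."""
    d = x.shape[-1]
    rng = np.random.default_rng(0)
    # Learned projections in a real model; random here for illustration.
    W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(d)  # pairwise dependencies between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each token becomes a weighted mix of all tokens

sequence = np.random.default_rng(1).normal(size=(4, 8))  # 4 tokens, dim 8
print(self_attention(sequence).shape)  # (4, 8)
```

The key point is in the `scores` matrix: every token attends to every other token, which is how the encoder models dependencies across the whole input sequence at once.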

The end results are the same as those of ML programs fed extensive batches of labelled data. Namely, models learn to form associations and correlations, recognize patterns and perform statistical estimation, among other functions. (Read also: Why might some machine learning projects require enormous numbers of actors?)

In other words, self-supervised learning models extract the organic context and embedded metadata from the data itself to formulate relevant, real-time insights.

Ramifications of Self-Supervised Learning

Self-supervised learning mostly focuses on improving computer vision and NLP capabilities. Its capabilities are applied to the following:

  • Colorization for coloring grayscale images.

  • Context Filling, where the technology fills in a missing region of an image or predicts a gap in a voice recording or a text (see the fill-mask sketch after this list).

  • Video Motion Prediction, where it provides a distribution over the possible video frames that follow a given frame.
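As a concrete example of context filling in text, the sketch below uses the fill-mask pipeline from the Hugging Face transformers library, assuming the library and a backend such as PyTorch are installed (the pretrained model downloads on first run). The example sentence is invented.

```python
from transformers import pipeline

# A masked-language model predicts the hidden word from its context.
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("The quick brown fox [MASK] over the lazy dog."):
    print(f"{candidate['token_str']:>10}  score={candidate['score']:.3f}")
```

The model was never given labelled examples of this sentence; it learned to fill gaps by training on masked text, the same pretext-task idea sketched earlier.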

However popular self-supervised learning is, it’s still far from understanding human language or gaining an intuitive grasp of the context or nuance of an image. (Read also: Can a chatbot really pass for a person?)

That said, self-supervised learning has contributed to innovations across fields.

Examples of Self-Supervised Learning

  • In healthcare and medicine, self-supervised learning has contributed to robotic surgery and to monocular endoscopy through dense depth estimation inside the human body and brain. It also enhances medical imagery with improved computer vision techniques such as colorization and context filling.

  • With autonomous driving, self-supervised AI helps cars “feel” the roughness of the terrain when off-roading. The technology also provides depth estimation, helping cars identify the distance to other cars, people, or objects while driving.

  • With chatbots, transformers are used to give them mathematical representations of symbols and language. All the same, these chatbots still grapple with context.

Final Thoughts

Enthusiasts of self-supervised learning say their learning model is the first step for machines to become more human. Machines that can evaluate and understand data to fill in the missing gaps are complex and far from perfect at this point, but the implications for the future of technology are incredible. It is an exciting prospect, but one that is also fraught with its own set of complications and questions. Will engineers and scientists be able to strike the balance and create “humanity” in machine learning?

That’s for time to tell…
