Self-learning is an exciting part of machine learning. While the human programmer still has their say over variable selection and choosing an appropriate algorithm, the decision-making part is made by the machine.
By combing the data for patterns and using this information to inform its predictions, the machine is able to achieve what’s called self-learning. This ability to self-learn serves as a major distinction from traditional computer programming, where computers are designed to perform set tasks in response to pre-programmed commands. These commands – both input and output – are set by human experts, an example of which I saw this weekend at the Toyota Commemorative Museum of Industry and Technology in Nagoya.
The job of the Main Body Welding Machine (pictured below) is to attach parts to the auto body. The robotic arms you see are fitted with jigs (a device that holds the body and guides the various tools operating on it). On the auto assembly line, these jigs determine the exact spot for quickly and accurately attaching parts to the auto body.
These decisions follow set commands programmed into the assembly system, making this a good example of traditional computer programming with pre-defined outcomes.
Supervised, Semi-supervised, Unsupervised & Reinforcement Learning
In machine learning, there are four types of self-learning: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
In supervised learning, the objective is to decode the relationship between the input variables and the desired output. This works by giving the machine sample data with input features (expressed as X) and the correct output value (expressed as y). The fact that the output and input values are known makes the dataset “labeled.” The algorithm then deciphers patterns that exist in the data and creates a model that can interpret new data based on the same underlying rules.
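To make the X and y relationship concrete, here is a minimal sketch of supervised learning using scikit-learn (an assumed dependency); the toy "part size" measurements and labels are invented purely for illustration.

```python
# Supervised learning sketch: labeled inputs (X) and known outputs (y)
# are used to train a model that can then classify new, unseen inputs.
from sklearn.tree import DecisionTreeClassifier

# Labeled dataset: each row of X is an input feature, each entry of y
# is the correct output for that row (the "label").
X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]  # e.g. part weight in kg
y = [0, 0, 0, 1, 1, 1]                             # 0 = small part, 1 = large part

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)  # decipher the pattern linking inputs to outputs

# Interpret new data using the same underlying rules
print(model.predict([[2.5], [10.5]]))  # → [0 1]
```

Because both the inputs and the correct outputs were supplied up front, the model can check its internal rules against the known answers during training.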
This type of learning is similar to how Toyota (then called Toyoda Automatic Loom Works) designed their first car model. Preparations for Toyota’s first vehicle prototype and the beginning of an automatic production division began by taking apart a Chevrolet car in the corner of the family-run loom company.
By comparing the finished car (output) and pulling apart its individual components (input), Kiichiro Toyoda and his engineers taught themselves how to design and build a car.
This innately human method of learning has been reconstructed in machine learning in the form of supervised learning. By showcasing examples of both the output (the assembled car) and an inventory of its individual components, the machine model can devise rules to produce a new car (output) using the same components (input).
In the case of unsupervised learning, the input variables are available but there is no output variable. Imagine sending individual Chevrolet car components to the assembly line without any reference material as to how the car should look. Without any labeled input-output examples to draw from, there is no way of knowing how to produce the car.
Without the aid of known outputs, unsupervised learning attempts to generalize the patterns of the original variables to find connections in the data and create new labels. To this effect, the machine model might devise its own output values by grouping similar data points to find connections (known as clustering analysis) or by synthesizing a large number of variables into a lower number of principal components (known as dimensionality reduction).
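The clustering idea can be sketched with k-means in scikit-learn (an assumed dependency). The two-dimensional "part measurements" below are invented for illustration; note that no output labels are supplied, only inputs.

```python
# Unsupervised learning sketch: the model receives inputs only and
# invents its own group labels by clustering similar data points.
from sklearn.cluster import KMeans

X = [[1.0, 1.0], [1.5, 1.0], [1.0, 1.5],     # one tight group of parts
     [10.0, 10.0], [10.5, 10.0], [10.0, 9.5]]  # a second, distant group

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # the machine's self-devised labels, one per input
```

Which integer the model assigns to which group is arbitrary; what matters is that the three nearby points share one label and the three distant points share another, an output the data itself never stated.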
Continuing with the car analogy, this might mean sorting parts by size and color to generate new insight, such as the ratio of small parts to large parts.
Therefore, rather than producing a finished car as the output (as is the case with supervised learning), unsupervised learning helps you to better understand the input data and create new outputs, including interesting findings that you may wish to analyze further using other techniques such as supervised learning.
A hybrid form of unsupervised and supervised learning exists in the form of semi-supervised learning, which can be used on datasets containing a mix of labeled and unlabeled cases. With the “more data the better” as its core motivator, the goal of semi-supervised learning is to use unlabeled cases to improve the reliability of the prediction model.
One technique is to build an initial model using the labeled cases (supervised learning) and then use the same model to label the remaining cases (that are unlabeled). The model can then be retrained using a larger dataset (with fewer or no unlabeled cases). Alternatively, the model might be iteratively re-trained using newly labeled cases that meet a set threshold of confidence. This means new cases are added to the training data once they meet the defined threshold.
Having said this, there’s no guarantee a semi-supervised model will outperform a supervised model trained with less data (labeled cases only).
Reinforcement learning is the fourth and most advanced algorithm category of self-learning. The goal of reinforcement learning is to continuously improve the model’s predictions by leveraging feedback via random trial and error. Here, previous iterations are not tagged but are instead graded. In the case of self-driving vehicles, avoiding a crash or incident results in a positive score. Conversely, committing a mistake on the road would result in a negative score, and the model would be revised to prevent this problem from recurring.
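The grading mechanism can be sketched with a tiny Q-learning loop, written from scratch in standard Python. The five-position "road", the reward values, and all hyperparameters are invented for illustration: reaching the right end is a safe arrival (+1), stepping off the left edge is a crash (-1), and every decision is graded after the fact.

```python
# Reinforcement learning sketch: trial and error graded by rewards.
import random

random.seed(0)
N = 5                                               # road positions 0..4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}  # action-value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2               # learning/discount/explore rates

for episode in range(200):
    s = 0                                           # start at the left
    while True:
        if random.random() < epsilon:
            a = random.choice((-1, 1))              # explore: random trial
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])  # exploit knowledge
        s2 = s + a
        if s2 < 0:
            r, done = -1.0, True                    # crashed: negative score
        elif s2 == N - 1:
            r, done = 1.0, True                     # arrived safely: positive score
        else:
            r, done = 0.0, False
        # grade the decision and revise the model (Q-table update)
        best_next = 0.0 if done else max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        if done:
            break
        s = s2

policy = [max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)  # learned action at positions 0..3; +1 means "drive right"
```

No one tells the agent which moves are correct; the rewards alone, accumulated over many graded trials, shape the final policy.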
To see this process in action, you can watch the live training of a car learning to drive using reinforcement learning techniques and onboard sensors.
If we return to our Toyota example, reinforcement learning can also be used to assemble a car. Unlike supervised learning, where the relationship between input and output variables is known (i.e. the model can easily work out where each part goes), the exact input variables remain unknown. This is the opposite dilemma to unsupervised learning, where only the input variables are known.
Through reinforcement learning, the machine model can trial a multitude of components from various cars to produce the desired outcome. No one knows which components the model will eventually select, but through this trial and error process, it may find combinations of components that have never been seen before.
To Sum Up
Choosing which type of self-learning to use depends heavily on your data and what variables you have available. If you have well-defined combinations of input and output variables in your data, supervised learning is effective at taking those combinations apart and learning the existing relationships.
If you have a lot of input variables but don’t know what their output is, such as logs of user behavior but no idea of which user is a malicious bot or what malicious behavior might look like, then unsupervised learning can help you to group and organize those inputs into new categories that can then be monitored and analyzed further.
Or, if you have a mix of labeled and unlabeled data, you can try a semi-supervised approach to enlarge your training data.
The last approach is reinforcement learning, a technique that excels at finding new combinations of inputs to generate a defined output through mass-scale trial and error. But you can only use reinforcement learning when you have both a massive amount of computing power and a lot of data at your disposal.