Self-learning is an exciting part of machine learning. While the human programmer is responsible for variable selection and setting algorithm hyperparameters (settings), the machine is in charge of deciphering patterns and decision-making.
By combing the data for patterns and using this information to inform its predictions, the machine is able to accomplish what’s called self-learning. This ability to self-learn serves as a major distinction from traditional computer programming, where computers are designed to perform set tasks in response to pre-programmed commands. These commands – both input and output – are set by human experts, an example of which I saw this weekend at the Toyota Commemorative Museum of Industry and Technology in Nagoya.
The Main Body Welding Machine (pictured below) attaches parts onto auto bodies. The robotic arms you see are fitted with jigs (devices that hold a piece of work and guide the tool operating on it). These jigs on the auto assembly line determine the exact spot for attaching parts quickly and accurately to the auto body.
But rather than problem-solve and self-learn of their own accord, they follow the meticulous details programmed into their assembly system, making this a good example of traditional computer programming where everything is predefined.
SUPERVISED, UNSUPERVISED, SEMI-SUPERVISED & REINFORCEMENT LEARNING
There are four categories of self-learning used in machine learning: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
In supervised learning, the objective is to decode the relationship between the input variables and the desired output. This works by feeding the machine sample data with input features (expressed as X) and the correct output value (expressed as y). The fact that the output and input values are known qualifies the dataset as “labeled.” The algorithm then deciphers patterns that exist in the data and creates a model that can interpret new data based on the same underlying rules.
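To make this concrete, here is a minimal sketch of supervised learning in plain Python: a one-nearest-neighbor classifier that learns from labeled examples (X paired with y) and labels new inputs by analogy. The data and labels are invented purely for illustration.

```python
def predict(X_train, y_train, x_new):
    """Label a new point with the label of its closest labeled example."""
    def distance(a, b):
        # Euclidean distance between two feature vectors.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    # Find the labeled training example nearest to the new input.
    nearest = min(range(len(X_train)), key=lambda i: distance(X_train[i], x_new))
    return y_train[nearest]

# Labeled data: each row of X (input features) pairs with the
# correct output value in y.
X = [[1.0, 1.0], [1.2, 0.8], [8.0, 9.0], [9.1, 8.7]]
y = ["small", "small", "large", "large"]

print(predict(X, y, [1.1, 0.9]))  # a point near the "small" examples
print(predict(X, y, [8.5, 9.2]))  # a point near the "large" examples
```

Because both inputs and outputs are known during training, the model can apply the same underlying rule ("new data resembles its nearest labeled neighbor") to data it has never seen.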
This type of learning is similar to how Toyota, known at the time as Toyoda Automatic Loom Works, designed its first car. Preparations for Toyota’s first vehicle prototype and the establishment of an automobile production division began by taking apart a Chevrolet car in the corner of the family-run loom company.
By studying the finished car (output) and pulling it apart into its individual components (input), Kiichiro Toyoda and his engineers taught themselves how to design a car.
This is a task we can achieve now by using supervised learning. By showing the machine model examples of both the output (the design of a finished car) and its individual components, the model can devise rules to reproduce a new car (output) using the same components (input).
In the case of unsupervised learning, the input variables are available but there is no output value. Think of this as sending Chevrolet car components to the assembly line without any image or design examples of the output (what the finished car should look like). Without any labeled input-output examples to draw from, there is no way for the machine to perform the same task as in supervised learning.
Without the aid of known outputs, unsupervised learning attempts to generalize the patterns of the original variables to find connections in the data and create new labels. To this effect, the machine model might devise its own output values by grouping similar data points to find connections (known as clustering algorithms) or by synthesizing a large number of variables into a smaller number of principal components (dimensionality reduction algorithms).
Continuing with the car analogy, this might mean sorting parts by size and color to generate new insight, such as the ratio of small parts to large parts.
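The grouping idea can be sketched with a bare-bones k-means clustering routine in plain Python. Note there are no output labels anywhere: the algorithm invents its own groups from the input features alone. The "parts" data (width, height pairs) and the choice of two clusters are assumptions made up for this example.

```python
def kmeans(points, k, iterations=10):
    # Start with the first k points as the initial cluster centers.
    centers = list(points[:k])
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((pc - cc) ** 2 for pc, cc in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Unlabeled "parts": (width, height) measurements with no known category.
parts = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (7.8, 8.1), (8.2, 7.9), (8.0, 8.3)]
groups = kmeans(parts, k=2)
for g in groups:
    print(g)
```

The algorithm separates the small parts from the large parts without ever being told those categories exist; the labels ("cluster 0", "cluster 1") are its own invention.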
A hybrid form of unsupervised and supervised learning is also available in the form of semi-supervised learning, which is used with datasets that contain a mix of labeled and unlabeled cases. With “the more data the better” as a core motivator, the goal of semi-supervised learning is to leverage unlabeled cases to improve the reliability of the prediction model.
One traditionally popular technique is to build the initial model using the labeled cases (supervised learning) and then use that model to label the remaining unlabeled cases in the dataset. The model can then be retrained on the larger, newly labeled dataset. Alternatively, the model can be iteratively retrained, adding to the training data only those newly labeled cases that meet a set confidence threshold.
There is, however, no guarantee the semi-supervised model will outperform a model trained with less data (based exclusively on the original labeled cases).
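The iterative self-training idea described above can be sketched in plain Python. Here "confidence" is approximated crudely as the distance to the nearest labeled point: an unlabeled case is pseudo-labeled only when a labeled neighbor sits within a set threshold, and newly labeled cases are added to the training data so they can help label others. The one-dimensional data and threshold value are invented for illustration.

```python
def self_train(labeled, unlabeled, threshold):
    """labeled: list of (feature, label) pairs; unlabeled: list of features."""
    labeled = list(labeled)
    remaining = list(unlabeled)
    changed = True
    while changed and remaining:
        changed = False
        for x in remaining[:]:
            # "Confidence" proxy: distance to the nearest labeled point.
            nearest_x, nearest_label = min(labeled, key=lambda pair: abs(pair[0] - x))
            if abs(nearest_x - x) <= threshold:
                # Confident enough: pseudo-label it and grow the training set.
                labeled.append((x, nearest_label))
                remaining.remove(x)
                changed = True
    return labeled, remaining

labeled = [(1.0, "small"), (9.0, "large")]
unlabeled = [2.0, 3.0, 8.0, 5.0]
grown, still_unlabeled = self_train(labeled, unlabeled, threshold=1.5)
print(grown)            # the enlarged, pseudo-labeled training set
print(still_unlabeled)  # cases that never met the confidence threshold
```

Note how the case at 5.0 is never labeled: it sits too far from any labeled point to clear the threshold, which mirrors the caveat above that self-training offers no guarantees.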
Reinforcement learning is the fourth and most advanced category of self-learning. The goal of reinforcement learning is to continuously improve the model’s predictions by leveraging feedback from trial and error. Here, previous iterations aren’t tagged but graded. In the case of self-driving vehicles, avoiding a crash or incident would earn a positive score. Conversely, committing a mistake on the road would result in a negative score, and the model would be revised to prevent the mistake from recurring.
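This grade-by-feedback loop can be sketched with tabular Q-learning on a toy one-dimensional "road": the agent starts at position 0 and must learn, purely from scores, that moving right reaches the goal. The environment, rewards, and hyperparameters are all invented for illustration; a real driving system would be vastly more complex.

```python
import random

random.seed(0)
GOAL, ACTIONS = 4, (-1, 1)  # positions 0..4; actions: move left or right
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):  # 200 trial-and-error episodes
    state = 0
    while state != GOAL:
        # Mostly exploit the best known action, sometimes explore randomly.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), GOAL)
        # Feedback: a positive score for reaching the goal,
        # a small penalty (negative score) for every other step.
        reward = 1.0 if next_state == GOAL else -0.1
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, read off the learned policy: the best action per state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

No one tells the agent where the goal is or which moves are correct; the graded feedback alone shapes its behavior, which is the defining trait of reinforcement learning.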
You can actually watch the live training of a car learning to drive using reinforcement learning techniques and sensors.
If we return to our Toyota example, reinforcement learning could also be used to design a car. Unlike supervised learning, where the relationship between input and output variables is known (i.e. the model can easily work out where each part goes), the exact input variables remain unknown. This is also the opposite dilemma to unsupervised learning, where only the input variables are known.
But through reinforcement learning, the machine model can trial a multitude of components from thousands of existing car designs to produce the desired outcome. No one knows which components the model will select, but through this trial-and-error process it may find combinations of components that have never been conceived of before.
TO SUM UP
The type of self-learning you decide to use depends heavily on your data and what variables you have available. If you have well-defined combinations of input and output variables in your data, supervised learning is effective at taking those combinations apart and learning existing relationships.
If you have a lot of input variables but don’t know what they mean in terms of output value – such as logs of user behavior with no idea of which users are malicious bots or what malicious behavior looks like – unsupervised learning can help you group and organize the inputs to create new, previously unseen outputs.
Or if you have a mix of labeled and unlabeled data, you can try a semi-supervised approach to enlarge your training data.
The most advanced approach is reinforcement learning, and this technique excels at finding new combinations of inputs to generate a known output. It only comes into play when you have a massive amount of computing power and data at your disposal for mass-scale trial and error.