There are hundreds of statistics-based algorithms to choose from in machine learning. Popular algorithms include *k*-means clustering, association analysis, and neural networks. But in this post, we look at the three overarching algorithm categories found in machine learning.

**Supervised Learning**

Supervised learning refers to algorithms guided by pre-existing patterns and feedback based on known outcomes. In practice, supervised learning works by showing the machine sample data together with the correct value (output) for each sample. The machine then applies a supervised learning algorithm to decipher patterns that exist in the data and develops a model that can reproduce matching results with new data.

As an example, suppose you wish to separate SMS messages into spam and non-spam categories. In a supervised learning environment, you already have labeled data that you can feed the machine to describe both categories. The machine learns the characteristics of both spam and non-spam messages and will sort incoming messages into these two categories based on known outcomes.
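To make this concrete, here is a minimal sketch of the supervised setup using a tiny, hypothetical set of labeled messages and a deliberately simple word-overlap classifier (not a production spam filter):

```python
from collections import Counter

# Tiny labeled training set (hypothetical messages).
# Each sample comes with its known outcome: "spam" or "ham" (non-spam).
train = [
    ("win a free prize now", "spam"),
    ("claim your free cash reward", "spam"),
    ("lunch at noon tomorrow?", "ham"),
    ("see you at the game tonight", "ham"),
]

def words(text):
    return set(text.lower().split())

# "Training": count how often each word appears in each category.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(words(text))

def classify(text):
    """Score a new message by which category shares more of its words."""
    scores = {label: sum(c[w] for w in words(text))
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize cash"))
```

The key point is the workflow, not the scoring rule: the model is built from data whose correct outputs are already known, then applied to new, unseen messages.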

Or to predict who will win a basketball game, you might create a model to analyze previous games over the last three years. The games could be analyzed by two variables: the total number of points scored (points for) and the total number of points conceded (points against). These scores can then be used to predict who will win the next game.

This data can be plotted on a scatterplot, with ‘points for’ represented on the x-axis and ‘points against’ represented on the y-axis. Each data point represents an individual game, and the score for each game can be found by looking up the x and y coordinates.

Linear regression (which we will learn in detail very soon) can next be applied to predict the expected winner based on the average of previous performances. As with the first example, we have instructed the machine which variables to analyze (points for, and points against). The data is therefore already pre-tagged, and we know the final outcome of the existing data. Each previous game has a final outcome in the form of the match score.
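The basketball example can be sketched with NumPy's least-squares solver. The game results below are hypothetical, and the model simply fits a linear relationship between a team's points for/against and its final point margin, then predicts the margin of the next game from the team's averages:

```python
import numpy as np

# Hypothetical per-game stats for one team:
# columns are points scored (for) and points conceded (against).
games = np.array([
    [102, 98],
    [95, 101],
    [110, 99],
    [98, 94],
    [105, 97],
    [92, 103],
], dtype=float)

# Known outcome of each past game: the final point margin.
margin = games[:, 0] - games[:, 1]

# Fit a linear model: margin ~ w1*for + w2*against + b, via least squares.
X = np.column_stack([games, np.ones(len(games))])
coef, *_ = np.linalg.lstsq(X, margin, rcond=None)

# Predict the margin of the next game from the team's average performance;
# a positive predicted margin suggests a win.
next_game = np.array([games[:, 0].mean(), games[:, 1].mean(), 1.0])
predicted_margin = next_game @ coef
print(predicted_margin)
```

Because every past game carries a known outcome (the match score), this is supervised learning: the model is trained against pre-tagged data before making predictions.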

The challenge of supervised algorithms is having sufficient data that is representative of all variations, as well as potential outliers and anomalies. The data should also be relevant and, if taken from a larger dataset, should be selected at random to avoid any bias.

Supervised learning algorithms include regression analysis, decision trees, *k*-nearest neighbors, neural networks, and support vector machines.


**Unsupervised Learning**

In the case of an unsupervised learning environment, there are no such known patterns and outcomes on which to base your analysis. Instead, the model must uncover hidden patterns through the use of unsupervised algorithms.

A commonly used unsupervised learning algorithm is *k*-means clustering, which creates discrete groups of data points that are found to possess similar features.

For example, if you cluster data points based on the weight and height of 16-year-old high school students, you are likely to see two clusters emerge. One large cluster will be male and the other large cluster will be female. This is because girls and boys tend to have noticeable differences in relation to weight and height.
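The height/weight example can be sketched with a bare-bones *k*-means implementation. The student measurements below are synthetic (drawn from two hypothetical group distributions), and note that, unlike the spam example, no labels are ever shown to the algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic height (cm) / weight (kg) measurements for two hypothetical
# groups of students. The algorithm never sees which group is which.
group_a = rng.normal([165, 55], [5, 5], size=(50, 2))
group_b = rng.normal([178, 70], [5, 5], size=(50, 2))
points = np.vstack([group_a, group_b])

def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([points[labels == i].mean(axis=0)
                              for i in range(k)])
    return labels, centroids

labels, centroids = kmeans(points, k=2)
print(centroids)
```

With well-separated groups, the two discovered centroids land near the true group averages, even though the algorithm was told nothing about gender or group membership.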

A major advantage of unsupervised learning is that it enables you to discover patterns in the data that you weren’t aware existed – such as the presence of two genders. Clustering can then provide the springboard to conduct further analysis after particular groups have been discovered.

Unsupervised learning algorithms include *k*-means clustering, association analysis, social network analysis, and dimensionality reduction algorithms.

**Reinforcement Learning**

Reinforcement learning is the third and most advanced algorithm category in machine learning. Unlike supervised and unsupervised learning, which both reach an endpoint after a model is formulated, reinforcement learning continuously improves its model by leveraging feedback from previous iterations.

Reinforcement learning is best explained through analogies to video games. As a player progresses through the virtual space of the game, they learn the value of various actions under different conditions and become more familiar with the field of play. Those learned values then inform and influence subsequent behavior within the game. As the player progresses, their performance naturally improves based on learning and experience.

Reinforcement learning works in much the same way: algorithms train the model through continuous feedback. A standard reinforcement learning model has measurable performance criteria where outputs are not tagged but instead graded. In the case of self-driving vehicles, avoiding a crash receives a positive score. In the case of chess, avoiding defeat receives a positive score.

A popular algorithm for reinforcement learning is Q-learning. In Q-learning, you start with a set environment of *states*. In Pac-Man, for instance, states could be the challenges, obstacles or pathways that exist in the game. A wall might exist to the left, a ghost to the right, and a power pill above – each representing different *states*. States are represented in Q-learning by the symbol ‘S’.

The set of possible actions to respond to these states is then referred to as ‘A’. In the case of Pac-Man, actions are limited to left, right, up, and down movements, as well as multiple combinations of these four movements.

The third important symbol is *Q*. Q is the value assigned to each state-action combination, and it has an initial value of 0.

As Pac-Man explores the space inside the game, two main things will happen:

– Q drops as negative things occur after a given state/action

– Q increases as rewards happen after a given state/action

In Q-learning, the machine will learn to select, for a given state, the action that generates or maintains the highest level of Q. It will learn initially through the process of random movements (actions) under different conditions (states). The machine will record its results (rewards and penalties) and how they impact its Q level, and store those values to inform and optimize its future actions.
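Rather than the full Pac-Man game, the S, A, and Q symbols above can be illustrated on a toy corridor: a minimal sketch with five states, two actions, and made-up rewards (the "power pill" and "ghost" placements are invented for this example):

```python
import random

random.seed(0)

# A toy corridor of states 0..4. State 4 holds a reward (a power pill,
# say); state 0 holds a penalty (a ghost). Actions: 0 = left, 1 = right.
N_STATES, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # all Q start at 0

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Move left or right; return the next state, reward, and whether done."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    if nxt == N_STATES - 1:
        return nxt, 10.0, True    # reached the reward
    if nxt == 0:
        return nxt, -10.0, True   # ran into the ghost
    return nxt, 0.0, False

for _ in range(500):               # training episodes
    state, done = 2, False         # start in the middle of the corridor
    while not done:
        # Epsilon-greedy: mostly exploit the best known action,
        # sometimes explore a random one.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

# The greedy policy per interior state after training.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)}
print(policy)
```

After training, the machine has learned to move toward the reward from every interior state: Q drops for actions that led toward the ghost and rises for actions that led toward the power pill, exactly as described above.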

While this sounds simple enough, implementation is a much more difficult task and beyond the scope of a beginner's introduction to machine learning. However, I will leave you with a link to a tutorial on reinforcement learning and Q-learning that follows the Pac-Man scenario.

https://inst.eecs.berkeley.edu/~cs188/sp12/projects/reinforcement/reinforcement.html