Machine Learning in Action – Word Prediction

In my previous blog on machine learning, I explained the science behind how a machine learns its parameters. This week, I will delve into a very common application we use in our day-to-day life: next word prediction.

When we text on our smartphones, all of us have appreciated how our phones make typing so much easier by predicting or suggesting the word we have in mind. Many of us would also have noticed that our phones predict the words we tend to use regularly in our personal lexicon. Our phones have learned from our pattern of usage and are giving us a personalized offering. This genre of machine learning falls under a very potent field called Natural Language Processing (NLP).

Natural Language Processing deals with the ways in which machines derive their learning from human languages. The basic input within the NLP world is a corpus (plural: corpora), which is essentially a large collection of words or groups of words within the language. Some of the most prominent corpora for English are the Brown Corpus and the American National Corpus. Google, too, has its own linguistic corpora, with which it achieves many of the amazing features in its products. Deriving learning out of the corpora is the essence of NLP. In the context we are discussing, i.e. word prediction, it is about learning from the corpora to do prediction. Let us now see how we do it.

The way we learn from the corpora is through the use of some simple rules of probability. It all starts with calculating the frequencies of words or groups of words within the corpora. For finding the frequencies, we use something called an n-gram model, where the “n” stands for the number of words grouped together. The most common n-gram models are the trigram and the bigram models. For example, the sentence “the quick red fox jumps over the lazy brown dog” has the following word-level trigrams (source: Wikipedia):

the quick red
quick red fox
red fox jumps
fox jumps over
jumps over the
over the lazy
the lazy brown
lazy brown dog

Similarly, a bigram model will split a given sentence into combinations of two-word groups. These trigrams or bigrams form the basic building blocks for calculating the frequencies of word combinations. The idea behind the calculation of frequencies of word groups goes like this. Suppose we want to calculate the frequency of the trigram “the quick red”. What we look for in this calculation is how often we find the combination of the words “the” and “quick” followed by “red” within the whole corpora. Suppose in our corpora there were 5 instances where the words “the” and “quick” were followed by the word “red”; then the frequency of this trigram is 5.
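Here is a minimal sketch, in Python, of how such n-gram counts could be computed. The toy corpus and the helper function ngrams are my own illustration, not part of any particular library:

```python
from collections import Counter

def ngrams(words, n):
    """Return all n-grams (as tuples) in a sequence of words."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

# A toy "corpus" standing in for a real one such as the Brown Corpus.
corpus = "the quick red fox jumps over the lazy brown dog".split()

trigram_counts = Counter(ngrams(corpus, 3))
bigram_counts = Counter(ngrams(corpus, 2))

print(trigram_counts[("the", "quick", "red")])   # frequency of this trigram
print(bigram_counts[("the", "quick")])           # frequency of this bigram
```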

Once the frequencies of the word groups are found, the next step is to calculate the probabilities of the trigrams. The probability of a trigram is just its frequency divided by the total number of trigrams within the corpora. Suppose there are around 500,000 trigrams in our corpora; then the probability of our trigram “the quick red” will be 5/500,000. The probabilities calculated this way feed into a probabilistic model of language, often framed in terms of the Hidden Markov Model (HMM), and the key idea is conditional probability: the probability of an event happening given that something else has happened. In our trigram context, it means the probability of seeing the word “red” given that it was preceded by the words “the” and “quick”, which we can estimate by dividing the count of the trigram “the quick red” by the count of the bigram “the quick”. Extending the same concept to bigrams, it would mean the probability of seeing the second word given that we have seen the first word. So if “My God” is a bigram, then the conditional probability would be the probability of seeing the word “God” given that we have just seen the word “My”.
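A small illustration of the two quantities above, using the numbers from the text for the trigram and a made-up bigram count of 12 purely for the sake of the example:

```python
# Illustrative counts; the bigram count is a hypothetical number for this example.
trigram_counts = {("the", "quick", "red"): 5}
bigram_counts = {("the", "quick"): 12}
total_trigrams = 500_000

# Relative frequency of the trigram in the whole corpus.
p_trigram = trigram_counts[("the", "quick", "red")] / total_trigrams

# Conditional probability: how often "the quick" is followed by "red".
p_red_given_the_quick = (trigram_counts[("the", "quick", "red")]
                         / bigram_counts[("the", "quick")])

print(p_trigram)               # 1e-05
print(p_red_given_the_quick)   # about 0.42
```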

The trigrams and bigrams, along with their calculated probabilities, are arranged in a huge table that forms the basis of the word prediction algorithm. The mechanism of prediction works like this. Suppose you were planning to type “Oh my God” and you typed the first word “Oh”. The algorithm will quickly go through the n-gram table and identify the n-grams starting with the word “Oh”, in decreasing order of probability. So if the top entries in the table starting with “Oh” are “Oh come on”, “Oh my God” and “Oh Dear Lord”, in decreasing order of probability, the algorithm will predict the words “come”, “my” and “Dear” as your three choices as soon as you type the first word “Oh”. When, after “Oh”, you also type “my”, the algorithm reworks the prediction and looks at the highest-probability n-gram combinations preceded by the words “Oh” and “my”. In this case the word “God” might be the most probable choice, and that is what gets predicted. The algorithm keeps giving predictions as you keep typing more and more words. At every instance of your texting process, the algorithm looks at the last two words you have already typed to predict the running word, and the process continues.
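A toy sketch of this lookup step, assuming we have already built a table mapping each pair of preceding words to the words that follow them (the tiny corpus and function names below are made up for illustration):

```python
from collections import Counter, defaultdict

def build_prediction_table(corpus_words):
    """Map each pair of preceding words to a Counter of the words that follow them."""
    table = defaultdict(Counter)
    for w1, w2, w3 in zip(corpus_words, corpus_words[1:], corpus_words[2:]):
        table[(w1, w2)][w3] += 1
    return table

def predict_next(table, w1, w2, k=3):
    """Return the k most frequent words seen after the pair (w1, w2)."""
    return [word for word, _ in table[(w1, w2)].most_common(k)]

# A tiny made-up corpus so the example runs end to end.
corpus = "oh my god oh my dear oh come on oh my god".split()
table = build_prediction_table(corpus)
print(predict_next(table, "oh", "my"))   # ['god', 'dear']
```

A real keyboard would also fall back to bigram counts when only one word has been typed so far, as in the “Oh” example above.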

The algorithm I have explained here is a very simple one involving n-grams and HMM-style models. Needless to say, there are more sophisticated approaches, such as those based on Neural Networks. I will explain Neural Networks and their applications in a future post.

Machine Learning: Teaching a machine to learn

In my previous post on recommendation engines, I fleetingly mentioned machine learning. Talking about machine learning, what comes to my mind is a recent conversation I had with my uncle. He was asking me what I was working on, and I started talking about machine learning and data science. He listened very attentively and later told my mother that he had absolutely no clue what I was talking about. So I thought it would be a good idea to try and unravel the science behind machine learning.

Let me start with an analogy. When we were toddlers, whenever we saw something new, say a dog, our parents would point and tell us, “Look, a dog”. This is how we start to learn about the things around us, from inputs such as these that we receive from our parents and elders. The science behind machine learning works in a pretty similar way. In this context, the toddler is the machine and the elder who teaches the machine is a bunch of data.

In very simple terms, the setup for a machine learning context works like this. The machine is fed with a set of data. This data consists of two parts: one part is called the features and the other the labels. Let me elaborate a little more. Suppose we are training the machine to identify the image of a dog. As a first step, we feed multiple images of dogs to the machine. Each image which is fed, say a JPEG or PNG image, consists of millions of pixels. Each pixel in turn is composed of some value of the three primary colors Red, Green and Blue. The value of each of these primary colors ranges between 0 and 255; this is called the pixel intensity. For example, the pixel intensity for the color orange would be (255, 102, 0), where 255 is the intensity of its red component, 102 its green component and 0 its blue component. Likewise, every pixel in an image will have some combination of these primary colors.


These pixel intensities are the features of the image which are provided as inputs to the machine. Against each of these feature sets, we also provide a class or category describing them; this is the label. This data set is our basic input. To visualize the data set, think of it as a huge table of pixel values and their labels. If we have, say, 10 pixels per image and there are 10 images, our table will have 10 rows, one for each image, and each row will have 11 columns: the first 10 columns correspond to the pixel values and the 11th column is the label.
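A minimal sketch of such a table, using randomly generated pixel values purely as stand-ins for real images (NumPy and the specific numbers here are my own illustration):

```python
import numpy as np

# Hypothetical dataset: 10 images, each reduced to 10 pixel-intensity features,
# plus one label column (1 = dog, 0 = not a dog).
num_images, num_pixels = 10, 10

rng = np.random.default_rng(0)
features = rng.integers(0, 256, size=(num_images, num_pixels))  # pixel values 0-255
labels = rng.integers(0, 2, size=(num_images, 1))               # one label per image

# The "huge table": 10 rows (one per image) and 11 columns (10 features + 1 label).
dataset = np.hstack([features, labels])
print(dataset.shape)   # (10, 11)
```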

Now that we have provided the machine with its data, let us look at how it learns. For this, let me take you back to your school days. In basic geometry, you would have learnt the equation of a line as Y = C + (theta * X). In this equation, the variable C is called the intercept and theta the slope of the line. These two variables govern the properties of the line Y. The relevance of these variables is that, if we are given any other value of X, then with our knowledge of C and theta we will be able to predict the corresponding value of Y on the line. So by learning two parameters we are in effect predicting an outcome. This is the essence of machine learning. In a machine learning setup, the machine is made to learn the parameters from the features which are provided. Equipped with the knowledge of these parameters, the machine will be able to predict the most probable values of Y (outcomes) when new values of X (features) are provided.
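A small sketch of this idea, assuming a handful of made-up (X, Y) pairs and using an ordinary least-squares fit to recover C and theta:

```python
import numpy as np

# Hypothetical one-feature example: learn the intercept C and slope theta
# of Y = C + theta * X from a few (X, Y) pairs.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])   # roughly Y = 1 + 2 * X

# Least-squares fit: a column of ones lets the intercept be learned too.
A = np.column_stack([np.ones_like(X), X])
(C, theta), *_ = np.linalg.lstsq(A, Y, rcond=None)

print(C, theta)          # learned parameters, close to 1 and 2
print(C + theta * 6.0)   # predict the outcome for a new X = 6
```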

In our dog identification example, the X values are the pixel intensities of the images we provided and Y denotes the labels of the dogs. The parameters are learned from the provided data. If we then give the machine new values of X which contain, say, features of both dogs and cats, the machine will correctly identify which is a dog and which is a cat, using its knowledge of the parameters. The first set of data which we provide to the machine for it to learn parameters is called the training set, and the new data which we provide for prediction is called the test set. The above-mentioned genre of machine learning is called Supervised Learning. Needless to say, the equation of the line above is just one among multiple types of algorithms used in machine learning; this particular one is called linear regression. There are many algorithms like it which enable machines to learn parameters and carry out predictions.
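To make the training set / test set split concrete, here is a sketch using scikit-learn on synthetic pixel features; the data is random, so it only illustrates the workflow, not a real dog-vs-cat classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic pixel-intensity features standing in for "dog" vs "cat" images.
rng = np.random.default_rng(42)
features = rng.integers(0, 256, size=(100, 10)).astype(float)  # 100 images, 10 pixels each
labels = rng.integers(0, 2, size=100)                          # 1 = dog, 0 = cat

# Training set: where the machine learns its parameters.
# Test set: new data used only for prediction.
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # learn the parameters
print(model.predict(X_test[:5]))     # predicted labels for unseen images
```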

What I have described here is a very simple version of machine learning. Advances are being made in this field, and scientists are trying to mimic the learning mechanism of the human brain in machines. An important and growing field aligned with this idea is called Deep Learning. I will delve into deep learning in a future post.

The power of machine learning is quite prevalent in the world around us, and quite often the learning is inconspicuous. As a matter of fact, we are all inconspicuously party to the training process. A very popular example is the photo tagging process on Facebook. When we tag the pictures we post on Facebook, we are in fact providing labels that enable a machine to learn. Facebook’s powerful machines will extract features from the photos we tag. The next time we tag a new photo, Facebook will automatically predict the correct tag through the parameters it has learned. So next time you tag a picture on Facebook, realize that you are also playing your part in teaching a machine to learn.