A Must-Read NLP Tutorial on Neural Machine Translation – The Technique Powering Google Translate

A long dry period followed this miserable report.

Finally, in 1981, a new system called the METEO System was deployed in Canada for translation of weather forecasts issued in French into English.

It was quite a successful project which stayed in operation until 2001.

The world’s first web translation tool, Babel Fish, was launched by the AltaVista search engine in 1997.

And then came the breakthrough we are all familiar with now – Google Translate.

It has since changed the way we work (and even learn) with different languages.

Source: translate.google.com

Understanding the Problem Statement

Let's circle back to where we left off in the introduction section, i.e., learning German.

However, this time around I am going to make my machine do this task.

The objective is to convert a German sentence to its English counterpart using a Neural Machine Translation (NMT) system.

We will use German-English sentence pairs data from http://www.manythings.org/anki/. You can download it from here.

Introduction to Sequence-to-Sequence (Seq2Seq) Modeling

Sequence-to-Sequence (Seq2Seq) models are used for a variety of NLP tasks, such as text summarization, speech recognition, and DNA sequence modeling, among others.

Our aim is to translate given sentences from one language to another.

Here, both the input and output are sentences.

In other words, these sentences are a sequence of words going in and out of a model.

This is the basic idea of Sequence-to-Sequence modeling wherein both the input and the output are sequences.

A typical seq2seq model has 2 major components – an encoder and a decoder.

Both these parts are essentially two different recurrent neural network (RNN) models combined into one giant network.

I've listed a few significant use cases of Sequence-to-Sequence modeling below (apart from Machine Translation, of course):

Speech Recognition
Named Entity/Subject Extraction to identify the main subject from a body of text
Relation Classification to tag relationships between the entities identified in the above step
Chatbot skills to have conversational ability and engage with customers
Text Summarization to generate a concise summary of a large amount of text
Question Answering systems

Implementation in Python using Keras

It's time to get our hands dirty! There is no better feeling than learning a topic by seeing the results first-hand.

We’ll fire up our favorite Python environment (Jupyter Notebook for me) and get straight down to business.

Import the Required Libraries

import string
import re
from numpy import array, argmax, random, take
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, LSTM, Embedding, RepeatVector
from keras.preprocessing.text import Tokenizer
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.sequence import pad_sequences
from keras.models import load_model
from keras import optimizers
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.max_colwidth', 200)

Read the Data into our IDE

Our data is a text file (.txt) of English-German sentence pairs.

First, we will read the file using the function defined below.

# function to read raw text file
def read_text(filename):
    # open the file
    file = open(filename, mode='rt', encoding='utf-8')
    # read all text
    text = file.read()
    file.close()
    return text

Let's define another function to split the text into English-German pairs separated by '\n'. We'll then split these pairs into English sentences and German sentences respectively.

# split a text into sentences
def to_lines(text):
    sents = text.strip().split('\n')
    sents = [i.split('\t') for i in sents]
    return sents

We can now use these functions to read the text into an array in our desired format.

data = read_text("deu.txt")
deu_eng = to_lines(data)
deu_eng = array(deu_eng)

The actual data contains over 150,000 sentence pairs.

However, we will use only the first 50,000 sentence pairs to reduce the training time of the model.

You can change this number as per your system’s computation power (or if you’re feeling lucky!).

deu_eng = deu_eng[:50000,:]

Text Pre-Processing

Quite an important step in any project, especially so in NLP.

The data we work with is more often than not unstructured, so there are certain things we need to take care of before jumping to the model-building part.

(a) Text Cleaning

Let's first take a look at our data.

This will help us decide which pre-processing steps to adopt.

deu_eng

array([['Hi.', 'Hallo!'],
       ['Hi.', 'Grüß Gott!'],
       ['Run!', 'Lauf!'],
       ...,
       ['Mary has very long hair.', 'Maria hat sehr langes Haar.'],
       ["Mary is Tom's secretary.", 'Maria ist Toms Sekretärin.'],
       ['Mary is a married woman.', 'Maria ist eine verheiratete Frau.']],
      dtype='<U380')

We will get rid of the punctuation marks and then convert all the text to lower case.

# remove punctuation
deu_eng[:,0] = [s.translate(str.maketrans('', '', string.punctuation)) for s in deu_eng[:,0]]
deu_eng[:,1] = [s.translate(str.maketrans('', '', string.punctuation)) for s in deu_eng[:,1]]

deu_eng

array([['Hi', 'Hallo'],
       ['Hi', 'Grüß Gott'],
       ['Run', 'Lauf'],
       ...,
       ['Mary has very long hair', 'Maria hat sehr langes Haar'],
       ['Mary is Toms secretary', 'Maria ist Toms Sekretärin'],
       ['Mary is a married woman', 'Maria ist eine verheiratete Frau']],
      dtype='<U380')

# convert text to lowercase
for i in range(len(deu_eng)):
    deu_eng[i,0] = deu_eng[i,0].lower()
    deu_eng[i,1] = deu_eng[i,1].lower()

deu_eng

array([['hi', 'hallo'],
       ['hi', 'grüß gott'],
       ['run', 'lauf'],
       ...,
       ['mary has very long hair', 'maria hat sehr langes haar'],
       ['mary is toms secretary', 'maria ist toms sekretärin'],
       ['mary is a married woman', 'maria ist eine verheiratete frau']],
      dtype='<U380')

(b) Text to Sequence Conversion

A Seq2Seq model requires that we convert both the input and the output sentences into integer sequences of fixed length.

But before we do that, let’s visualise the length of the sentences.

We will capture the lengths of all the sentences in two separate lists for English and German, respectively.

# empty lists
eng_l = []
deu_l = []

# populate the lists with sentence lengths
for i in deu_eng[:,0]:
    eng_l.append(len(i.split()))

for i in deu_eng[:,1]:
    deu_l.append(len(i.split()))

length_df = pd.DataFrame({'eng': eng_l, 'deu': deu_l})

length_df.hist(bins=30)
plt.show()

Quite intuitive – the maximum length of the German sentences is 11 and that of the English phrases is 8.

Next, let's vectorize our text data using Keras's Tokenizer() class.

It will turn our sentences into sequences of integers.

We can then pad those sequences with zeros to make all the sequences of the same length.
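To make this concrete, here is a minimal sketch of what Tokenizer and pad_sequences do to a couple of toy sentences (the sentences, variable names, and the example indices shown in the comments are made up purely for illustration):

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# toy corpus, purely for illustration
toy_sents = ['hi tom', 'tom is very tired today']

toy_tok = Tokenizer()
toy_tok.fit_on_texts(toy_sents)                             # builds the word-to-integer index
toy_seq = toy_tok.texts_to_sequences(toy_sents)             # e.g. [[2, 1], [1, 3, 4, 5, 6]]
toy_pad = pad_sequences(toy_seq, maxlen=8, padding='post')  # zero-pad each sequence to length 8

print(toy_tok.word_index)                                   # word -> integer mapping
print(toy_pad)                                              # fixed-length integer sequences

Each sentence becomes a row of integers of the same length, which is exactly the format our encoder and decoder will expect.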

Note that we will prepare tokenizers for both the German and English sentences:

# function to build a tokenizer
def tokenization(lines):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(lines)
    return tokenizer

# prepare English tokenizer
eng_tokenizer = tokenization(deu_eng[:, 0])
eng_vocab_size = len(eng_tokenizer.word_index) + 1
eng_length = 8
print('English Vocabulary Size: %d' % eng_vocab_size)

English Vocabulary Size: 6453

# prepare Deutsch tokenizer
deu_tokenizer = tokenization(deu_eng[:, 1])
deu_vocab_size = len(deu_tokenizer.word_index) + 1
deu_length = 8
print('Deutsch Vocabulary Size: %d' % deu_vocab_size)

Deutsch Vocabulary Size: 10998

The code block below contains a function to prepare the sequences.

It will also perform sequence padding to a maximum sentence length as mentioned above.

# encode and pad sequences
def encode_sequences(tokenizer, length, lines):
    # integer encode sequences
    seq = tokenizer.texts_to_sequences(lines)
    # pad sequences with 0 values
    seq = pad_sequences(seq, maxlen=length, padding='post')
    return seq

Model Building

We will now split the data into train and test set for model training and evaluation, respectively.

from sklearn.model_selection import train_test_split

# split data into train and test set
train, test = train_test_split(deu_eng, test_size=0.2, random_state=12)

It's time to encode the sentences.

We will encode German sentences as the input sequences and English sentences as the target sequences.

This has to be done for both the train and test datasets.

# prepare training data
trainX = encode_sequences(deu_tokenizer, deu_length, train[:, 1])
trainY = encode_sequences(eng_tokenizer, eng_length, train[:, 0])

# prepare validation data
testX = encode_sequences(deu_tokenizer, deu_length, test[:, 1])
testY = encode_sequences(eng_tokenizer, eng_length, test[:, 0])

Now comes the exciting part. We'll start off by defining our Seq2Seq model architecture:

For the encoder, we will use an embedding layer and an LSTM layer
For the decoder, we will use another LSTM layer followed by a dense layer

Model Architecture

# build NMT model
def define_model(in_vocab, out_vocab, in_timesteps, out_timesteps, units):
    model = Sequential()
    model.add(Embedding(in_vocab, units, input_length=in_timesteps, mask_zero=True))
    model.add(LSTM(units))
    model.add(RepeatVector(out_timesteps))
    model.add(LSTM(units, return_sequences=True))
    model.add(Dense(out_vocab, activation='softmax'))
    return model

We are using the RMSprop optimizer in this model as it's usually a good choice when working with recurrent neural networks.

# model compilation
model = define_model(deu_vocab_size, eng_vocab_size, deu_length, eng_length, 512)
rms = optimizers.RMSprop(lr=0.001)
model.compile(optimizer=rms, loss='sparse_categorical_crossentropy')

Please note that we have used 'sparse_categorical_crossentropy' as the loss function.

This is because the function allows us to use the target sequence as is, instead of the one-hot encoded format.

One-hot encoding the target sequences using such a huge vocabulary might consume our system’s entire memory.
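As a quick illustration (the word ids below are hypothetical, not taken from our dataset), sparse targets are just the integer word indices, whereas the plain categorical_crossentropy loss would need a one-hot matrix with one column per vocabulary word at every timestep:

import numpy as np
from keras.utils import to_categorical

# hypothetical target sequence of word ids for a single sentence
y_sparse = np.array([4, 17, 3, 0, 0, 0, 0, 0])        # shape: (8,)

# what the one-hot alternative would look like for our ~6,453-word English vocabulary
y_onehot = to_categorical(y_sparse, num_classes=6453)  # shape: (8, 6453)

print(y_sparse.shape, y_onehot.shape)                  # (8,) vs (8, 6453)

Multiplied across tens of thousands of training sentences, the one-hot version quickly becomes prohibitively large, which is why sparse_categorical_crossentropy is the practical choice here.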

We are all set to start training our model. We will train it for 30 epochs with a batch size of 512 and a validation split of 20%.

80% of the data will be used for training the model and the rest for evaluating it.

You may change and play around with these hyperparameters.

We will also use the ModelCheckpoint() function to save the model with the lowest validation loss.

I personally prefer this method over early stopping.
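For reference, the early-stopping alternative mentioned above would look roughly like this (a sketch only; we do not use it in this tutorial, and the patience value is an arbitrary choice):

from keras.callbacks import EarlyStopping

# stop training once the validation loss has not improved for 5 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=5, mode='min', verbose=1)
# it would then be passed to model.fit() via callbacks=[early_stop] instead of (or alongside) the checkpoint below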

filename = 'model.h1.24_jan_19'
checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1, save_best_only=True, mode='min')

# train model
history = model.fit(trainX, trainY.reshape(trainY.shape[0], trainY.shape[1], 1),
                    epochs=30, batch_size=512,
                    validation_split=0.2,
                    callbacks=[checkpoint], verbose=1)

Let's compare the training loss and the validation loss.

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['train', 'validation'])
plt.show()

As you can see in the above plot, the validation loss stopped decreasing after 20 epochs.

Finally, we can load the saved model and make predictions on the unseen data – testX.

model = load_model('model.h1.24_jan_19')
preds = model.predict_classes(testX.reshape((testX.shape[0], testX.shape[1])))

These predictions are sequences of integers.

We need to convert these integers to their corresponding words.

Let's define a function to do this:

def get_word(n, tokenizer):
    for word, index in tokenizer.word_index.items():
        if index == n:
            return word
    return None

Convert predictions into text (English):

preds_text = []
for i in preds:
    temp = []
    for j in range(len(i)):
        t = get_word(i[j], eng_tokenizer)
        if j > 0:
            if (t == get_word(i[j-1], eng_tokenizer)) or (t == None):
                temp.append('')
            else:
                temp.append(t)
        else:
            if t == None:
                temp.append('')
            else:
                temp.append(t)

    preds_text.append(' '.join(temp))

Let's put the original English sentences in the test dataset and the predicted sentences in a dataframe:

pred_df = pd.DataFrame({'actual': test[:,0], 'predicted': preds_text})

We can randomly print some actual vs predicted instances to see how our model performs:

# print 15 rows randomly
pred_df.sample(15)

Our Seq2Seq model does a decent job.

But there are several instances where it misses out on understanding the key words.

For example, it translates “im tired of boston” to “im am boston”.

These are the challenges you will face on a regular basis in NLP.

But these aren’t immovable obstacles.

We can mitigate such challenges by using more training data and building a better (or more complex) model.
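As one possible direction (a sketch only, with illustrative hyperparameters and no claim that this is the best architecture), we could make the encoder bidirectional and stack an extra LSTM layer in the decoder:

from keras.models import Sequential
from keras.layers import Dense, LSTM, Embedding, RepeatVector, Bidirectional

# a sketch of a slightly more complex variant of define_model()
def define_deeper_model(in_vocab, out_vocab, in_timesteps, out_timesteps, units):
    model = Sequential()
    model.add(Embedding(in_vocab, units, input_length=in_timesteps, mask_zero=True))
    model.add(Bidirectional(LSTM(units)))           # encoder reads the source sentence in both directions
    model.add(RepeatVector(out_timesteps))
    model.add(LSTM(units, return_sequences=True))   # decoder layer 1
    model.add(LSTM(units, return_sequences=True))   # decoder layer 2
    model.add(Dense(out_vocab, activation='softmax'))
    return model

It would be compiled and trained exactly like the model above, which makes it easy to compare the two.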

End Notes

Even with a very simple Seq2Seq model, the results are pretty encouraging.

We can improve on this performance easily by using a more sophisticated encoder-decoder model on a larger dataset.

Another experiment I can think of is trying out the seq2seq approach on a dataset containing longer sentences.

The more you experiment, the more you’ll learn about this vast and complex space.

If you have any feedback on this article or have any doubts/questions, kindly share them in the comments section below.
