Google Champions NLP by Using Neural Networks to Help You Write Emails

Pranav Dar · Last Updated: 17 May, 2018
3 min read

Overview

  • Google has revealed the technology behind its Smart Compose feature for Gmail – a combination of a bag-of-words model and an RNN language model (RNN-LM)
  • The final model was trained on billions of text examples
  • The developers used TPUs to add computational power and, consequently, cut prediction latency

 

Introduction

A future where machines book appointments, write emails, and do other tasks for us is no longer just talk. That future is here, and Google is leading the way with a slew of new products and services backed by the awesome power of machine learning.

Google debuted its latest NLP development – Smart Compose – at last week’s Google I/O conference. It’s a Gmail feature that uses machine learning to predict the next words you are going to write and offers sentence completion suggestions accordingly. The aim is to help users write emails faster so they can focus on their daily work rather than getting stuck in the black hole of their inbox.

To develop such a technology, Google’s AI team faced three key challenges:

  • Latency: A critical challenge. The model has to predict the next words instantly, so that the user does not notice any delay or lag
  • Scale: Gmail is used by billions of users, so the model has to be able to customise its suggestions across very different contexts
  • Fairness and Privacy: Bias has proven to be a headache for many AI systems. When you’re building a model for billions of users, bias cannot be allowed to creep in, otherwise the chances of failure and controversy are massive

Typical NLP techniques like n-grams, Bag of Words (BoW), and recurrent neural networks (RNNs) were considered for learning to predict the next word the user might type, based on the words typed so far. But on their own these do not capture the context of the message, and the chance of getting the suggestion wrong is pretty high. So the team also fed in the subject of the email as well as the previous email in the thread, to give the model the full context of the message.
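As a toy illustration of that limitation, here is a minimal Python sketch (not Google’s code) of a bigram-style next-word predictor: with no knowledge of the subject or the email thread, the suggestion can depend only on the words already typed. The tiny corpus and the helper function are made up for illustration.

```python
# A minimal sketch of next-word prediction from bigram counts -- the kind of
# simple baseline that ignores everything except the previous word.
from collections import Counter, defaultdict

corpus = [
    "thanks for the update",
    "thanks for the quick reply",
    "see you at the meeting",
]

# Count how often each word follows the previous one.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(prev_word):
    """Return the most frequent word seen after prev_word, if any."""
    counts = bigrams.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # e.g. "update" -- no notion of subject or email thread
```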

The team also tried a sequence-to-sequence model; it predicted well but failed to meet their strict latency constraints. So they combined two models – the bag of words and the RNN-LM – which was a significant improvement over the sequence-to-sequence model in terms of prediction speed. [Figure: structure of the final RNN model]
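As a rough sketch of what such a combination could look like, the toy Keras model below averages the embeddings of the subject and previous email (the bag-of-words part) and feeds that context vector into an LSTM language model at every step. The layer sizes, the fusion by concatenation, and the Keras framing are illustrative assumptions, not Google’s production architecture.

```python
import tensorflow as tf

class BowRnnLM(tf.keras.Model):
    """Toy hybrid: bag-of-words context encoder feeding an RNN language model."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = tf.keras.layers.Embedding(vocab_size, embed_dim)
        self.rnn = tf.keras.layers.LSTM(hidden_dim, return_sequences=True)
        self.out = tf.keras.layers.Dense(vocab_size)

    def call(self, inputs):
        context_ids, body_ids = inputs
        # Bag of words: average the embeddings of the subject + previous email tokens
        context = tf.reduce_mean(self.embed(context_ids), axis=1)   # (batch, embed)
        body = self.embed(body_ids)                                 # (batch, steps, embed)
        # Hand the context vector to the RNN at every decoding step
        context = tf.repeat(context[:, None, :], tf.shape(body)[1], axis=1)
        hidden = self.rnn(tf.concat([body, context], axis=-1))
        return self.out(hidden)                                     # next-word logits

model = BowRnnLM(vocab_size=50000)
logits = model([tf.random.uniform((2, 20), maxval=50000, dtype=tf.int32),   # subject + previous email
                tf.random.uniform((2, 10), maxval=50000, dtype=tf.int32)])  # body typed so far
print(logits.shape)  # (2, 10, 50000)
```

The appeal of a combination like this is that the context needs only a cheap averaging step, while a single lightweight RNN does the per-keystroke work – which lines up with the speed improvement over the full sequence-to-sequence model described above.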

Once the final modelling approach was decided, the team had to tune the requisite hyperparameters and do the most crucial part of any such system – train the model. Google has a distinct advantage over most other companies here: they have enormous amounts of data to experiment with and train their models on, especially when it comes to NLP. They trained on billions of examples and, to accelerate the training phase, turned to a TPUv2 Pod, which finished the job in less than a day!
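The post does not describe Google’s training pipeline beyond the hardware, but for readers curious about the mechanics, here is a hedged sketch of how a Keras model can be trained on a TPU using TensorFlow’s TPUStrategy. The model, the dataset name, and the TPU address are placeholders; this is the generic public pattern, not Google’s internal setup.

```python
import tensorflow as tf

# Connect to the TPU and set up a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")  # TPU address is environment-specific
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model built inside the scope is replicated across the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(50000, 128),
        tf.keras.layers.LSTM(256),
        tf.keras.layers.Dense(50000),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# model.fit(train_dataset, epochs=1)  # train_dataset: a batched tf.data.Dataset of (tokens, next_token)
```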

They had initially tested on a standard CPU, but it gave them an average latency of hundreds of milliseconds, which they deemed too slow and costly for this task. By moving the majority of the computation to TPUs, the average latency came down to tens of milliseconds.

Next time you use Smart Compose, you might appreciate the work that went into those suggested sequences of words!

 

Our take on this

I really appreciate that Google reveals the technology behind its products. It gives data scientists, and others aspiring to get into the field, a good idea of how a leading company structures a problem statement and goes about building a solution for it.

Time will tell how well Google has handled the bias problem. Bias has crept into a lot of technologies recently, and Google cannot afford to let it ruin a perfectly good product. Their advantage lies in the amount of data they have collected over the years, which should definitely help in recognizing and eliminating bias.

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
