You can now Build your own 3D Digital Face Emoji using Deep Learning

Pranav Dar | Last Updated: 16 Mar, 2018
2 min read

Overview

  • The deep learning model builds a remarkably accurate 3D digital avatar of the face and hair
  • Built using an extremely deep neural network with over 50 layers
  • Over 40,000 images of various hairstyles were used to train the neural network
  • Check out the video below and the link to the research paper

 

Introduction

Have you ever wondered how the animojis on the iPhone X work? Don’t worry, deep learning has the answer again. How about a technique that doesn’t require any specific hardware, doesn’t need a video of you (just a picture), and generates a 3D digital avatar with remarkable accuracy that can be animated in real time?

This is not some far-off futuristic technology. This is now.

A group of researchers has released a research paper demonstrating how they used deep learning to build a 3D digital avatar of a person's head and face from a single image.

How does this work?

The input image is first segmented into a face region and a hair region. The hair region is then run through a neural network that extracts attributes such as length, curliness, spikiness, and baldness. This network is extremely deep, with over 50 layers, and was trained on over 40,000 images of various hairstyles.
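To make this concrete, here is a minimal sketch of what such a hair-attribute network could look like: a ResNet-50 backbone (a 50-plus-layer residual network) with a multi-label head. The attribute names, input size, and head design are illustrative assumptions, not the exact architecture from the paper.

    # Hedged sketch: a 50-plus-layer CNN (ResNet-50 backbone) repurposed as a
    # multi-label hair-attribute classifier. Attribute names and the head
    # design are illustrative assumptions, not the paper's exact model.
    import torch
    import torch.nn as nn
    from torchvision import models

    HAIR_ATTRIBUTES = ["length", "curliness", "spikiness", "baldness"]  # illustrative

    class HairAttributeNet(nn.Module):
        def __init__(self, num_attributes: int = len(HAIR_ATTRIBUTES)):
            super().__init__()
            # ResNet-50 provides the "over 50 layers" depth mentioned above.
            self.backbone = models.resnet50(weights=None)
            in_features = self.backbone.fc.in_features
            # Swap the ImageNet classification head for a per-attribute head.
            self.backbone.fc = nn.Linear(in_features, num_attributes)

        def forward(self, hair_crop: torch.Tensor) -> torch.Tensor:
            # hair_crop: (batch, 3, 224, 224) crop of the segmented hair region.
            # Sigmoid turns logits into per-attribute scores in [0, 1].
            return torch.sigmoid(self.backbone(hair_crop))

    model = HairAttributeNet()
    with torch.no_grad():
        scores = model(torch.randn(1, 3, 224, 224))
    print(dict(zip(HAIR_ATTRIBUTES, scores[0].tolist())))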

The framework is also robust to lighting: photos of the same person taken under different lighting conditions produce extremely similar avatars. The same holds for differing facial expressions.
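One common way such robustness is encouraged, purely an assumption here since the paper's actual approach may differ, is to randomize brightness and contrast during training so the network sees faces under many simulated lighting conditions:

    # Hedged sketch: lighting-style data augmentation as one possible route
    # to illumination robustness (not confirmed to be the paper's method).
    from torchvision import transforms

    train_transform = transforms.Compose([
        transforms.Resize((224, 224)),
        # Randomly perturb brightness, contrast, and saturation per sample.
        transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2),
        transforms.ToTensor(),
    ])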

The developers claim that their digitized models are fully rigged with intuitive animation controls, like blend shapes and joint-based skeletons, and can be readily integrated into existing game engines.
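For readers unfamiliar with blend shapes: a blend shape is a per-vertex offset from a neutral mesh, and animation weights mix those offsets to deform the face. The toy sketch below uses made-up vertex values purely for illustration:

    # Hedged sketch: how blend-shape controls animate a rigged face mesh.
    # Mesh size, shape names, and offsets are toy values for illustration.
    import numpy as np

    neutral = np.zeros((4, 3))  # toy mesh: 4 vertices, xyz coordinates
    blend_shapes = {
        "smile":      np.tile([0.0, 0.1, 0.0], (4, 1)),   # per-vertex offsets
        "brow_raise": np.tile([0.0, 0.0, 0.05], (4, 1)),
    }

    def apply_blend_shapes(weights):
        """Return the deformed mesh for a dict of {shape_name: weight in [0, 1]}."""
        mesh = neutral.copy()
        for name, w in weights.items():
            mesh += w * blend_shapes[name]
        return mesh

    animated = apply_blend_shapes({"smile": 0.8, "brow_raise": 0.3})
    print(animated)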

You can view the official research paper here and check out their video below:

To see the technology in action, you can also download the Pinscreen application on iOS devices (it is not currently available on Android).

 

Our take on this

Their model offers a practical end-to-end solution for avatar personalization in gaming and VR applications. I have previously worked with a similar application in which avatars were hosted in virtual classrooms so people could connect from remote locations. With the level of personalization this deep learning model offers, it could be a game changer in multiple industries.

As one commenter on the video put it, it is amazing how machine learning is making specialized hardware obsolete. From 3D face scanners to dual camera lenses (as we covered in the Pixel 2 article), dedicated hardware is no longer a necessary component when algorithms are as intelligent as this one.

I strongly recommend reading the research paper to understand how they built the deep neural network, and then trying it out yourself.

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

Also, go ahead and participate in our Hackathons, including the DataHack Premier League and Lord of the Machines!

 

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.

