Here are 8 Powerful Sessions to Learn the Latest Computer Vision Techniques

Sneha Jain Last Updated : 09 Jan, 2020
9 min read

Do you want to build your own smart city?

Picture it – self-driving cars cruising around, traffic lights optimised to keep traffic flowing smoothly, everything available at your fingertips. If this is the future you dream of, then you’ve come to the right place.

“If We Want Machines to Think, We Need to Teach Them to See.” – Fei-Fei Li

Now, I want you to take five seconds (exactly five) and look around you. How many objects did you notice? We have a remarkably good sense of observation, but it’s impossible to notice and remember everything.

Now take your time and look around again. I’m sure you’ll find something you missed in the initial glance. It happens – we’re human! But that’s where machines have become incredibly powerful tools thanks to advancements in computer vision.

The beauty of training our machines is that they notice even the most granular details – and they retain them for as long as we want them to.

Think about it – from face detection applications at airports to your local store’s barcode scanner, computer vision use cases are all around us. Your smartphone is, of course, the most relatable example – how does it unlock when it sees your face? Face detection using computer vision!

Honestly, the use cases of computer vision are limitless. It is revolutionising sectors from agriculture to banking, from hospitality to security, and much more. In short, there is a lot of demand for computer vision experts – are you game to step up and fill the gap?

We’re thrilled to offer you the chance to learn the latest computer vision libraries, frameworks and developments from leading data scientists and AI experts at DataHack Summit 2019! Want to learn how to build your own image tagging system? Or how to create and deploy your own yoga trainer? Or how about morphing images using the popular GAN models?

Well – what are you waiting for? Tickets are almost sold out so

RESERVE YOUR SEAT HERE!

Let’s take a spin around the various computer vision topics that’ll be covered at DataHack Summit 2019.

 

Hack Sessions and Power Talks on Computer Vision at DataHack Summit 2019

  • Morphing images using Deep Generative Models (GANs)
  • Image ATM (Automatic Tagging Machine) – Image Classification for Everyone
  • Deep Learning for Aesthetics: Training a Machine to See What’s Beautiful
  • Creating and Deploying a Pocket Yoga Trainer using Deep Learning
  • Content-Based Recommender System using Transfer Learning
  • Generating Synthetic Images from Textual Description using GANs
  • Haptic Learning – Inferring Anatomical Features using Deep Networks
  • Feature Engineering for Image Data

Hack sessions are one-hour hands-on coding sessions on the latest frameworks, architectures and libraries in machine learning, deep learning, reinforcement learning, NLP, and other domains.

 

Morphing Images using Deep Generative Models (GANs) by Xander Steenbrugge

Generative adversarial networks (GANs) are easily the most loved technique in the computer vision space. They really bring out a data scientist’s creative side!

GANs have seen amazing progress ever since Ian Goodfellow introduced the concept in 2014. There have been several iterations since, including BigGAN and StyleGAN. We are at a point where humans are often unable to tell images generated by GANs apart from real ones.

But what do we do with these models? It seems like you can only use them to sample random images, right? Well, not entirely. It turns out that Deep Generative models learn a surprising amount of structure about the dataset they are trained on.

Our rockstar speaker, Xander Steenbrugge, will be taking a hands-on hack session on this topic at DataHack Summit 2019. Xander will explain how you can leverage this structure to deliberately manipulate image attributes by adjusting image representations in the latent space of a GAN.

This hack session will use GPU-powered Google Colab notebooks so you can reproduce all the results for yourself!
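Here’s a minimal sketch of the two ideas Xander will build on – morphing between two images by interpolating their latent codes, and editing a single attribute by moving along a direction in latent space. The toy generator below only stands in for a real pretrained model such as StyleGAN, and the “smile” direction is random for illustration; in practice it would be estimated from the latent codes of labelled images:

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained GAN generator (the session would use something
# like StyleGAN); it maps a 128-d latent vector to a 3x64x64 "image".
LATENT_DIM = 128
generator = nn.Sequential(nn.Linear(LATENT_DIM, 3 * 64 * 64), nn.Tanh())

def generate(z: torch.Tensor) -> torch.Tensor:
    """Decode latent codes into image tensors of shape (N, 3, 64, 64)."""
    with torch.no_grad():
        return generator(z).view(-1, 3, 64, 64)

# 1) Morphing: linearly interpolate between two latent codes and decode each
#    intermediate point to get a smooth transition between two images.
z_a, z_b = torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM)
morph_frames = [generate((1 - t) * z_a + t * z_b) for t in torch.linspace(0, 1, steps=8)]

# 2) Attribute editing: shift a latent code along a direction associated with an
#    attribute (e.g. "smile"). Random here purely for illustration; in practice
#    the direction is estimated from latent codes of labelled images.
smile_direction = torch.randn(1, LATENT_DIM)
edited_image = generate(z_a + 2.0 * smile_direction)

print(len(morph_frames), edited_image.shape)
```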

Here’s Xander elaborating on what you can expect to learn from this hack session:

I recommend checking out the two guides below if you are new to GANs:

 

Image ATM (Automatic Tagging Machine) – Image Classification for Everyone by Dat Tran

Labelling our data is one of the most time-consuming and mind-numbing tasks a data scientist can do. Anyone who has worked with unlabelled images will understand the pain. So is there a way around this?

There sure is – you can automate the entire labelling process using deep learning! And who better to learn it from than the person who led the entire project?

Dat Tran, Head of AI at Axel Springer Ideas Engineering, will be taking a hands-on hack session on “Image ATM (Automatic Tagging Machine) – Image Classification for Everyone”.

With the help of transfer learning, Image ATM enables users to train a deep learning model without any prior knowledge or experience in machine learning. All you need is data and a spare couple of minutes!
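Image ATM has its own pipeline, but the underlying transfer-learning recipe it relies on generally looks like the short torchvision sketch below – freeze an ImageNet-pretrained backbone and train only a small classification head on top. The data folder, class count and weight string are placeholder assumptions (torchvision 0.13+):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Load an ImageNet-pretrained backbone and freeze it: only the new head trains.
backbone = models.resnet18(weights="IMAGENET1K_V1")  # assumes torchvision >= 0.13
for param in backbone.parameters():
    param.requires_grad = False

num_classes = 5  # hypothetical number of tags; adjust to your own labels
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # trainable head

# Hypothetical folder layout: data/train/<class_name>/*.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

backbone.train()
for images, labels in loader:  # a single pass is enough for the sketch
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
```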

In this hack session, he will discuss the state-of-the-art technologies available for image classification and present Image ATM in the context of these technologies.

It’s one of the most fascinating hack sessions on computer vision – I can’t wait to watch Dat unveil the code.

Here’s Dat with a quick explainer about what you can expect from this hack session:


I would recommend going through the below article before you join Dat for his session at DataHack Summit 2019:

 

Deep Learning for Aesthetics: Training a Machine to See What’s Beautiful by Dat Tran


There’s more from Dat! We know how much our community is looking forward to hearing from him, so we’ve pencilled him in for another session. And this one is as intriguing as the Image ATM concept above.

Have you ever reserved a hotel room through a price comparison website? Did you know there are hundreds of images to choose from before a website puts a hotel up for listing? We see the polished photos, but there’s a lot of effort that goes on behind the scenes.

Imagine the pain of manually selecting images for each hotel listing. It’s a crazy task! But as you might have guessed already – deep learning takes away this pain in spectacular fashion.

In this Power Talk, Dat will present how his team solved this difficult problem. In particular, he will share his team’s training approaches and the peculiarities of the models. He will also show the “little tricks” that were key to solving this problem.

Here’s Dat again expanding on the key takeaways from this talk:

I recommend the below tutorial if you are new to Neural Networks:

 

Creating and Deploying a Pocket Yoga Trainer using Deep Learning by Mohsin Hasan and Apurva Gupta

This is one of my personal favourites. And I’m sure a lot of you will be able to relate to this as well, especially if you’ve set yourself fitness goals and never done anything about them. 🙂

It is quite difficult to keep to a disciplined schedule when our weekdays are filled with work. Yes, you can work out at home but then are you doing it correctly? Is it even helping you achieve your objective?

Well – this intriguing hack session by Mohsin Hasan and Apurva Gupta might be the antidote to your problems! They will showcase how to build a model that teaches exercises with continuous visual feedback and keeps you engaged.

And they’ll be doing a live demo of their application as well!
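Mohsin and Apurva haven’t published their implementation, but the core feedback loop in this kind of application is easy to picture: estimate body keypoints frame by frame, compute joint angles, and compare them against a reference pose. Here’s a bare-bones NumPy sketch of that comparison step – the keypoints would come from a pose-estimation model in a real app, and the target angle and tolerance are made-up numbers:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by points a-b-c, e.g. shoulder-elbow-wrist."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def pose_feedback(keypoints, target_angle=180.0, tolerance=15.0):
    """Compare the elbow angle against a reference pose and return a hint.

    `keypoints` is a dict of (x, y) coordinates that a pose-estimation model
    would produce per frame; the straight-arm target is purely illustrative.
    """
    angle = joint_angle(keypoints["shoulder"], keypoints["elbow"], keypoints["wrist"])
    if abs(angle - target_angle) <= tolerance:
        return f"Good form ({angle:.0f} deg)"
    return f"Straighten your arm ({angle:.0f} deg vs target {target_angle:.0f} deg)"

# Hand-crafted keypoints standing in for model output on one video frame.
frame_keypoints = {"shoulder": (0.30, 0.40), "elbow": (0.45, 0.42), "wrist": (0.60, 0.50)}
print(pose_feedback(frame_keypoints))
```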

Here are the key takeaways explained by both our marvelous speakers:

This is why you can’t miss being at DataHack Summit 2019!

 

Content-Based Recommender System using Transfer Learning by Sitaram Tadepalli

Recommendation engines are all the rage in the industry right now. Almost every B2C organisation is leaning heavily on recommendation engines to prop up their bottom line and drive them into a digital future.

All of us have interacted with these recommendation engines at some point. Amazon, Flipkart, Netflix, Hotstar, etc. – all of these platforms have recommendation engines at the heart of their business strategy.

Whether you’re a data scientist, analyst, project manager or CxO – whatever level you’re at, you need to know how to harness the power of recommendation engines.

In this unique hack session by Sitaram Tadepalli, an experienced Data Scientist at TCS, you will learn how to build content-based recommender systems using image data.
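Sitaram will walk through his exact approach in the session, but the general recipe for a content-based image recommender is: embed every catalogue image with a pretrained CNN, then recommend the nearest neighbours of a query image by cosine similarity. A minimal sketch along those lines (the catalogue folder and query file are hypothetical, and resnet18 is just one reasonable backbone choice):

```python
from pathlib import Path

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Pretrained CNN with the classification head removed -> 512-d image embeddings.
backbone = models.resnet18(weights="IMAGENET1K_V1")  # assumes torchvision >= 0.13
backbone.fc = nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(path: Path) -> torch.Tensor:
    """Embed a single image into the CNN feature space."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(image).squeeze(0)

# Hypothetical folder of catalogue/product images.
catalogue = sorted(Path("catalogue_images").glob("*.jpg"))
embeddings = torch.stack([embed(p) for p in catalogue])

def recommend(query_path: Path, k: int = 5):
    """Return the k catalogue items most similar to the query image."""
    query = embed(query_path).unsqueeze(0)
    sims = nn.functional.cosine_similarity(query, embeddings)
    return [catalogue[i].name for i in sims.topk(k).indices]

print(recommend(Path("query.jpg")))
```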

Sitaram elaborates in the below video on what he plans to cover in this hack session:

Here are a few resources I recommend going through to brush up your Recommendation Engine skills:

 

Generating Synthetic Images from Textual Description using GANs by Shibsankar Das

Here’s another fascinating hack session on GANs!

Generating captions for an image is a useful application of computer vision. But how about the other way round? What if you could build a computer vision model that generates images from a small string of text you provide?

It’s entirely possible thanks to GANs!

Synthetic image generation is gaining quite a lot of popularity in the medical field, where synthetic images can improve diagnostic reliability by enabling data augmentation for computer-assisted diagnosis. And it has plenty of possibilities across other domains as well.

In this hack session by Shibsankar Das, you will discover how GANs can be leveraged to generate a synthetic image from a textual description of that image. The session will include a tutorial on how to build a text-to-image model from scratch.
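To give you a flavour of the core trick before the session: a text-to-image GAN conditions its generator on a text embedding alongside the usual noise vector, so the description steers what gets generated. The bare-bones sketch below shows only that conditioning step – the dimensions and architecture are arbitrary placeholders, and a real model would plug in a trained text encoder plus a discriminator and a full adversarial training loop:

```python
import torch
import torch.nn as nn

NOISE_DIM, TEXT_DIM = 100, 256

class ConditionalGenerator(nn.Module):
    """Generator conditioned on a text embedding: image = G(noise, text)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + TEXT_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64),
            nn.Tanh(),
        )

    def forward(self, noise, text_embedding):
        # Concatenating the text embedding with the noise vector makes the
        # sampled image depend on both randomness and the description.
        return self.net(torch.cat([noise, text_embedding], dim=1)).view(-1, 3, 64, 64)

generator = ConditionalGenerator()

# In a real pipeline the embedding comes from a trained text encoder (e.g. an
# RNN or transformer over the caption); here it is random for illustration.
text_embedding = torch.randn(1, TEXT_DIM)  # stands in for "a small red bird ..."
noise = torch.randn(1, NOISE_DIM)
fake_image = generator(noise, text_embedding)
print(fake_image.shape)  # torch.Size([1, 3, 64, 64])
```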

Key Takeaways from this Hack Session:

  1. Gain an end-to-end understanding of GANs
  2. Implement GANs from scratch
  3. Understand how to use adversarial training to tackle domain-gap alignment
  4. Formulate business use cases using adversarial training

I would suggest you go through this article to gain a deeper understanding of GANs before attending the session:

 

Haptic Learning – Inferring Anatomical Features using Deep Networks by Akshay Bahadur

To provide feedback to a system, users have traditionally depended on external devices such as buttons, dials, styluses or even touch screens. The advent of machine learning, combined with computer vision, now enables users to provide inputs and feedback far more naturally and efficiently.

A machine learning model is essentially an algorithm that draws meaningful correlations from data without being tightly coupled to a specific set of rules. It’s crucial to understand the subtle nuances of the network and the use case we are trying to solve.

The main question, however, is whether we can eliminate the external haptic system altogether and use something that feels natural and inherent to the user.

In this hack session, Akshay Bahadur will talk about developing applications specifically aimed at localizing and recognizing human features, which can then be used to provide feedback to the system.

These applications range from recognizing digits and alphabets that the user can ‘draw’ at runtime, to state-of-the-art facial recognition systems, to predicting hand emojis and recognizing hand doodles along the lines of Google’s ‘Quick, Draw!’ project, and more.
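To make the “recognize digits the user draws” piece concrete, a small CNN of the kind below could score each captured drawing – note this is an illustrative architecture, not Akshay’s actual model, and the random tensor stands in for a cropped 28x28 drawing:

```python
import torch
import torch.nn as nn

class DigitNet(nn.Module):
    """Small CNN that classifies a 28x28 grayscale drawing into digits 0-9."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DigitNet()

# In the application, the drawing canvas would be cropped and resized to 28x28
# each frame; a random tensor stands in for that crop here.
drawing = torch.rand(1, 1, 28, 28)
prediction = model(drawing).argmax(dim=1)
print(prediction.item())
```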

Key Takeaways from this Hack Session:

  1. Gain an understanding of how to build optimized, vision-based models that can take feedback from anatomical features
  2. Learn how to approach building such a computer vision model

 

Feature Engineering for Image Data by Aishwarya Singh and Pulkit Sharma

Feature engineering is a staple in every data scientist’s armoury. But we typically reach for it when working with tabular numerical data, right? How does it work when we need to build a model using images?

There’s a strong belief that when it comes to working with unstructured image data, deep learning models are the way forward. Deep learning techniques undoubtedly perform extremely well, but is that the only way to work with images?

Not really! And that’s where the fun begins.


Our very own data scientists Aishwarya Singh and Pulkit Sharma will be presenting a very code-oriented hack session on how you can engineer features for image data.

Key Takeaways from this Hack Session:

  1. Extract primary features from images, such as edge, HOG and SIFT features (see the sketch after this list)
  2. Extract image features using Convolutional Neural Networks (CNNs)
  3. Build an image classification model using machine learning
  4. Compare the performance of primary features and CNN features across machine learning models
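As a preview of the first takeaway, here’s how classical edge and HOG features can be pulled out of an image with scikit-image (the sample image is a scikit-image built-in; SIFT would typically come from OpenCV instead and is omitted here):

```python
from skimage import color, data, filters
from skimage.feature import hog

# Built-in sample image from scikit-image, converted to grayscale.
image = color.rgb2gray(data.astronaut())

# Edge features: a Sobel filter responds strongly at intensity boundaries.
edges = filters.sobel(image)

# HOG features: histograms of gradient orientations over small cells, a classic
# hand-engineered descriptor for shape and texture.
hog_features = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
)

# Either representation can be flattened into a feature vector and fed to a
# classical ML model (e.g. an SVM or random forest) for image classification.
print(edges.shape, hog_features.shape)
```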

 

End Notes

I can’t wait to see these amazing hack sessions and power talks at DataHack Summit 2019. The future is coming quicker than most people imagine – and this is the perfect time to get on board and learn how to program it yourself.

If you haven’t booked your seat yet, then here is a great chance for you to do it right away! Hurry, as there are only a few seats remaining for India’s Largest Conference on Applied Artificial Intelligence & Machine Learning.

RESERVE YOUR SEAT HERE!

I am looking forward to networking with you there!

