To provide haptic feedback to a system, users have long depended on external devices such as buttons, dials, styluses, and touch screens. The advent of machine learning, together with its integration with computer vision, now enables users to provide input and feedback far more efficiently. A machine learning model consists of an algorithm that draws meaningful correlations from data without being tightly coupled to a specific set of rules. It is crucial to explain both the subtle nuances of the network and the use case we are trying to solve. The central question, however, is whether we can eliminate the external haptic system altogether and replace it with something that feels natural and inherent to the user.
To connect the dots, we will walk through the development of applications specifically aimed at localizing and recognizing human features, which can then, in turn, be used to provide haptic feedback to the system; a brief code sketch of the core idea follows.
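To make this concrete, here is a minimal sketch of one way such localization could work: tracking a brightly colored marker held by the user with OpenCV and emitting its centroid as a stream of input coordinates, which a drawing application could then consume. The HSV color bounds, the choice of a blue marker, and the OpenCV 4 API are assumptions for illustration, not the session's actual pipeline.

```python
# A minimal sketch: localize a colored marker in the webcam feed and
# report its centroid. Assumes OpenCV 4 and a blue marker; the HSV
# bounds below are illustrative and need tuning for real lighting.
import cv2
import numpy as np

LOWER_HSV = np.array([100, 150, 50])   # assumed lower bound for a blue marker
UPPER_HSV = np.array([130, 255, 255])  # assumed upper bound

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Track the largest matching region and mark its centroid --
        # this (x, y) stream is what a 'drawing' application would consume.
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 8, (0, 255, 0), -1)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```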
These applications will range from recognizing digits and letters that the user 'draws' at runtime, to building a state-of-the-art facial recognition system, to predicting hand emojis and classifying hand doodles in the spirit of Google's 'Quick, Draw!' project.
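For a flavor of the recognition side, here is a minimal sketch of a convolutional classifier for the digit-recognition case. It assumes TensorFlow/Keras and uses the built-in MNIST dataset as a stand-in for digits the user draws at runtime; the architecture and hyperparameters are illustrative, not the session's final model.

```python
# A minimal sketch of a digit classifier, assuming TensorFlow/Keras.
# MNIST stands in here for digits 'drawn' by the user at runtime.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # scale to [0, 1], add channel axis
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one class per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```

The same pattern generalizes to letters or doodle classes by swapping the dataset and the size of the output layer.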
In this hack session, we will discuss the development of such applications. We will start by formulating and motivating a strong problem statement, followed by a thorough literature review. With that groundwork in place, we will cover data gathering, then algorithm evaluation and future scope.
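As a small taste of the evaluation step, the sketch below compares two candidate models with cross-validation. It assumes scikit-learn and its small bundled digits dataset; the candidate models are illustrative choices, not the ones the session will settle on.

```python
# A minimal sketch of algorithm evaluation, assuming scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
candidates = {
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "support vector machine": SVC(kernel="rbf"),
}
for name, model in candidates.items():
    # 5-fold cross-validation gives a more honest accuracy estimate
    # than a single train/test split.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```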