Building Machine Learning Pipelines and AI in Retail – A Powerful Interview with Rossella Blatt Vital

Sneha Jain | Last Updated: 14 Jan, 2020
19 min read

Overview

  • How do machine learning pipelines work? What’s the role of AI in retail? How important is ethics in this field?
  • We are thrilled to present this insightful and powerful interview with Rossella Blatt Vital, a thought leader and machine learning expert
  • This machine learning interview is a must-read for anyone wanting to make a career in the field


Introduction

“I often compare the job of a data scientist to Sherlock Holmes’s: there isn’t a single project that is the same and each one requires a huge amount of investigation, troubleshooting and thinking outside the box.” – Rossella Blatt Vital

I love this analogy! There is no single definition or job description that captures the role of a data scientist. It varies from project to project, even within the same organization. This is what a lot of aspiring data scientists miss in their learning journey.

I am thrilled to present my interview with a thought leader in the machine learning and artificial intelligence (AI) space – Rossella Blatt Vital! Rossella is the Director of Data Science at Trunk Club and brings a wealth of industry and teaching experience to this interview.

She has worked at the cutting edge of AI in retail and that’s one of the topics we covered in this interview. The rapid rise and adoption of AI systems is disrupting and transforming multiple industries, and retail is at the forefront of this revolution. It’s a topic I’m deeply connected with and wanted to bring that out for our community.

Additionally, Rossella has built AI teams from scratch and has worked in multiple machine learning projects in the industry. She shares her experience of the different projects she has worked on and the key skills she looks for when hiring for her machine learning team.

This is a highly insightful and powerful interview with a top machine learning thought leader. Rossella loves using analogies to explain her ideas (like the one above), so take it all in and enjoy learning!


Topics we covered in this Interview

  1. Rossella’s Journey in Machine Learning
  2. Building machine learning pipelines and Rossella’s industry experience
  3. The applications of AI in the Retail industry
  4. Career advice: The key skills Rossella looks for while hiring people in her machine learning team


Rossella Blatt Vital’s Journey in Machine Learning

Sneha Jain (SJ): Your initial days were spent working in the Arts field. We’ve seen most folks coming from a non-technical field struggle to get the hang of Artificial Intelligence. What got you interested in the AI space and how did you make the transition?

Rossella Blatt Vital (RBV): I enrolled in Arts & Literature because I enjoy the intellectual challenge of literature, art, and philosophy. Although I loved it and found it extremely stimulating, I felt I was not pushing myself outside of my comfort zone. I decided to follow my passion for numbers and to enroll in an engineering program at Politecnico di Milano, one of the most prestigious engineering universities in Europe.

“It was during my graduate studies that I discovered the field of artificial intelligence, which I fell in love with from the very first class. I realized I was just scratching the surface of this immensely rich and fascinating discipline, so I decided to continue to learn in that space by pursuing a Ph.D. focused on Artificial Intelligence.”

What intrigued me the most about AI was both the ambitious goal of emulating human intelligence and how many of its approaches were humbly inspired by nature. I was also deeply energized by the fact that AI can be applied to an enormous plethora of use cases, from medical to personalization, from business decision making to entertainment.

I had a feeling I would never get bored in the field of AI and I was absolutely right!


SJ: There is a lot of focus on academia right now, especially because there is a sudden surge in people wanting to learn Artificial Intelligence and Machine Learning. How do we bridge the gap between this and what the industry requires? This is quite a challenge for hiring organizations right now.

RBV: Academia has been at the heart of AI breakthroughs for over 70 years. It is exciting to see how the bridge between academia and industry has been consolidated over recent years. The AI academic space has always been very active in applying AI to the real world, but a few factors exponentially boosted these endeavors.

Among these, it is worth mentioning three:

  1. The internet enabled research results to travel much faster, boosting cross-pollination of ideas, innovation, and collaboration
  2. Data, which is a foundational pillar for machine learning, became more and more available, with research labs investing in building free data sources to accelerate research and compare results. Similarly, the open-source movement significantly contributed to research traveling faster and enabled researchers to build on top of others’ results
  3. More and more companies realized that AI had the potential to be a paramount source of value and to provide a competitive edge

Despite the momentum in academic research, the hype around AI in the industry, and an ever-growing number of university students interested in AI, many companies struggled to implement their AI transformation.

This was due in part to a low supply of seasoned leaders who were also AI experts. Another reason was that academia was not really preparing AI experts for the industry and, on the other side, companies did not know how to leverage the value of AI experts with an academic background.

The hype around AI was high, but not many organizations knew how to transform it into value in the short term. The two major challenges were:

  1. The fact that academic programs were not preparing students for deploying AI solutions into production (a skill very much needed in the industry to create tangible value with AI), and
  2. The difference between the research lifecycle in academia (highly tolerant to uncertainty) and the one in the industry (more rigorous in risk mitigation)

Today, things are different. There is an increasing number of graduate programs in AI/ML/Advanced Analytics that offer a curriculum preparing students for both academic and industry careers, giving exposure to emerging tools and programming languages used in the industry.

Furthermore, the number of companies incorporating greenfield research in their innovation strategy is growing, partly because they understood that it is the only way to thrive in the long run and partly because thought leadership (e.g., earning a name in the AI scientific community, contributing to relevant open-source code, etc.) has a positive impact on the AI flywheel (think talent attraction, brand equity, etc.).

Personally, teaching has never stopped for me. I am deeply fulfilled by helping talent grow, discover their full potential, and reach it. Talent development lies at the very core of my philosophy on how to build a high-performance team.


Building Machine Learning Pipelines and Rossella’s Industry Experience

SJ: I am intrigued by the first project you took up in the AI space. You built software from scratch for lung cancer diagnosis by analyzing the olfactory signal using an electronic nose. Tell us about your end-to-end journey with this project and the tools and techniques you used to complete it.

RBV: I chose to work on this project as my Master’s degree dissertation. I was looking for something at the intersection of machine learning for good and dogs (my passion since I was a little kid). I found a documentary on dogs being able to detect cancer on their owners’ bodies thanks to their powerful olfactory sense.

“I realized that if dogs could detect it, then there was a good chance a machine could be trained to recognize the olfactory blueprint of cancer as well. I started doing some research and discovered that there were a couple of research labs with similar research (although not necessarily on lung cancer diagnosis).”

I knew that my expertise in machine learning was not enough to build a solution to predict lung cancer: I also needed the data, that is, samples collected from patients, and a way to collect such data, that is, an electronic nose.

I embarked on a journey to build a partnership between my university (Politecnico di Milano), the major cancer center in Italy (Istituto Tumori di Milano) and Sacmi, a manufacturing company that had built an electronic nose for outdoor applications and material and food analysis.

Collecting Data for the Project

In a matter of a few weeks, the three organizations established a fruitful collaboration: SACMI adapted its electronic nose for the purpose of collecting breath samples from patients, while the Istituto Tumori di Milano integrated the data collection into an ongoing study protocol.

We managed to collect dozens of samples from both patients who had previously been diagnosed with different forms and stages of lung cancer and a control group.

“Establishing such a partnership was a tremendous learning opportunity for me. Similarly, being involved in the end-to-end machine learning lifecycle, including data collection, was a very enriching experience that I had not yet had exposure to during my years in school.”

To that end, I went through the traditional machine learning pipeline, from data collection and pre-processing through feature engineering and projection to model training and performance evaluation.

Since one of my goals was to learn as much as possible, I implemented a large variety of machine learning techniques at each stage.

“In particular, for feature projection, some of the most successful techniques were Principal Component Analysis (PCA), Fisher Linear Discriminant Analysis (LDA) and Non-Parametric Linear Discriminant Analysis (NPLDA). Classification was done by implementing several supervised pattern classification algorithms, based on different k-Nearest Neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant functions classifiers and on feed-forward Artificial Neural Networks (ANNs).”
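To make the kind of combination Rossella describes concrete, here is a minimal sketch of one such pipeline, assuming scikit-learn and a synthetic dataset standing in for the electronic-nose breath samples (her original work predates these libraries, so this is purely illustrative):

```python
# A minimal sketch, assuming scikit-learn: Fisher LDA projection
# followed by a k-NN classifier, evaluated with cross-validation.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the breath-sample feature matrix and labels.
X, y = make_classification(n_samples=200, n_features=8, n_informative=5,
                           random_state=0)

pipeline = Pipeline([
    # Project the features onto a single discriminant component
    # (with two classes, LDA yields at most one component).
    ("projection", LinearDiscriminantAnalysis(n_components=1)),
    # Classify in the projected space with k-Nearest Neighbors.
    ("classifier", KNeighborsClassifier(n_neighbors=5)),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```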

I soon became overwhelmed by all the possible combinations of data manipulation, learning algorithms, hyperparameters, etc., so I decided to build a genetic algorithm to identify the best combination of techniques for each step of the machine learning workflow.
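Here is a hedged sketch of what such a genetic-algorithm search over pipeline configurations might look like. The gene pool (projection method, number of components, and k for k-NN), population size, and selection scheme are illustrative assumptions, not the actual system she built:

```python
# A minimal genetic algorithm over pipeline configurations.
# Fitness = cross-validated accuracy. Illustrative only.
import random
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=8, n_informative=5,
                           random_state=0)

# Hypothetical search space: one gene per pipeline choice.
GENE_POOL = {
    "projection": ["pca", "lda", "none"],
    "n_components": [1, 2, 3],
    "k": [1, 3, 5, 7],
}

def random_genome():
    return {gene: random.choice(options) for gene, options in GENE_POOL.items()}

def build_pipeline(genome):
    steps = []
    if genome["projection"] == "pca":
        steps.append(("proj", PCA(n_components=genome["n_components"])))
    elif genome["projection"] == "lda":
        # Two classes -> at most one LDA component.
        steps.append(("proj", LinearDiscriminantAnalysis(n_components=1)))
    steps.append(("clf", KNeighborsClassifier(n_neighbors=genome["k"])))
    return Pipeline(steps)

def fitness(genome):
    return cross_val_score(build_pipeline(genome), X, y, cv=5).mean()

def crossover(a, b):
    # Child inherits each gene from one of the two parents.
    return {gene: random.choice([a[gene], b[gene]]) for gene in GENE_POOL}

def mutate(genome, rate=0.2):
    return {gene: (random.choice(GENE_POOL[gene]) if random.random() < rate
                   else value)
            for gene, value in genome.items()}

population = [random_genome() for _ in range(12)]
for generation in range(8):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]  # truncation selection: keep the fittest configs
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(8)]

best = max(population, key=fitness)
print("Best configuration:", best, "fitness:", round(fitness(best), 3))
```

The key idea is that the genome encodes one choice per pipeline step, and cross-validated accuracy serves as the fitness that selection, crossover, and mutation iteratively improve.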

Very aware of the “no free lunch theorem”, I was not expecting one pipeline to outshine the others, but I was interested in discovering the differences and advantages of the various approaches.

The best solution produced by the genetic algorithm was the projection of a subset of features onto a single component using Fisher Linear Discriminant Analysis, followed by classification based on the k-Nearest Neighbors method. “Best” was defined both in terms of model evaluation metrics (i.e., F1 score, accuracy, precision, and recall) and in terms of model interpretability.

“The observed results, all validated using cross-validation, were highly satisfactory, achieving an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%.”

Not only are these results significantly higher than those of other non-invasive techniques, but they highlight the huge potential of electronic noses to become noninvasive, small, low-cost, and fast preventive instruments for early lung cancer diagnosis (see papers 1, 2, 3, 4, shared at the end of this article).
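For readers who want to reproduce this style of evaluation, here is a minimal sketch of computing cross-validated accuracy, sensitivity, and specificity, assuming scikit-learn and synthetic binary labels (1 = cancer, 0 = control):

```python
# A hedged sketch of cross-validated sensitivity/specificity computation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
# Out-of-fold predictions so every sample is scored by a model
# that never saw it during training.
y_pred = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate: sick patients caught
specificity = tn / (tn + fp)   # true negative rate: healthy correctly cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f}")
```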


SJ: Data scientists find the task of building a project pipeline very complex. You have rich experience in this space with your Predictive Speller and Autonomous Wheelchair projects. Can you share your project path for machine learning projects, keeping these in mind?

RBV: No matter the project at hand, the project pipeline for machine learning that I follow is always very similar:

  1. Deeply understand the business problem: What are we trying to solve? How can the problem be framed in a machine learning context?
  2. Research the state of the art: What has already been done in this space? How has this problem been solved before? How have similar problems been tackled in ways we can adapt? What is the expected performance? Learn what data is available
  3. Design a draft of the MVP and plan to iterate on it by building successively better “champion models”: The first champion model will be the result of a vanilla pipeline. However, going through the end-to-end workflow will provide insight into the highest-value levers we can pull to improve the model. This phase consists of multiple steps, often not linear:
    • Get the data and pre-process it: This includes basic cleaning, taking care of outliers, missing values, grouping data, etc.
    • Feature engineering, feature selection, and dimensionality reduction: During this step, productive interaction with subject matter experts will go a long way
    • Model training: I always like to start with a simple model to build a baseline and to minimize the layers of complexity at first. Based on the results, we then tune the model and explore more complex approaches
    • Performance analysis: Depending on the validation protocol, it is important to perform a series of performance analyses beyond the traditional basic model evaluation metrics. The type of analysis will depend on the specific problem at hand, but some examples are performance as a function of data size (this will tell us if we need additional data), a deeper analysis of the confusion matrix (if dealing with classification) to understand whether there is any pattern in the misclassifications, etc.
  4. Deployment to production and monitoring plan: This step focuses on deploying the machine learning solution to production and implementing a monitoring plan. A monitoring plan ensures that the model continues to perform as expected. This means monitoring model evaluation metrics as well as input data statistics, to make sure the model scores against data similar to the data it was trained on (a minimal sketch of such a check follows this list). Business metrics are also relevant to monitor: after all, we built a machine learning product to make an impact on a given business metric, so making sure that this is indeed happening is paramount (very often it will also reveal further opportunities). Finally, it is critical to build “safety nets”, that is, “plan B” solutions for when the model fails, underperforms, or for those edge cases where we can’t trust the model or the cost of error is too high
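Here is a minimal sketch of the monitoring check referenced in step 4, assuming numpy and scipy; the feature data, drift threshold, and accuracy floor are hypothetical:

```python
# A hedged sketch of input-drift and model-health monitoring.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01   # flag drift when distributions differ strongly
MIN_ACCURACY = 0.85    # hypothetical "safety net" trigger

def check_input_drift(train_col: np.ndarray, live_col: np.ndarray) -> bool:
    """Compare a live feature column against its training distribution
    with a two-sample Kolmogorov-Smirnov test."""
    result = ks_2samp(train_col, live_col)
    return result.pvalue < DRIFT_P_VALUE  # True means "investigate"

def check_model_health(recent_accuracy: float) -> bool:
    """Fall back to a 'plan B' (e.g., rules-based scoring) when
    monitored accuracy drops below the floor."""
    return recent_accuracy >= MIN_ACCURACY

# Example: a feature drifted upward between training and serving time.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=5000)
live_feature = rng.normal(0.5, 1.0, size=500)
print("drift detected:", check_input_drift(train_feature, live_feature))
print("model healthy:", check_model_health(recent_accuracy=0.81))
```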

In the case of the brain-computer interface projects, we followed all the above steps. Due to the nature of the projects, particular emphasis was placed on data collection and data cleansing. Both the BCI speller (see paper 5) and BCI wheelchair projects (see papers 6 and 7) were particularly interesting not only because of the complexity of the problem and the high level of noise in the data but also because they both lived at the intersection of neuroscience and artificial intelligence.

Both projects required breakthroughs in traditional machine learning techniques applied to EEG signals and in more specialized AI fields (computer vision for the wheelchair project and NLP for the speller one).


SJ: Ethics in AI is a topic we feel very strongly about at Analytics Vidhya. This is especially important in today’s age where we have technology that can do good or do harm to society. Ethics is a topic you have worked on as well – would love to hear your experience.

RBV: Ethical responsibility should be a paramount consideration in any AI project. I cannot think of a single project where ethics in AI was not a significant factor. That being said, in some applications the impact is bigger than in others, and more caution is in order.

For example, during my research on lung cancer prediction using an electronic nose, there were primarily two ethical aspects that we were constantly considering:

  • The first one was the cost of misprediction (i.e., a false negative, that is, a sick patient wrongly diagnosed as healthy, is by far worse than a false positive where a healthy patient is wrongly predicted as sick – especially if additional tests will confirm the diagnosis before moving to invasive procedures)
  • The second one had to do with how to augment the doctors’ success in their decision-making process without giving them the impression that a machine was making the decision for them

“The former is a very common problem in the machine learning space, with many established techniques to tackle it. The latter is handled through sensitive design of the solution and by maximizing the interpretability of the model, to make the solution as trustworthy as possible.”
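One of those established techniques is cost-sensitive learning: weight the positive (sick) class more heavily so the model is penalized more for false negatives. A hedged sketch, assuming scikit-learn and illustrative weights:

```python
# A minimal sketch of cost-sensitive classification for asymmetric
# misprediction costs (false negatives far worse than false positives).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Imbalanced synthetic data: the positive (sick) class is the minority.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight makes missing a positive (a false negative) 10x more
# costly than a false alarm; the 10:1 ratio is a hypothetical choice.
model = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000)
model.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false negatives: {fn}, false positives: {fp}")
```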

When I worked on a brain-computer interface at Politecnico di Milano, ethics was a critical compass in our decision making. A less than perfect solution could have had dramatic consequences (think of a paralyzed individual controlling a flawed wheelchair).

To avoid that, an ethically responsible design must include thorough testing, rigorous performance analysis, forward-thinking safety nets, and close collaboration with subject matter experts. This is an extremely important part of the machine learning product development pipeline that is too often overlooked: a subject matter expert will be able to go beyond the numbers and pinpoint when things don’t make sense.

“This is where the trade-off between interpretability and accuracy comes into play. Whenever possible, we should privilege interpretability to validate the logic of the model (although in some cases, where performance is dramatically different or the cost of misprediction is high, we may opt for a less interpretable but higher-performing model).”

Ethics was also a major concern in my projects on crime prediction at IIT, fraud detection at Remitly, and building fair assessment tests at Wonderlic. In all these cases, demographic data (or proxies for it) posed a major risk of bias, and it was something we were very careful to avoid or mitigate.


SJ: I wanted to ask you about a very intriguing project you took up – using machine learning to predict crime. Our community will also be very interested in understanding how you went about building this machine learning model!

RBV: I have always been very passionate about socially relevant projects, which is why the application of machine learning to law enforcement was an extremely fascinating project for me. As a research associate at the Illinois Institute of Technology, I worked with Dr. Wernick on a collaboration with the Chicago Police Department to use machine learning to fight crime.

We implemented algorithms to analyze gang boundaries, forecast homes at high risk of burglary, and predict where crimes were more likely to occur based on a variety of elements such as weather patterns, events, and seasonality. The results helped optimize the allocation of police resources, ultimately contributing to keeping our city safer.


Artificial Intelligence in Retail

SJ: AI is disrupting multiple industries, with retail among the frontrunners. Most of the big brands are implementing intelligent solutions that can enhance their customer experience and also reduce costs. What are the crucial AI and ML implementations in the retail field, according to you?

RBV: There could not be a more exciting moment for the retail fashion industry! We are privileged to be at the frontier of the revolution that AI is driving in this space. The fashion industry is one of the biggest in the world, estimated at about 3 trillion dollars as of 2018, with 5.5 percent annual growth over the past decade. And with the unprecedented opportunities that AI is unlocking, fashion is bound to reach unparalleled levels of innovation and success in the years to come.

AI is transforming every element of the fashion retail value chain, including design, manufacturing, logistics, marketing, sales, customer experience, product discovery, and more. Some of the major areas where AI is reinforcing the competitive edge of fashion companies are:

Product Discovery and Recommendation 

We are at the forefront of a new era of search experience. Traditional keyword-based search is becoming obsolete thanks to the possibilities unlocked by computer vision and natural language processing.

Visual search is already a reality: users can upload a photo of the product they want, then the AI-driven search solution identifies the pictured product (or even a specific item in the picture, like an accessory worn by one of the people in the picture) and recommends similar items based on specific characteristics and styles.

Similarly, natural language processing now allows us to search using more natural language, describing the style the user is looking for, just like one would talk to someone in a store.

“One of the biggest impacts that AI is making in the retail space is in the field of recommendations. Advances in AI and the amount of data available empower retail companies to make increasingly better recommendations to customers, both at the single-garment level and at the outfit level.”

Not only can we now recommend clothing and accessories based on style, specific needs, events, similar customers, and similar products, but we can also provide a more complete recommendation at the outfit level. Wardrobe management is becoming a new trend in the fashion space. By combining purchase history with visual data of a customer’s wardrobe, we can now suggest looks and make new clothing recommendations.
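To illustrate the core mechanic behind “similar customers and similar products”, here is a hedged sketch of item-item similarity computed from co-purchase data, assuming scikit-learn; the items and the tiny purchase matrix are made up:

```python
# A minimal, illustrative sketch of similarity-based recommendation.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

items = ["navy blazer", "white oxford shirt", "chinos", "leather sneakers"]
# Rows = customers, columns = items; 1 means the customer kept the item.
purchases = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
])

# Item-item similarity from co-purchase patterns (columns of the matrix).
similarity = cosine_similarity(purchases.T)

def similar_items(item_index: int, top_n: int = 2) -> list[str]:
    scores = similarity[item_index].copy()
    scores[item_index] = -1  # never recommend the item itself
    return [items[i] for i in np.argsort(scores)[::-1][:top_n]]

print("Similar to 'navy blazer':", similar_items(0))
```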

Customization and Customer Experience 

We live in a world where data is more abundant than ever. AI allows us to make sense of all this data and transform it into insights to tailor every touchpoint of the customer’s journey. Besides recommending clothing that a customer may like, AI takes customization to the next level, incorporating information on customer preferences, fit prediction, body shape, etc.

Chatbots or smart assistants help make the shopping experience more personalized and interactive, assisting customers in their shopping experience and helping to find the right product and fit. 

  • Trend prediction: AI can be used to forecast and gain insights on emerging and future trends, incorporating complex data from a myriad of sources (e.g., sales, social media comments, customer reviews, fashion shows, street videos/pictures, etc.)
  • Inventory management: Machine learning brings inventory management and demand forecasting to a new level, significantly reducing costs and optimizing sales processes
  • Manufacturing and logistics: AI is expediting logistics, making the supply chain more efficient, reducing shipping costs and transit time, and even augmenting the design and textile manufacturing processes, boosting creativity and improving quality assurance

Examples of additional emerging uses of AI in the retail space are smart mirrors, intelligent garment tags, and automated one-of-a-kind clothing designs.

It’s no surprise that AI is becoming an integral part of technology in the fashion retail industry. The future of fashion is being shaped in large part by advancements in AI.


SJ: Can you pick out your favorite transformations in the retail field? We would love to hear some real-world examples.

RBV: Machine learning in retail is on the rise, and I am proud that Trunk Club is at the forefront of the AI revolution in this fascinating field. At Trunk Club, we marry the art and science of fashion through the power of data and styling expertise. We deliver hand-picked style personalized to the customer’s taste, needs, and budget, directly to their doorstep.

Customers keep what they love and return the rest. Trunk Club sets itself apart from other online personal styling services through its high level of personalization and quality offerings, with a rich variety of brands and styles pulled directly from Nordstrom’s extensive inventory.

There are many interesting applications of AI at Trunk Club, but some that particularly excite me (and contribute to our competitive edge) are those at the intersection of recommender systems, computer vision, and natural language processing (NLP).

“We use AI to recommend the best clothing and outfits for each customer based on their preferences, needs, budget, style, similar brands, similar customers, fit, purchase history, feedback on trunks, and many additional factors like weather and occasion.

This is especially challenging (and therefore exciting!) from an AI perspective because it requires combining advanced capabilities in recommender systems, computer vision (e.g., predicting similar styles from a photo) and natural language processing (e.g., analyzing customer feedback and integrating it into our recommendation system).”

Things get even more interesting when we go beyond the single-garment level and enter the outfit one. Luckily, at Trunk Club, we love challenges and are deeply passionate about data, machine learning and delivering a delightful experience to our customers, so tackling these challenges is what makes us love our job!


SJ: What components of AI and ML are you using to enable the customer experience at Trunk Club (in your role as the Director of Data Science)?

RBV: At Trunk Club, we understand the value of machine learning and apply it throughout our value chain wherever it makes sense and has the highest value. Being a very customer-centric organization, we prioritize the Data Science roadmap based on the value we can deliver to our customers. We want to make sure the customer’s shopping experience is delightful, flexible, and silky smooth, as well as making sure that our customers love the products we choose for them.

For this reason, the majority of our focus from an AI perspective is around recommendation systems, computer vision, and natural language processing (NLP). Deep learning is pushing the boundaries of what is possible in all these fields: we are excited to keep up to date with all the emerging advances in these disciplines.

“We also make sure that the data science models get integrated into our products. Particularly, the data science models are deployed as microservices that interact with other microservices in the overall Trunk Club engineering architecture using RESTful APIs. The infrastructure uses all the latest technology (Kafka events, NoSQL databases, load balancers, etc.) to enable low latency and high scalability for the business.”

The entire stack is built on AWS which, along with the in-house DevOps, is extensively leveraged to enable rapid CI/CD as well as minimal downtime impact for customers. Moreover, our internal Data Services team has ingeniously created an EMR/S3-based OLAP warehouse that enables not only rapid analytics and prototyping but also cutting-edge research.
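As a rough illustration of the pattern described above, here is a hedged sketch of a model-serving microservice exposing a RESTful endpoint, assuming FastAPI; the endpoint path, payload fields, and placeholder ranking logic are hypothetical, not Trunk Club’s actual code:

```python
# A minimal sketch of a model-serving microservice behind a REST API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoringRequest(BaseModel):
    customer_id: str
    candidate_item_ids: list[str]

class ScoringResponse(BaseModel):
    customer_id: str
    ranked_item_ids: list[str]

@app.post("/recommendations", response_model=ScoringResponse)
def recommend(request: ScoringRequest) -> ScoringResponse:
    # In a real service, a trained model loaded at startup would rank
    # the candidates; here they are returned unranked as a placeholder.
    return ScoringResponse(
        customer_id=request.customer_id,
        ranked_item_ids=request.candidate_item_ids,
    )

# Run with: uvicorn service:app --host 0.0.0.0 --port 8000
# (assumes this file is saved as service.py)
```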

Thanks to this empowering data infrastructure and platform, our research cycles are very agile allowing us to quickly try out all the ideas we come up with. We are currently exploring some thrilling innovative breakthroughs in these areas so stay tuned!


Key Traits Rossella Looks for while Hiring for her Machine Learning Team

SJ: You have successfully led multi-million dollar projects in your career. We are sure there must have been a solid team to support them! What are the key skills you look for while hiring a data scientist?

RBV: Talent is at the core of what makes an AI strategy successful.

“Particularly passionate about talent development, I have found that investing in talent, in their growth and their fulfillment, is the highest-ROI investment an AI leader can make for their organization.”

When I assess talent in the AI space, I make sure to properly balance the evaluation of hard and soft skills, bearing in mind that some are easier to learn or coach than others.

Strong foundations in mathematics, statistics, probability theory, and machine learning theory are the pillars of the hard skills I look for in a machine learning engineer. Ideally, I look for candidates with both breadth and depth across the ML workflow and the various techniques at each step.

I am usually skeptical of those who learned the tools but not the underlying theory of how things work. This is usually a guarantee of building something wrong sooner or later and a major barrier when it comes to innovative solutions. Also, the underlying theory does not change much, whereas tools are in constant evolution (especially nowadays) and are much easier to learn and keep up with.

“For this reason, my interview questions are often focused on exploring beyond machine learning knowledge itself: I am interested in understanding how deeply a candidate has internalized the theory, how they break down a problem and think through it, how well they can abstract concepts from one domain and transfer them to another, and so on.”

In today’s industry, it is very valuable for a data scientist to also have some software engineering skills. A machine learning engineer does not need to be a software engineer, but familiarity with core computer science concepts, data models, data processing systems, data structures, and software engineering best practices, along with the ability to learn quickly and transition between several machine learning frameworks, will boost productivity and velocity.

Machine learning expertise is a necessary but not sufficient condition to be a successful data scientist. I deeply believe that soft skills are as important as the hard ones.

“AI requires you to learn constantly, to study papers and to keep up to date with emerging technologies. That’s why eagerness to learn and a growth mindset are two of the key soft skills I look for.”

I value candidates with the cognitive power to learn fast and the curiosity to love it. An inquisitive spirit and creative thinking are also two paramount skills that I look for in candidates. I often compare the job of an AI engineer to Sherlock Holmes’s: there isn’t a single project that is the same and each one requires a huge amount of investigation, troubleshooting and thinking outside the box.

AI projects come with multiple layers of uncertainty at different steps. That’s why bias for action and a passion for prototyping are extremely valuable in a machine learning expert. At the same time, it is important to know how to balance workarounds and hacky solutions with rigorous and optimal solutions as well as to recognize when to adopt which one.

Something often underestimated is the importance of bridging technical brilliance with business impact. I value candidates that understand and are passionate about the impact of their work. This is often reflected in their ability to collaborate cross-functionally and to appreciate diverse thinking, both skills that are gaining more and more weight in building high-performance teams.

Similarly, communication skills are also very important: the ability to explain complex concepts in simple terms and to tailor communication to a wide variety of audiences goes a long way. Last but not least, infrangible ethics is a necessary condition to join my team: beyond the simple yet fundamental expectation of doing the right thing, there are some ethical considerations in the responsibilities that an AI expert has.

“Just like a successful orchestra requires not only talented musicians but also a harmonious orchestration of the band, it is the responsibility of the AI leader to build an environment conducive to the right cultural values and behaviors.”

In order to build a successful AI flywheel, the AI leader must establish an operating model that creates a safe space, fosters collaboration, constantly refines the team’s craft, raises the bar of excellence, and ultimately creates a multiplier effect by empowering AI talent to make an impact in the organization.


References

Here are the articles/papers/resources mentioned in this interview:

  1. Pattern Classification Techniques for Lung Cancer Diagnosis by an Electronic Nose, Blatt R. et al., book chapter in Computational Intelligence in Healthcare 4, Studies in Computational Intelligence, vol 309, pp 397-423, 2010
  2. Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors, Blatt R. et al., 20th International Joint Conference on Neural Networks (IJCNN), pp 1423-1428, 2007
  3. Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose, Blatt R. et al., Frontiers in Artificial Intelligence and Applications (ECAI), vol 178, pp 693-697, 2008
  4. Fuzzy k-NN Lung Cancer Identification by an Electronic Nose, Blatt R. et al., Lecture Notes in Artificial Intelligence, International Workshop on Fuzzy Logic and Applications, vol 4578, pp 261-268, 2007
  5. A predictive speller controlled by a brain-computer interface based on motor imagery, Blatt R. et al., ACM Transactions on Computer-Human Interaction, vol 19, issue 3, 2012
  6. An Autonomous Wheelchair Driven by Event-Related Potentials, Blatt R. et al., International Conference on Robotics and Automation (ICRA), 2009
  7. Brain Control of a Smart Wheelchair, Blatt R. et al., 10th International Conference on Intelligent Autonomous Systems (IAS), pp 221-228, 2008

The heart of every marketing campaign is great content and I love churning out just that! I am a Data Science content marketing enthusiast. Exploring the field of applied Artificial Intelligence and Machine Learning and consistently being involved in editing the content at Analytics Vidhya is how I spend my day. I have always been fueled by the passion to do something different. The core of me is always eager to explore and learn more each day, not only in the field of Data Science but also in the field of Psychology.

Responses From Readers


Astha Puri

Thanks for publishing this interview Sneha. Great points covered.

Philip

Thanks for publishing this interview with us. It’s so nice to read; so many good points included in it.

