Blueprint for Data Science Success: Lessons from Karun Thankachan

Karun Thankachan | Last Updated: 31 Mar, 2025

Karun Thankachan is a Senior Data Scientist who specializes in Recommender Systems and Information Retrieval. Over the years, he has worked across multiple industries such as e-commerce, fintech, PXT, and edtech. Currently, he is part of the Walmart e-Commerce team, where he helps improve item selection and availability using machine learning. Outside of work, Karun is active in the data science community – as a mentor, speaker, and content creator.

In this exclusive interview, Karun shares insights from his journey, how he broke into data science, lessons from scaling systems at top companies, and his advice for those navigating the world of Data Science.


Karun’s Notable Achievements

  • Published researcher with several papers in machine learning
  • Holds 2 patents in the field of artificial intelligence
  • Editorial board member for IJDKP and JDS
  • Data Science mentor on Topmate
  • Named a Top 50 Topmate Creator in North America (2024)
  • Recognized as a Top 10 Data Mentor in the USA (2025)
  • Perplexity Business Fellow
  • Followed by 70,000+ professionals on LinkedIn
  • Co-founder of BuildML, a community for research paper discussions and project-based learning

Check out his LinkedIn profile here.

Q1. Can you share your journey into data science and what led you to your current position as a Senior Data Scientist at Walmart?

I started my career, like many others in India, as a Software Development Engineer at Dell Technologies. During that time, I met the Director of Software Engineering, who wanted to launch an analytics wing for our engineering projects. I was fortunate to be one of the first people chosen to work on it.

That led me to data analytics projects, building Hadoop clusters, and doing large-scale predictive analytics. Over time, this naturally evolved into machine learning work. After two years in the role, I felt I was hitting a plateau and decided to pursue a master’s in Data Science.

I prepared for the GRE, and with a score of 332/340, a solid GPA, and a patent to my name, I felt confident about getting into a top U.S. university.

But in 2018, I was rejected by every university I applied to. It was a humbling experience.

I took a step back and reassessed my application. I reached out to alumni, graduate students, and professors, many through LinkedIn, for honest feedback. With their help, I improved my profile and was eventually admitted to my dream program: a Master’s in Computational Data Science at Carnegie Mellon University. The program was rigorous but incredibly rewarding. It helped me land my first Data Science role at Amazon.

The pace at Amazon was intense and pushed me to grow quickly. The experience I gained there led me to my current role as a Senior Data Scientist at Walmart e-commerce, where I lead the workstream focused on improving item availability.

Q2. From your experience, what’s your advice for data science professionals to increase callbacks and stand out in job applications?

Getting a callback does depend a bit on timing and luck. But there are still three important things you can control to increase your chances:

  • Choose relevant projects
  • Build a strong resume and LinkedIn profile
  • Grow a network that can refer you (more on this later)

Data Science Projects

When it comes to projects for Data Science roles, your goal is to show a few key skills:

  • The ability to extract insights and engineer features. This includes tasks like cleaning data (handling outliers, missing values, imbalance, encoding), spotting data patterns (like skewed distributions or dependencies), and creating useful features for modeling (see the sketch after this list)
  • Experience with model fitting and fine-tuning. You should be able to translate business problems into machine learning problems, pick the right metrics and models, and fine-tune those models for better performance
  • Skill in analyzing model errors and improving version one. This means knowing where your model fails, why it fails, and deciding whether to fix it through data improvements or better techniques
  • Comfort with building production-ready pipelines. This includes designing machine learning pipelines that can run at scale. You should understand:
    • Cloud platforms like AWS, GCP, or Azure (especially compute, storage, and model serving)
    • Orchestration tools like Airflow or SageMaker Pipelines
    • Containerization with tools like Docker
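
To make the first point concrete, here is a minimal sketch of the kind of cleaning and feature engineering described above, using pandas and NumPy. The file and column names (customers.csv, price, category) are hypothetical placeholders:

```python
# Minimal cleaning/feature-engineering sketch; column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("customers.csv")  # placeholder dataset

# Handle missing values: median for numeric, mode for categorical
df["price"] = df["price"].fillna(df["price"].median())
df["category"] = df["category"].fillna(df["category"].mode()[0])

# Clip outliers to the 1st-99th percentile range
lo, hi = df["price"].quantile([0.01, 0.99])
df["price"] = df["price"].clip(lo, hi)

# Reduce skew with a log transform, then one-hot encode categoricals
df["log_price"] = np.log1p(df["price"])
df = pd.get_dummies(df, columns=["category"], drop_first=True)
```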

Once your portfolio looks strong, the next step is to market it well. That starts with writing a solid resume. Focus on three things: structure, content, and tailoring.

Resume Building Tips

For structure, your resume should have these main sections:

  • Career summary
  • Experience and projects
  • Education
  • Skills

It’s better to keep skills at the end. They matter more for an applicant tracking system than for a human reviewer.

For content, make sure each bullet point can stand alone. Don’t expect the reader to connect it with earlier bullets. Each point should follow the PSI format – problem, solution, and impact:

  • What was the problem you were solving? Be sure to mention the business domain
  • What was your technical solution? Be specific. Use model names (like XGBoost, VGG, Llama) and techniques (like segmentation analysis or root cause analysis)
  • What was the impact? Use numbers. If it was a business project, show business metrics. If it was a personal or academic project, compare your model’s results to a baseline

In the skills section, consider adding a subsection called “Competencies.” This is where you list keywords relevant to your role – like Python, SQL, A/B testing, segmentation analysis, and so on.

Finally, let’s talk about tailoring your resume. You don’t need to rewrite all your bullet points for each role. That takes too much time, especially when you’re applying to many jobs.

Instead, create themed resumes.

Make a few versions of your resume based on the business areas you’ve worked in. For example, if your experience is in both marketing and supply chain, you can create three versions: one focused on marketing, one on supply chain, and one that combines both.

Q3. How can aspiring data scientists effectively build their networks to get more referrals for jobs?

Here are three things I recommend for growing your network and getting referrals:

Connect on LinkedIn

You can send up to 100 connection requests per week. Try to use all of them. Keep your message short and relevant. For example:

“Hi, I would like to request a referral for Job ID: ABC. Given my work on <briefly describe a related project using the PSI format>, I believe I’m a good fit for the role.”

If your profile is strong and the message is personalized, expect 4–6 responses out of every 100. When someone accepts your request, follow up with a polite message to start a conversation.

Cold Emails

This approach is also a numbers game. Reach out to people working at your target companies. Make your emails brief, clear, and respectful. For a good example, check out Leon Jose’s post on this topic.

In-Person Networking

Attend conferences, meetups, and hackathons. Use platforms like Analytics Vidhya, Meetup, Eventbrite, and LinkedIn Events to find both in-person and virtual events. Larger cities often host startup events and industry-specific conferences. These are great places to meet recruiters and other professionals. Competitions and hackathons are also useful for showing your skills and building meaningful connections.

Check out Angelica Spratley’s post for more on finding communities and in-person events.

Q4. What kind of projects do you recommend for someone aspiring to become a data scientist?

For Data Science roles, your project should demonstrate these key capabilities:

  • Feature Engineering & Data Insights: Show that you can clean data (handle outliers, missing values, imbalance, encoding), understand data nuances (skewed distributions, dependencies), and create predictive features
  • Model Development & Tuning: Convert business problems into ML problems, select appropriate models and metrics, and fine-tune models effectively
  • Error Analysis & Iteration: Analyze where the model is falling short, and decide whether to improve performance through better techniques/models or by revisiting the data
  • Production-Ready Pipelines: Highlight your ability to design scalable training and inference pipelines (see the DAG sketch after this list) using:
    • Cloud Platforms: AWS, GCP, or Azure (focus on compute, storage, and serving)
    • Orchestration Tools: Airflow, SageMaker Pipelines
    • Containerization: Docker
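
To make the pipeline point concrete, here is a minimal sketch of a daily batch-scoring DAG in Airflow (one option among the orchestration tools above). The DAG and task names are hypothetical, and the task bodies are stubs:

```python
# Minimal daily batch-scoring DAG sketch (Airflow 2.x style).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():   # pull the latest data from storage (e.g., S3/GCS)
    ...

def score():     # load the trained model and generate predictions
    ...

def publish():   # write predictions back for downstream consumers
    ...

with DAG(
    dag_id="batch_scoring",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_score = PythonOperator(task_id="score", python_callable=score)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)
    t_extract >> t_score >> t_publish
```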

You can use a guided project to get started. Here are a few – Forecasting Sales, Recommendation System, XGBoost Based Prediction.


Guided projects can help you understand how to approach project development. However, you will also need to dive into data on your own. Here are a few projects you can look into to demonstrate the skills above:

Instacart Market Basket Analysis and Next-Item Prediction

Predicting what the user will purchase next is an evergreen business problem. This competition also has the top-placed solution publicly available, which provides the opportunity to reproduce it, analyze its shortcomings, and work on improvements.
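
As a hedged starting point, a naive baseline can be as simple as recommending each user’s most frequently reordered products. The sketch below assumes a single merged file with user_id and product_id columns (the actual Instacart data ships as separate orders and order-products files):

```python
# Naive "most frequently reordered" baseline; the file layout is an assumption.
import pandas as pd

orders = pd.read_csv("order_products_merged.csv")  # user_id, product_id per row

# Count how often each user bought each product
counts = (orders.groupby(["user_id", "product_id"])
                .size()
                .reset_index(name="n"))

# For each user, keep their top-5 most frequently purchased products
top5 = (counts.sort_values("n", ascending=False)
              .groupby("user_id")
              .head(5))
print(top5.head())
```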

Check out this project here.

Walmart Sales Forecasting

Ample opportunity to showcase the ability to clean data (outliers), fit and tune models (you can experiment with everything from statistical models like ARIMA to DNN models like LSTMs), and improve on v1 models by adding external data (sales info, SNAP days, weather, etc.). Also, sales forecasting is a very well-understood area and makes for good conversation during interviews!

This is also a good project for building batch model prediction pipelines and hosting the results on a Tableau dashboard – a merchandiser could use the insights to decide their upcoming assortment, or a marketing team could decide which deals to push.
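
As an illustration, a minimal ARIMA baseline (before reaching for LSTMs) might look like the sketch below. It assumes a clean univariate weekly series, and the (1, 1, 1) order is an arbitrary starting guess:

```python
# Minimal ARIMA baseline for a weekly sales series; order is a starting guess.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

sales = pd.read_csv("weekly_sales.csv", parse_dates=["date"], index_col="date")
series = sales["sales"].asfreq("W")  # assumes one row per week

train, test = series[:-8], series[-8:]       # hold out the last 8 weeks
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=len(test))

mae = (forecast - test).abs().mean()         # compare against the holdout
print(f"8-week MAE: {mae:.2f}")
```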

Check out the project here.

Payment Fraud Detection

FinTech is probably the only field that consistently hires data folks, and fraud detection remains one of its most common use cases. The dataset is real-world e-commerce data, and the discussion board is littered with directions on feature engineering.
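
As a sketch of the kind of feature engineering those discussions point toward, here are a few simple behavioural features. The column names (card_id, amount, timestamp) are assumptions:

```python
# Simple behavioural fraud features; column names are assumptions.
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["timestamp"])
tx = tx.sort_values(["card_id", "timestamp"])

# How many prior transactions has this card made?
tx["prior_tx_count"] = tx.groupby("card_id").cumcount()

# Time since the card's previous transaction, in seconds
tx["secs_since_prev"] = (tx.groupby("card_id")["timestamp"]
                           .diff().dt.total_seconds())

# How large is this amount relative to the card's average spend?
card_mean = tx.groupby("card_id")["amount"].transform("mean")
tx["amount_ratio"] = tx["amount"] / card_mean
```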

Check out this project here.

Quora Insincere Question Identification

A project to showcase your NLP knowledge, including text cleaning, handling embeddings, and extracting semantic meaning. Unlike typical NLP projects, this one provides ample room to analyze errors, dive deep into peculiarities of the English language, make hypotheses on how to account for those peculiarities, and improve a v1 model. It makes for great conversation during interviews!
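
A reasonable v1 to run that error analysis against is a TF-IDF plus logistic regression baseline. The sketch below assumes the competition’s question_text and target columns and evaluates with F1:

```python
# TF-IDF + logistic regression v1 baseline for insincere-question detection.
import re

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")  # assumed columns: question_text, target

def clean(text: str) -> str:
    # lowercase and strip punctuation; a deliberately simple v1 cleaner
    return re.sub(r"[^a-z0-9\s]", " ", text.lower())

X_train, X_val, y_train, y_val = train_test_split(
    df["question_text"].map(clean), df["target"],
    test_size=0.2, random_state=42)

vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(vec.fit_transform(X_train), y_train)

preds = clf.predict(vec.transform(X_val))
print("Validation F1:", f1_score(y_val, preds))
```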

Click here to explore the project.

H&M Fashion Recommendations

A great project to stand out in the RecSys space, with ample opportunity to learn everything from basic methods (content/collaborative filtering) to advanced models (two-tower, WDNs, etc.). In addition, the dataset includes images, allowing you to demonstrate the ability to handle multi-modal data.

This is also a good project to create an inference pipeline for: train a model on the data you have, then when a customer with a specific customer ID hits the model’s API endpoint, serve them a “landing page” – a set of items personalized to them. You could even build out multiple carousels (see the sketch after this list), such as:

  • Customers who bought this also bought (“cross-selling”)
  • Styles you might like (based on their preferences)
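
Here is a minimal sketch of what such an endpoint could look like using FastAPI. The route, payload shape, and stubbed recommender are all assumptions for illustration:

```python
# Sketch of a personalized "landing page" endpoint; the recommender is a stub.
from fastapi import FastAPI

app = FastAPI()

def recommend(customer_id: str, k: int = 10) -> list[str]:
    # placeholder: look up the customer's profile/embedding, return top-k items
    return [f"item_{i}" for i in range(k)]

@app.get("/landing-page/{customer_id}")
def landing_page(customer_id: str):
    return {
        "customer_id": customer_id,
        "carousels": {
            "cross_selling": recommend(customer_id),
            "styles_you_might_like": recommend(customer_id),
        },
    }
```

Served with uvicorn, a GET request to /landing-page/{customer_id} would return both carousels listed above.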

Click here to check out more details on this project.

Q5. What are your recommendations for preparing effectively for data science interviews?

These are the 7 areas you need to prepare for DS/ML interviews. Each company uses a different combination of these areas.

Coding

Probability and Statistics

SQL

  • If you are absolutely new to SQL, start with SQL 50
  • If you know your way around SQL, check out DataLemur SQL Interview Questions 

Machine Learning

Understand the basics which include:

  • Feature Engineering and Selection: Understand missing-value imputation, normalization/scaling, and a few feature selection techniques.
  • Bias & Variance: Overfitting/underfitting. Understand how to decide between models based on theory. Know the different regularization methods and the impact of each.
  • Loss Functions: Yes, you need to know the formulae of MSE, MAE, Log-Loss, etc. (written out in the sketch after this list).
  • Linear Regression, Logistic Regression, Tree models, k-means: What are the model assumptions, and how do you decide when to apply what? The best learning resource, in my opinion, is Introduction to Statistical Learning
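
For reference, here are those loss formulae written out in NumPy – a quick way to check you actually know them:

```python
# MSE, MAE, and log-loss (binary cross-entropy) written out in NumPy.
import numpy as np

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def log_loss(y, p, eps=1e-15):
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```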

Deep Learning

  • Understand the basics such as optimizers, loss functions, and basic architectures (MLP, CNN, RNN) – a minimal example follows this list.
  • The best learning resource, in my opinion, is Deep Learning by Ian Goodfellow 
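
To anchor that vocabulary, here is a minimal MLP with a loss function and an optimizer in PyTorch; the sizes and data are arbitrary:

```python
# A minimal MLP, loss function, optimizer, and one training step in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),  # input -> hidden
    nn.ReLU(),          # non-linearity
    nn.Linear(64, 1),   # hidden -> single output
)

loss_fn = nn.MSELoss()                                     # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer

x, y = torch.randn(32, 16), torch.randn(32, 1)  # dummy batch
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```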

Case Studies

A case study round can be quite broad, e.g. “Assume you are a Data Scientist at Etsy. You want to increase the add-to-cart rate. How would you go about it?”

The best way to approach such a question is to have a framework:

  • Clarify and narrow the problem
  • Define key business metrics
  • Identify appropriate ML formulation
  • Align model metrics with business goals
  • Suggest initial models
  • Explain productionization strategy
  • Outline A/B testing plan

For practice, check out this video from Emma Ding and this playlist.

Behavioural

  • Learn how to tell compelling stories and demonstrate impact
  • Start with Levels.fyi for interview basics
  • For tough culture-fit prep, study Amazon’s behavioral expectations

Q6. Do data scientists need a strong understanding of data structures and algorithms? What’s your take on its importance?

For cracking ‘Applied Data Scientist’ roles – yes, since the interview will have a DSA round. If you want to know the difference between the various roles, check out this article.

For your day-to-day, DSA doesn’t play a heavy role. 

However, I do think most people who code should be able to solve LeetCode Medium-level questions. In my experience, folks who understand and can apply DSA patterns – dynamic programming, two-pointer, sliding window, etc. – can better pick up advanced coding patterns such as factory and dependency injection, and in general produce better-quality production code (i.e., more readable and maintainable).
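
As one example, here is the classic sliding-window pattern at roughly LeetCode-Medium level – the length of the longest substring without repeating characters:

```python
# Sliding-window: longest substring without repeating characters.
def longest_unique_substring(s: str) -> int:
    last_seen = {}   # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1  # move the window past the repeat
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

assert longest_unique_substring("abcabcbb") == 3  # "abc"
```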

Q7. What is your current GenAI tech stack, and how do you leverage it to scale and enhance your work?

My stack at work is pretty simple: GitHub Copilot for coding/debugging and ChatGPT for research.

Copilot helps me spend less time switching between documentation. It’s also pretty good at helping me understand legacy code, especially breaking down lengthy SQL statements. Its debugging feature also reduces the amount of time I spend on StackOverflow.

ChatGPT has been a game-changer. A significant amount of time is usually spent deciding which models to experiment with for a specific ML problem. ChatGPT provides a good starting list of methods to try out, and I often find that list to be quite comprehensive. This has reduced the time I spend researching from days to hours.

Q8. What emerging trends in data science and GenAI are you most excited about?

One trend is using the world knowledge incorporated in LLMs to enhance features. This is especially true in recommender systems, where LLMs have been shown to combat cold start better and produce more informed embeddings.

The next trend is reducing the gap between what users want and how users use the system, especially for complex tasks. For instance, if you want to plan a birthday party and have to buy things for it – what would you do?

You would think of everything you needed to buy and manually search for the items one by one on Walmart or Amazon. But now? You can directly state your need – ‘birthday party essentials’ – on this website, and it will understand your intent and provide suggestions for you.

This new opportunity of serving ‘broad intent’ is interesting.

The other trend I am actively involved in is ‘Multi-agent Systems’: LLM-powered agents, each fine-tuned for a specific task and equipped with reasoning capability, that interact with one another to help users solve complex tasks. For example, if you are planning a trip, you can have an agent that takes care of deciding what to do, another that decides where to stay, another that chooses between options based on budget or safety, and so on.

This domain is rapidly advancing, and I am excited to see what new innovations occur here.

Q9. Can you describe a scenario where you had to make critical data-driven decisions with incomplete or ambiguous data? How did you navigate it?

Lack of sufficient data or dealing with poor quality data is a common challenge in production environments. Before addressing these issues, it’s crucial to adopt the mindset of understanding your dataset deeply:

  • Where does it come from?
  • How often is it updated?
  • Who maintains it?

Once you have clarity on these aspects and the dataset meets your basic requirements, you can start tackling quality issues.

Improving data quality typically involves close collaboration with multiple tech and business teams. Be patient, ask questions, and aim for steady progress. If progress stalls, escalate appropriately. If that still doesn’t help, it’s worth reassessing whether the effort is justified.

Ambiguity in decision-making is also common, especially when deploying new models to production that may impact downstream systems. In such cases, focus on what can be measured and ensure any observable impact is either positive or, at minimum, not negative. Roll out changes in phases to contain potential negative effects. Over time, with multiple iterations, you’ll develop stronger instincts and frameworks for navigating these decisions.

End Note

Karun Thankachan’s journey is a powerful blueprint for anyone looking to break into or grow within the field of data science. His story blends persistence, continuous learning, and strategic career moves – qualities that are essential in today’s competitive landscape. From navigating rejections to making critical decisions with ambiguous data, Karun’s insights offer both inspiration and practical advice.

For aspiring and early-career data scientists, this interview highlights the importance of building strong technical foundations, crafting relevant projects, networking intentionally, and preparing holistically for interviews. For professionals looking to scale, Karun’s experiences at top tech companies provide a valuable lens into how to think about impact, collaboration, and long-term growth.

If you found his perspectives helpful and would like to connect or learn more from him, feel free to reach out to Karun via LinkedIn.

Karun Thankachan is a Senior Data Scientist specializing in Recommender Systems and Information Retrieval. He has worked across E-Commerce, FinTech, PXT, and EdTech industries. He has several published papers and 2 patents in the field of Machine Learning. Currently, he works at Walmart E-Commerce improving item selection and availability.

Karun also serves on the editorial board for IJDKP and JDS and is a Data Science mentor on Topmate. He was named a Top 50 Topmate Creator in North America (2024) and a Top 10 Data Mentor in the USA (2025), and is a Perplexity Business Fellow. He also writes to 70k+ followers on LinkedIn and is the co-founder of BuildML, a community running weekly research paper discussions and monthly project development cohorts.
