Lord of the Machines, Analytics Vidhya’s recently concluded signature hackathon, was one of the most intriguing and challenging competitions we have hosted. It featured a real-world dataset and some really awesome innovative solutions from data scientists around the world.
Initially planned for a duration of 2 days, the hackathon was extended to span 9 days in total to give participants more time to fine-tune and improve their models. An incredible 3500+ participants registered for the signature hackathon!
In this article, the top three winners have shared the approaches that took them to the top of the leaderboard. We have also provided the GitHub code link for each approach. All three winning solutions were implemented in Python.
For all those who were not able to participate, you missed out on a cracking competition! Nevertheless, there’s always a next time. Stay tuned for upcoming hackathons.
The journey to become a data scientist is often a long, difficult and obstacle-filled path. There are problems to solve, models to be built and conclusions to be drawn.
Analytics Vidhya hosted “Lord of the Machines”, a data science / machine learning hackathon designed to discover the best data scientists in the community. In this hackathon, you – the participants – were given the opportunity to come up with innovative and exciting data science solutions to claim your supremacy.
Email marketing is still the most successful marketing channel and an essential element of any digital marketing strategy. Marketers spend a lot of time crafting that perfect email, laboring over each word and over layouts that render well on multiple devices, to get the best in-industry open and click rates.
How can I build my campaign to increase the click-through rate of my emails? It's a question that is often heard when marketers are creating their email marketing plans. Can we optimize our email marketing campaigns with data science? It's time to unlock marketing potential and build some exceptional data science products for email marketing.
Analytics Vidhya sends out marketing emailers for various events such as conferences, hackathons, etc. For this hackathon, we provided a sample of user-email interaction data from July 2017 to December 2017. Participants were required to predict the click probability of links inside a mailer for email campaigns from January 2018 to March 2018.
The evaluation metric used was AUC ROC.
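For reference, here is a minimal sketch of how a submission would be scored with scikit-learn; the labels and probabilities below are toy values for illustration only.

```python
# Toy illustration of the competition metric (area under the ROC curve).
from sklearn.metrics import roc_auc_score

y_true = [0, 1, 1, 0, 1]                    # actual click labels (toy values)
y_pred = [0.10, 0.80, 0.65, 0.30, 0.70]     # predicted click probabilities (toy values)
print(roc_auc_score(y_true, y_pred))        # 1.0 here, since the ranking is perfect
```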
And the wait is over! Below are the final top 3 winners of the Lord of the Machines:
Rank 1: Kunal Chakraborty
Rank 2: SRK and Mark
Rank 3: Aditya and Akash
Here are the final rankings of all the participants in the Lord of the Machines hackathon.
The top 3 winners have shared their approach with us. We have listed them below in their own words, along with the code, for your perusal.
Aditya and Akash worked on different models and then teamed up. Below are both their approaches.
Multiple classification models were created to predict the click probability of links inside a mailer for each email campaign. The following derived features were created for training the different models:
campaign_id was used to split the data into train and validation sets, as sketched below.
Manual tuning was performed based on the public leaderboard and cross-validation scores.
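A hypothetical sketch of such a campaign-based split, assuming a train.csv file with a campaign_id column (the file and column names are assumptions, not the team's actual setup):

```python
# Split the training data into folds by campaign_id so that no campaign
# appears in both train and validation (GroupKFold keeps whole groups together).
import pandas as pd
from sklearn.model_selection import GroupKFold

train = pd.read_csv("train.csv")                        # assumed file name
gkf = GroupKFold(n_splits=5)
for fold, (tr_idx, val_idx) in enumerate(gkf.split(train, groups=train["campaign_id"])):
    tr, val = train.iloc[tr_idx], train.iloc[val_idx]
    print(fold, tr["campaign_id"].nunique(), val["campaign_id"].nunique())
```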
Modelling
I posed it as a sequence prediction problem where we want to find out whether a user will click on an email, given their past interactions on the platform. The first thing that comes to mind for sequence prediction problems is an RNN, or more specifically an LSTM.
Features
I formed sequences of users’ actions (in the form of clicked and opened). Four sequences were formed:
These sequences acted as four features for the sequential input; a rough construction is sketched below.
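A rough sketch of how such per-user action sequences might be assembled with pandas (the file name and the columns user_id, send_date, is_open and is_click are assumptions):

```python
import pandas as pd

interactions = pd.read_csv("train.csv")                   # assumed user-email interaction file
interactions = interactions.sort_values(["user_id", "send_date"])

# One chronological list of outcomes per user; opens and clicks give separate sequences.
open_seq = interactions.groupby("user_id")["is_open"].apply(list)
click_seq = interactions.groupby("user_id")["is_click"].apply(list)
```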
Network Architecture(s)
The output of these models gave me the probability that a user will click on the next email. I used this probability across all emails sent to that user (I did not want to append predictions back into the sequence and predict again, because that would let errors propagate further).
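The winner's exact architecture is not reproduced here, but a generic stand-in would be a small Keras LSTM over padded action sequences; the toy sequences and labels below are purely illustrative.

```python
# Minimal sketch: an LSTM that outputs the probability of a click on the next email.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences

max_len = 20                                              # assumed maximum history length
sequences = [[0, 1, 1], [1], [0, 0, 1, 0]]                # toy per-user open/click histories
X = pad_sequences(sequences, maxlen=max_len, dtype="float32")
X = X.reshape(len(X), max_len, 1)                         # (samples, timesteps, features)
y = np.array([1, 0, 1])                                   # toy labels: did the user click next?

model = Sequential([
    LSTM(32, input_shape=(max_len, 1)),
    Dense(1, activation="sigmoid"),                       # click probability for the next email
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, batch_size=2)
next_click_proba = model.predict(X)                       # reused for every email sent to that user
```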
This allowed me to make predictions for the users for whom we had some data (previous behavior). But in the test set, 20% of the entries were for users for whom we had no data at all (the cold start problem).
To deal with the cold start, I grouped by campaign_id, sent_weekday and sent_quarter_of_day, and filled the missing values with the 90% quantile of each group.
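A hedged sketch of that fill step with pandas; the file name, the click_proba column, and the grouping columns other than campaign_id are assumed names:

```python
import pandas as pd

# Predictions for the test period, with NaN click_proba for cold-start users.
test = pd.read_csv("test_with_predictions.csv")           # assumed file

group_cols = ["campaign_id", "sent_weekday", "sent_quarter_of_day"]
group_fill = test.groupby(group_cols)["click_proba"].transform(lambda s: s.quantile(0.9))
test["click_proba"] = test["click_proba"].fillna(group_fill)
```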
All the model predictions were rank averaged to reach the final submission.
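A minimal sketch of rank averaging, assuming each model produces a probability per test row (the values below are toy numbers):

```python
# Convert each model's scores to ranks, average the ranks, and rescale to [0, 1].
import numpy as np
from scipy.stats import rankdata

preds = [
    np.array([0.10, 0.80, 0.30, 0.55]),   # model 1 probabilities (toy values)
    np.array([0.20, 0.70, 0.40, 0.60]),   # model 2 probabilities (toy values)
]
ranked = np.mean([rankdata(p) for p in preds], axis=0)
final = ranked / ranked.max()             # rescaled rank average used as the submission score
print(final)
```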
Link to Code.
Most of our time was spent on creating new features. We did our validation split based on campaign IDs. Our best single model was a LightGBM that scored 0.7051 on the leaderboard. The important features we used were:
Link to Code.
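As a starting point for readers, here is a hedged sketch of training a LightGBM model with a campaign-based validation split, in the spirit of the approach described above; the file name and feature handling are placeholders, not the team's actual pipeline.

```python
import pandas as pd
import lightgbm as lgb

train = pd.read_csv("train_features.csv")               # assumed file of engineered features
# Hold out a random 20% of campaigns as the validation set.
val_campaigns = train["campaign_id"].drop_duplicates().sample(frac=0.2, random_state=42)
val_mask = train["campaign_id"].isin(val_campaigns)

features = [c for c in train.columns if c not in ("is_click", "campaign_id", "user_id")]
dtrain = lgb.Dataset(train.loc[~val_mask, features], label=train.loc[~val_mask, "is_click"])
dval = lgb.Dataset(train.loc[val_mask, features], label=train.loc[val_mask, "is_click"])

params = {"objective": "binary", "metric": "auc", "learning_rate": 0.05}
model = lgb.train(params, dtrain, num_boost_round=500, valid_sets=[dval])
val_pred = model.predict(train.loc[val_mask, features])  # click probabilities for validation campaigns
```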
I created several features based on textual information and user behavior to arrive at my final solution. The features created were:
This became my general framework for data preparation before feeding the data into any model. An XGBoost model with this set of features gave me a score of 0.695+ on the public leaderboard. What followed after this was sheer pragmatism. I created several models based on approximately the same framework and differentiated them by adding variability. Some of the important variations were:
These are just some of the features. I created many notebooks, added/dropped/modified many features, and performed many experiments, which most of the time gave me a public leaderboard score in the vicinity of 0.685–0.69. Even though the performance of all the models was similar, their predictions were not highly correlated. This gave me the opportunity to take advantage of weighted ensembles to arrive at a higher score.
I took the most similar scoring prediction files with the least correlation and took their weighted average. I continued this process in an uphill fashion. This led to my four best performing predictions, with scores of 0.699–0.7011. I again followed the same heuristic to arrive at my final submission, which gave me a public leaderboard score of 0.704. This entire process is very similar to model stacking, where the predictions of diverse base classifiers are fed to a meta classifier to arrive at better predictions. Only in my case, it was me manually adjusting the weights assigned to different models by validating them against the public leaderboard.
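A rough sketch of that manual blending step: inspect pairwise correlations between submission files, then blend the least-correlated ones with hand-picked weights. The file names, the click_proba column, and the weights below are illustrative assumptions, not the winner's actual values.

```python
import pandas as pd

subs = [pd.read_csv(f) for f in ("sub_a.csv", "sub_b.csv", "sub_c.csv")]   # assumed submission files
probs = pd.DataFrame({f"model_{i}": s["click_proba"] for i, s in enumerate(subs)})
print(probs.corr())                      # pick diverse (low-correlation) models to blend

weights = [0.5, 0.3, 0.2]                # hand-tuned against the public leaderboard (illustrative)
blend = sum(w * probs[col] for w, col in zip(weights, probs.columns))

final = subs[0].copy()
final["click_proba"] = blend
final.to_csv("final_submission.csv", index=False)
```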
Link to Code.
Below are the key learnings from this competition:
This was one of the most interesting and challenging competitions we have hosted so far on Analytics Vidhya. It saw great participation and some really good solutions. I highly recommend going through the approaches and code links mentioned above to gain a deeper understanding of how these competition winners structure their thinking.
For those who missed out this time, don’t worry! We will regularly host hackathons so be sure to head over to our DataHack platform and get cracking on the practice problems we have for you on various domains.
Thanks for sharing. This is very helpful. Where can we find the dataset (train and test CSV files)?
Hi Braj, glad you found this helpful! The contest is closed for now, so the dataset is currently not available. We might float this as a practice problem in the future on our DataHack platform. In the meantime, you can take a look at the other practice problems on that platform. :)