Winners’ Solutions from the Super Competitive “The Ultimate Student Hunt”

Kunal Jain | Last Updated: 09 Oct, 2016

Introduction

The ultimate victory in a competition comes from the inner satisfaction of knowing that you have done your best and made the most of it.

Last week, we conducted our first-ever students-only hackathon – The Ultimate Student Hunt – and it launched with a bang! Young machine learning champions from across the world battled it out to claim a spot on the leaderboard. During the competition, we had close to 1,500 registrations and the participants made 15,000 submissions! The student community exchanged more than 20,000 messages during the contest!

Honestly speaking, we were both pleasantly surprised and amazed at the response we got from the participants. We had thought we would need to do some hand-holding for the student community. We couldn’t have been more wrong! The quality of the code, the sophistication of the solutions and the thought process behind them took us by surprise. We can confidently say that the future of machine learning is in the right hands.

To enhance the experience for our participants, we conducted a live AMA with one of our top data scientists – Rohan Rao, winner of the last two AV hackathons and currently ranked 4 on AV. Rohan was a great sport and a huge inspiration to the students. He not only agreed on short notice, but also spent more than the promised time on the AMA and did a rapid-fire round at the end!

It was a one-of-a-kind competition, and if you didn’t participate in it, trust me, you missed out on a great opportunity. To share the knowledge, the top 5 rankers from the competition have shared their approach and code. Read on to see what you missed and where you could have improved.

 

The Competition

The competition was launched on 24 Sept 2016 with registrations from data science students all over the world. The battle continued for 9 days, with participants giving each other cut-throat competition. We launched this competition with the aim of helping students prove their mettle in machine learning hackathons, and we were amazed by the interaction with these young machine learning enthusiasts.


Seeing the buzz the competition created, students kept joining even after it had started. The competition ended on 02 Oct 2016 at 23:59 with a total of 1,494 participants.

The participants were required to help a country – Gardenia – understand the health habits of its citizens. The evaluation metric used was RMSE. The problem statement was designed for young data science enthusiasts to help them explore, innovate and use their machine learning skills to the fullest.

 

The Problem Set

Gardenia is a country which believes in creating harmony between technology and natural resources. In the past few years, it has come up with ways to utilise natural resources effectively through technological advancements.

The country takes pride in the way it has maintained its natural resources and gardens, and the government of Gardenia wants to use data science to generate insights about the health habits of its people.

The mayor of Gardenia wants to know how many people visit the parks on a particular day. The government provided some environmental information about Gardenia and asked young data scientists to help generate these insights.

 

Winners

Although the competition was fierce and challenging, there are always a few participants who take a slightly different approach and come out as champions. Heartiest congratulations to all the winners – we know it wasn’t an easy win.

Below are the Top 5 winners from The Ultimate Student Hunt.

Rank 1 – Benedek Rozemberczki

Rank 2 – Aakash Kerawat & Akshay Karangale (Team – AK)

Rank 3 – Mikel Bober (a.k.a. anokas)

Rank 4 – Akhil Gupta

Rank 5 – Akash Gupta

Here are the final rankings of all the participants on the leaderboard.

All five winners have shared the approach and code they used in the competition.

 

Winners’ Solutions

Rank 5: Akash Gupta (Roorkee, India)


Akash Gupta

Akash Gupta is a 4th-year undergraduate student at IIT Roorkee, India. He is a data science enthusiast and participates in several competitions to test his knowledge and skills.

Here’s what he shared:

I started by filling the missing values with the pad method (i.e. copying from the previous row), since a park’s attributes should be similar to what they were the previous day. A better approach could have been to do this individually for each park. For feature selection, I observed that the month of the year and the day of the month were highly influential in determining the footfall.
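
A minimal sketch of this kind of imputation in pandas; the file name, date format and column names (`Date`, `Park_ID`) are assumptions based on the competition data:

```python
import pandas as pd

# Load the training data (file and column names are assumed).
train = pd.read_csv("train.csv", parse_dates=["Date"], dayfirst=True)
train = train.sort_values(["Park_ID", "Date"])

# Pad / forward-fill: copy each missing value from the previous row.
train_padded = train.ffill()

# The refinement mentioned above: forward-fill within each park only,
# so one park's readings never leak into another park's rows.
cols = [c for c in train.columns if c != "Park_ID"]
train[cols] = train.groupby("Park_ID")[cols].ffill()
```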

I made a 0/1 feature for winter (months 11, 12, 1, 2, 3). For the dates, I observed a pattern in the variation of mean footfall with increasing dates, but there were some anomalies which I tried to treat by averaging across adjacent days. (I did this for all the parks together; a better approach could have been to do it for each park.) I also binned the direction of wind to represent the 4 directions.
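
A sketch of those two features, assuming `Date` is parsed as a datetime and `Direction_Of_Wind` is given in degrees (0–360); the column names follow the competition data but are still assumptions here:

```python
# 0/1 winter indicator for months 11, 12, 1, 2 and 3.
train["is_winter"] = train["Date"].dt.month.isin([11, 12, 1, 2, 3]).astype(int)

# Bin the wind direction (degrees) into the four compass quadrants 0-3.
train["wind_quadrant"] = (train["Direction_Of_Wind"] // 90).clip(upper=3)
```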

For the modelling part, I started with gradient boosted trees and tuned them to get the best result in CV (by testing on years 2000-2001), then tried XGBoost and tuned it as well. Finally, I built a neural network with one wide hidden layer. I averaged the results of these 3 models to get the final output.

In addition, for the GBM and XGBoost models I trained the regressors for each park independently, as I believed each park would have its own pattern and relationship with the other variables. For the neural network model, I trained on all parks together, feeding in the park ID as a feature, since it needed a larger number of samples to train.
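
A rough sketch of the per-park training idea with scikit-learn’s GradientBoostingRegressor, assuming `train` and `test` dataframes are loaded as in the earlier sketch; the target name `Footfall`, the dropped columns and the hyperparameters are assumptions:

```python
from sklearn.ensemble import GradientBoostingRegressor

# Columns not used as predictors (names assumed).
drop_cols = ["ID", "Date", "Footfall"]
features = [c for c in train.columns if c not in drop_cols]

# One regressor per park, since each park may follow its own pattern.
park_predictions = {}
for park_id, park_train in train.groupby("Park_ID"):
    model = GradientBoostingRegressor(random_state=0)
    model.fit(park_train[features], park_train["Footfall"])
    park_test = test[test["Park_ID"] == park_id]
    park_predictions[park_id] = model.predict(park_test[features])

# The final output described above was a simple average of the GBM, XGBoost
# and neural-network predictions, e.g. final = (gbm + xgb + nn) / 3.
```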

Link to Code

 

Rank 4: Akhil Gupta (Roorkee, India)


Akhil Gupta

Akhil Gupta is a 4th-year undergraduate student at IIT Roorkee. He is interested in exploring the depths of data and wants to popularise data science amongst young minds.

Here’s what he shared:

My solution approach was simple and straightforward. I focussed most on data preprocessing and less on model implementation. It was a well-balanced dataset with a good number of categorical and continuous variables.

I spent my initial time on imputing the missing values, because some variables had 40% of their data missing. The pattern I observed was that any feature for a park depends on that feature for the other parks in the same location on that day.
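
A minimal sketch of that idea, imputing a park’s missing value from the mean of the same feature for the other parks at the same location on the same day; `Location_Type` and the weather column names are assumptions:

```python
# Columns to impute (illustrative subset).
weather_cols = ["Average_Breeze_Speed", "Direction_Of_Wind"]

for col in weather_cols:
    # Mean of the feature across parks sharing a location on a given day.
    same_day_mean = train.groupby(["Location_Type", "Date"])[col].transform("mean")
    train[col] = train[col].fillna(same_day_mean)
```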

I submitted my first solution using just Date, Month and Park ID, which gave me a public LB score of 146. As I kept imputing missing values and adding features, I got a huge boost to 113 from the imputation alone. There was some noise, which I cleaned. As the features varied a lot, I scaled them down to a range of 0 to 10.
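
Scaling the continuous features to a 0–10 range can be done with scikit-learn’s MinMaxScaler; the column list here is illustrative:

```python
from sklearn.preprocessing import MinMaxScaler

continuous_cols = ["Average_Breeze_Speed", "Direction_Of_Wind"]  # illustrative
scaler = MinMaxScaler(feature_range=(0, 10))
train[continuous_cols] = scaler.fit_transform(train[continuous_cols])
```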

Binning of categorical variables was important: when you observe the medians (using a boxplot), you see decent similarities between parks, months and dates. I didn’t spend much time on my model owing to some commitments. I tried a GradientBoostingRegressor, but I am sure that XGBoost with parameter tuning could have given me a further gain of 2-3 points.

Cross-validation was key, and I used the data for 2000-01 to check my model.

Trust me, it was a really catchy dataset and kept me busy for 4-5 days. My state of mind was relaxed and chilled out, but the LB was quite competitive, which kept me motivated throughout.

Link to Code

 

Rank 3: Mikel Bober (London, UK)


Mikel Bober

Mikel Bober, a.k.a. anokas, is a young machine learning champion who has already earned high accolades at such a young age. A Kaggle Master and super genius, he is an inspiration even to professionals. He kept discussions going throughout the competition and shared his knowledge with all participants.

Here’s what he shared:

In any competition, my very first step is to submit a preliminary model to test data validation and set a benchmark score against which to compare future, more complex submissions. I began with XGBoost, and I knew that a random split was not going to work because the competition is a time-series prediction problem.
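
A sketch of such a time-based hold-out, keeping the last three years of the training data for validation; the column name and date format are assumptions:

```python
import pandas as pd

train["Date"] = pd.to_datetime(train["Date"], dayfirst=True)
cutoff = train["Date"].max() - pd.DateOffset(years=3)

train_part = train[train["Date"] <= cutoff]   # fit on the earlier years
valid_part = train[train["Date"] > cutoff]    # validate on the last three years
```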

I split the training set into train and validation sets based on time, taking the last three years of training data as a validation set. My first model had an RMSE of 190 in validation and scored 200 on the leaderboard. After this I decided to do some quick feature engineering. The first obvious thing I was missing was the date variable, which I had excluded previously as it was not numeric. I made one-hot features from the Park ID so XGBoost could better model each park individually. Then I added a feature importance function to my XGBoost model and took a look at the importances. Strangely, Direction_Of_Wind was the most important feature, and contributed a lot to the score.
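
A sketch of the Park ID one-hot encoding and feature-importance inspection with XGBoost; the dropped columns, target name `Footfall` and hyperparameters are assumptions:

```python
import pandas as pd
import xgboost as xgb

# One-hot encode Park_ID so each park gets its own indicator column.
X_train = pd.get_dummies(train_part.drop(columns=["Footfall", "Date", "ID"]),
                         columns=["Park_ID"])
y_train = train_part["Footfall"]

model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_train, y_train)

# Rank features by the importances XGBoost reports.
importances = sorted(zip(X_train.columns, model.feature_importances_),
                     key=lambda pair: pair[1], reverse=True)
print(importances[:10])
```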

After doing all the initial feature engineering, it usually takes me some time to think of new, super features. I find that forcefully staring at graphs doesn’t help me find new things in the data; rather, taking a step back and looking at the problem from a simpler viewpoint usually helps me find the things that others don’t. The question you have to ask yourself is: “What information would affect whether people go to the park?”

My conclusion was that if it rained continuously for a week, it would have a big impact, so I made ‘lead’ and ‘lag’ features, meaning that the features for the last two days and the next two days in the dataset were also included as features for the current day, allowing the model to learn these cases. The features were a success, improving my score to 109.
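
A minimal sketch of such lead/lag features using pandas shifts, computed per park on a date-sorted frame; the column subset is illustrative:

```python
lag_cols = ["Average_Breeze_Speed", "Direction_Of_Wind"]  # illustrative subset

train = train.sort_values(["Park_ID", "Date"])
for col in lag_cols:
    by_park = train.groupby("Park_ID")[col]
    for offset in (1, 2):
        train[f"{col}_lag{offset}"] = by_park.shift(offset)    # previous days
        train[f"{col}_lead{offset}"] = by_park.shift(-offset)  # following days
```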

To bring in external weather conditions, I wrote a scraper to go through the Weather Channel’s website and download past data going back to the nineties for a bunch of major cities in India. However, none of the data seemed to match, and I hit a dead end. You mustn’t be afraid of trying things that will most likely fail. The reason people win is that they had more failed features than you, but they keep going.

The last step of any competition for me is to build an ensemble or meta-model. Here I went for a very simple ensemble: a “bagged” neural network in Keras. After every epoch, I saved the weights to disk, and after the network finished training, I took all the epochs with less than a certain validation loss and averaged their predictions to make my final neural network predictions. At the end, I took a simple mean of my two models and used this as the final submission.
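
A rough sketch of that checkpoint-averaging idea using the tf.keras API (the original used standalone Keras); the architecture, the cut-off and the `X_train`, `y_train`, `X_valid`, `y_valid`, `X_test` and `xgb_pred` arrays are all assumptions:

```python
import numpy as np
from tensorflow import keras

# A small feed-forward regressor; the architecture here is illustrative.
model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Save the weights after every epoch.
checkpoint = keras.callbacks.ModelCheckpoint(
    "epoch_{epoch:02d}.weights.h5", save_weights_only=True, save_best_only=False)
history = model.fit(X_train, y_train, validation_data=(X_valid, y_valid),
                    epochs=30, batch_size=128, callbacks=[checkpoint])

# Average the predictions of every epoch below a validation-loss cut-off.
threshold = np.percentile(history.history["val_loss"], 25)  # illustrative cut-off
epoch_preds = []
for epoch, val_loss in enumerate(history.history["val_loss"], start=1):
    if val_loss <= threshold:
        model.load_weights(f"epoch_{epoch:02d}.weights.h5")
        epoch_preds.append(model.predict(X_test).ravel())
nn_pred = np.mean(epoch_preds, axis=0)

# Final submission: a simple mean of the two models.
final_pred = (xgb_pred + nn_pred) / 2.0
```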

You can read his complete approach here.

Link to Code

 

Rank 2: Aakash Kerawat (Roorkee, India) & Akshay Karangale (Roorkee, India)


Aakash Kerawat


Akshay Karangale

Aakash Kerawat and Akshay Karangale are 4th-year undergraduate students at IIT Roorkee, India. Aakash has participated in several machine learning competitions in the past to test his skills. They participated in this competition together.

They shared:

We started by exploring the data, and on visualising the target variable against date we found a clear pattern. Owing to this we created features from the date, such as day of week, day of year, day of month, month and year. Using these features along with the given raw features, we tried a few models, from linear models to XGBoost. Among these, XGBoost proved to be the best and gave us a public LB score of 133.xx.
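
A sketch of those date features in pandas; the file name, date format and column name are assumptions:

```python
import pandas as pd

train = pd.read_csv("train.csv", parse_dates=["Date"], dayfirst=True)

train["day_of_week"] = train["Date"].dt.dayofweek
train["day_of_year"] = train["Date"].dt.dayofyear
train["day_of_month"] = train["Date"].dt.day
train["month"] = train["Date"].dt.month
train["year"] = train["Date"].dt.year
```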

After this we tried binning the continuous weather variables like “Average_Breeze_Speed”, “Direction_Of_Wind” and so on. Based on our intuition we also aggregated months into seasons, since the footfall may depend directly on them; the visualisations also indicated a clear difference in mean footfall across seasons. In addition, we created range features for pressure, breeze speed etc. from the given max and min values. All these features gave us an improved score of 123.xx, and tuning the model further improved it to 118.xx. This was the score at which we got stuck, where nothing seemed to work further. Another set of features we tried was grouped means of footfall by “Park_ID and binned direction of wind”, “Park_ID and min moisture” etc., but they led to overfitting.
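
A sketch of the season aggregation and the max/min range features; the month-to-season mapping and the pressure/moisture column names are assumptions:

```python
# Collapse months into four seasons (mapping is illustrative).
season_map = {12: 0, 1: 0, 2: 0,    # winter
              3: 1, 4: 1, 5: 1,     # spring
              6: 2, 7: 2, 8: 2,     # summer
              9: 3, 10: 3, 11: 3}   # autumn
train["season"] = train["Date"].dt.month.map(season_map)

# Range features built from the given max/min pairs (column names assumed).
train["pressure_range"] = (train["Max_Atmospheric_Pressure"]
                           - train["Min_Atmospheric_Pressure"])
train["moisture_range"] = (train["Max_Moisture_In_Park"]
                           - train["Min_Moisture_In_Park"])
```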

As nothing was working, we decided to dive deeper into the data with more visualisations. We plotted nearly all variables against date and sensed some possible noise in them. To remove the noise we smoothed the variables with moving averages / rolling means using windows of 3, 7 and 30 days, and this gave us a huge jump in our score from 118 to 107.xx. We didn’t stop there and added another feature, the “percentage change” of the weather variables from one day to the next, since we wanted to convey to the algorithm how the weather changed from day to day. This worked too, and our score further improved to 100.xx. We then played with different windows for the moving averages and checked how they reflected on CV while avoiding overfitting. Our final model had around 43 features, which scored 96.xx and 86.xx on the public and private LB respectively.
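
A sketch of the rolling-mean smoothing and day-to-day percentage change, computed per park on a date-sorted frame; the column subset is illustrative:

```python
smooth_cols = ["Average_Breeze_Speed", "Direction_Of_Wind"]  # illustrative subset

train = train.sort_values(["Park_ID", "Date"])
by_park = train.groupby("Park_ID")
for col in smooth_cols:
    # Rolling means over 3, 7 and 30 days within each park.
    for window in (3, 7, 30):
        train[f"{col}_roll{window}"] = by_park[col].transform(
            lambda s: s.rolling(window, min_periods=1).mean())
    # Day-to-day percentage change of the variable within each park.
    train[f"{col}_pct_change"] = by_park[col].pct_change()
```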

What we learned is that feature engineering, done the right way, is one of the key elements to improving your score.

Link to Code

 

Rank 1: Benedek Rozemberczki (Edinburgh, UK)


Benedek

Benedek Rozemberczki is a data science research intern at The University of Edinburgh. With his expertise and skills, Benedek topped the competition.

Here’s what he shared: 

1. The fact that I was able to take the lead early on was motivating. In addition, when other teams gained on me, I was able to come up with novel feature engineering methods.

2. The main insight was that XGBoost is able to deal with seasonality when the time series in the panel data is fairly stationary apart from that seasonality. It also turned out that missing value imputation is useless if you can use aggregates computed across the observational units (a sketch of this idea follows the list). Panel data gives neat opportunities for feature engineering.

3. Coding mindlessly and sitting at your computer is not helpful when you hit a wall. You should try to get inspiration from somewhere else. Even sketching ideas and organising them on a whiteboard might help.

4. Understanding the missing values can help a lot. It also did in this case.

5. Automation of the data cleaning process – writing custom one-hot encoders and column normalisers – is also helpful. I have my own set of functions that I reuse across data science projects. It is important to have a set of functions that you can use everywhere.
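
As an illustration of point 2 above (the cross-observational-unit aggregates), here is a minimal sketch that adds per-date aggregates computed across all parks as features, rather than imputing missing values; the column names are assumptions:

```python
# Aggregate each weather variable across all parks for the same date,
# and join the aggregates back as features (no imputation needed).
agg_cols = ["Average_Breeze_Speed", "Direction_Of_Wind"]  # illustrative subset

for col in agg_cols:
    daily = train.groupby("Date")[col].agg(["mean", "min", "max"])
    daily.columns = [f"{col}_daily_{stat}" for stat in ["mean", "min", "max"]]
    train = train.merge(daily, left_on="Date", right_index=True, how="left")
```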

Link to Code

 

End Notes

It was great interacting with all the winners and participants, and we thoroughly enjoyed the competition. If you can’t wait for the next hackathon, stay tuned and check out all the upcoming competitions here.

What are your opinions and suggestions about these approaches? Tell us in the comments below.

You can test your skills and knowledge. Check out Live Competitions and compete with the best data scientists from all over the world.

Kunal Jain is the Founder and CEO of Analytics Vidhya, one of the world's leading communities of AI professionals. With over 17 years of experience in the field, Kunal has been instrumental in shaping the global AI landscape. His expertise spans diverse markets, from developed economies like the UK to emerging ones like India, where he has successfully led and delivered complex data-driven solutions. As a recognized thought leader, Kunal has empowered countless individuals to realize their AI ambitions through his visionary approach to AI education and community building. Before founding Analytics Vidhya, Kunal earned both his undergraduate and postgraduate degrees from IIT Bombay and held key roles at Capital One and Aviva Life Insurance across multiple geographies. His passion lies at the intersection of analytics, AI, and fostering a thriving community of data science professionals.

Responses From Readers


J_ratt

Was this a Python competition? I cannot see any solutions in R. It would be helpful if you could publish a solution in R as well.

Magento Developer UK

Thanks for sharing such an awesome post; learned a lot from all the winners, their way of solving things, and how to approach and plan.

Barbara Pantuso

Very informative article, thanks for sharing such useful information with us.
