First, a confession – I’m a big fan of R! It has flown a bit under the radar in recent years with the rise of Python, but it remains my go-to tool for data science tasks.
There are plenty of reasons why I’ve always appreciated R.
I feel like a kid on Christmas Eve whenever someone pulls out R code on stage! So when I came to know about rstudio::conf last year (which I covered here), I felt giddy with joy. An entire conference on R!
This year’s conference, rstudio::conf 2019, promised an even bigger and better show than 2018. And in this article, I have penned down my thoughts on my favorite topics presented at rstudio::conf 2019.
I have provided as many resources as I could find regarding each presentation. These resources should help you get started with the topic or package on your own.
Note: This year’s conference was held in Austin, Texas, from 15th to 18th January, 2019. The first two days, 15th and 16th, were exclusively for workshops. This article focuses on the main conference talks which were held on 17th and 18th January.
I have categorized the presentations by topic.
Click on each presentation title to access the slide deck for that talk.
We don’t talk enough about model deployment in machine learning. It gets lost in the hackathons and competitions we participate in when we’re learning our way into this line of work. But without learning how deployment works, our model isn’t going to see the light of day.
So it was a welcome sight to see a practical presentation on this topic. James Blair introduced us to the ‘plumber’ package, which converts existing R code into a web API that other services can connect to. You should check out the package’s official documentation to scour through some examples and understand how to use it in your own project.
You can install the ‘plumber’ package from CRAN via the below code:
install.packages("plumber")
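To give you a flavor of how it works, here is a minimal sketch of a plumber API, following the pattern from the package’s documentation (the file name and endpoint here are just illustrative):

# plumber.R -- regular R functions become endpoints via special comments
#* Echo back a message
#* @param msg The message to echo back
#* @get /echo
function(msg = "") {
  list(msg = paste0("The message is: '", msg, "'"))
}

Running plumber::plumb("plumber.R")$run(port = 8000) then serves this function at http://localhost:8000/echo?msg=hello.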
R Markdown is a thing of beauty. But I have personally struggled to generate user-friendly HTML or PDF output from it (perhaps I just need to work harder at it). The ‘pagedown’ package certainly looks like it’s going to be my new best friend.
As Yihui and Romain showcased in their talk, you can use ‘pagedown’ to generate beautiful PDF documents, including business cards, resumes, posters, letters, e-books, research papers, and more. You can check out these examples in their presentation linked above. It looks really neat!
You can install the package from GitHub using the below command (it’s available via CRAN too):
remotes::install_github('rstudio/pagedown')
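Once installed, the typical workflow is to render an R Markdown file with one of pagedown’s output formats and, if you need a PDF, print the result through Chrome. A minimal sketch (the file names here are hypothetical):

# Render an R Markdown file to a paged HTML document
rmarkdown::render("report.Rmd", output_format = pagedown::html_paged())
# Print the paged HTML to PDF (requires Google Chrome)
pagedown::chrome_print("report.html")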
The concept of factors bamboozled me when I started out with R. I hadn’t come across the idea anywhere else. But as I worked with categorical data, I started to see its value for the kind of analysis I used to do.
I do accept that it can still be a frustrating feature to grasp. Most of the categorical data questions I found on Stack Overflow and other sites pertained to the use of factors. There are workarounds, of course, but you can’t really get away for long without having to rely on factors.
This presentation by Amelia McNamara talked about the ‘forcats’ package, a suite of tools that solves common categorical data problems with factors. The package is part of the tidyverse, so if you have that installed, just load it up:
library(forcats)
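To give a flavor of the problems it solves, here is a small sketch on made-up data:

# Factor levels default to alphabetical order
sizes <- factor(c("small", "large", "large", "small", "medium"))
levels(sizes)  # "large" "medium" "small"

# Put the levels in a meaningful order instead
fct_relevel(sizes, "small", "medium", "large")

# Or order them by how often each level appears
fct_infreq(sizes)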
Time series data is messy. Anyone who has even touched the topic has been through that experience. I have tried some shoddy tricks in the past to make my time series analysis presentable, but it takes way too much time.
So when I saw Earo Wang’s presentation title, I was intrigued. Earo spoke about unifying the time series workflow using two packages – tsibble and fable. The presentation walks through both packages using examples, and the slides do a great job of showing the value of a tidy time series pipeline.
Yes, there’s a lot of code included in the presentation itself so you can replicate it on your own machine and watch the magic unfold.
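If you want a quick taste before opening the slides, here is a minimal sketch of the tidy time series workflow (the toy data and model choice are mine, not from the talk; fable was still in development at the time):

library(tsibble)
library(fable)

# A tsibble is a data frame with a declared time index
weather <- tsibble(
  date = as.Date("2019-01-01") + 0:13,
  temp = rnorm(14, mean = 15),
  index = date
)

# Fit a model and produce forecasts in one pipeline
weather %>%
  model(ETS(temp)) %>%
  forecast(h = "3 days")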
Visualizing uncertainty? It can be intimidating for most data science professionals. And communicating that uncertain level to non-technical people? That’s just a whole new layer of difficulty added on top!
Hypothetical Outcome Plots (HOPs) are a very useful way of making non-experts understand uncertainty in your data. You should read the UW Interactive Data Lab’s paper on this concept to understand how HOPs work. According to their research, “HOPs enables viewers to infer properties of the distribution using mental processes like counting and integration”.
This presentation shares several ideas on how to generate these HOPs in R. It’s worth taking a few minutes to browse through the thinking and code. You can also install the ‘ungeviz’ package, a set of tools for visualizing uncertainty.
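As a taste, here is a minimal sketch of the kind of plot ungeviz enables, using a built-in dataset (this is my own toy example, not code from the presentation):

# ungeviz is GitHub-only: remotes::install_github("wilkelab/ungeviz")
library(ggplot2)
library(ungeviz)

# Overlay 25 randomly sampled observations per group as vertical lines;
# animating across the draws (e.g. with gganimate) turns this into a HOP
ggplot(iris, aes(Sepal.Length, Species)) +
  geom_point(alpha = 0.25) +
  geom_vpline(data = sampler(25, group = Species), height = 0.4, color = "red")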
One of my favorite presentations from the conference. Puzzles play a key role in gauging how a person thinks and whether they have the potential to thrive in data science. I can’t recommend them enough to all aspiring data scientists – practice, practice, practice.
This presentation, and the idea behind it, focuses on the programming skills required for dealing with untidy data. Yep, I was intrigued too. Irene Steves presented a series of puzzles she coined the “Tidies of March”. The name is a play on the Ides of March and the ‘tidyverse’, since these puzzles test your data wrangling skills in R.
There is a really cool sandwich example included in the GitHub repository linked above that you should check out. It was a real hit during Irene’s presentation.
Have you worked with addins inside RStudio before? If not, you will find this presentation a good place to start. The content is comprehensive and covers the topic from end to end.
If you’ve ever created an R package before, you’ll find working with addins a breeze.
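For context, an addin is simply an R function that a package registers with RStudio. A minimal sketch (the function name and file contents here are hypothetical):

# R/addin.R -- the function the addin runs
insert_timestamp <- function() {
  rstudioapi::insertText(text = format(Sys.time(), "%Y-%m-%d %H:%M:%S"))
}

# inst/rstudio/addins.dcf -- registers it in RStudio's Addins menu:
#   Name: Insert Timestamp
#   Description: Inserts the current time at the cursor.
#   Binding: insert_timestamp
#   Interactive: false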
I’m a huge fan of how Stitch Fix has built their entire business strategy around data science. Check out their web page dedicated to their algorithms – it’s a walk down paradise lane for any data science professional.
Hilary Parker, a Data Scientist at Stitch Fix, walked us through the basic data science process at their organization. The meat of the talk comes at slide #25, where she introduces the ‘magick’ package. The aim of the package is to modernize and improve high-quality image processing in R.
There are a few design recommendations, tips, and tricks in the presentation as well, which you might find helpful.
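If you haven’t used magick before, here is a minimal sketch of its API (the image URL is the example from the package’s own documentation):

library(magick)

# Read an image, scale it, annotate it, and write it back out
img <- image_read("https://jeroen.github.io/images/frink.png")
img <- image_scale(img, "200")  # scale to 200 pixels wide
img <- image_annotate(img, "Hello, magick!", size = 20, color = "gray20")
image_write(img, "frink_annotated.png")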
Data Science project managers and CxOs – this one is right up your alley. What should you do when your small data science team starts expanding? That line of thought is inevitable once the team starts producing successful results. Should you integrate the team into a certain discipline?
This excellent presentation by Angela Bassa also covers questions like: should you hire specialists for different roles or hold out for unicorn data scientists? What’s the trade-off of hiring junior talent as opposed to a few senior ones? What should a data science team’s composition look like?
These aren’t just theoretical questions. As we enter a golden age of data collection, leadership needs to understand how a data science team functions in the overall scheme of things. Failure to answer these questions effectively could have long-term implications for the organization.
The R universe has been notoriously slow in adopting deep learning frameworks. It’s one of the reasons Python has raced ahead in the industry. But most DL libraries have made their way into R in recent times.
In this presentation, Sigrid Keydana gave us an overview of the Keras framework. The talk then shifted to the main topic, TensorFlow eager execution in R, demonstrated on a couple of key deep learning concepts.
There’s a ton of code available in the presentation. You should go through this post to understand how TensorFlow works in case you haven’t used it before.
Note: To view the full presentation, you’ll have to download/clone the GitHub repository I have linked above. Then unzip the folder and click the ‘.html’ file.
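If you haven’t touched Keras from R before, here is a minimal sketch of what the API looks like (a generic example, not code from the talk):

library(keras)

# Define a small feed-forward network
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# Compile with an optimizer, loss, and metric before training
model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)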
While this doesn’t fit any particular category, I’ve included this presentation to showcase how effective plots can be. The talk was all about RStudio’s global survey to understand their user base and predict their next million users.
The presentation walks through the four key questions they asked, each answered through plots.
I love the R community. There is a wonderful feeling of inclusivity among R users that I haven’t found anywhere else. Do you use (and love) R regularly? Hit me up in the comments section below this article and let’s discuss a few ideas!
These were my top picks among all the presentations at rstudio::conf 2019. Karl Broman has taken the time to put together resources from all the talks in one place. Talks were divided into three parallel tracks, so you can pick and choose depending on your interests.