This article was published as a part of the Data Science Blogathon.
Hello, techies! I am sure this article will help you understand how to use an Azure Databricks notebook to perform data-related operations. Let’s go!
Databricks Data Science & Engineering (sometimes called simply “Workspace”) is an analytics platform based on Apache Spark. It is integrated with Azure, AWS, and GCP to provide a one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data engineers, data scientists, and machine learning engineers.
Azure Databricks is a data analytics platform optimized for the Microsoft Azure cloud services platform. It offers two environments for developing data-intensive applications: Databricks Data Science & Engineering and Databricks Machine Learning. Azure is the first-party service provider of Databricks, meaning all support services for Databricks are provided by Azure on its cloud. You can see the Databricks workspace below:
You need at least an Azure free-tier subscription.
Step 1:- Open the Azure portal (portal.azure.com).
Step 2:- To create the Databricks service, click on the “Create a Resource” icon.
Step 2.1:- Now search for the “Azure Databricks” service and click on the “Create” button.
Step 2.2:- Now fill in the details needed for service creation in the Project details section.
Step 2.3:- Keep the defaults for everything else, clicking Next through the Networking, Advanced, and Tags sections.
Step 2.4:- Finally, click on the “Review + Create” button.
Step 2.5:- Once the message “Validation passed” is displayed, click on the “Create” button.
Step 2.6:- Now click on “Go to resource” and you will be redirected to your Azure Databricks service page. Click on “Launch Workspace” and you will be redirected to your workspace.
Now our Azure Databricks service has been created. It’s time to create a cluster to run the notebook.
Step-1:- From the Databricks menu options provided, click on “Compute” to create a cluster.
Step-2:- You will be redirected to the Compute page. Here you get two types of cluster creation options: “All-purpose clusters” and “Job clusters”.
Here we are going to create an all-purpose cluster, so click on the “Create Cluster” button.
Step-3:- Now you will be moved to the new cluster creation page, where you have to set the cluster details (name, runtime version, node type, and so on):
Now our cluster is running, and we are going to create our first Databricks notebook.
Step-1:- Go to Workspace, click the drop-down arrow on it, and create a new folder to keep all our notebooks inside. We will name this folder “inshortsnews”.
Step-2:- Now click on the “inshortsnews” folder drop-down arrow, click on Create, and then click on Notebook.
Step-2.1:- Now provide all the details for notebook creation. For the name, I give our notebook “inshorts-news-data-scrapping”; for the default language, we choose “Python”. If you want, you can also choose R, Scala, or SQL as the default language for your project.
Step-2.2:- Click on Create, and the notebook will be created with the chosen language.
Now we are going to scrape the news data from the Inshorts news web app using Python, pandas, and other libraries.
Inshorts is an aggregator app that summarizes news articles in 60 words and covers a wide range of topics, including tech, business, and other content such as videos, infographics, and blogs. In the image below, we are going to scrape the data inside the rectangle boxes.
Here, we are going to scrape the news headlines, the news contents, and the category of each news article.
The articles have been classified into many categories, but we are going to scrape only 7 of them: technology, sports, politics, entertainment, world, automobile, and science.
To collect these data I used the following libraries: requests, BeautifulSoup4, and pandas. To use these libraries, we first have to install them in our notebook. We only need to install the BeautifulSoup library; the other two already come with our notebook.
Step-1:- To install libraries inside Databricks notebooks, we use the method below:
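One common way, assuming a Databricks Runtime that supports notebook-scoped libraries, is the `%pip` magic command in a notebook cell, which installs the package for the current notebook session:

```
%pip install beautifulsoup4
```

After the install, Databricks may restart the Python process for the notebook, so it is best to run this in the very first cell.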
Step-2:- Now import all the required libraries.
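The imports for the three libraries named above look like this:

```python
import requests                  # HTTP requests to the category pages
from bs4 import BeautifulSoup    # HTML parsing
import pandas as pd              # assembling the scraped data into a table
```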
Step-3:- Now define the endpoints for each category from which we want to scrape the data.
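The endpoints can be built from the seven category slugs. The `https://inshorts.com/en/read/<category>` URL pattern is an assumption based on the public Inshorts site layout at the time of writing, and may need adjusting if the site changes:

```python
# The 7 categories from the article
categories = ["technology", "sports", "politics", "entertainment",
              "world", "automobile", "science"]

# Assumed URL pattern for the Inshorts category pages
urls = [f"https://inshorts.com/en/read/{c}" for c in categories]
```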
Step-4:- Now we will send a request to each of the URLs defined above and parse the response with BeautifulSoup. Then we use list comprehensions to find all the news headlines and news contents in the response data. We also split the URLs to get the news category.
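This step can be sketched as follows. The `itemprop="headline"` and `itemprop="articleBody"` selectors are assumptions about the Inshorts markup (commonly used in Inshorts scraping examples) and may need updating if the page structure changes; `parse_page` and `scrape_category` are illustrative helper names, not part of any library:

```python
import requests
from bs4 import BeautifulSoup

def parse_page(html, url):
    """Extract (headlines, contents, category) from one category page."""
    soup = BeautifulSoup(html, "html.parser")
    # Assumed markup: headlines in <span itemprop="headline">,
    # article text in <div itemprop="articleBody">
    headlines = [tag.get_text(strip=True)
                 for tag in soup.find_all(itemprop="headline")]
    contents = [tag.get_text(strip=True)
                for tag in soup.find_all(itemprop="articleBody")]
    # The category is the last path segment of the URL
    category = url.rstrip("/").rsplit("/", 1)[-1]
    return headlines, contents, category

def scrape_category(url):
    """Fetch one category page and parse it."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return parse_page(response.text, url)
```

Separating the network call from the parsing keeps the parsing logic easy to test on saved HTML without hitting the live site.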
Step-5:- Create the data frame from the dictionary of the data that we have scraped from the Inshorts news web app.
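A minimal sketch of this step, with illustrative column names (`news_headline`, `news_article`, `news_category` are my labels, not fixed by the source):

```python
import pandas as pd

def build_dataframe(headlines, contents, category):
    """Assemble the scraped lists into one table, one row per article."""
    return pd.DataFrame({
        "news_headline": headlines,
        "news_article": contents,
        # Repeat the category so every row carries its label
        "news_category": [category] * len(headlines),
    })

df = build_dataframe(["Sample headline"], ["Sample body"], "technology")
```

Data frames for each category can then be concatenated with `pd.concat` into one dataset.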
Step-6:- Now display the data that we have scraped.
For the final code please click here.
Cheers on reaching the end of the guide and learning some pretty interesting stuff about Azure Databricks! From this guide, you learned how to launch the Databricks service in the Azure cloud. Along with that, you also learned how to create clusters for notebooks in Databricks and the basics of data scraping using Python and pandas.
In the next article, we are going to explore Azure Data Lake Storage Gen2 (ADLS Gen2): how to create an ADLS Gen2 storage service, and how to save our scraped data into this storage account by scheduling our notebook on an hourly basis using Azure Data Factory (ADF). By doing this, we create our own textual dataset for NLP tasks.
Feel free to connect with me on LinkedIn and Github for more content on Data Engineering and Machine Learning!
Happy Learning!!!
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.