Connecting and Reading Data From Azure Data Lake

Gitesh Dhore | Last Updated: 02 Sep, 2022
6 min read

This article was published as a part of the Data Science Blogathon.

Introduction

You can access your Azure Data Lake Storage Gen1 account directly from RapidMiner Studio. This capability is provided by the Azure Data Lake Storage connector, which supports both reading and writing. You can also read from a set of files in an Azure Data Lake Storage directory using the Loop Azure Data Lake Storage operator. This article will show you exactly how to do it.

Connect to Azure Data Lake

Before using the Azure Data Lake Storage connector, you must configure your Azure environment to allow remote connections and set up a new Azure Data Lake Storage Gen1 connection in RapidMiner.
For this, you need to go through the following main steps (see details below).
  • Create a web application registration in the Azure portal.
  • Get information for remote connections.
  • Set up and test your connection in RapidMiner.

Step 1: Create a web application registration in the Azure portal

  • Create and configure an Azure AD web application to enable service-to-service authentication with Azure Data Lake Storage Gen1 using Azure Active Directory. Go through Step 1 to Step 3 of the Service-to-service authentication guide. The first step registers a web application that will give RapidMiner access to the storage. Note that you can use any values for the Name and Sign-on URL fields.
  • The second step describes how to get your tenant ID, application ID for the registered application, and the key that must be provided in RapidMiner to use the application.
  • The third step will configure this Active Directory application to access your Data Lake storage.
  • After completing these steps in your Azure tenant, you should have a web application registration configured to access some or all of the components of your target Azure Data Lake Storage Gen1 resource.
  • Note that for RapidMiner’s operator file browser (see below) to work, you must grant read and execute access to the root directory and to all directories where you want to allow navigation.
  • Additionally, RapidMiner needs write permission to write to the cloud storage. If you can work without the file browser, you can restrict permissions to the target folders/files that your operators use directly. (A quick way to verify these permissions is sketched after this list.)
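Outside of RapidMiner, you can verify these permissions with Microsoft’s azure-datalake-store Python package (the Gen1 SDK). This is a minimal sketch, not RapidMiner’s own mechanism; the store name contoso and the credential placeholders are assumptions:

```python
# pip install azure-datalake-store
from azure.datalake.store import core, lib

# Placeholders for the tenant ID, application ID, and key
# obtained from the app registration above.
token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<application-id>",
                 client_secret="<application-key>")
adl = core.AzureDLFileSystem(token, store_name="contoso")  # hypothetical account

# Listing the root requires the same read and execute access on "/"
# that RapidMiner's file browser needs.
try:
    print(adl.ls("/"))
except Exception as err:  # the SDK raises on permission failures
    print("Root directory is not browsable with these permissions:", err)
```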

Step 2: Get the remote connection information

To create a connection in RapidMiner, you need the following information. The previous step and the linked guide describe how to obtain it, but here are the direct links to those details again.
  • Tenant ID that identifies your company account. Get a tenant ID.
  • The fully qualified domain name (FQDN) of your account. Example: if your Azure Data Lake Storage Gen1 account is named Contoso, the FQDN defaults to contoso.azuredatalakestore.net.
  • The application ID and application key for the web application you created. Get your app ID and authentication key. (The snippet below shows how the FQDN relates to the account name.)
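One small detail worth noting: the account FQDN that RapidMiner asks for is just the account (store) name plus a fixed suffix. A hypothetical example:

```python
# The FQDN is the store name followed by the azuredatalakestore.net suffix.
fqdn = "contoso.azuredatalakestore.net"  # hypothetical account
store_name = fqdn.split(".", 1)[0]       # -> "contoso"
```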

Step 3: Set up and test the new Azure Data Lake Storage Gen1 connection in RapidMiner

Once you have all the information, it’s easy to set up the connection in RapidMiner.

  1. Open the Manage Connections dialog in RapidMiner Studio by going to Connections > Manage Connections.
  2. Click the Add Connection icon at the bottom left.
  3. Enter a name for the new connection and select Azure Data Lake Storage Gen1 Connection as the Connection Type.
  4. Fill in the connection details for your Azure Data Lake Storage Gen1 account: the tenant ID, the account FQDN (fully qualified domain name), the client ID (the web application ID), and the client key (the password for the web application).
  5. Although not required, we recommend testing your new Azure Data Lake Storage Gen1 connection by clicking the Test Connection button. (A rough programmatic analogue is sketched after this list.)
  6. Click the Save all changes button to save the connection and close the Manage Connections window. You can now start using the Azure Data Lake Storage operators.
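If you later want to check the same connection from a script, a rough analogue of the Test Connection button can be sketched with the azure-datalake-store package (again with placeholder credentials and a hypothetical account name):

```python
from azure.datalake.store import core, lib

token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<application-id>",
                 client_secret="<application-key>")
adl = core.AzureDLFileSystem(token, store_name="contoso")  # hypothetical account

# Any metadata call works as a probe: it raises if the tenant ID,
# client credentials, or account name are wrong.
print(adl.info("/"))
```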

Read from Azure Data Lake Storage

The Read Azure Data Lake Storage operator reads data from your Azure Data Lake Storage Gen1 account. It can be used to load any file format, since it only downloads files and does not process them. To process the files, you need additional operators such as Read CSV, Read Excel, or Read XML.
Let’s start by reading a simple CSV file from Azure Data Lake Storage. (An SDK-based equivalent of these steps is sketched after the list.)
  1. Open a new process with the New Process icon in RapidMiner Studio and select Empty Project from the list. Drag the Read Azure Data Lake Storage operator into the Process view and connect its output port to the result port of the process.
  2. Click the file picker icon to view the files in your Azure Data Lake Storage Gen1 account. Select the file you want to load and click Open. Note that to use the file browser starting from the root folder, you must have read and execute access to the root directory. If you do not have this permission, you can enter the path directly in the parameter field; the file browser also works if you have access to both the parent folder of the path (file or directory) and the root folder. Alternatively, you can always enter a path manually and run the operator with it, in which case the permission is only checked at runtime.
    As mentioned above, the Read Azure Data Lake Storage operator does not process the contents of the specified file. In our example, we chose a CSV (comma-separated values) file. This file type can be processed using the Read CSV operator.
  3. Add a Read CSV operator between the Read Azure Data Lake Storage operator and the result port. You can set the parameters of the Read CSV operator, for example the column separator, depending on the format of your CSV file.
  4. Run the process. In the Results perspective, you should see a table containing the rows and columns of the selected CSV file.
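For comparison, the same download-then-parse flow can be sketched outside RapidMiner with the azure-datalake-store and pandas packages. The store name, file path, and separator below are hypothetical:

```python
# pip install azure-datalake-store pandas
import pandas as pd
from azure.datalake.store import core, lib

token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<application-id>",
                 client_secret="<application-key>")
adl = core.AzureDLFileSystem(token, store_name="contoso")  # hypothetical account

# Download step (Read Azure Data Lake Storage): open the raw file.
# Parse step (Read CSV): interpret the bytes, setting the column
# separator just as you would in the operator's parameters.
with adl.open("/data/example.csv", "rb") as f:  # hypothetical path
    df = pd.read_csv(f, sep=",")

print(df.head())
```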
Now you can use other operators to work with this document, for example to determine the frequency of certain events. To write the results back to Azure Data Lake Storage, you can use the Write Azure Data Lake Storage operator. It uses the same connection type as the Read Azure Data Lake Storage operator and has a similar interface. You can also read from a set of files in an Azure Data Lake Storage directory using the Loop Azure Data Lake Storage operator. To do this, specify the connection and the folder you want to process, and define the processing steps with nested operators. See the Loop Azure Data Lake Storage operator's help for more details.
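The loop-and-write pattern can be sketched the same way: iterate over the files in a folder, process each one, and write a result back. All names below are hypothetical:

```python
import pandas as pd
from azure.datalake.store import core, lib

token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<application-id>",
                 client_secret="<application-key>")
adl = core.AzureDLFileSystem(token, store_name="contoso")  # hypothetical account

# Loop Azure Data Lake Storage: process every CSV file in a folder.
frames = []
for path in adl.ls("/data/incoming"):  # hypothetical folder
    if path.endswith(".csv"):
        with adl.open(path, "rb") as f:
            frames.append(pd.read_csv(f))

# Write Azure Data Lake Storage: write the combined result back.
combined = pd.concat(frames, ignore_index=True)
with adl.open("/data/combined.csv", "wb") as f:  # hypothetical target
    f.write(combined.to_csv(index=False).encode("utf-8"))
```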

Conclusion

Azure Data Lake includes all the capabilities needed to make it easy for developers, data scientists, and analysts to store data of any size, shape, and speed and to perform all types of processing and analytics across platforms and languages. It removes the complexity of ingesting and storing all your data while making it faster to get up and running with batch, streaming, and interactive analytics. It works with existing IT investments in identity, governance, and security to simplify data management and governance. It also integrates seamlessly with operational stores and data warehouses so you can extend existing data applications.

Key takeaways:

  • Create a web application registration in the Azure portal, get the information needed for remote connections, and set up and test a new Azure Data Lake Storage Gen1 connection in RapidMiner.
  • If you lack read and execute access to the root directory, you can enter the path in the parameter field; the file browser works if you have access to both the parent folder of the path (file or directory) and the root folder.
  • The Read Azure Data Lake Storage operator reads data from your Azure Data Lake Storage Gen1 account. It can load any file format, since it only downloads files and does not process them.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

