How to Launch Your First Amazon Elastic MapReduce (EMR) Cluster?

Pavan Kumar Last Updated : 12 Jan, 2023
5 min read

Introduction

Amazon Elastic MapReduce (EMR) is a fully managed service that makes it easy to process large amounts of data using the popular open-source framework Apache Hadoop. EMR enables you to run petabyte-scale data warehouses and analytics workloads using the Apache Spark, Presto, and Hadoop ecosystems.


Amazon Elastic MapReduce (EMR) is designed to be flexible and easy to use. It lets you quickly set up and scale a big data environment without worrying about infrastructure and maintenance. EMR can process data stored in Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon Redshift, as well as data from your on-premises sources.

EMR provides many powerful tools and features to help you process and analyze your data, including running custom scripts, integrating with other AWS services, and setting up automatic scaling. With EMR, you can efficiently perform many big data tasks, such as data transformation, machine learning, real-time processing, and more.

Advantages of Using Amazon Elastic MapReduce (EMR)

There are several reasons why you might choose to use Amazon Elastic MapReduce (EMR) for big data processing:

  • Fully managed service: EMR is a fully managed service that takes care of your underlying infrastructure and maintenance. This means you can focus on processing and analyzing your data rather than worrying about setting up and maintaining a big data environment.
  • Scalability: EMR makes it easy to scale your big data processing needs up or down as needed. You can easily add or remove instances from your cluster to meet changing demands.
  • Integration with other AWS services: EMR integrates seamlessly with other AWS services, such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon Redshift. This makes it easy to process and analyze data stored in these services.
  • Wide range of tools and frameworks: EMR provides tools and frameworks for big data processing and analysis, including Apache Spark, Presto, and Hadoop. This gives you considerable flexibility in how you process and analyze your data.
  • Customization options: EMR allows you to customize your cluster by installing additional applications or libraries or running custom scripts. This allows you to tailor your big data environment to your specific needs.

Stepwise Process to Launch Amazon Elastic MapReduce (EMR) on AWS

To launch an Amazon Elastic MapReduce (EMR) cluster, you will need to follow these steps:

Step 1. Sign in to the AWS Management Console and navigate to the EMR service page.

Step 2. Click the “Create cluster” button to create a new EMR cluster.


Step 3. On the “Select Configuration” page, choose the software and instance types you want to use for your cluster. You can also specify the number of instances and the instance sizes.


Also, if you want to attach your cluster to a Jupyter notebook, check the “JupyterEnterpriseGateway” application.


Step 4. On the “Hardware Configuration” page, choose the type of hardware you want to use for your cluster. You can choose between On-Demand Instances and Spot Instances.

Step 5. On the “General Cluster Settings” page, specify your cluster’s name and logging options. You can also list any additional applications or libraries you want installed on the cluster.


Step 6. On the “Security and Access” page, specify the security settings for your cluster. You can use an existing security group or create a new one, and specify the EC2 key pair to use. After completing this step, click “Create cluster”.


After your cluster is launched, you can access it through the EMR console or using the AWS CLI or SDKs. You can then use the tools and frameworks provided by EMR to process and analyze your data.
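For example, the same launch can be scripted with boto3, the AWS SDK for Python. This is a minimal sketch, not the article's own configuration: the cluster name, release label, instance types, key pair name, and log bucket below are placeholder assumptions you would replace with your own values.

```python
# Sketch: launching an EMR cluster with boto3 (the AWS SDK for Python).
# All names and sizes below are placeholder assumptions.

cluster_request = {
    "Name": "my-first-emr-cluster",         # assumed cluster name
    "ReleaseLabel": "emr-6.9.0",            # assumed EMR release
    "Applications": [{"Name": "Spark"}, {"Name": "Hadoop"}],
    "LogUri": "s3://my-emr-logs/",          # assumed S3 log bucket
    "Instances": {
        "MasterInstanceType": "m5.xlarge",  # assumed instance types
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,                 # 1 master + 2 core nodes
        "Ec2KeyName": "my_ec2_key_pair",    # assumed EC2 key pair
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",   # EMR's default roles
    "ServiceRole": "EMR_DefaultRole",
}

def launch_cluster(client):
    """Submit the request and return the new cluster (job flow) id."""
    response = client.run_job_flow(**cluster_request)
    return response["JobFlowId"]

# With AWS credentials configured, you would run:
#   import boto3
#   cluster_id = launch_cluster(boto3.client("emr"))
```

The request is kept as a plain dictionary so the same configuration can be logged, reviewed, or reused before anything is actually launched.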

Executing a Pyspark Script on Amazon Elastic MapReduce (EMR)

Now let’s see how to execute a sample PySpark script on EMR.
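The script shown in the original screenshot is not reproduced here. As a stand-in, here is a minimal hypothetical samplePysparkScript.py — it simply doubles a range of numbers in parallel, which is enough to verify that Spark jobs run on the cluster:

```python
# Hypothetical stand-in for samplePysparkScript.py (the original script
# from the article's screenshot is not reproduced here).

def double(x):
    """The transformation applied to each element of the RDD."""
    return x * 2

if __name__ == "__main__":
    # pyspark is preinstalled on the EMR master node.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("SampleEMRJob").getOrCreate()
    numbers = spark.sparkContext.parallelize(range(10))
    doubled = numbers.map(double).collect()  # [0, 2, 4, ..., 18]
    print(doubled)
    spark.stop()
```

Any script with this shape — build a session, transform an RDD or DataFrame, print or write the result — can be submitted the same way the article shows next.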

First, connect to the master node using SSH. Open a terminal in the directory where your EC2 key pair (.pem file) is located. Then, in the EMR console, click “Connect to the Master Node Using SSH” to view the connection command.

Now copy the command and paste it into the terminal.


If the connection succeeds, your terminal will display the EMR welcome banner.

Now, to upload the PySpark script from your local machine to the master node, open another terminal and run the following command:

scp -i ./my_ec2_key_pair.pem samplePysparkScript.py hadoop@<master_public_dns>:~/


Your file has now been uploaded to the cluster; you can confirm this by running “ls” in the SSH session on the master node.

Now, to run the script on EMR, simply run “spark-submit samplePysparkScript.py”.


Once the job finishes, the script’s output is printed to the terminal.

When you are finished with your cluster, terminate it to stop incurring charges. If you want to use this cluster again later, select it from the “Clusters” list and click “Clone”, with or without modifying the previously chosen settings.
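Termination can also be scripted with boto3. A small sketch — the cluster id is a placeholder; use the id shown on your cluster’s summary page:

```python
# Sketch: terminating an EMR cluster with boto3.
# The cluster id passed in is a placeholder, not a real cluster.

def terminate_cluster(client, cluster_id):
    """Ask EMR to terminate the given cluster (job flow)."""
    client.terminate_job_flows(JobFlowIds=[cluster_id])
    return cluster_id

# With AWS credentials configured, you would run:
#   import boto3
#   terminate_cluster(boto3.client("emr"), "j-XXXXXXXXXXXXX")
```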

Conclusion

To summarize, Amazon Elastic MapReduce (EMR) is a powerful and easy-to-use big data processing service that can help you quickly and efficiently process and analyze large amounts of data in the cloud. With its wide range of tools and frameworks, scalability, and integration with other AWS services, EMR is an excellent choice for businesses of all sizes that need to process and analyze large amounts of data.
