15 Hive Queries to Unlock the Power of Big Data Analytics

Siddharth Sonkar Last Updated: 15 Feb, 2024
8 min read

Introduction

Given the number of large datasets that data engineers handle daily, there is no doubt that dedicated tools are required to process and analyze such data. Pig is among the most widely used tools for this, as is Apache Hive, a data warehousing system built on top of Apache Hadoop. Hive allows querying data in distributed storage, including the Hadoop Distributed File System (HDFS), and it supports ACID properties with the optimized ORC file format for faster querying. Its ease of use and the similarity of its query language, the Hive Query Language (HQL), to SQL make it popular among data engineers. HQL also simplifies transaction support, enabling the use of familiar SQL statements for tasks like insert, update, delete, and merge.

In this article, we will cover the most commonly used queries that you will find useful when querying data in Hive.
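
For example, here is a minimal, hedged sketch of those transactional statements. The table name and columns are illustrative, and it assumes a Hive installation (Hive 3+) with ACID transactions enabled:

-- Illustrative ACID table; requires transactional support to be enabled.
CREATE TABLE customer_txn (
  id INT,
  name STRING
)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

INSERT INTO customer_txn VALUES (1, 'Alice');
UPDATE customer_txn SET name = 'Bob' WHERE id = 1;
DELETE FROM customer_txn WHERE id = 1;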


Learning Objectives

  • Get an overview of Apache Hive.
  • Get familiar with Hive Query Language.
  • Implement various Hive functions, like aggregation, date, etc.

This article was published as a part of the Data Science Blogathon.

Hive Refresher

  • Hive is a data warehouse built on Apache Hadoop, an open-source distributed framework.
  • Hive architecture contains Hive Client, Hive Services, and Distributed Storage.
  • Hive Client includes various types of connectors, like JDBC and ODBC connectors, which allow Hive to support applications written in different programming languages like Java, Python, etc.
  • Hive Services includes Hive Server, Hive CLI, Hive Driver, and Hive Metastore.
  • Beeline has replaced Hive CLI in HiveServer2.
  • Hive supports three different types of execution engines – MapReduce, Tez, and Spark.
  • Hive provides its own command-line interface, the Hive CLI, where programmers can directly write Hive queries.
  • Hive Metastore maintains the metadata about Hive tables.
  • The Hive metastore can be used with Spark as well for storing metadata.
  • Hive supports two types of tables – Managed tables and External tables (see the sketch after this list).
  • For Managed tables, Hive stores and manages both the schema and the data.
  • For External tables, Hive stores only the schema in the Hive metastore; the data remains at its external location.
  • Hive uses the Hive Query Language (HQL) for querying data.
  • Using HQL (HiveQL), we can easily implement MapReduce jobs on Hadoop.
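
To make the managed vs. external distinction concrete, here is a minimal sketch; the table names and HDFS path are illustrative, not from this article's dataset:

-- Managed: Hive owns the data; DROP TABLE removes both schema and data.
CREATE TABLE sales_managed (
  id INT,
  amount DOUBLE
);

-- External: Hive stores only the schema; DROP TABLE leaves the files at the
-- given location untouched.
CREATE EXTERNAL TABLE sales_external (
  id INT,
  amount DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/sales';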

Let’s look at some popular Hive queries.

Hive Queries to Unlock the Power of Big Data Analytics

Here are the Hive queries:

Simple Selects

In Hive, querying data is performed with a SELECT statement. A SELECT statement has six key components:

  1. SELECT column names
  2. FROM table-name
  3. WHERE conditions
  4. GROUP BY column names
  5. HAVING conditions
  6. ORDER BY column names

In practice, very few queries will have all of these clauses, so most queries are much simpler. On the other hand, conditions in the WHERE clause can be very complex, and if you need to JOIN two or more tables together, then more clauses (JOIN and ON) are needed.

All of the clause names above have been written in uppercase for clarity, but HQL is not case-sensitive. Nor do you need to write each clause on a new line, although it is often clearer to do so for all but the simplest of queries.
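
For orientation, here is a hedged sketch that combines most of these clauses in one query; geog_all is the sample table used throughout this article, and the threshold of 100 is purely illustrative:

-- Count households per acorn_type, keeping only the larger groups.
SELECT acorn_type,
       count (*) AS num_households
FROM geog_all
WHERE fueltypes = "ElecOnly"
GROUP BY acorn_type
HAVING count (*) > 100
ORDER BY num_households DESC;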

We will start with the very simple queries and work our way up to the more complex ones.

Simple Selects – Selecting Columns

Amongst all the Hive queries, the simplest is the one that effectively returns the contents of the whole table. Following is the syntax to do that:

SELECT *
FROM geog_all;

It is better practice, and generally more efficient, to explicitly list the column names that you want to be returned. This is one of the optimization techniques you can use while querying in Hive.

SELECT anonid, fueltypes, acorn_type
FROM geog_all;

Simple Selects – Selecting Rows

In addition to limiting the columns returned by a query, you can also limit the rows returned. The simplest case is to say how many rows are wanted using the LIMIT clause.

SELECT anonid, fueltypes, acorn_type
FROM geog_all
LIMIT 10;

This is useful if you want to get a feel for what the data looks like. Usually, you will want to restrict the rows returned based on some criteria, i.e., certain values or ranges within one or more columns.

SELECT anonid, fueltypes, acorn_type
FROM geog_all
WHERE fueltypes = "ElecOnly";

The expression in the WHERE clause can be more complex and involve more than one column.

SELECT anonid, fueltypes, acorn_type
FROM geog_all
WHERE fueltypes = "ElecOnly" AND acorn_type > 42;

SELECT anonid, fueltypes, acorn_type
FROM geog_all
WHERE fueltypes = "ElecOnly" AND acorn_type > 42 AND nuts1 <> "--";

Notice that the columns used in the WHERE clause conditions don’t have to appear in the SELECT clause. Other operators can also be used in the WHERE clause. For complex expressions, brackets can be used to enforce precedence.

SELECT anonid, fueltypes, acorn_type, nuts1, ldz
FROM geog_all
WHERE
fueltypes = "ElecOnly"
AND acorn_type BETWEEN 42 AND 47
AND (nuts1 NOT IN ("UKM", "UKI") OR ldz = "--");

Creating New Columns

It is possible to create new columns in the output of a query. These columns can be derived from combinations of the other columns, using operators and/or built-in Hive functions.

SELECT anonid, eprofileclass, acorn_type, (eprofileclass * acorn_type) AS multiply, (eprofileclass + acorn_type) AS added
FROM edrp_geography_data b;

A full list of the operators and functions available within Hive can be found in the documentation.

When you create a new column, it is usual to provide an ‘alias’ for the column. This is essentially the name you wish to give to the new column. The alias is given immediately after the expression to which it refers. Optionally, you can add the AS keyword for clarity. If you do not provide an alias for your new columns, Hive will generate a name for you.
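
For instance, in this sketch the computed column has no alias, so Hive assigns it a generated name (typically of the form _c1):

-- The second output column has no alias; Hive names it _c1 in the results.
SELECT anonid, (eprofileclass * acorn_type)
FROM edrp_geography_data;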

Although the term alias may seem odd for a new column with no natural name, aliases can also be used with any existing column to provide a more meaningful name in the output.

Tables can also be given an alias; this is particularly common in join queries involving multiple tables, where there is a need to distinguish between columns with the same name in different tables.
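
Since this article does not include a worked join, here is a hedged sketch using table aliases with the geog_all and elec_c tables from the other examples; treating anonid and anon_id as matching keys is our assumption:

-- g and e are table aliases; they make clear which table each column comes from.
SELECT g.anonid,
       g.acorn_type,
       e.eleckwh
FROM geog_all g
JOIN elec_c e
  ON g.anonid = e.anon_id;

In addition to using operators to create new columns, many Hive built-in functions can be used.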

Hive Functions

You can use various Hive functions for data analysis purposes. The following are some of the most commonly used.

Simple Functions

Let’s talk about the popular functions used to query columns containing values of string data type.

Concat

This can be used to concatenate strings.

SELECT anonid, acorn_category,
acorn_group,
acorn_type,
concat (acorn_category, ",", acorn_group, ",", acorn_type)  AS acorn_code
FROM geog_all;

substr

This can be used to extract part of a string.

SELECT anon_id,
advancedatetime,
substr (advancedatetime, 1, 2) AS day,
substr (advancedatetime, 3, 3) AS month,
substr (advancedatetime, 6, 2) AS year
FROM elec_c;

Examples of length, instr, and reverse

SELECT anonid,
     acorn_code,
     length (acorn_code),
     instr (acorn_code, ',') AS a_catpos,
     instr (reverse (acorn_code), ',') AS reverse_a_typepos
FROM geog_all;  -- assumes acorn_code is available here, e.g. materialized from the concat example above

Where needed, functions can be nested within each other. Casts and type conversions can also be applied.

SELECT anonid,
substr (acorn_code, 7, 2) AS ac_type_string,
cast (substr (acorn_code, 7, 2) AS INT) AS ac_type_int,
substr (acorn_code, 7, 2) +1 AS ac_type_not_sure
FROM geog_all;

Aggregation Functions

Aggregate functions are used to perform mathematical or statistical calculations across a group of rows. The rows in each group are determined by the distinct values in a specified column or columns. A list of all of the available functions is available in the Apache documentation.

SELECT anon_id,
       count (eleckwh) AS total_row_count,
       sum (eleckwh) AS total_period_usage,
       min (eleckwh) AS min_period_usage,
       avg (eleckwh) AS avg_period_usage,
       max (eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id;

In the above example, five aggregations were performed for each group defined by the single column anon_id. It is possible to aggregate over multiple columns by specifying them in both the SELECT and the GROUP BY clause. The grouping will occur based on the order of the columns listed in the GROUP BY clause. What is not allowed is specifying a non-aggregated column in the SELECT clause that is not mentioned in the GROUP BY clause.

SELECT anon_id,
              substr (advancedatetime, 6, 2) AS reading_year,
              count (eleckwh) AS total_row_count,
              sum (eleckwh) AS total_period_usage,
              min (eleckwh) AS min_period_usage,
              avg (eleckwh) AS avg_period_usage,
              max (eleckwh) AS max_period_usage
       FROM elec_c
GROUP BY anon_id, substr (advancedatetime, 6, 2);

Unfortunately, the GROUP BY clause will not accept aliases.

SELECT anon_id,
              substr (advancedatetime, 6, 2) AS reading_year,
              count (eleckwh) AS total_row_count,
              sum (eleckwh) AS total_period_usage,
              min (eleckwh) AS min_period_usage,
              avg (eleckwh) AS avg_period_usage,
              max (eleckwh) AS max_period_usage
      FROM elec_c
GROUP BY anon_id, substr (advancedatetime, 6, 2)
ORDER BY anon_id, reading_year;

But the ORDER BY clause does.

The DISTINCT keyword provides the set of unique combinations of column values within a table, without any aggregation.

SELECT DISTINCT eprofileclass, fueltypes
FROM geog_all;

Date Functions

In the elec_c and gas_c tables, the advancedatetime column, although it contains timestamp-type information, is defined as a string. For much of the time, this can be quite convenient. However, there will be times when we do need to be able to treat the column as a Timestamp. Perhaps the most obvious example is when you need to sort rows based on the advancedatetime column.
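
As a hedged sketch, such a sort could be written as follows; the 'ddMMMyy' pattern is an assumption based on the substr examples above, which read day, month, and year from the first seven characters:

-- Convert the leading date portion of the string to epoch seconds and sort on it.
SELECT anon_id, advancedatetime
FROM elec_c
ORDER BY unix_timestamp (substr (advancedatetime, 1, 7), 'ddMMMyy');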

Hive provides various date-related functions to allow you to convert strings into timestamps and extract parts of the Timestamp.

unix_timestamp

It returns the current date and time as an integer (the number of seconds since the Unix epoch).

from_unixtime

It takes such an integer and converts it into a recognizable Timestamp string.

SELECT unix_timestamp () AS currenttime
FROM sample_07
LIMIT 1;

SELECT from_unixtime (unix_timestamp ()) AS currenttime
FROM sample_07
LIMIT 1;

Various date part functions will extract the relevant parts from a Timestamp string.

SELECT anon_id,
       from_unixtime (unix_timestamp (reading_date, 'ddMMMyy')) AS proper_date,
       year (from_unixtime (unix_timestamp (reading_date, 'ddMMMyy'))) AS full_year,
       month (from_unixtime (unix_timestamp (reading_date, 'ddMMMyy'))) AS full_month,
       day (from_unixtime (unix_timestamp (reading_date, 'ddMMMyy'))) AS full_day,
       last_day (from_unixtime (unix_timestamp (reading_date, 'ddMMMyy'))) AS last_day_of_month,
       date_add (from_unixtime (unix_timestamp (reading_date, 'ddMMMyy')), 10) AS added_days
FROM elec_days_c
ORDER BY proper_date;

Conclusion

In conclusion, our exploration of Hive queries showed how a SQL-like language brings familiar data analysis to a distributed warehouse system. From simple selects through string, aggregation, and date functions, HQL lets you query data stored on a Hadoop cluster much as you would query MySQL, while Hive takes care of the distributed execution.

Key Takeaways

  • Hive Query Language is the language supported by Hive.
  • HQL makes it easy for developers to query big data.
  • HQL is similar to SQL, making it easy for developers to learn this language.


Frequently Asked Questions

Q1. What queries are used in Hive?

A. Hive supports the Hive Query Language (HQL). HQL is very similar to SQL. It supports the usual insert, update, delete, and merge SQL statements to query data in Hive.

Q2. What are the benefits of Hive?

A. Hive is built on top of Apache Hadoop. This makes it an apt tool for analyzing Big data. It also supports various connectors, making it easier for developers to query Hive data using different programming languages.

Q3. What is the difference between Hive and MapReduce?

A. Hive is a data warehousing system that provides a SQL-like query language called HiveQL, while MapReduce is a programming model and software framework used for processing large datasets in a distributed computing environment. Hive also provides a schema for data stored in the Hadoop Distributed File System (HDFS), making it easier to manage and analyze large datasets.

Q4. How to Become an Apache Spark Developer?

A. To become an Apache Spark Developer, acquire a strong foundation in programming languages like Scala or Python. Master Spark’s core concepts, RDDs, DataFrames, and Spark SQL. Gain proficiency in distributed computing and Hadoop ecosystem integration. Explore advanced Spark features, participate in projects, and stay updated with the evolving technology landscape.

Q5. What are Apache Hive and HiveQL on Azure HDInsight?

A. Apache Hive is a data warehousing and SQL-like query language tool in Azure HDInsight, facilitating the analysis of large datasets stored in Hadoop. HiveQL, its query language, simplifies data retrieval and manipulation. They enable efficient data processing and analysis on Azure’s Hadoop-based big data service.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
