Given the volume of large datasets that data engineers handle daily, there is no doubt that dedicated tools are required to process and analyze such data. Apache Pig is one widely used option; another is Apache Hive, a data warehousing system built on top of Apache Hadoop. Hive allows querying data in distributed storage, including the Hadoop Distributed File System (HDFS), and supports ACID properties when used with the optimized ORC file format for faster querying. Its ease of use and the similarity of its query language, Hive Query Language (HQL), to SQL make it popular among data engineers. HQL also simplifies transaction support, enabling the use of familiar SQL statements for tasks like insert, update, delete, and merge.
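As a quick illustration of the transaction support mentioned above, here is a sketch of how a transactional table might be created and modified. The table and column names are hypothetical, and ACID operations additionally require Hive's transaction manager to be enabled in the cluster configuration:

```sql
-- Hypothetical example: ACID operations require a table stored as ORC
-- and declared transactional.
CREATE TABLE customer_txn (
    id   INT,
    name STRING
)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- Familiar SQL DML statements then work directly:
INSERT INTO customer_txn VALUES (1, 'Alice');
UPDATE customer_txn SET name = 'Alicia' WHERE id = 1;
DELETE FROM customer_txn WHERE id = 1;
```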
In this article, we will cover the most commonly used queries that you will find useful when querying data in Hive.
This article was published as a part of the Data Science Blogathon.
Let’s look at some of the most popular Hive queries.
In Hive, querying data is performed with a SELECT statement. A SELECT statement has six key components: SELECT, FROM, WHERE, GROUP BY, HAVING, and ORDER BY.
In practice, very few queries include all of these clauses, which keeps most queries simple. On the other hand, conditions in the WHERE clause can be very complex, and if you need to join two or more tables together, additional clauses (JOIN and ON) are needed.
All of the clause names above have been written in uppercase for clarity, but HQL is not case-sensitive. Nor do you need to write each clause on a new line, though it is often clearer to do so for all but the simplest of queries.
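Putting these clauses together, the overall shape of a SELECT statement can be sketched as follows. All table and column names here are placeholders rather than columns from the datasets used below:

```sql
-- Skeleton of a full SELECT statement; most real queries
-- use only a subset of these clauses.
SELECT column1, count(column2) AS row_count
FROM some_table
WHERE column3 = 'some_value'
GROUP BY column1
HAVING count(column2) > 10
ORDER BY column1
LIMIT 100;
```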
We will start with the very simple queries and work our way up to the more complex ones.
Amongst all Hive queries, the simplest effectively returns the contents of the whole table. The syntax to do that is:
SELECT *
FROM geog_all;
It is better practice, and generally more efficient, to explicitly list the column names you want returned. This is one of the optimization techniques you can use when querying in Hive.
SELECT anonid, fueltypes, acorn_type
FROM geog_all;
In addition to limiting the columns returned by a query, you can also limit the rows returned. The simplest case is to specify how many rows are wanted using the LIMIT clause.
SELECT anonid, fueltypes, acorn_type
FROM geog_all
LIMIT 10;
This is useful if you want to get a feel for what the data looks like. Usually, though, you will want to restrict the rows returned based on some criteria, i.e., certain values or ranges within one or more columns.
SELECT anonid, fueltypes, acorn_type
FROM geog_all
WHERE fueltypes = "ElecOnly";
The expression in the WHERE clause can be more complex and involve more than one column.
SELECT anonid, fueltypes, acorn_type
FROM geog_all
WHERE fueltypes = "ElecOnly" AND acorn_type > 42;
SELECT anonid, fueltypes, acorn_type
FROM geog_all
WHERE fueltypes = "ElecOnly" AND acorn_type > 42 AND nuts1 <> "--";
Notice that the columns used in the WHERE clause conditions don’t have to appear in the SELECT clause. Other operators can also be used in the WHERE clause, and for complex expressions, brackets can be used to enforce precedence.
SELECT anonid, fueltypes, acorn_type, nuts1, ldz
FROM geog_all
WHERE
fueltypes = "ElecOnly"
AND acorn_type BETWEEN 42 AND 47
AND (nuts1 NOT IN ("UKM", "UKI") OR ldz = "--");
It is possible to create new columns in the output of the query. These columns can be formed from combinations of other columns using operators and/or built-in Hive functions.
SELECT anonid, eprofileclass, acorn_type, (eprofileclass * acorn_type) AS multiply, (eprofileclass + acorn_type) AS added
FROM edrp_geography_data b;
A full list of the operators and functions available within Hive can be found in the documentation.
When you create a new column, it is usual to provide an ‘alias’ for the column. This is essentially the name you wish to give to the new column. The alias is given immediately after the expression to which it refers. Optionally, you can add the AS keyword for clarity. If you do not provide an alias for your new columns, Hive will generate a name for you.
Although the term alias may seem odd for a new column with no natural name, aliases can also be used with any existing column to provide a more meaningful name in the output.
Tables can also be given an alias; this is particularly common in join queries involving multiple tables where there is a need to distinguish between columns with the same name in different tables. In addition to using operators to create new columns, many Hive built‐in functions can be used.
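For example, a join between two of the tables used in this article might use short table aliases to qualify the columns. This is a sketch only: the join key is assumed here, since the exact relationship between the two tables' identifier columns is not shown in the article:

```sql
-- Hypothetical join: g and e are table aliases used to
-- disambiguate column references across the two tables.
SELECT g.anonid,
       g.acorn_type,
       e.eleckwh
FROM geog_all g
JOIN elec_c e
  ON g.anonid = e.anon_id;   -- join key assumed
```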
You can use various built-in Hive functions for data analysis purposes. Let’s start with the popular functions used to query columns containing string values.
The concat function can be used to join strings together.
SELECT anonid, acorn_category,
acorn_group,
acorn_type,
concat (acorn_category, ",", acorn_group, ",", acorn_type) AS acorn_code
FROM geog_all;
The substr function can be used to extract a part of a string.
SELECT anon_id,
advancedatetime,
substr (advancedatetime, 1, 2) AS day,
substr (advancedatetime, 3, 3) AS month,
substr (advancedatetime, 6, 2) AS year
FROM elec_c;
Here are examples of the length, instr, and reverse functions:
SELECT anonid,
acorn_code,
length (acorn_code),
instr (acorn_code, ',') AS a_catpos,
instr (reverse (acorn_code), "," ) AS reverse_a_typepo
FROM geog_all; -- table name assumed; acorn_code must exist as a column here
Where needed, functions can be nested within each other. The cast function can be used to perform type conversions.
SELECT anonid,
substr (acorn_code, 7, 2) AS ac_type_string,
cast (substr (acorn_code, 7, 2) AS INT) AS ac_type_int,
substr (acorn_code, 7, 2) +1 AS ac_type_not_sure
FROM geog_all;
Aggregate functions are used to perform mathematical or statistical calculations across a group of rows. The rows in each group are determined by the distinct values in a specified column or columns. A list of all of the available functions is available in the Apache documentation.
SELECT anon_id,
count (eleckwh) AS total_row_count,
sum (eleckwh) AS total_period_usage,
min (eleckwh) AS min_period_usage,
avg (eleckwh) AS avg_period_usage,
max (eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id;
In the above example, five aggregations were performed, grouped by the single column anon_id. It is possible to group by multiple columns by specifying them in both the SELECT and the GROUP BY clauses. The grouping occurs based on the order of the columns listed in the GROUP BY clause. What is not allowed is specifying a non-aggregated column in the SELECT clause that is not mentioned in the GROUP BY clause.
SELECT anon_id,
substr (advancedatetime, 6, 2) AS reading_year,
count (eleckwh) AS total_row_count,
sum (eleckwh) AS total_period_usage,
min (eleckwh) AS min_period_usage,
avg (eleckwh) AS avg_period_usage,
max (eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id, substr (advancedatetime, 6, 2);
Unfortunately, the GROUP BY clause will not accept aliases.
SELECT anon_id,
substr (advancedatetime, 6, 2) AS reading_year,
count (eleckwh) AS total_row_count,
sum (eleckwh) AS total_period_usage,
min (eleckwh) AS min_period_usage,
avg (eleckwh) AS avg_period_usage,
max (eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id, substr (advancedatetime, 6, 2)
ORDER BY anon_id, reading_year;
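Groups themselves can also be filtered with a HAVING clause, which, unlike WHERE, is applied after aggregation. A minimal sketch using the same table (the threshold value here is arbitrary):

```sql
-- Keep only customers whose total usage exceeds a threshold.
-- HAVING filters groups; WHERE would filter individual rows.
SELECT anon_id,
       sum (eleckwh) AS total_period_usage
FROM elec_c
GROUP BY anon_id
HAVING sum (eleckwh) > 1000;
```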
But the ORDER BY clause does.
The DISTINCT keyword returns the set of unique combinations of column values within a table, without any aggregation.
SELECT DISTINCT eprofileclass, fueltypes
FROM geog_all;
In the elec_c and gas_c tables, the advancedatetime column, although it contains timestamp-type information, is defined as a string type. Much of the time this can be quite convenient. However, there will be times when we do need to be able to treat the column as a timestamp. Perhaps the most obvious example is when you need to sort rows based on it.
Hive provides various date-related functions to allow you to convert strings into timestamps and extract parts of the Timestamp.
The unix_timestamp function returns the current date and time as an integer (the number of seconds since the Unix epoch). The from_unixtime function takes such an integer and converts it into a recognizable timestamp string.
SELECT unix_timestamp () AS currenttime
FROM sample_07
LIMIT 1;
SELECT from_unixtime (unix_timestamp ()) AS currenttime
FROM sample_07
LIMIT 1;
Various date-part functions will extract the relevant parts from a timestamp string.
SELECT anon_id,
from_unixtime (UNIX_TIMESTAMP (reading_date, 'ddMMMyy'))
AS proper_date,
year (from_unixtime (UNIX_TIMESTAMP (reading_date, 'ddMMMyy')))
AS full_year,
month (from_unixtime (UNIX_TIMESTAMP (reading_date, 'ddMMMyy')))
AS full_month,
day (from_unixtime (UNIX_TIMESTAMP (reading_date, 'ddMMMyy')))
AS full_day,
last_day (from_unixtime (UNIX_TIMESTAMP (reading_date, 'ddMMMyy')))
AS last_day_of_month,
date_add ( (from_unixtime (UNIX_TIMESTAMP (reading_date, 'ddMMMyy'))),10)
AS added_days
FROM elec_days_c
ORDER BY proper_date;
In conclusion, we explored the most commonly used Hive queries, from simple SELECT statements through filtering with WHERE, creating derived columns, string and date functions, and aggregation with GROUP BY and ORDER BY. Hive’s SQL-like query language makes querying distributed data on a Hadoop cluster feel much like working with a traditional database such as MySQL.
I recommend you go through these articles to get acquainted with tools for big data:
A. Hive supports the Hive Query Language (HQL), which is very similar to SQL. It supports the usual insert, update, delete, and merge SQL statements for working with data in Hive.
A. Hive is built on top of Apache Hadoop, which makes it an apt tool for analyzing big data. It also supports various connectors, making it easier for developers to query Hive data from different programming languages.
A. Hive is a data warehousing system that provides SQL-like querying language called HiveQL, while MapReduce is a programming model and software framework used for processing large datasets in a distributed computing environment. Hive also provides a schema for data stored in Hadoop Distributed File System (HDFS), making it easier to manage and analyze large datasets.
A. To become an Apache Spark developer, acquire a strong foundation in programming languages like Scala or Python. Master Spark’s core concepts, RDDs, DataFrames, and Spark SQL. Gain proficiency in distributed computing and Hadoop ecosystem integration. Explore advanced Spark features, participate in projects, and stay updated with the evolving technology landscape.
A. Apache Hive is a data warehousing and SQL-like query language tool in Azure HDInsight, facilitating the analysis of large datasets stored in Hadoop. HiveQL, its query language, simplifies data retrieval and manipulation, enabling efficient data processing and analysis on Azure’s Hadoop-based big data service.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.