This article was published as a part of the Data Science Blogathon.
The Python API for Apache Spark is known as PySpark. To develop Spark applications in Python, we will use PySpark. It also provides the PySpark shell for real-time data analysis. PySpark supports most of the Apache Spark functionality, including Spark Core, Spark SQL, DataFrame, Streaming, and MLlib (Machine Learning).
This article will explore useful PySpark functions with scenario-based examples to understand them better.
It is a SQL function in PySpark used to execute SQL-like expressions. It accepts a SQL expression as a string argument and executes the commands written in that statement. It enables the use of SQL-like functions that are absent from the PySpark Column type and the pyspark.sql.functions API, for example CASE WHEN. We are allowed to use DataFrame columns in the expression. The syntax for this function is expr(str).
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create data
data = [("Prashant", "Banglore", 25, 58, "2022-08-01", 1),
        ("Ankit", "Banglore", 26, 54, "2021-05-02", 2),
        ("Ramakant", "Gurugram", 24, 60, "2022-06-02", 3),
        ("Brijesh", "Gazipur", 26, 75, "2022-07-04", 4),
        ("Devendra", "Gurugram", 27, 62, "2022-04-03", 5),
        ("Ajay", "Chandigarh", 25, 72, "2022-02-01", 6)]
columns = ["friends_name", "location", "age", "weight", "meetup_date", "offset"]
df_friends = spark.createDataFrame(data=data, schema=columns)
df_friends.show()
Let's see the practical implementations:
Example:- A.) Concatenating one or more columns using expr()
# concatenate friend's name, age, and location columns using expr()
df_concat = df_friends.withColumn("name-age-location",
                                  expr("friends_name || '-' || age || '-' || location"))
df_concat.show()
We have joined the name, age, and location columns and stored the result in a new column called "name-age-location".
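For comparison, the same result can be produced without expr() by using the built-in concat_ws() function, which takes a separator and casts the columns to string. This is a minimal sketch against the df_friends DataFrame created above; the output column name is reused from the example.

from pyspark.sql.functions import concat_ws, col

# same concatenation using concat_ws(); the separator is passed once
df_concat_ws = df_friends.withColumn(
    "name-age-location",
    concat_ws("-", col("friends_name"), col("age"), col("location"))
)
df_concat_ws.show()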
Example:- B.) Add a new column based on a condition (CASE WHEN) using expr()
# check if exercise is needed based on weight
# if weight is greater than or equal to 60 -- Yes
# if weight is less than 55 -- No
# else -- Enjoy
df_condition = df_friends.withColumn("Exercise_Need",
                                     expr("CASE WHEN weight >= 60 THEN 'Yes' " +
                                          "WHEN weight < 55 THEN 'No' ELSE 'Enjoy' END"))
df_condition.show()
Our "Exercise_Need" column received three values (Enjoy, No, and Yes) based on the condition given in the CASE WHEN. The first value of the weight column is 58, which is less than 60 and more than 55, so the result is "Enjoy".
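The same CASE WHEN logic can also be expressed with the when()/otherwise() column API from pyspark.sql.functions. A minimal sketch, assuming the df_friends DataFrame from above (the df_condition_alt name is illustrative):

from pyspark.sql.functions import when, col

# equivalent of the CASE WHEN expression using when()/otherwise()
df_condition_alt = df_friends.withColumn(
    "Exercise_Need",
    when(col("weight") >= 60, "Yes")
    .when(col("weight") < 55, "No")
    .otherwise("Enjoy")
)
df_condition_alt.show()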
Example:- C.) Creating a new column using the current column value inside the expression.
# increment the meetup month by the offset value
df_meetup = df_friends.withColumn("new_meetup_date", expr("add_months(meetup_date, offset)"))
df_meetup.show()
The "meetup_date" month value increases by the offset value, and the newly generated result is stored in the "new_meetup_date" column.
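expr() can also carry an alias and simple arithmetic directly inside select(). A minimal sketch on the same df_friends DataFrame (the column name age_plus_offset is illustrative):

from pyspark.sql.functions import expr

# expr() with an alias and arithmetic inside select()
df_meetup_sel = df_friends.select(
    "friends_name",
    expr("add_months(meetup_date, offset) AS new_meetup_date"),
    expr("age + offset AS age_plus_offset")
)
df_meetup_sel.show()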
A.) lpad():-
This function provides padding to the left side of the column, and the inputs for this function are column name, length, and padding string.
B.) rpad ():-
This function is used to add padding to the right side of the column. It takes the same inputs: column name, length, and padding string.
Let's first create a DataFrame:-
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lpad, rpad

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# creating data
data = [("Delhi", 30000), ("Mumbai", 50000), ("Gujrat", 80000)]
columns = ["state_name", "state_population"]
df_states = spark.createDataFrame(data=data, schema=columns)
df_states.show()
Example:- 01 – Use of left padding
# left padding
df_states = df_states.withColumn('states_name_leftpad', lpad(col("state_name"), 10, '#'))
df_states.show(truncate=False)
We added the "#" symbol to the left of the "state_name" column values, and the total length of the column values becomes 10 after the padding.
Example:- 02 – Right padding
# right padding
df_states = df_states.withColumn('states_name_rightpad', rpad(col("state_name"), 10, '#'))
df_states.show(truncate=False)
We added the "#" symbol to the right of the "state_name" column values, and the total length becomes ten after the right padding.
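Padding is also handy for numeric columns, for example to produce fixed-width, zero-padded values. A minimal sketch that casts state_population to string first (the column name population_padded is illustrative):

from pyspark.sql.functions import col, lpad

# zero-pad the population values to a fixed width of 8 characters
df_states_pop = df_states.withColumn(
    "population_padded",
    lpad(col("state_population").cast("string"), 8, "0")
)
df_states_pop.show(truncate=False)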
Example:- 03 – When the column string length is longer than the padded string length
# padded length (3) is shorter than the column values
df_states = df_states.withColumn('states_name_condition', lpad(col("state_name"), 3, '#'))
df_states.show(truncate=False)
In this case, the returned column values are shortened to the specified padded length. You can see the "states_name_condition" column only has values of length 3, which is the padded length we passed to the function.
In PySpark, we use the repeat function to duplicate the column values. The repeat(str,n) function returns the string containing the specified string value repeated n times.
Example:- 01
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr, repeat

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create data
data = [("Prashant", 25, 80), ("Ankit", 26, 90), ("Ramakant", 24, 85)]
columns = ["student_name", "student_age", "student_score"]
df_students = spark.createDataFrame(data=data, schema=columns)
df_students.show()

# repeating the column (student_name) twice and saving the result in a new column
df_repeated = df_students.withColumn("student_name_repeated", expr("repeat(student_name, 2)"))
df_repeated.show()
In the above example, we repeated the "student_name" column values twice.
We can also use this function together with concat(), repeating a string value n times in front of the column values so that it works like padding, where n may be derived from the length of the values.
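A minimal sketch of that idea, prefixing a repeated literal before the column values with concat() and repeat() inside expr() (the column name student_name_prefixed is illustrative):

from pyspark.sql.functions import expr

# prefix each name with '#' repeated three times, similar to left padding
df_prefixed = df_students.withColumn(
    "student_name_prefixed",
    expr("concat(repeat('#', 3), student_name)")
)
df_prefixed.show()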
startswith():-
It produces a boolean result of True or False. When the DataFrame column value starts with the string provided as a parameter to this method, it returns True. If no match is found, it returns False.
endswith():-
It also returns a boolean value (True/False). When the DataFrame column value ends with the string supplied as an input to this method, it returns True. False is returned if it does not match.
Create a DataFrame:-
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create dataframe
data = [("Prashant", 25, 80), ("Ankit", 26, 90), ("Ramakant", 24, 85), (None, 23, 87)]
columns = ["student_name", "student_age", "student_score"]
df_students = spark.createDataFrame(data=data, schema=columns)
df_students.show()
Example – 01: First, check the output type.
df_internal_res = df_students.select(col("student_name").endswith("it").alias("internal_bool_val"))
df_internal_res.show()
Example – 02
df_check_start = df_students.filter(col("student_name").startswith("Pra"))
df_check_start.show()
Here we got the first row as output because the "student_name" column value starts with the value mentioned inside the function.
Example – 03
df_check_end = df_students.filter(col("student_name").endswith("ant"))
df_check_end.show()
Here we got two rows as output because those "student_name" column values end with the value mentioned inside the function.
Example – 04 – What if the argument passed to the function is empty?
df_check_empty = df_students.filter(col("student_name").endswith(""))
df_check_empty.show()
In this case, the condition evaluates to True for every non-null value, so no rows are filtered out on that basis.
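Note that for null values, such as the None row in our DataFrame, startswith()/endswith() evaluate to null rather than True or False, so filter() drops those rows. A minimal sketch that makes the null handling explicit (the df_not_null name is illustrative):

from pyspark.sql.functions import col

# keep only non-null names and then apply the endswith() check
df_not_null = df_students.filter(
    col("student_name").isNotNull() & col("student_name").endswith("")
)
df_not_null.show()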
In this article, we started our discussion by defining PySpark and its features. Then we discussed the functions, their definitions, and their syntax. After discussing each function, we created a DataFrame and practiced some examples using it. We covered six functions in this article.
I hope this article helps you understand these PySpark functions. If you have any opinions or questions, comment down below. Connect with me on LinkedIn for further discussion.
Keep Learning!!!
The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.