This article was published as a part of the Data Science Blogathon.
The Python API for Apache Spark is known as PySpark. To develop Spark applications in Python, we will use PySpark. It also provides the PySpark shell for real-time data analysis. PySpark supports most of the Apache Spark functionality, including Spark Core, Spark SQL, DataFrame, Streaming, and MLlib (Machine Learning).
This article will explore useful PySpark functions with scenario-based examples to understand them better.
expr() is a SQL function in PySpark used to execute SQL-like expressions. It accepts a SQL expression as a string argument and executes the commands written in the statement. It enables the use of SQL-like functions that are absent from the PySpark Column type and the pyspark.sql.functions API, e.g. CASE WHEN. We are also allowed to use DataFrame columns in the expression. The syntax for this function is expr(str).
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create data
data = [("Prashant", "Banglore", 25, 58, "2022-08-01", 1),
        ("Ankit", "Banglore", 26, 54, "2021-05-02", 2),
        ("Ramakant", "Gurugram", 24, 60, "2022-06-02", 3),
        ("Brijesh", "Gazipur", 26, 75, "2022-07-04", 4),
        ("Devendra", "Gurugram", 27, 62, "2022-04-03", 5),
        ("Ajay", "Chandigarh", 25, 72, "2022-02-01", 6)]
columns = ["friends_name", "location", "age", "weight", "meetup_date", "offset"]
df_friends = spark.createDataFrame(data=data, schema=columns)
df_friends.show()
Let’s see the practical implementations:-
Example:- A.) Concatenating one or more columns using expr()
# concatenate friend's name, age, and location columns using expr()
df_concat = df_friends.withColumn(
    "name-age-location",
    expr("friends_name || '-' || age || '-' || location"))
df_concat.show()
We have joined the name, age, and location columns and stored the result in a new column called “name-age-location.”
Example:- B.) Add a new column based on a condition (CASE WHEN) using expr()
# check if exercise is needed based on weight
# if weight is more than or equal to 60 -- Yes
# if weight is less than 55 -- No
# else -- Enjoy
df_condition = df_friends.withColumn(
    "Exercise_Need",
    expr("CASE WHEN weight >= 60 THEN 'Yes' " +
         "WHEN weight < 55 THEN 'No' ELSE 'Enjoy' END"))
df_condition.show()
Our “Exercise_Need” column received three values (Enjoy, No, and Yes) based on the condition given in CASE WHEN. The first value of the weight column is 58, so it’s less than 60 and more than 55, so the result is “Enjoy.”
Example:- C.) Creating a new column using the current column value inside the expression.
# increment the meetup month by the offset value
df_meetup = df_friends.withColumn("new_meetup_date",
                                  expr("add_months(meetup_date, offset)"))
df_meetup.show()
The “meetup_date” month value increases by the offset value, and the newly generated result is stored in the “new_meetup_date” column.
A.) lpad():-
This function provides padding to the left side of the column, and the inputs for this function are column name, length, and padding string.
B.) rpad():-
This function is used to add padding to the right side of the column. It likewise takes the column name, length, and padding string as inputs.
Let’s first create a DataFrame:-
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lpad, rpad

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# creating data
data = [("Delhi", 30000), ("Mumbai", 50000), ("Gujrat", 80000)]
columns = ["state_name", "state_population"]
df_states = spark.createDataFrame(data=data, schema=columns)
df_states.show()
Example:- 01 – Use of left padding
# left padding
df_states = df_states.withColumn("states_name_leftpad",
                                 lpad(col("state_name"), 10, "#"))
df_states.show(truncate=False)
We added the “#” symbol to the left of the “state_name” column values, and the total length of the column values becomes 10 after the padding.
Example:- 02 – Right padding
# right padding
df_states = df_states.withColumn("states_name_rightpad",
                                 rpad(col("state_name"), 10, "#"))
df_states.show(truncate=False)
We added the “#” symbol to the right of the “state_name” column values, and the total length becomes ten after the right padding.
Example:-03 – When the column string length is longer than the padded string length
df_states = df_states.withColumn("states_name_condition",
                                 lpad(col("state_name"), 3, "#"))
df_states.show(truncate=False)
In this case, the returned column value is truncated to the specified length. You can see the “states_name_condition” column only has values of length 3, which is the length we passed to the function.
In PySpark, we use the repeat function to duplicate the column values. The repeat(str,n) function returns the string containing the specified string value repeated n times.
Example:- 01
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr, repeat

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create data
data = [("Prashant", 25, 80), ("Ankit", 26, 90), ("Ramakant", 24, 85)]
columns = ["student_name", "student_age", "student_score"]
df_students = spark.createDataFrame(data=data, schema=columns)
df_students.show()

# repeating the column (student_name) twice and saving the result in a new column
df_repeated = df_students.withColumn("student_name_repeated",
                                     expr("repeat(student_name, 2)"))
df_repeated.show()
In the above example, we repeated the “student_name” column values twice.
We can also combine this function with concat(), repeating some string value n times and prepending it to the column values so that it works like padding.
startswith():-
It will produce a boolean result of True or False. When the DataFrame column value starts with the string provided as a parameter to this method, it returns True. If no match is found, it returns False.
endswith():-
The boolean value (True/False) will be returned. When the DataFrame column value ends with a string supplied as an input to this method, it returns True. False is returned if not matched.
Create a DataFrame:-
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create dataframe
data = [("Prashant", 25, 80), ("Ankit", 26, 90), ("Ramakant", 24, 85), (None, 23, 87)]
columns = ["student_name", "student_age", "student_score"]
df_students = spark.createDataFrame(data=data, schema=columns)
df_students.show()
Example – 01: First, check the output type.
df_internal_res = df_students.select(
    col("student_name").endswith("it").alias("internal_bool_val"))
df_internal_res.show()
Example – 02
df_check_start = df_students.filter(col("student_name").startswith("Pra"))
df_check_start.show()
Here we got the first row as output because the “student_name” column value starts with the value mentioned inside the function.
Example – 03
df_check_end = df_students.filter(col("student_name").endswith("ant"))
df_check_end.show()
Here we got two rows as output because those “student_name” column values end with the value mentioned inside the function.
Example – 04 – What if arguments in functions are empty?
df_check_empty = df_students.filter(col("student_name").endswith(""))
df_check_empty.show()
In this case, endswith("") evaluates to True for every non-null value, so the filter returns all rows except the one where “student_name” is null.
In this article, we started our discussion by defining PySpark and its features. Then we talked about the functions, their definitions, and their syntax. After discussing each function, we created a DataFrame and practiced some examples with it. We covered six functions in this article.
Key takeaways from this article are:-
- expr() executes SQL-like expressions, including CASE WHEN logic, and can reference DataFrame columns inside the expression string.
- lpad() and rpad() pad column values on the left and right to a given length, and truncate values longer than that length.
- repeat() returns a string containing the column value repeated n times.
- startswith() and endswith() return boolean results and can be used to filter rows by prefix or suffix.
I hope this article helps you to understand the PySpark functions. If you have any opinions or questions, then comment down below. Connect with me on LinkedIn for further discussion.
Keep Learning!!!
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.