2.5 quintillion bytes of data are produced every day! Consider how much we can deduce from that and what conclusions we can draw. But wait, how do we deal with such a massive amount of data?
Not to worry; the Pandas library is your best friend if you enjoy working with data in Python.
But what exactly is Pandas, and why should we use it?
Pandas is a Python library for working with large amounts of data in a variety of formats, such as CSV files, TSV files, Excel sheets, and so on. It provides functions for analyzing, cleaning, exploring, and modifying data. Pandas can be used for a variety of purposes, including the following:
Pandas enables you to analyze large amounts of data and draw conclusions based on statistical theories.
Real-world data is never perfect and requires a lot of work; the Pandas library makes this work easier and faster, making datasets more relevant and cleaner.
It has a robust feature set you can apply to your data to customize, edit, and pivot it to your liking. This makes getting the most out of your data a lot easier.
It also allows you to represent data in a very streamlined way, which aids in analyzing and comprehending the data.
These are just a few of the Pandas library’s many benefits. So, let us delve deep into this library and bring all the benefits listed above to life! It sounds interesting, don’t you think?😁
Pandas Installation
Before you can learn about the Pandas library, you must first install it on your system. To do so, install Anaconda and, once installed, enter the following command into your Anaconda Prompt:
conda install pandas
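If you are not using Anaconda, Pandas can also be installed with pip (assuming Python and pip are already set up on your system):
pip install pandas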
Now that we’ve installed Pandas on our system, let’s look at the data structures it contains.
Data Structures In Pandas
The Pandas library deals with the following data structures:
Pandas Data Frame:
Whenever you have a two-dimensional, tabular dataset, i.e., one or more labeled columns with any number of records (rows), it is represented as a DataFrame.
Pandas Series:
Whenever you have a dataset with just a single column (a one-dimensional labeled array) with any number of records (rows), it is represented as a Series.
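As a quick illustration (a minimal sketch with made-up values), a small table with two labeled columns is a DataFrame, while a single column pulled out of it is a Series:
import pandas as pd
# a DataFrame: two labeled columns and three records
example_df = pd.DataFrame({'title': ['A', 'B', 'C'], 'rating': [4.5, 3.8, 4.1]})
# a single column of the DataFrame is a Series
example_series = example_df['rating']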
Importing The Pandas Library
Before learning and using the functionalities of Pandas, it is necessary to import the Pandas library first. We do so by writing the following code into our Jupyter notebook:
import pandas as pd
Note: “pd” is used as an alias so that the Pandas package can be referred to as “pd” instead of “pandas”.
Now that we have installed Pandas and also imported it into our Jupyter notebook, we can now explore the different functionalities of Pandas.
Importing Data
Before working on data, we have to first import it. The Pandas library has a variety of commands for dealing with different forms of data. We will be learning about one such command which deals with CSV files.
1. read_csv()
The pd.read_csv() command is used to read a CSV file into a data frame.
import pandas as pd
df = pd.read_csv("anime.csv")
print(df.head())
Before working with your data, it is necessary to understand it properly. Pandas helps you do so:
1. head() and tail()
The above output is not very easy to digest. Let us try looking at only the first or last few records of our dataset instead. We can do so by using the following Pandas commands.
The df.head() command helps us view our dataset’s top 5 records (the default). If you want to view a different number of records, you can do so by typing df.head(n), where n is the number of records you want to view.
df.head()
The df.tail() command helps us view our dataset’s last 5 records (the default). If you want to view a different number of records, you can do so by typing df.tail(n), where n is the number of records you want to view.
df.tail()
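For example (a small sketch on the same data frame), you can pass the number of records explicitly:
# view the first 10 records
df.head(10)
# view the last 3 records
df.tail(3)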
2. shape
The df.shape attribute provides you with the number of rows and columns in your dataset as a (rows, columns) tuple. Note that shape is an attribute, not a method, so it is used without parentheses.
df.shape
3. info()
The df.info() command prints the information about our dataset, including columns, datatypes of columns, non-null values, and memory usage.
df.info()
4. describe()
The df.describe() command generates summary statistics for the numeric columns of the data frame. It returns the count, mean, standard deviation, minimum, maximum, and the 25th, 50th, and 75th percentile values.
df.describe()
5. nunique()
The df.nunique() command returns the number of unique entries in each column.
df.nunique()
Selection Of Data
You often want to work with only a subset of the entire dataset. In such cases, use the following commands:
1. df[col]
The df[col] command returns the column with the specified label as a Series.
# selecting the column 'title'
df['title']
2. df[[col1,col2]]
The df[[col1, col2]] command returns the selected columns as a data frame.
# selecting the columns 'title' and 'mediaType'
df[['title', 'mediaType']]
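If you want to see the difference between the two, you can check the types returned (a quick sketch, assuming the 'title' and 'mediaType' columns exist in your dataset):
# single brackets return a Series
type(df['title'])
# double brackets return a DataFrame
type(df[['title', 'mediaType']])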
3. loc[ ]
The df.loc[ ] command helps you access a group of rows and columns. loc is label-based, which means that you have to specify rows and columns based on their row and column labels. It includes the ends, i.e., df.loc[0:4] will return all the rows from 0-4(included).
# selecting all the rows from 0-4 and the associated columns
df.loc[ :4]
# selecting all the rows and the column named 'title'
df.loc[: , 'title']
# selecting all the rows from 1-5 and the columns named 'title' and 'rating'
df.loc[1:5, ['title', 'rating'] ]
# selecting all the rows and columns with entries having rating > 4.5
df.loc[df['rating'] > 4.5]
4. iloc[ ]
The df.iloc[ ] command also helps you access a group of rows and columns. Still, unlike loc, iloc is integer position-based, so you have to specify rows and columns by their integer position values. It excludes the ends, i.e., df.iloc[0:4] will return all the rows from 0-3 as 4 is excluded.
# selecting all the rows from 0-4(excluded) and the associated columns
df.iloc[0:4]
# selecting all the rows and columns
df.iloc[: , :]
# selecting all the rows and columns from 0-4(excluded)
df.iloc[0:4, 0:4]
# selecting all the rows from 0-10(excluded) and the 0th, 2nd, and 5th columns
df.iloc[ 0:10, [0, 2, 5] ]
# selecting the 3rd, 4th, and 5th rows and the 0th and 2nd columns
df.iloc[[3, 4, 5], [0, 2]]
Filter, Sort & Groupby
When working with datasets, you will encounter circumstances where you need to sort, filter, or even group your data to make it easier to understand. The commands listed below will be your helping hands here:
1. df[df[col] operator number]
The df[df[col] operator number] command helps you easily filter out data.
# selecting all the records where column 'watched' > 1000
df[df['watched'] > 1000]
# selecting all the records where column 'watched' < 100
df[df['watched'] < 100]
# filtering out with multiple conditions.
# selecting all the records where 'watched' > 1000 and 'eps' = 10
df[(df['watched'] > 1000) & (df['eps'] == 10)]
2. sort_values()
The df.sort_values() command helps in sorting your data in an ascending or descending order.
# sorting the values of the column 'eps' in ascending manner(default)
df.sort_values('eps')
# sorting the values of the column 'eps' in descending manner
df.sort_values('eps' , ascending = False)
# sorting multiple columns in ascending and descending manner
# sorting the column 'eps' in ascending order and 'duration' in descending order
df.sort_values(['eps', 'duration'], ascending = [True, False])
3. groupby()
The df.groupby() command helps split data into separate groups and lets you perform functions on these groups.
# the below code means we want to analyze our data by different "eps" values.
# the below code returns a DataFrameGroupBy object
df_groupby_eps = df.groupby('eps')
df_groupby_eps
# using the size() method
# it will display the group sizes [there are 7307 animes having only 1 episode and so on]
df_groupby_eps.size()
# using the get_group() method
# it will retrieve one of the created groups
# it will display the anime that has 500 episodes
df_groupby_eps.get_group(500.0)
# making use of aggregate functions to compute on grouped data
# applying the mean function on the grouped data
# (on newer Pandas versions, numeric_only = True is needed to skip the non-numeric columns)
df_groupby_eps.mean(numeric_only = True)
# using the agg() function
# we can apply different aggregate functions.
# the below code will display the maximum and minimum rating of animes which are grouped by their votes.
df.groupby('votes').rating.agg(['max', 'min'])
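The agg() function can also take a dictionary that maps column names to aggregate functions (a small sketch, assuming the 'rating' and 'watched' columns exist in your dataset):
# the mean rating and the total watch count for each group of 'eps'
df.groupby('eps').agg({'rating': 'mean', 'watched': 'sum'})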
Data Cleaning
If your elders have warned you that the real world is not what we think it is, that it is messy and hard to interpret, then let me add something more to it: real-world data is just the same, messy and hard to interpret. To get the most out of your data and infer meaningful insights, you first have to clean it. And as I mentioned, Pandas is your best friend, so your best friend has got it all covered.
1. isnull()
The df.isnull() command checks for all the null values in your dataset.
# checking for null values
# it will return the dataset with the entries as True / False, where True means that the cell has a null value and False means that the cell does not have a null value.
df.isnull()
# we can use the sum() function with isnull()
# it will return the sum of null values for each column
df.isnull().sum()
2. notnull()
The df.notnull() command is just the opposite of the df.isnull() command. It checks for the non-null values in the dataset.
# returns the dataset with entries as True/False where True means not having a null value and False means having a null value.
df.notnull()
# using the sum() function
# it will return the sum of all the non null values for each column.
df.notnull().sum()
3. dropna()
The df.dropna() command drops the rows/columns with missing entries.
# dropping all the rows with missing values
df.dropna()
# using the "axis" parameter
# "axis = 0" (default) means Row and "axis = 1" means Column
# dropping all the columns with missing entries
df.dropna(axis = 1)
# using the "how" parameter
# how = "any" means dropping rows/columns having "ANY" missing entries.
# how = "all" means dropping rows/columns having "ALL" missing entries.
# dropping the columns having any missing values.
df.dropna(axis = 1, how = 'any')
# using the "thresh" parameter
# it specifies how many non-null values a row or column must have so as to not be dropped
# keeping only the columns with at least 14000 non-null values
df.dropna(axis = 1, thresh = 14000)
# using the "subset" parameter
# it is used for defining in which columns to look for missing values
# dropping all the rows where the "duration" column is NaN
df.dropna(subset = ['duration'])
NOTE: By default, the dropna() and fillna() commands return a copy of your object rather than modifying the actual object. To update the object itself, you need to set the inplace parameter to True. Don’t worry, the inplace parameter is covered below.
4. fillna()
The df.fillna() command does exactly what its name implies: it fills in the missing entries with some value.
# filling the NaN values with some user specified value
df.fillna(value = "Not Specified")
# using the "method" parameter
# it specifies how the missing values should be filled
# method = 'ffill' / 'bfill'
# "ffill" stands for forward fill: each missing entry is filled with the last valid value above it
# "bfill" stands for backward fill: each missing entry is filled with the next valid value below it
# executing the ffill method
df.fillna(method = 'ffill')
# executing the bfill method
df.fillna(method = 'bfill')
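Note that on newer Pandas versions the method parameter of fillna() is deprecated; the same result can be obtained with the dedicated ffill() and bfill() methods:
# equivalent to fillna(method = 'ffill') and fillna(method = 'bfill') on recent Pandas versions
df.ffill()
df.bfill()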
# using the "inplace" parameter
# 'inplace = True' means the changes are made permanently to the data frame itself.
# With 'inplace = False' (the default), you get back a modified copy instead.
# This is before using 'inplace = True'
df.dropna().head(2)
df.isnull().sum()
df.fillna(value = 1).head(2)
df.isnull().sum()
# after using 'inplace = True'
df.dropna(inplace = True)
df.isnull().sum()
NOTE: As all the rows with missing values are dropped, there is no need to use fillna().
5. rename()
The df.rename() command helps us alter axis labels.
# renaming the 'eps' column as 'Episodes'
df.rename(columns = {'eps' : 'Episodes'})
Question: Can you guess why the records start from 149 instead of 0?
Creating Test Objects
You don’t necessarily need to use pre-existing datasets; instead, you can generate your own test objects and run a range of commands to explore this library further. The following commands have you covered:
1. DataFrame() & Series()
The pd.DataFrame() command helps you create your own data frame with ease.
The pd.Series() command helps you create your own series with ease.
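Here is a minimal sketch (with made-up values) of creating your own test objects:
import numpy as np
import pandas as pd
# a test Series from a list, with a custom index
test_series = pd.Series([10, 20, 30], index = ['a', 'b', 'c'])
# a test DataFrame from a dictionary of columns (np.nan gives us a missing value to play with)
test_df = pd.DataFrame({'name': ['x', 'y', 'z'], 'score': [1.5, np.nan, 3.0]})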
Statistics
Pandas’ ability to apply statistical techniques to data is beneficial since it improves the analysis and interpretation of the data. The commands listed below assist us in achieving that:
1. mean() & median()
The df.mean() and df.median() commands return the mean and median (respectively) of all numeric columns.
df.mean()
df.median()
2. corr()
The df.corr() command returns the correlation between columns.
# correlation can be defined as a relationship between variables.
# it lies between -1 and 1 (inclusive of both the values)
df.corr()
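On newer Pandas versions, corr() (and similar statistics such as mean()) raises an error if the data frame contains non-numeric columns; passing numeric_only = True restricts the computation to the numeric columns:
# correlation computed only on the numeric columns
df.corr(numeric_only = True)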
3. std()
The df.std() command returns the standard deviation of all the columns.
df.std()
4. max() & min()
The df.max() and df.min() commands return the highest and lowest value in each column.
df.max()
df.min()
Combining Data Frames
Combining datasets is necessary when you have multiple datasets but want to study all their data together. The commands listed below can be useful in this scenario:
1. concat()
The pd.concat() command lets you combine data across rows or columns.
# using the "axis" parameter
pd.concat([df2,df3,df4], axis = 1)
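Since df2, df3, and df4 come from the original article’s context, here is a self-contained sketch with made-up data frames for illustration:
import pandas as pd
a = pd.DataFrame({'title': ['A', 'B'], 'rating': [4.5, 3.8]})
b = pd.DataFrame({'title': ['C', 'D'], 'rating': [4.1, 2.9]})
# axis = 0 (default) stacks the data frames on top of each other (row-wise)
pd.concat([a, b], axis = 0)
# axis = 1 places them side by side (column-wise)
pd.concat([a, b], axis = 1)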
Conclusion
Pandas is one of the most useful and user-friendly data science and machine learning libraries. It aids in deriving meaningful insights from various types of datasets. It has outstanding features that, if properly understood, can be useful when working with data and speed up your process. Do not stop learning about this incredible library here because the Pandas library has many more interesting functionalities with which you can infer insights from data in minutes!
Thank you for reading!😊
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.