This article describes the main steps involved in a machine learning project. There are standard steps to follow in any data science project. First, we collect the data according to our business needs. The next step is to clean the data: handling missing values, removing outliers, dealing with imbalanced datasets, converting categorical variables to numerical values, and so on.
After that comes model training, using various machine learning and deep learning algorithms. Next is model evaluation, using metrics such as recall, F1 score, and accuracy. Finally, the model is deployed to the cloud and retrained as needed. So let's start:
1. Data Collection
Questions to ask:
What kind of problem are we trying to solve?
What data sources already exist?
What privacy concerns are there?
Is the data public?
Where should we store the files?
Types of data
Structured data: appears in tabulated format (rows and columns style, like what you’d find in an Excel spreadsheet). It contains different types of data, for example numerical, categorical, time series.
Nominal/categorical – One thing or another (mutually exclusive). For example, in car sales, color is a category: a car may be blue but not white. Order does not matter.
Numerical: Any continuous value where the difference between values matters. For example, when selling houses, $107,850 is more than $56,400.
Ordinal: Data which has an order, but the distance between values is unknown. For example, a question such as: how would you rate your health from 1-5, with 1 being poor and 5 being healthy? You can answer 1, 2, 3, 4, or 5, but the distance between each value doesn't mean an answer of 5 is five times as good as an answer of 1.
Time-series: Data across time. For example, the historical sale values of Bulldozers from 2012-2018.
Unstructured data: Data with no rigid structure (images, video, speech, natural language text).
2. Data preparation
Exploratory data analysis (EDA): learning about the data you're working with.
What are the feature variables (input) and the target variable (output)? For example, for predicting heart disease, the feature variables may be a person's age, weight, average heart rate, and level of physical activity, and the target variable will be whether or not they have the disease.
What kind of data do you have? Structured, unstructured, numerical, time series? Are there missing values? Should you remove them, or fill them with feature imputation?
Where are the outliers? How many of them are there? Why are they there? Are there any questions you could ask a domain expert about the data? For example, would a heart disease physician be able to shed some light on your heart disease dataset?
Data preprocessing: preparing your data to be modelled.
Feature imputation: filling missing values (a machine learning model can't learn on data that isn't there). A code sketch follows this list.
Single imputation: Fill with the mean or median of the column.
Multiple imputation: Model the missing values and fill them with what your model finds.
KNN (k-nearest neighbors): Fill data with a value from another example that is similar.
Many more, such as random imputation, last observation carried forward (for time series), moving window, and most frequent.
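A minimal sketch of single and KNN imputation using scikit-learn; the DataFrame and column names here are hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer, SimpleImputer

# Hypothetical dataset with missing values
df = pd.DataFrame({
    "age": [25, np.nan, 47, 51, np.nan],
    "resting_heart_rate": [62, 70, np.nan, 58, 66],
})

# Single imputation: fill each column with its median
median_imputer = SimpleImputer(strategy="median")
df_median = pd.DataFrame(median_imputer.fit_transform(df), columns=df.columns)

# KNN imputation: fill missing values using the most similar rows
knn_imputer = KNNImputer(n_neighbors=2)
df_knn = pd.DataFrame(knn_imputer.fit_transform(df), columns=df.columns)
print(df_median, df_knn, sep="\n")
```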
Feature encoding (turning values into numbers): a machine learning model requires all values to be numerical. A code sketch follows this list.
One-hot encoding: Turn all unique values into lists of 0s and 1s, where the target value is 1 and the rest are 0s. For example, if a car's color can be green, red, or blue, a green car's color feature would be represented as [1, 0, 0] and a red one as [0, 1, 0].
Label encoding: Turn labels into distinct numerical values. For example, if your target variables are different animals, such as dog, cat, and bird, these could become 0, 1, and 2, respectively.
Embedding encoding: Learn a representation amongst all the different data points. For example, a language model is a representation of how different words relate to each other. Embeddings are also becoming more widely available for structured (tabular) data.
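A short sketch of one-hot and label encoding, reusing the car-color and animal examples above (the DataFrame is hypothetical):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({
    "color": ["green", "red", "blue", "green"],
    "animal": ["dog", "cat", "bird", "dog"],
})

# One-hot encoding: one 0/1 column per unique value
one_hot = pd.get_dummies(df["color"], prefix="color")

# Label encoding: each unique label becomes a distinct integer
encoder = LabelEncoder()
df["animal_id"] = encoder.fit_transform(df["animal"])  # bird=0, cat=1, dog=2
print(one_hot)
print(df)
```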
Feature normalization (scaling) or standardization: When your numerical variables are on different scales (e.g. number_of_bathroom is between 1 and 5, and size_of_land between 500 and 20,000 sq. feet), some machine learning algorithms don't perform very well. Scaling and standardization help to fix this, as in the sketch below.
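A minimal sketch of both approaches with scikit-learn, reusing the hypothetical number_of_bathroom and size_of_land features:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

df = pd.DataFrame({
    "number_of_bathroom": [1, 2, 5, 3],
    "size_of_land": [500, 1200, 20000, 4000],  # sq. feet
})

# Normalization: rescale each feature to the [0, 1] range
normalized = MinMaxScaler().fit_transform(df)

# Standardization: rescale each feature to zero mean and unit variance
standardized = StandardScaler().fit_transform(df)
print(normalized)
print(standardized)
```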
Feature engineering: transforming data into (potentially) more meaningful representations by adding in domain knowledge. A combined sketch follows this list.
Decompose: split a feature into its component parts (e.g. turn a date into year, month, and day).
Discretization: turning continuous values into smaller, discrete groups (e.g. binning ages into age ranges).
Crossing and interaction features: combining two or more features
Indicator features: using other parts of the data to flag something potentially significant.
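A combined sketch of these ideas in pandas; every column name, bin boundary, and derived feature here is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "sale_date": pd.to_datetime(["2012-03-01", "2015-07-19", "2018-11-30"]),
    "age": [23, 47, 65],
    "height_m": [1.80, 1.60, 1.70],
    "weight_kg": [80, 55, 70],
})

# Decompose: split a date into its parts
df["sale_year"] = df["sale_date"].dt.year
df["sale_month"] = df["sale_date"].dt.month

# Discretization: bin a continuous value into groups
df["age_group"] = pd.cut(df["age"], bins=[0, 30, 50, 100],
                         labels=["young", "middle", "senior"])

# Crossing/interaction: combine two features (here, BMI from height and weight)
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

# Indicator feature: flag something potentially significant
df["is_senior"] = (df["age"] >= 65).astype(int)
print(df)
```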
Feature selection: selecting the most valuable features of your dataset to model, potentially reducing overfitting and training time (less overall data and less redundant data to train on) and improving accuracy.
Dimensionality reduction: A common dimensionality reduction method, PCA (principal component analysis), takes a large number of dimensions (features) and uses linear algebra to reduce them to fewer dimensions. For example, say you have 10 numerical features; you could run PCA to reduce them down to 3.
Feature importance (post-modelling): Fit a model to a set of data, then inspect which features were most important to the results and remove the least important ones.
Wrapper methods, such as genetic algorithms and recursive feature elimination, involve creating many subsets of features and then removing the ones which don't matter. A sketch of these selection methods follows.
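A sketch of PCA, post-modelling feature importance, and recursive feature elimination on scikit-learn's built-in breast cancer dataset (a real project would pick methods to suit its own data):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = load_breast_cancer(return_X_y=True)  # 30 numerical features

# Dimensionality reduction: project 30 features down to 3 principal components
X_reduced = PCA(n_components=3).fit_transform(X)
print(X_reduced.shape)  # (569, 3)

# Feature importance (post-modelling): fit a model, then inspect importances
model = RandomForestClassifier(random_state=42).fit(X, y)
print(model.feature_importances_)  # higher = more important to this model

# Wrapper method: recursive feature elimination keeps the 5 strongest features
rfe = RFE(RandomForestClassifier(random_state=42), n_features_to_select=5).fit(X, y)
print(rfe.support_)  # boolean mask of the selected features
```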
Dealing with imbalances: does your data have 10,000 examples of one class but only 100 examples of another?
Collect more data (if you can)
Use the scikit-learn-contrib imbalanced-learn package.
Use SMOTE (synthetic minority over-sampling technique): it creates synthetic samples of your minority class to try and level the playing field. A sketch follows below.
A helpful paper to look at is “Learning from Imbalanced Data”.
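A minimal SMOTE sketch using imbalanced-learn; the dataset is synthetic, built to mimic the 10,000-vs-100 imbalance described above:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic imbalanced dataset: roughly 99% of one class, 1% of the other
X, y = make_classification(n_samples=10100, weights=[0.99], random_state=42)
print(Counter(y))  # roughly 10,000 of class 0, 100 of class 1

# SMOTE creates synthetic minority-class samples to balance the classes
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y_resampled))  # both classes now equally represented
```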
Data splitting
Training set (usually 70-80% of data): Model learns on this.
Validation set (usually 10-15% of data): Model hyperparameters are tuned on this.
Test set (usually 10-15% of data): The model's final performance is evaluated on this. If you have done it right, hopefully the results on the test set give a good indication of how the model should perform in the real world. Do not use this dataset to tune the model. A splitting sketch follows.
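A sketch of a 70/15/15 split via two calls to scikit-learn's train_test_split; the exact fractions are one reasonable choice, not a rule:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# First carve off the 15% test set...
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.15, random_state=42)

# ...then split the remaining 85% into train and validation
# (0.176 of 85% is roughly 15% of the original data, leaving ~70% for training)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.176, random_state=42)
print(len(X_train), len(X_val), len(X_test))
```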
3. Train model on data (3 steps: choose an algorithm, overfit the model, reduce overfitting with regularization)
Choosing an algorithm
Supervised algorithms – Linear Regression, Logistic Regression, KNN, SVMs, Decision Trees and Random Forests, AdaBoost/Gradient Boosting Machines (boosting)
Underfitting – happens when your model doesn't perform as well as you'd like on your data. Try training for longer or using a more advanced model.
Overfitting – happens when your validation loss starts to increase, or when the model performs better on the training set than on the test set.
Regularization: a collection of techniques to prevent/reduce overfitting (e.g. L1, L2, dropout, early stopping, data augmentation, batch normalization)
Hyperparameter tuning – run a bunch of experiments with different model settings and see which works best; a sketch follows below.
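A minimal sketch of this loop with scikit-learn's GridSearchCV; the model and parameter grid are illustrative choices, not recommendations:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Try each combination of settings, scored by 5-fold cross-validation
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)           # the winning settings
print(search.score(X_test, y_test))  # held-out performance of the best model
```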
4. Analysis/Evaluation
Evaluation metrics
Classification – Accuracy, Precision, Recall, F1, Confusion matrix, Mean average precision (object detection); see the sketch at the end of this section.
Regression – MSE, MAE, R^2
Task-based metric – e.g. for a self-driving car, you might want to know the number of disengagements
Feature importance
Training/inference time/cost
What-If Tool: how does my model compare to other models?
Least confident examples: what does the model get wrong?
Bias/variance trade-off
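A short sketch computing the classification metrics above with scikit-learn; the model and dataset are placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = model.predict(X_test)

print("accuracy: ", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))
print("f1:       ", f1_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```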
5. Serve model (deploying a model)
Put the model into production and see how it goes.
Tools you can use: TensorFlow Serving, TorchServe (PyTorch's serving tool), Google AI Platform, Amazon SageMaker. A minimal illustrative sketch follows this section.
MLOps: where software engineering meets machine learning, essentially all the technology required around a machine learning model to have it working in production
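The tools above are the production-grade options; purely as an illustration, here is a hypothetical Flask endpoint serving a pickled scikit-learn model (model.pkl is an assumed artifact from an earlier training step, not something defined in this article):

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical artifact: a model trained and pickled earlier
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[0.1, 0.2, ...]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```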
6. Retrain model
See how the model performs after serving (or before serving) based on various evaluation metrics, and revisit the above steps as required (remember, machine learning is very experimental, so this is where you'll want to track your data and experiments).
You'll also find your model's predictions start to 'age' (usually not in a fine-wine style) or 'drift', as when data sources change or are upgraded (new hardware, etc.). This is when you'll want to retrain it. A minimal drift-check sketch follows.
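One simple, hedged way to notice drift is to compare a feature's live distribution against its training distribution; the threshold and synthetic data below are purely illustrative:

```python
import numpy as np

def has_drifted(train_col, live_col, threshold=2.0):
    """Flag a feature whose live mean has moved more than `threshold`
    training standard deviations away from the training mean."""
    shift = abs(np.mean(live_col) - np.mean(train_col)) / (np.std(train_col) + 1e-9)
    return shift > threshold

# Hypothetical usage: the same feature at training time vs. in production
rng = np.random.default_rng(42)
train_ages = rng.normal(40, 10, 1000)
live_ages = rng.normal(65, 10, 1000)  # the population has shifted

print(has_drifted(train_ages, live_ages))  # True -> consider retraining
```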
7. Machine Learning Tools
Thanks for reading! If you liked this article, please share it with your friends. In case of any suggestions or doubts, comment below.