A Beginner's Guide to Multi-Processing in Python

Karthik | Last Updated: 09 Apr, 2024

Introduction

In today’s era of Big Data, Python has emerged as a highly coveted programming language. In this article, we’ll delve into a vital aspect that empowers Python’s prowess – Multi-Processing. This feature distinguishes Python as a potent tool, enabling efficient handling of complex tasks by harnessing the full potential of modern hardware.

Now before we dive into the nitty-gritty of Multi-Processing, I suggest you read my previous article on Threading in Python, since it can provide a better context for the current article.

This article was published as a part of the Data Science Blogathon.

What is Multi-Processing?

Let us say you are an elementary school student who is given the mind-numbing task of multiplying 1,200 pairs of numbers as your homework, and that you can multiply a pair of numbers in 3 seconds. In total, it takes 1200 * 3 = 3600 seconds, i.e., 1 hour, to finish the entire assignment. But your favorite TV show starts in 20 minutes.

What would you do? An intelligent student, though dishonest, will call up three friends of similar ability and divide the assignment. That puts 300 multiplication tasks on your plate, which you'll complete in 300 * 3 = 900 seconds, that is, 15 minutes. Thus you, along with your 3 friends, finish the task in 15 minutes, leaving you 5 minutes to grab a snack and sit down for your show. The task took just 15 minutes with 4 of you working together, when it would otherwise have taken 1 hour.

This is the basic idea behind multi-processing. If you have an algorithm whose work can be divided among different workers (processors), you can speed up the program. Machines nowadays come with 4, 8, or 16 cores, which can then be used in parallel.
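You can check how many cores your own machine exposes using the standard library:

import multiprocessing

# Number of CPU cores the operating system reports for this machine
print(multiprocessing.cpu_count())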


Multi-Processing in Data Science

Multi-Processing has two crucial applications in Data Science.

Input-Output Processes

Any data-intensive pipeline has input and output processes through which large volumes of data flow. Generally, reading the data (input) doesn't take much time, but writing the data to data warehouses does. The writes can be performed in parallel, saving a huge amount of time.

[Figure: the data-writing stage of a pipeline, without and with multi-processing]
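As a rough sketch of the idea: the write_chunk function below is a hypothetical stand-in for a real warehouse writer (it just writes local CSV files), but the pattern of fanning the writes out across a pool of workers is the same.

import multiprocessing

# Hypothetical writer: in a real pipeline this would push a chunk of
# records to a data warehouse rather than a local CSV file.
def write_chunk(indexed_chunk):
    index, rows = indexed_chunk
    with open('chunk_{}.csv'.format(index), 'w') as f:
        for row in rows:
            f.write(','.join(map(str, row)) + '\n')

if __name__ == '__main__':
    # Toy data: 8 chunks of 1000 rows each, written in parallel by 4 workers
    chunks = [(i, [(i, j) for j in range(1000)]) for i in range(8)]
    with multiprocessing.Pool(4) as pool:
        pool.map(write_chunk, chunks)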

Training Models

Though not all models can be trained in parallel, a few have inherent characteristics that allow parallel training. For example, the Random Forest algorithm builds multiple decision trees and combines their outputs into a cumulative decision; those trees can be constructed in parallel. In fact, the scikit-learn API exposes a parameter called n_jobs for exactly this, letting you use multiple workers.
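For instance, assuming scikit-learn is installed, a quick illustration of n_jobs on a synthetic dataset might look like this:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# A synthetic classification dataset, just for demonstration
X, y = make_classification(n_samples=10000, n_features=20, random_state=0)

# n_jobs=-1 asks scikit-learn to build the trees on all available cores
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, y)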


Multi-Processing in Python using Process Class

Now let us get our hands on the multiprocessing library in Python.

Multiprocessing Library in Python

Take a look at the following code:

import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
sleepy_man()
sleepy_man()
toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

The above code is simple: the function sleepy_man sleeps for one second, and we call it twice. We record the time taken by the two calls and print the result. The output is shown below:

Starting to sleep
Done sleeping
Starting to sleep
Done sleeping
Done in 2.0037 seconds

This is expected, since we call the function twice, one after the other, and record the total time. The flow is shown in the diagram below.

[Figure: flow of the two sequential sleepy_man calls]

Incorporate Multi-Processing into the Code

Now let us incorporate multi-processing into the code:

import multiprocessing
import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
p1 = multiprocessing.Process(target=sleepy_man)
p2 = multiprocessing.Process(target=sleepy_man)
p1.start()
p2.start()
toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

Here multiprocessing.Process(target=sleepy_man) defines a process instance; we pass the function to be executed, sleepy_man, as the target argument. We then launch the two instances with p1.start() and p2.start(). (Note: on Windows and macOS, where new processes are spawned rather than forked, process-creation code like this must sit under an if __name__ == '__main__': guard.)

The output is as follows:

Done in 0.0023 seconds
Starting to sleep
Starting to sleep
Done sleeping
Done sleeping

Now notice one thing: the time-log print statement got executed first. This is because, while the two processes ran sleepy_man, the main program continued executing in parallel rather than waiting for them to finish. The flow diagram below makes this clear.

[Figure: flow diagram of the two processes running alongside the main program]
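You can also observe this directly in code: right after start(), the child is still running while the main program moves on. A small sketch using the is_alive() method of Process:

import multiprocessing
import time

def sleepy_man():
    time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=sleepy_man)
    p.start()
    # The main program reaches this line while the child is still sleeping
    print('Child alive right after start():', p.is_alive())   # True
    p.join()
    print('Child alive after join():', p.is_alive())           # False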

Executing the join() Function

In order to execute the rest of the program only after the multiprocessing tasks have completed, we need to call the join() method on each process.

import multiprocessing
import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
p1 = multiprocessing.Process(target=sleepy_man)
p2 = multiprocessing.Process(target=sleepy_man)
p1.start()
p2.start()
p1.join()
p2.join()
toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

Now the rest of the code block gets executed only after the multiprocessing tasks are done. The output is shown below.

Starting to sleep
Starting to sleep
Done sleeping
Done sleeping
Done in 1.0090 seconds

The flow diagram is shown below.

[Figure: flow diagram with join(); total execution time about 1 second]

Multiprocessing Using a For Loop

Since the two sleep calls in the previous example ran in parallel, together they took only about one second.

We are not limited to two processes; we can create any number of them. The code below launches 10 processes using a for loop.

import multiprocessing
import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()

process_list = []
for i in range(10):
    p = multiprocessing.Process(target=sleepy_man)
    p.start()
    process_list.append(p)

for process in process_list:
    process.join()

toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

The output for the above code is shown below.

Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done in 1.0117 seconds

Here the ten function executions run in parallel, so the entire program takes just about one second. Now, my machine doesn't have 10 cores; when we create more processes than there are cores, the operating system schedules them across the available cores, so you don't have to worry about it.
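To convince yourself that these really are separate operating-system processes, a small sketch (not part of the original examples) can print each worker's process ID using os.getpid():

import multiprocessing
import os

def show_pid():
    # Each worker runs in its own OS process with its own PID
    print('Worker PID: {}'.format(os.getpid()))

if __name__ == '__main__':
    print('Parent PID: {}'.format(os.getpid()))
    processes = [multiprocessing.Process(target=show_pid) for _ in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

We can also pass arguments to the target function through the args parameter, as the next example shows.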

import multiprocessing
import time

def sleepy_man(sec):
    print('Starting to sleep')
    time.sleep(sec)
    print('Done sleeping')

tic = time.time()

process_list = []
for i in range(10):
    p = multiprocessing.Process(target=sleepy_man, args=[2])
    p.start()
    process_list.append(p)

for process in process_list:
    process.join()

toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

The output for the above code is shown below.

Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done in 2.0161 seconds

Since we passed 2 as an argument, each sleepy_man call slept for 2 seconds instead of 1.
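Note that args takes an iterable of positional arguments, so, as a small variation on the example above, each process can be given its own duration:

import multiprocessing
import time

def sleepy_man(sec):
    print('Sleeping for {} seconds'.format(sec))
    time.sleep(sec)

if __name__ == '__main__':
    processes = [multiprocessing.Process(target=sleepy_man, args=(i,))
                 for i in range(1, 5)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    # Finishes in about 4 seconds (the longest sleep), since all run in parallel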

Multi-Processing in Python using Pool Class

In the last code snippet, we executed 10 different processes using a for loop. Instead, we can use the Pool class to do the same.

import multiprocessing
import time

def sleepy_man(sec):
    print('Starting to sleep for {} seconds'.format(sec))
    time.sleep(sec)
    print('Done sleeping for {} seconds'.format(sec))

tic = time.time()

pool = multiprocessing.Pool(5)
pool.map(sleepy_man, range(1,11))
pool.close()

toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

multiprocessing.Pool(5) defines the number of worker processes; here we set it to 5. pool.map() is the method that triggers the function executions: we call pool.map(sleepy_man, range(1,11)), so sleepy_man is called once for each value produced by range(1,11) (any iterable can be passed). The output is as follows:

Starting to sleep for 1 seconds
Starting to sleep for 2 seconds
Starting to sleep for 3 seconds
Starting to sleep for 4 seconds
Starting to sleep for 5 seconds
Done sleeping for 1 seconds
Starting to sleep for 6 seconds
Done sleeping for 2 seconds
Starting to sleep for 7 seconds
Done sleeping for 3 seconds
Starting to sleep for 8 seconds
Done sleeping for 4 seconds
Starting to sleep for 9 seconds
Done sleeping for 5 seconds
Starting to sleep for 10 seconds
Done sleeping for 6 seconds
Done sleeping for 7 seconds
Done sleeping for 8 seconds
Done sleeping for 9 seconds
Done sleeping for 10 seconds
Done in 15.0210 seconds

The Pool class is often a better way to deploy multi-processing because it distributes the pending tasks to the available worker processes on a first-in-first-out (FIFO) basis. Its behavior resembles the map-reduce pattern: it maps the inputs across the worker processes and collects the return values from all the workers as a list. Only a fixed number of worker processes exist at any time; the remaining tasks wait in a queue until a worker becomes free.

[Figure: how the Pool class distributes tasks among worker processes]

With the Process class, by contrast, every process we create exists at the same time, each as a separate OS process, which can become expensive when the number of tasks is large.
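One practical consequence of this map-and-collect behavior is that pool.map returns the workers' results as a list, in input order. A minimal sketch:

import multiprocessing

def square(n):
    return n * n

if __name__ == '__main__':
    # The with statement closes the pool automatically when the block exits
    with multiprocessing.Pool(4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] -- in input order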

Comparing the Time Performance for Calculating Perfect Numbers

So far, we have played around with multiprocessing using sleep functions. Now let us take a function that checks whether a number is a perfect number. For those who don't know: a number is perfect if the sum of its positive proper divisors (all divisors excluding the number itself) equals the number itself; for example, 28 = 1 + 2 + 4 + 7 + 14. We will list the perfect numbers below 100,000, implemented in three ways: using a regular for loop, using multiprocessing.Process(), and using multiprocessing.Pool().

Using Regular For Loop

import time

def is_perfect(n):
    sum_factors = 0
    for i in range(1, n):
        if (n % i == 0):
            sum_factors = sum_factors + i
    if (sum_factors == n):
        print('{} is a Perfect number'.format(n))

tic = time.time()
for n in range(1,100000):
    is_perfect(n)
toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

The output for the above program is shown below.

6 is a Perfect number
28 is a Perfect number
496 is a Perfect number
8128 is a Perfect number
Done in 258.8744 seconds

Using a Process Class

import time
import multiprocessing

def is_perfect(n):
    sum_factors = 0
    for i in range(1, n):
        if(n % i == 0):
            sum_factors = sum_factors + i
    if (sum_factors == n):
        print('{} is a Perfect number'.format(n))

tic = time.time()

processes = []
for i in range(1,100000):
    p = multiprocessing.Process(target=is_perfect, args=(i,))
    processes.append(p)
    p.start()

for process in processes:
    process.join()

toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

The output for the above program is shown below.

6 is a Perfect number
28 is a Perfect number
496 is a Perfect number
8128 is a Perfect number
Done in 143.5928 seconds

As you can see, we achieved roughly a 44.5% reduction in time by deploying multi-processing with the Process class instead of a regular for loop.

Using a Pool Class

import time
import multiprocessing

def is_perfect(n):
    sum_factors = 0
    for i in range(1, n):
        if(n % i == 0):
            sum_factors = sum_factors + i
    if (sum_factors == n):
        print('{} is a Perfect number'.format(n))

tic = time.time()
pool = multiprocessing.Pool()
pool.map(is_perfect, range(1,100000))
pool.close()
toc = time.time()

print('Done in {:.4f} seconds'.format(toc-tic))

The output for the above program is shown below.

6 is a Perfect number
28 is a Perfect number
496 is a Perfect number
8128 is a Perfect number
Done in 74.2217 seconds

As you can see, compared to a regular for loop we achieved a 71.3% reduction in computation time, and compared to the Process class, a 48.3% reduction.

Thus, it is evident that by deploying a suitable method from the multiprocessing library, we can achieve a significant reduction in computation time. One caveat: spawning one process per task, as the Process-class version does for 100,000 numbers, carries heavy creation overhead and can exhaust memory on some machines, which is why Pool, with its small fixed set of workers, is usually the safer choice for many small tasks.
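A related tip: pool.map accepts a chunksize parameter that hands each worker a batch of inputs at once, reducing task-dispatch overhead when there are many small tasks. A sketch of the same computation with batching:

import multiprocessing

def is_perfect(n):
    sum_factors = sum(i for i in range(1, n) if n % i == 0)
    if sum_factors == n:
        print('{} is a Perfect number'.format(n))

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        # chunksize=500 hands each worker 500 numbers at a time
        pool.map(is_perfect, range(1, 100000), chunksize=500)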

Conclusion

In conclusion, mastering multi-processing in Python is a significant step for anyone venturing into the world of data-driven applications and performance optimization. With the ability to leverage multiple CPU cores, Python’s multi-processing empowers developers to tackle Big Data challenges, parallelize tasks, and enhance program efficiency. By understanding its principles, utilizing the multiprocessing library effectively, and optimizing code for concurrency, developers can unlock Python’s full potential. As technology evolves and data volumes continue to grow, the skill of multi-processing becomes increasingly invaluable, ensuring Python remains a top choice for tackling the most demanding computational tasks and maintaining its status as a programming powerhouse.


Frequently Asked Questions

Q1. What is multiprocessing Python?

A. Multiprocessing in Python refers to a module in the Python Standard Library that allows developers to create and manage multiple processes concurrently. It enables parallelism by running multiple Python processes, taking advantage of multiple CPU cores to execute tasks simultaneously.

Q2. What are the 4 essential parts of multiprocessing in Python?

A. The four essential parts of multiprocessing in Python are as follows (a small sketch combining them appears after the list):
Process: Represents an independent process that can run concurrently with other processes.
Queue: Facilitates communication between processes by allowing them to exchange data.
Lock: Ensures that only one process can access a shared resource at a time, preventing data corruption or race conditions.
Pool: Provides a convenient way to parallelize the execution of a function across multiple input values by distributing the workload among multiple processes.
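A minimal sketch putting three of these parts together (the worker function and values are purely illustrative):

import multiprocessing

def worker(lock, queue, n):
    result = n * n
    with lock:  # the lock ensures only one process prints at a time
        print('worker {} computed {}'.format(n, result))
    queue.put(result)  # send the result back to the parent process

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    queue = multiprocessing.Queue()
    processes = [multiprocessing.Process(target=worker, args=(lock, queue, n))
                 for n in range(4)]
    for p in processes:
        p.start()
    results = [queue.get() for _ in processes]  # drain the queue before joining
    for p in processes:
        p.join()
    print(results)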

Q3. What is the difference between multithreading and multiprocessing in Python?

A. Multithreading involves executing multiple threads within the same process, sharing the same memory space. Due to Python's Global Interpreter Lock (GIL), multithreading is not suitable for CPU-bound tasks that require significant computational resources. Multiprocessing, by contrast, runs separate processes, each with its own memory space and its own interpreter, so it sidesteps the GIL and suits CPU-bound work, at the cost of higher memory use and slower communication between processes.

Q4. Is multiprocessing in Python real?

A. Yes, multiprocessing in Python is real. It's a built-in module in the Python Standard Library, providing developers with tools to leverage parallelism and make efficient use of multi-core processors. Multiprocessing is widely used in Python for tasks that benefit from parallel execution, such as data processing, scientific computing, and parallel algorithm implementations.
