Create Your Own NLP Search Engine With BM25

Sudeep · Last Updated: 30 Jan, 2025
5 min read

Search engines like Google and Yahoo work through Crawling, Indexing, and ranking methods like BM25. Crawling is when automated bots find new or updated pages and store key details like URLs, titles, and keywords. Indexing analyzes this data, identifying key content, images, and videos to store for future searches. BM25, a ranking algorithm, helps retrieve the most relevant results based on keyword relevance. When you search, engines don’t scan the entire internet but retrieve results from their indexed data. Today, we’ll build a small prototype that mimics the indexing process of a search engine.

This article was published as a part of the Data Science Blogathon.

Importing packages

import pandas as pd
from rank_bm25 import BM25Okapi
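If the package isn't installed yet, pip install rank-bm25 will fetch it (note the hyphen in the package name versus the underscore in the import).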

What is BM25?

BM25 is a ranking function, and rank_bm25 is a simple Python package that implements it. We can use it to index our data, tweets in our case, and rank them against a search query. It builds on the concepts behind TF-IDF:

  • TF, or Term Frequency — simply put, the number of occurrences of the search term in a tweet.
  • IDF, or Inverse Document Frequency — a measure of how informative your search term is. Since TF treats every term as equally important, term frequency alone cannot determine the weight of a term in your text. We need to weigh down frequent terms while scaling up the rare ones that signal relevance to a tweet.

Once you run a query, BM25 scores the relevance of your search term against each of the tweets. Sort those scores to surface the most relevant ones first.
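For intuition, here is a minimal sketch of the per-term Okapi BM25 score that the package computes under the hood. The parameters k1 and b are the standard tuning knobs (rank_bm25's BM25Okapi defaults to k1=1.5 and b=0.75); the exact IDF variant inside the package differs slightly from this common formulation.

import math

def bm25_term_score(tf, df, n_docs, doc_len, avg_doc_len, k1=1.5, b=0.75):
    # IDF: terms that appear in few documents get a higher weight
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    # TF with saturation: repeated occurrences contribute less and less,
    # and b penalises matches inside longer documents
    return idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))

# A document's score for a query is the sum of this term score
# over every query term it shares with the document.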

Preparing your tweets

Since this is not a discussion of the Twitter API, we will start from an Excel-based feed of tweets. You can clean your text data with the following key steps to make the search more robust.
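A minimal loading sketch, assuming a hypothetical file named tweets.xlsx whose Text column holds the tweet text (the same column name we use when displaying results later):

import pandas as pd

df = pd.read_excel("tweets.xlsx")  # hypothetical file name
lst = df["Text"].tolist()          # raw tweets; cleaned in the steps below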

Tokenization: 

Splitting each sentence into words, so that every word can be considered individually.

import warnings
warnings.filterwarnings('ignore')

import nltk
nltk.download('punkt')  # tokenizer models required by word_tokenize
from nltk.tokenize import word_tokenize

sentence = "Jack is a sharp minded fellow"
words = word_tokenize(sentence)
print(words)
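This prints ['Jack', 'is', 'a', 'sharp', 'minded', 'fellow'], a token list in which each word can now be matched individually.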

Removing special characters:

Strip special characters from your tweets so that punctuation does not interfere with matching.

import re

def spl_chars_removal(lst):
    # Replace every character that is not a digit or letter with a space
    lst1 = list()
    for element in lst:
        cleaned = re.sub("[^0-9a-zA-Z]", " ", element)
        lst1.append(cleaned)
    return lst1
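Applied to the raw tweet list loaded in the sketch earlier:

lst1 = spl_chars_removal(lst)  # tweets now contain only letters, digits and spaces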

Removing stop words:

Stop words are commonly used words (is, for, the, etc.) in the tweets. They carry no significance here, as they do not help distinguish one tweet from another. I used the Gensim package to remove my stop words; you can also try NLTK, but I found Gensim much faster.

One can also easily add new words to the stop-word list, in case certain words occur frequently in your data without adding meaning.

from nltk.tokenize import word_tokenize
from gensim.parsing.preprocessing import STOPWORDS

# Adding custom words to the pre-defined stop words list
all_stopwords_gensim = STOPWORDS.union({'disease'})

def stopwords_removal_gensim_custom(lst):
    lst1 = list()
    for text in lst:
        text_tokens = word_tokenize(text)
        tokens_without_sw = [word for word in text_tokens if word not in all_stopwords_gensim]
        lst1.append(" ".join(tokens_without_sw))
    return lst1
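Chaining it onto the previous cleaning step (note that the membership check is case-sensitive, so lowercasing the tweets beforehand is a good idea):

lst1 = stopwords_removal_gensim_custom(lst1)  # stop words (plus 'disease') removed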

Normalization:

Text normalization is the process of transforming a text into a canonical (standard) form. For example, the words “gooood” and “gud” can be transformed to “good”, their canonical form. Another example is mapping near-identical words such as “stopwords”, “stop-words”, and “stop words” to just “stopwords”.

This technique is important for noisy texts such as social media comments, text messages, and comments to blog posts where abbreviations, misspellings, and use of out-of-vocabulary words (oov) are prevalent. People tend to write comments in short-hand and hence this pre-processing becomes very important.

Raw                        Normalized
yest, yday                 yesterday
tomo, 2moro, 2mrw, tmrw    tomorrow
brb                        be right back
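There is no standard library call for this kind of slang mapping; here is a minimal sketch using a hand-built lookup table (the entries are just the examples from the table above):

# Hand-built normalization map; extend it with patterns seen in your own data
norm_map = {
    "yest": "yesterday", "yday": "yesterday",
    "tomo": "tomorrow", "2moro": "tomorrow", "2mrw": "tomorrow", "tmrw": "tomorrow",
    "brb": "be right back",
}

def normalize(text):
    # Replace each word that has a canonical form; leave the rest untouched
    return " ".join(norm_map.get(word, word) for word in text.split())

print(normalize("brb tomo"))  # -> 'be right back tomorrow'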

Stemming:

Stemming is the process of transforming words to their root form: reducing inflected words (e.g. troubled, troubles) to their root (e.g. trouble). The “root” in this case may not be a real root word, just a canonical form of the original word.

Stemming uses a heuristic that chops off the ends of words in the hope of correctly arriving at the root form. The results need review: in the example below, “Machine” is transformed to “machin”, with the “e” chopped off in the process.

import nltk
from nltk.stem import PorterStemmer

ps = PorterStemmer()
sentence = "Machine Learning is cool"
print([ps.stem(word) for word in sentence.split()])

Output: ['machin', 'learn', 'is', 'cool']

Tokenizing tweets and running BM25

This is the central piece, where we run the search query. Here we search the tweets for the word “vaccine”. You can enter a phrase too, and it will work just as well, since we tokenize the search term before handing it to BM25.

tokenized_corpus = [doc.split(" ") for doc in lst1]  # lst1 holds the cleaned tweets
bm25 = BM25Okapi(tokenized_corpus)
query = "vaccine"  # enter your search query here
tokenized_query = query.split(" ")

You can check the association of each tweet with your search term using the .get_scores() method, which returns one relevance score per tweet (higher means more relevant).

doc_scores = bm25.get_scores(tokenized_query)
print(doc_scores)

Passing n=5 to .get_top_n() returns the five most associated tweets as our result. Set the value of n according to your needs.

docs = bm25.get_top_n(tokenized_query, lst1, n=5)  # the five best-matching tweets
df_search = df[df['Text'].isin(docs)]  # map them back to rows of the original dataframe
df_search.head()

Top Five associated Tweets

1. @MikeCarlton01 Re #ABC funding, looked up Budget Papers. After massive prior cuts, it got extra $4.7M in funding (.00044% far less than inflation).#Morrison wastes $Ms on over-priced & ineffective services eg useless #Covid app.; delivery vaccine #agedcare; consultancies vaccine roll-out.. (tweeted by MORRIGAN)

2. @TonyHWindsor @barriecassidy @4corners @abc730 For its invaluable work, #ABC got extra $4.7M in funding (.00044% far less than inflation).While #Morrison Govt spends like drunken sailor on buying over-priced & ineffective services from mates (eg useless #Covid app.; delivery vaccine #agedcare; vaccine roll-out) #auspol (tweeted by MORRIGAN)

3. It’s going to be a month after my #Covid recovery. Now I will go vaccine 😎😎😎😎 (tweeted by Simi Elizabeth😃)

4. RT @pradeepkishan : What a despicable politician is #ArvindKejariwal ! The minute oxygen hoarding came to light his propaganda shifted to vaccine shortage. He is more dangerous than #COVID itself! @BJP4India @TajinderBagga (tweeted by p.hariharan)

5. RT @AlexBerenson : TL: DR – In the @pfizer teen #Covid vaccine trial, 4 or 5 (the exact figure is hidden) of 1,100 kids who got the vaccine had serious side effects, compared to 1 who got placebo.@US_FDA did not disclose specifics, so we have no idea what they were or if they follow any pattern. https://t.co/n5igf2xXFN (tweeted by Sagezza)

Additional use cases of BM25

There are many use cases where a search feature is required. One of the most relevant is parsing PDFs and building a search function over their content.

This is one of the most widely used applications of BM25. As organisations slowly shift to better data strategies and efficient storage techniques, old PDF documents can be retrieved efficiently using algorithms like BM25.
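As an illustration, here is a minimal sketch of page-level PDF search, assuming the pypdf package and a hypothetical file named manual.pdf; each page becomes one “document” in the BM25 corpus:

from pypdf import PdfReader
from rank_bm25 import BM25Okapi

reader = PdfReader("manual.pdf")  # hypothetical file
pages = [page.extract_text() or "" for page in reader.pages]
bm25 = BM25Okapi([page.lower().split() for page in pages])

query = "installation steps"
top_pages = bm25.get_top_n(query.lower().split(), pages, n=3)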

Hope you enjoyed reading this and find this helpful. Thank you, folks!

Conclusion

We imported the essential packages, prepared the tweets by cleaning the data through tokenization, special-character and stop-word removal, normalization, and stemming, and then let BM25 rank every tweet against the search query, returning the five most relevant ones. Beyond tweets, the same approach powers search features such as retrieval over PDF content.

Frequently Asked Questions

Q1. What is the BM25 method?

BM25 is a ranking algorithm used to score and rank documents based on their relevance to a search query. It considers term frequency (TF) and document length to improve accuracy.

Q2. Why is BM25 better than TF-IDF?

BM25 is better because it handles term frequency saturation (too many repetitions of a term don’t over-influence the score) and accounts for document length, making it more effective for real-world search scenarios.

Q3. What is BM25 in Elasticsearch?

In Elasticsearch, BM25 is the default ranking algorithm used to calculate relevance scores for search results, replacing TF-IDF for better accuracy and performance.


