Automating Web Search Using LangChain and Google Search APIs

Avikumar Talaviya 01 Jul, 2024
7 min read

Introduction 

Artificial intelligence is expanding in the modern world thanks to a multitude of studies and inventions from various startups and organizations. Researchers and innovators are creating a wide range of tools and technologies to support the development of LLM-powered applications. With the aid of AI and NLP innovations like LangChain and LLMs, users can get around the limitations of traditional search techniques, such as having to comb through dozens of links and websites to find relevant information. Instead, users can combine search engine APIs like the Google Search API with LangChain and OpenAI to receive a concise, summarized response to their query along with links to related resources.

In this article, we are going to learn how an innovative framework like LangChain, combined with Google Search APIs, can power a web automation application that asks questions and gets answers from information retrieved across a vast array of web resources. The article provides a hands-on guide to building such an application from scratch for use cases like research, analysis, and more. So let's get started!

Learning Objectives

  • Learn about web scraping and automation applications built with the LangChain framework.
  • Follow a step-by-step guide to building a web automation application using LangChain and Google Search APIs.
  • Implement the integration of LangChain with Google Search APIs to automate web searches.

This article was published as a part of the Data Science Blogathon.

What is Web Automation Application and its Workflow?

First, we will look at the typical web research and automation workflow, which is crucial to understanding the architecture of such LLM-powered applications. When a user submits a query to the web research automation application, the Google Search API takes the query and returns a number of web links, which are then loaded using a web loader that scrapes the web pages. The loaded web content is then transformed into a readable text format by removing all unwanted HTML tags. Let's look at the diagram below for more details:

Web automation application workflow

Finally, the scraped and transformed web page content is loaded into vector stores such as Chroma, Pinecone, or FAISS for further querying or Q&A for research purposes. Using suitable prompt engineering, users can also summarize the web content for further research and analysis.
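Conceptually, the whole pipeline boils down to five stages. Here is a minimal sketch (with hypothetical helper names; the real LangChain components are introduced in the sections below):

# hypothetical, high-level view of the pipeline; each stage maps to a
# real LangChain component covered later in this article
links = search_api(query)            # 1. Google Search API returns web links
pages = web_loader(links)            # 2. a web loader scrapes the pages
texts = transformer(pages)           # 3. a transformer strips the HTML tags
vectorstore.add_documents(texts)     # 4. embeddings are stored for retrieval
answer = qa_chain(query)             # 5. the LLM answers using retrieved context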

Web Loader and Transformation

Once the user query brings back links to web content relevant to the search query, the pages are scraped using “ChromiumLoader” or “HtmlLoader” to load the web content into the project environment. Once the content is loaded, it is transformed using “BeautifulSoupTransformer” or “Html2TextTransformer” to remove HTML tags and extract the web content for further processing. Let's look at both methods with code examples for an in-depth understanding.

ChromiumLoader

Using Playwright and Python's asyncio, “ChromiumLoader” interacts with web pages in a headless browser to load their content. Thereafter, the BeautifulSoup transformer removes HTML tags like <p>, <span>, <div>, <li>, etc., and extracts the text content from them.

# install necessary libraries for the project
!pip install -q langchain-openai langchain playwright beautifulsoup4
!pip install -q langchain_community
!playwright install

# scraping using AsyncChromiumLoader
from langchain_community.document_loaders import AsyncChromiumLoader
from langchain_community.document_transformers import BeautifulSoupTransformer

# Load HTML in a headless Chromium browser
loader = AsyncChromiumLoader(["https://www.wsj.com"], headless=True)

# load the pages using playwright (top-level await works in notebooks)
html = await loader.aload()

# Transform the content using the bs4 transformer, keeping only the
# text found inside <span> tags
bs_transformer = BeautifulSoupTransformer()
documents_transformed = bs_transformer.transform_documents(
    html, tags_to_extract=["span"]
)
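To confirm the scrape worked, you can peek at the start of the transformed text (assuming at least one document came back):

# print the first 500 characters of the extracted text
print(documents_transformed[0].page_content[:500])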

HtmlLoader

Similarly, another alternative for scraping web content is AsyncHtmlLoader, which uses the ‘aiohttp’ library to make asynchronous HTTP requests to scrape and load web pages.

# scrape the web content using the html loader
from langchain_community.document_loaders import AsyncHtmlLoader
!pip install -q html2text

# load the content 
urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]
loader = AsyncHtmlLoader(urls)
docs = loader.load()

# html to text transformation
from langchain_community.document_transformers import Html2TextTransformer

# Html to Text transformer
html2text = Html2TextTransformer()
documents_transformed = html2text.transform_documents(docs)
documents_transformed[0].page_content[0:500]
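Between the two approaches, AsyncChromiumLoader renders pages in a real browser, so it can capture content generated by JavaScript, while AsyncHtmlLoader simply fetches the raw HTML over HTTP, making it faster and lighter but limited to static pages.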

Scraping with Extraction using LangChain

In this section, we are going to learn the entire process of extracting web content from any given web page and scraping it into a desired structure using large language model APIs. This is crucial for finding summarized and accurate answers to user queries in a web automation application. Let's begin by creating an object for a large language model from OpenAI.

# assign an OpenAI model using langchain's ChatOpenAI
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")

# define a schema to extract content from a web page
from langchain.chains import create_extraction_chain

schema = {
    "properties": {
        "news_article_title": {"type": "string"},
        "news_article_summary": {"type": "string"},
    },
    "required": ["news_article_title", "news_article_summary"],
}

# define an extract function to get summarized content from the LLM call
def extract(content: str, schema: dict):
    return create_extraction_chain(schema=schema, llm=llm).run(content)

The above code passes the LLM along with the output schema into the extract function. This function will be called once the web content has been scraped and transformed using the HTML loader and transformer.
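As a quick sanity check, you could run the extract function on a small hand-written snippet before pointing it at live pages (a hypothetical example for illustration):

# hypothetical sample text to test the extraction chain
sample = "Markets Rally as Tech Stocks Surge: Major indexes closed higher on Tuesday, led by gains in semiconductor shares."
print(extract(content=sample, schema=schema))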

# web scraper with AsyncHtmlLoader and Html2TextTransformer
import pprint

from langchain_text_splitters import RecursiveCharacterTextSplitter

# scrape the data, convert it to text, and extract structured content
def scrape_with_playwright(urls, schema):
    loader = AsyncHtmlLoader(urls)
    docs = loader.load()
    html2text_transformer = Html2TextTransformer()
    docs_transformed = html2text_transformer.transform_documents(docs)
    print("Extracting content with LLM")

    # Grab the first 1000 tokens of the site
    splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
        chunk_size=1000, chunk_overlap=0
    )
    splits = splitter.split_documents(docs_transformed)

    # Process the first split
    extracted_content = extract(schema=schema, content=splits[0].page_content)
    pprint.pprint(extracted_content)
    return extracted_content

# load from the web and scrape the dataset
urls = ["https://www.wsj.com"]
extracted_content = scrape_with_playwright(urls, schema=schema)

In the above code, we have created a function called “scrape_with_playwright” to load and transform web page data from any website or series of websites, using the output schema defined earlier to structure the content into title and summary fields, as shown in the code example.
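Since the chain returns structured records that match the schema, you may want to persist them for later analysis; a minimal sketch (assuming the extracted content is JSON-serializable, as it typically is):

import json

# save the structured title/summary records to disk
with open("extracted_content.json", "w") as f:
    json.dump(extracted_content, f, indent=2)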

Question Answering Over the Web

Now, developing a Q&A application for automating web research can be achieved via Google Search APIs and methods like ‘WebResearchRetriever’ from the LangChain framework. To begin the application development, we will first look at the application workflow diagram to understand the key components of such an LLM-based application.

WebResearchRetriever workflow

The diagram above illustrates the entire process, from a research question to web scraping and web content storage. It also shows how LLM APIs are called to generate comprehensive responses to user queries using the scraped web content as context. Such an application can reduce our reliance on traditional search methods. To build the application, we first need to install certain libraries in the project environment, as listed below.

# requirements.txt
langchain==0.2.5
langchain-chroma==0.1.1
langchain-community==0.2.5
langchain-core==0.2.9
langchain-openai==0.1.9
chromadb==0.5.3
openai==1.35.3 
html2text==2024.2.26
google-api-core==2.11.1
google-api-python-client==2.84.0
google-auth==2.27.0
google-auth-httplib2==0.1.1
googleapis-common-protos==1.63.1
tiktoken==0.7.0
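With these pins saved in a requirements.txt file, everything can be installed in one step:

!pip install -r requirements.txt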

Once the installation is complete, import the necessary libraries and set the OpenAI API key as well as the Google API keys.

# importing langchain tools
from langchain.retrievers.web_research import WebResearchRetriever
from langchain_chroma import Chroma
from langchain_community.utilities import GoogleSearchAPIWrapper
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# importing os and setting up 
import os
os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY"
os.environ["GOOGLE_CSE_ID"] = "YOUR_CSE_ID" 
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

For more details on how to set up the above-mentioned API keys, visit this link for further guidance. Next, we will initialize the vector store, LLM, and Google Search instances. Following this, we will set up the web retrieval tool to use the LLM for generating multiple queries. These queries will then be executed against the Google Search APIs to retrieve relevant web links. The retrieved web pages will be scraped and loaded into the vector store, which will serve as context for answer generation.

# Vectorstore storage
vectorstore = Chroma(
    embedding_function=OpenAIEmbeddings(), persist_directory="./chroma_db_oai"
)

# LLM instance
llm = ChatOpenAI(temperature=0)

# Search API instance
search = GoogleSearchAPIWrapper()

# Initialize the web retriever
web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore, llm=llm, search=search
)
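Before wiring up the full Q&A chain, you can sanity-check the retriever on its own; get_relevant_documents is the standard LangChain retriever interface:

# fetch documents for a test question directly from the retriever
docs = web_research_retriever.get_relevant_documents(
    "How do LLM Powered Autonomous Agents work?"
)
print(f"Retrieved {len(docs)} documents")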

Once the web retriever is set up, we just need to feed the user query into LangChain's Q&A retrieval chain to generate an answer from a vast array of web resources.

# Run the q&a retrieval chain
import logging

logging.basicConfig()
logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)
from langchain.chains import RetrievalQAWithSourcesChain

# take a user input and use the q&a chain for web retrieval
user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm, retriever=web_research_retriever
)

# print the answer and source links in your environment
result = qa_chain({"question": user_input})
print(result["answer"])
print(result["sources"])

From the code above, it's evident that by utilizing the user query alongside the retrieval chain, we can generate a comprehensive, summarized answer from a vast array of web pages.

Conclusion

In this article, we’ve explored the creation of a web automation application leveraging LangChain and Google Search APIs. We started with an introduction to the web automation workflow, outlining the steps involved in transforming raw web data into valuable information for a given user query. We then delved into the specifics of web loading and data transformation, essential for preparing the data for further processing.

Following this, we discussed how to perform scraping and extraction using LangChain, highlighting its capabilities in efficiently gathering and processing web data. Finally, we demonstrated how to implement a question-answering system over the web for research purposes. This system provides quick and comprehensive answers from web resources without the need to go through each one individually.

Key Takeaways

  • This article offers a hands-on guide to developing web automation applications, demonstrating practical use cases and the benefits of integrating AI-powered tools into search processes.
  • Understanding the complete workflow, from web loading and data transformation to scraping and question answering, is key to building a web automation application.
  • Leveraging LangChain and Google Search APIs significantly improves search efficiency by providing succinct, summarized answers along with links to relevant resources.

Frequently Asked Questions

Q1. How does search API work?

A. A Search API allows applications to retrieve search results from a search engine programmatically, enabling automated querying and data retrieval.

Q2. How can LangChain be used for a web automation application?

A. LangChain offers comprehensive tools and methods for loading, transforming, and storing web data in vector stores. Additionally, it includes functions to connect with LLM and Google Search APIs.

Q3. What is the use of web scraping in a web automation application?

A. When a user enters a query, the search API retrieves relevant links from web resources. The scraped content from these links is stored in the project, serving as context for answering user queries.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
