Implementing Agents in LangChain

A Guide to Enhancing AI with Strategic Decision-Making and Tool Integration

Photo by Julien Tromeur on Unsplash

Agents in LangChain

Agents in LangChain are systems that use a language model to interact with other tools.

They can be used for tasks such as grounded question answering, interacting with APIs, or taking actions. LangChain provides:

  • A standard interface for agents.
  • A selection of agents to choose from.
  • Examples of end-to-end agents.

Agents vs. Chains

The core idea of agents is to use an LLM to choose a sequence of actions.

In chains, a sequence of actions is hardcoded (in code). In agents, a language model is used as a reasoning engine to determine which actions to take and in which order. An agent uses a language model to interact with other tools or environments.

Agents involve a language model:

  • Making decisions about which actions to take.
  • Taking those actions.
  • Observing the results.
  • Repeating the process until a desired outcome is achieved.
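
This loop can be sketched in plain Python. The following is a toy illustration of the decide/act/observe cycle, not LangChain's actual implementation; the `decide` function is a hard-coded stand-in for the language model.

```python
# Toy sketch of the agent loop: a stand-in "reasoner" picks an action,
# the action runs, and the observation feeds back in until we can finish.
def run_agent(question, tools, decide, max_steps=5):
    observations = []
    for _ in range(max_steps):
        action, arg = decide(question, observations)  # decide on an action
        if action == "finish":
            return arg
        result = tools[action](arg)   # take the action
        observations.append(result)   # observe the result
    return "gave up"

# A hard-coded "reasoner" standing in for the LLM.
def decide(question, observations):
    if not observations:
        return ("lookup", "capital of France")
    return ("finish", f"The answer is {observations[-1]}.")

tools = {"lookup": lambda q: {"capital of France": "Paris"}[q]}
print(run_agent("What is the capital of France?", tools, decide))
# -> The answer is Paris.
```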

An agent is different from a chain in that a chain is a sequence of calls, whether to a language model or another utility.

A chain focuses on the flow of information and computation. In contrast, an agent focuses on decision-making and interaction with the environment.
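
The contrast can be shown with a toy example (plain Python, not the LangChain API): a chain applies a fixed sequence of steps, while an agent lets a model choose the step at runtime.

```python
# Chain: the sequence of steps is hardcoded.
def chain(text):
    text = text.strip()  # step 1, always runs
    text = text.lower()  # step 2, always runs
    return text

# Agent: a "model" (here a trivial rule) picks which step to run.
steps = {"strip": str.strip, "lower": str.lower}

def agent(text):
    # Stand-in for LLM reasoning: choose an action based on the input.
    chosen = "strip" if text != text.strip() else "lower"
    return steps[chosen](text)

print(chain("  Hello "))  # both steps always run -> "hello"
print(agent("  Hello "))  # the model chose "strip" -> "Hello"
```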

Agents can be used for applications such as personal assistants, question answering, chatbots, querying tabular data, interacting with APIs, extraction, summarization, and evaluation.

Agents use an LLM as a reasoning engine and connect it to two key components: tools and memory.




What are tools and toolkits?

In LangChain, tools and toolkits provide additional functionality and capabilities to agents.

Tools are individual components that perform specific tasks, such as retrieving information from external sources or processing data.

Conversely, toolkits are collections of tools designed to work together and provide a more comprehensive set of functionalities.

Why do agents even need tools?

An agent provided with the right tools becomes a powerful system that can execute and implement solutions on your behalf.

Combining an agent’s decision-making abilities with the functionality provided by tools allows it to perform a wide range of tasks effectively.

Here are a few reasons why an agent needs tools:

  • Access to external resources: Tools allow an agent to access and retrieve information from external sources, such as databases, APIs, or web scraping. This enables the agent to gather relevant data and use it for decision-making.
  • Data processing and manipulation: Tools provide the necessary functionality for an agent to process and manipulate data. This includes cleaning and transforming data, performing calculations, or applying machine learning algorithms.
  • Integration with other systems: Tools enable agents to integrate with other systems or services. For example, an agent may need to interact with a chatbot platform, a customer relationship management (CRM) system, or a knowledge base. Tools facilitate this integration and allow agents to exchange information with these systems.
  • Customization and extensibility: While LangChain provides built-in tools, it also allows users to define custom tools. This means an agent can be equipped with tools tailored to its unique requirements. Custom tools can be created to address specific tasks or to integrate with proprietary systems.
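
The pattern behind a tool is simple: a callable plus a name and a natural-language description the model reads when choosing among tools. Here is a minimal stand-in for that pattern (an illustrative sketch, not LangChain's actual `Tool` class):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleTool:
    name: str
    description: str              # what the LLM reads to decide when to use this tool
    func: Callable[[str], str]    # the actual work

    def run(self, query: str) -> str:
        return self.func(query)

# A hypothetical custom tool wrapping some task-specific logic:
word_count = SimpleTool(
    name="word_count",
    description="Counts the words in the input text.",
    func=lambda text: str(len(text.split())),
)

print(word_count.run("how many words is this"))  # -> 5
```

A toolkit, in this framing, is just a curated list of such tools that work well together.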

Why does an agent need memory?

An agent in LangChain needs memory to store and retrieve information during decision-making.

Memory allows an agent to maintain context and remember previous interactions, which is crucial for providing personalized and coherent responses.

Here are a few reasons why an agent needs memory:

  • Contextual understanding: Memory helps an agent understand the context of a conversation or interaction. By storing previous messages or user inputs, the agent can refer back to them and provide more accurate and relevant responses. This allows the agent to maintain a coherent conversation and understand the user’s intent.
  • Long-term knowledge: Memory enables an agent to accumulate knowledge over time. By storing information in memory, the agent can build a knowledge base and use it to answer questions or provide recommendations. This allows the agent to provide more informed and accurate responses based on past interactions.
  • Personalization: Memory allows an agent to personalize its responses based on the user’s preferences or history. By remembering previous interactions, the agent can tailor its responses to the specific needs or interests of the user. This enhances the user experience and makes the agent more effective in achieving its objectives.
  • Continuity: Memory ensures continuity in a conversation or interaction. The agent can pick up where it left off by storing the conversation history and maintaining a consistent dialogue with the user. This creates a more natural and engaging user experience.
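
A conversation buffer is the simplest form of agent memory: store each turn, then prepend the history to the next prompt. A toy sketch of the idea (LangChain's memory classes play a similar role, with more structure):

```python
class ConversationBuffer:
    """Toy memory: keeps the running dialogue so each new prompt has context."""
    def __init__(self):
        self.turns = []

    def save(self, user_input, ai_output):
        self.turns.append(("Human", user_input))
        self.turns.append(("AI", ai_output))

    def as_prompt_prefix(self):
        # History gets prepended to the next prompt, giving the model context.
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = ConversationBuffer()
memory.save("My name is Ada.", "Nice to meet you, Ada!")
# The next prompt carries the history, so the model can answer consistently:
print(memory.as_prompt_prefix() + "\nHuman: What's my name?")
```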

👩🏾‍💻 Let’s see it in code

Let’s get some preliminaries out of the way

%%capture
!pip install langchain openai duckduckgo-search youtube_search wikipedia

import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter Your OpenAI API Key:")

And now set up an LLM:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.1)

Giving the Agent Tools

You can either pass the tools as arguments when initializing the agent or initialize the desired tools individually. The tools can be selected from the native tools in LangChain, or you can define custom tools if needed.

You can find the available native tools in the LangChain source: the dictionary `_EXTRA_OPTIONAL_TOOLS` in `langchain.agents.load_tools` maps each tool's key to its loader.

tools = load_tools(["ddg-search", "llm-math", "wikipedia"], llm=llm)

You can inspect the tool name and description:

tools[0].name, tools[0].description

('duckduckgo_search',
 'A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.')

Initialize the agent

There are two ways you can instantiate the agent: AgentExecutor or initialize_agent.

AgentExecutor

The AgentExecutor class is responsible for executing the agent’s actions and managing the agent’s memory.

It takes an agent, a set of tools, and an optional memory object as input.

The AgentExecutor provides a more flexible and customizable way to run the agent, as you can specify the tools and memory to be used.

When to use AgentExecutor:

  • When you want more control over executing the agent’s actions and memory management.
  • When you want to specify the tools and memory to be used by the agent.

Initialize Agent

The initialize_agent function is a convenience function provided by LangChain that simplifies creating an agent.

It takes the agent class, the language model, and an optional list of tools as input.

It automatically initializes the agent with the specified language model and tools.

When to use initialize_agent:

  • When you want a simplified way to create an agent without specifying the memory.
  • When you want to create an agent with default settings, quickly.

If you need more customization and control over the agent’s execution, you should use AgentExecutor.

If you prefer a more straightforward and quicker way to create an agent, you can use initialize_agent.

We’re going the simple and quick route here.

agent = initialize_agent(tools,
                         llm,
                         agent="zero-shot-react-description",
                         verbose=True)

You can inspect the prompt for the agent:

print(agent.agent.llm_chain.prompt.template)
Answer the following questions as best you can. You have access to the following tools:

duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [duckduckgo_search, Calculator, Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
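
Under the hood, the executor parses the model's text to pull out the next action. A minimal parser for the Action/Action Input lines in the format above might look like this (an illustrative sketch, not LangChain's actual output parser):

```python
import re

def parse_react(text):
    """Extract (action, action_input) from a ReAct-formatted response,
    or ("finish", answer) if the model emitted a Final Answer."""
    final = re.search(r"Final Answer:\s*(.*)", text, re.DOTALL)
    if final:
        return ("finish", final.group(1).strip())
    action = re.search(r"Action:\s*(.*?)\n", text)
    action_input = re.search(r"Action Input:\s*(.*)", text)
    return (action.group(1).strip(), action_input.group(1).strip())

step = ('Thought: I should search.\n'
        'Action: duckduckgo_search\n'
        'Action Input: "Chief AI Scientist Meta AI"')
print(parse_react(step))
# -> ('duckduckgo_search', '"Chief AI Scientist Meta AI"')
```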
query = """
Who is the current Chief AI Scientist at Meta AI? When was he born?
What is his current age? What is the average life expectancy of people where he was born?
"""

agent.run(query)
> Entering new AgentExecutor chain...
 I need to find out who the Chief AI Scientist is, when he was born, and what the average life expectancy is in his country.

Action: duckduckgo_search
Action Input: "Chief AI Scientist Meta AI"
Observation: At the same panel, Yann LeCun, chief AI scientist at Facebook parent Meta, was asked about the current limitations of AI. He focused on generative AI trained on large language models,... The strength of the deep learning era of artificial intelligence has lead to something of a renaissance in corporate R&D in information technology, according to Yann LeCun, chief AI... Meta's chief A.I. scientist calls A.I. doomers 'preposterous' and predicts LLMs are just a passing fad BY Jeremy Kahn June 14, 2023, 12:17 PM PDT Meta Chief A.I. Scientist Yann LeCun... Prof LeCun now works as the chief AI scientist at Meta, the parent company of Facebook, Instagram and WhatsApp. He disagrees with his fellow godfathers that AI is a risk to the human race. One of the 'Godfathers of AI' and Chief AI Scientist at Meta AI (FAIR), Yann LeCun joined us for a special live talk and fireside chat with our Executive Director, Usama Fayyad at...
Thought: Yann LeCun is the Chief AI Scientist at Meta AI.

Action: Wikipedia
Action Input: "Yann LeCun"
Observation: Page: Yann LeCun
Summary: Yann André LeCun ( lə-KUN, French: [ləkœ̃]; originally spelled Le Cun; born 8 July 1960) is a  Turing Award winning French computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta.He is well known for his work on optical character recognition and computer vision using convolutional neural networks (CNN), and is a founding father of convolutional nets. He is also one of the main creators of the DjVu image compression technology (together with Léon Bottou and Patrick Haffner). He co-developed the Lush programming language with Léon Bottou.
LeCun received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Geoffrey Hinton, for their work on deep learning.
The three are sometimes referred to as the "Godfathers of AI" and "Godfathers of Deep Learning".

Page: LeNet
Summary: LeNet is a convolutional neural network structure proposed by LeCun et al. in 1998,. In general, LeNet refers to LeNet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing.



Page: Geoffrey Hinton
Summary: Geoffrey Everest Hinton  (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time working for Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023 citing concerns about the risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach. Hinton is viewed as a leading figure in the deep learning community. The dramatic image-recognition milestone of the AlexNet designed in collaboration with his students Alex Krizhevsky and Ilya Sutskever for the ImageNet challenge 2012 was a breakthrough in the field of computer vision.Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of Deep Learning", and have continued to give public talks together.In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I." He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence.


Thought: Yann LeCun was born on July 8, 1960 and is currently 63 years old.

Action: duckduckgo_search
Action Input: "Average life expectancy in France"
Observation: Chart and table of France life expectancy from 1950 to 2023. United Nations projections are also included through the year 2100. The current life expectancy for France in 2023 is 83.13 years, a 0.16% increase from 2022. The life expectancy for France in 2022 was 83.00 years, a 0.16% increase from 2021. The life expectancy for France in 2021 was ... Comparison of average female and male life expectancy as defined by WHO for 2019. Open the original chart in a separate tab and hover over a bubble to highlight it. The squares of bubbles are proportional to country population according to estimation of the UN for 2019. ... France: 82.7: 79.7: 85.5: 5.8: Life expectancy at birth, total (years) - France | Data In 2021, the total life expectancy at birth in France remained nearly unchanged at around 82.32 years. These figures refer to the expected lifespan of the average newborn in a given country... Demographics Distribution of the population in France in 2023, by age group
Thought: The average life expectancy in France is 82.7 years.

Final Answer: Yann LeCun is the current Chief AI Scientist at Meta AI. He was born on July 8, 1960 and is currently 63 years old. The average life expectancy in France, where he was born, is 82.7 years.

> Finished chain.
Yann LeCun is the current Chief AI Scientist at Meta AI. He was born on July 8, 1960 and is currently 63 years old. The average life expectancy in France, where he was born, is 82.7 years.

Conclusion

As the digital and AI landscapes evolve, the role of agents within LangChain takes on new significance.

Agents, powered by language models, act as dynamic systems interfacing with various tools, performing tasks ranging from grounded question answering to API interactions. This blog post has delved into the architecture of these agents, contrasting them with chains and highlighting their decision-making prowess. Equipped with tools and memory, agents within LangChain offer a sophisticated way to handle complex tasks, pushing the boundaries of what’s possible with AI.

They stand as a testament to the future of automated systems, promising smarter, more intuitive interactions within digital environments.

Harpreet Sahota
