LangChain

Basic

The code below demonstrates how to build a simple chatbot using the LangChain library and the ChatAnthropic model. The chatbot gives expert advice on a user-supplied topic and question.

from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic

# Initialize the Anthropic chat model
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

# Prompt with two template variables: {topic} and {question}
prompt_template = ChatPromptTemplate.from_messages([
    ('system', "You are an expert in {topic}."),
    ('user', "What are the steps to solve the following problem? {question}")
])

# Compose the prompt and model into a single runnable chain
llm = prompt_template | model
response = llm.invoke({"topic": "time management", "question": "How can I improve my productivity?"})
print(response.content)

Response

As an expert in time management, I'd suggest the following steps to improve your productivity:

1. Assess your current situation
2. Set clear goals
3. Prioritize tasks
4. Create a schedule

Code explanation:

  • Input

    • topic - the subject area the model should treat as its domain of expertise.

    • question - the specific question to be answered. Both values are plain template variables, so the chain works for any domain (see the sketch below).

  • Output

    • generated response - The output will be a text response containing steps or advice related to the specified topic and question.
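
Because topic and question are ordinary template variables, the same chain can be reused without changes. A minimal usage sketch (the topic and question values here are illustrative, not part of the original example):

# Reuse the chain with a different domain; only the inputs change
response = llm.invoke({
    "topic": "personal finance",
    "question": "How should I start building an emergency fund?"
})
print(response.content)  # expert-style advice for the new topic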

Basic - Chain of Thought

This Chain of Thought setup is a two-stage process: a first LLM call generates a series of reasoning steps for a question on a given topic, and a second call derives a logical conclusion from those steps.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

# Stage 1: generate a series of reasoning steps for the question
prompt_template_1 = ChatPromptTemplate.from_messages([
    ('system', "You are an expert in {topic}."),
    ('user', "What are the steps to solve the following problem? {question}")
])

# StrOutputParser turns the model's message into a plain string, so the
# steps can be interpolated cleanly into the second prompt
llm_1 = prompt_template_1 | model | StrOutputParser()

# Stage 2: derive a conclusion from the generated steps
prompt_template_2 = ChatPromptTemplate.from_messages([
    ('system', "Provide a concise answer to the following question."),
    ('user', "For the steps given, {steps}, what is the most logical conclusion?")
])

# The dict routes stage 1's output into the {steps} variable of stage 2
llm_2 = {"steps": llm_1} | prompt_template_2 | model

response = llm_2.invoke({"topic": "time management", "question": "How can I improve my productivity?"})

print(response.content)

Response

Based on the steps provided, the most logical conclusion is:

Improving productivity is a comprehensive process that requires self-awareness, planning, discipline, and continuous effort...

Code explanation:

  • LLM 1

    • Input:

      • topic - A string representing the subject matter or domain of the question.

      • question - A string containing the specific question to be answered.

    • Output:

      • steps - A string containing a series of reasoning steps relevant to answering the question, given the provided topic. Each step represents a logical inference or piece of information that contributes to reaching the final conclusion.

  • LLM 2

    • Input:

      • steps - The string output from LLM 1, which contains the series of reasoning steps.

    • Output:

      • logical conclusion - A string representing the final answer or conclusion to the original question, derived by analyzing and synthesizing the reasoning steps from LLM 1.

The process flow is as follows:

  • The topic and question are provided as input to LLM 1.

  • LLM 1 generates a series of reasoning steps based on the topic and question.

  • The reasoning steps from LLM 1 are passed as input to LLM 2.

  • LLM 2 analyzes the reasoning steps and draws a logical conclusion, which serves as the final answer to the original question.
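
To make this flow visible, the intermediate reasoning can be kept alongside the final answer using RunnablePassthrough.assign. This is a minimal sketch rather than part of the original example; it reuses llm_1, prompt_template_2, and model from above, and the name chain_with_steps is illustrative:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Attach LLM 1's reasoning steps to the input dict, then attach LLM 2's
# conclusion computed from those steps; both remain in the output
chain_with_steps = RunnablePassthrough.assign(
    steps=llm_1
) | RunnablePassthrough.assign(
    conclusion=prompt_template_2 | model | StrOutputParser()
)

result = chain_with_steps.invoke({"topic": "time management", "question": "How can I improve my productivity?"})
print(result["steps"])       # intermediate reasoning from LLM 1
print(result["conclusion"])  # final answer from LLM 2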

Jupyter notebook code example
