LangChain is a framework, available in Python and JavaScript, that provides tools for building applications on top of LLMs. Installation instructions are available in the official LangChain documentation. LangChain aims to solve common problems when working with LLMs, and its core modules are described below.
Model I/O
There are many LLMs for you to use (OpenAI, Hugging Face…), and LangChain provides a common interface so you can interact with different models without difficulty.

You can use LLMs from providers like OpenAI with an API key:
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAI
llm = OpenAI(openai_api_key="...")
chat_model = ChatOpenAI()
You can also interact directly with models running locally using Ollama, an open-source project for running LLMs without connecting to external service providers. We will cover Ollama in another article. Here is an example of using Ollama in LangChain:
from langchain_community.llms import Ollama
from langchain_community.chat_models import ChatOllama
llm = Ollama(model="llama2")
chat_model = ChatOllama()
With the llm and chat_model objects, you can start interacting with the model.
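For example, here is a minimal sketch of invoking both objects (assuming a local Ollama server is running with the llama2 model pulled):
from langchain_core.messages import HumanMessage

# A plain LLM takes a string and returns a string
print(llm.invoke("Tell me a joke"))

# A chat model takes a list of messages and returns an AI message
print(chat_model.invoke([HumanMessage(content="Tell me a joke")]).content)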
Prompts
LangChain provides prompt templates that help you structure the input for LLMs effectively. You can create dynamic prompts containing parameters that change depending on the use case.
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    template="Tell me a joke in {language}", input_variables=["language"]
)
print(prompt.format(language="spanish"))
'Tell me a joke in spanish'
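A prompt template can also be piped straight into a model to form a chain; here is a minimal sketch that reuses the chat_model created earlier:
chain = prompt | chat_model
print(chain.invoke({"language": "spanish"}).content)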
Output Parsers
The output of an LLM is text, but in many cases you want the result in another format such as JSON or CSV. Output parsers help you do this. Let’s look at the example below:
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import ChatPromptTemplate

# assuming a comma-separated list parser for this example
output_parser = CommaSeparatedListOutputParser()
template = "Generate a list of 5 {text}.\n\n{format_instructions}"
chat_prompt = ChatPromptTemplate.from_template(template)
chat_prompt = chat_prompt.partial(format_instructions=output_parser.get_format_instructions())
chain = chat_prompt | chat_model | output_parser
chain.invoke({"text": "colors"})
['red', 'blue', 'green', 'yellow', 'orange']
Retrieval
Typically, LLMs are limited by the data they were trained on, just as ChatGPT cannot answer questions about events that took place after 2021. In many cases, you also need the model to understand documents of your own. RAG (Retrieval-Augmented Generation) was created to solve these problems.

LangChain provides modules to help you build a complete RAG application, from embedding data to working with externally added documents.
Document Loaders
This is a module that supports loading documents from sources such as GitHub, S3… in many different formats such as .txt, .csv…:
from langchain_community.document_loaders import TextLoader
loader = TextLoader("./main.rb")
documents = loader.load()
[Document(page_content='puts("Hello LangChain!")\n', metadata={'source': './main.rb'})]
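Loaders for other formats work the same way; as a minimal sketch, here is a CSV loader (the ./data.csv path is just a hypothetical example file):
from langchain_community.document_loaders import CSVLoader

loader = CSVLoader("./data.csv")  # each row becomes one Document
documents = loader.load()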
Text Splitting
After the data is loaded, it is processed to extract meaningful information and then divided into small chunks before moving on to the next step.
from langchain.text_splitter import Language
from langchain.text_splitter import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.RUBY, chunk_size=2000, chunk_overlap=200
)
documents = splitter.split_documents(documents)
[Document(page_content='puts("Hello LangChain!")', metadata={'source': './main.rb'})]
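For plain text that is not tied to a programming language, you can use the same splitter without a language preset; a minimal sketch (the chunk sizes are just illustrative values):
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)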
Text Embedding Models
Next, the data is converted into vectors (embeddings), and you can also cache these embeddings for reuse.
from langchain_community.embeddings import OllamaEmbeddings
embeddings_model = OllamaEmbeddings()
embeddings = embeddings_model.embed_documents([doc.page_content for doc in documents])
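You can also embed a single query string, which is what retrievers do behind the scenes; a minimal sketch:
query_vector = embeddings_model.embed_query("What does main.rb print?")
print(len(query_vector))  # dimensionality of the embedding vector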
Vector Stores
After being converted to vectors, the data can be saved in a vector store. LangChain provides a module for this:
from langchain_community.vectorstores import Chroma
db = Chroma.from_documents(documents, OllamaEmbeddings())
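You can query the store directly with a similarity search before wrapping it in a retriever; a minimal sketch:
results = db.similarity_search("Hello LangChain", k=1)
print(results[0].page_content)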
Retrievers
Now you can query the data above through retrievers.
from langchain.chains import RetrievalQA
from langchain_community.llms import Ollama
retriever = db.as_retriever(search_kwargs={"k": 1})
llm = Ollama(model="codellama:7b")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
qa.invoke("What's inside this document?")["result"]
'The text "Hello LangChain!" is a string of characters written in the Ruby programming language. It is not clear what "LangChain" refers to or what context it may be used in, so I cannot provide a helpful answer without more information.'
Agents
For each user request, different steps may be needed to return the desired result. For example, a simple question only needs a plain text answer, but when the user wants tabular results or to export data to PDF, additional actions are required to produce the final result. LangChain agents help you solve this problem.

Each agent can be considered a collection of tools, and each tool includes the following main components:
- Name: the name of the tool
- Description: a brief description of the tool’s intended use
- JSON schema: describes the input the tool expects
- Function call: the function that is executed when the tool runs
Of these, the name, description, and JSON schema are the most important components, because they are the parts used in the prompts sent to the model.
Built-In Tools
LangChain provides many built-in tools for you to use. Here is an example:
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)
tool = WikipediaQueryRun(api_wrapper=api_wrapper)
print("Name:", tool.name)
print("Description:", tool.description)
print("JSON schema:", tool.args)
Tool information:
Name: wikipedia
Description: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.
JSON Schema: {'query': {'title': 'Query', 'type': 'string'}}
You can use the tool simply as follows:
tool.run({"query": "LangChain"})
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
By using tools, you can interact with external data and the internet, thereby maximizing the power of the model.
Defining Custom Tools
You are not limited to the built-in tools; LangChain also gives you a way to create your own tools for any purpose. To declare a tool, use the @tool decorator:
from langchain.tools import tool

@tool
def search(query: str) -> str:
    """Look up things online."""
    return "LangChain"

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
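Each tool can also be called on its own, which is handy for checking that it behaves correctly before handing it to an agent; a minimal sketch:
print(multiply.run({"a": 2, "b": 3}))      # 6
print(search.run({"query": "LangChain"}))  # 'LangChain'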
Then you need to create an agent from the above tools:
from langchain.agents import initialize_agent, AgentType
from langchain_community.llms import Ollama

tools = [search, multiply]
agent = initialize_agent(
    tools,
    Ollama(),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
You can then use:
agent.invoke("Multiply two numbers: 2 and 3")
> Entering new AgentExecutor chain...
Action: multiply(a: int, b: int) -> int
Action_input: {"a": 2, "b": 3}
> Finished chain.
{'input': 'Multiply two numbers: 2 and 3', 'output': 'Action: multiply(a: int, b: int) -> int\nAction_input: {"a": 2, "b": 3}\n'}
Try with another input:
agent.invoke("Go online and search")
> Entering new AgentExecutor chain...
Action: search
Action_input: {'query': {'title': 'Query', 'type': 'string'}}
> Finished chain.
{'input': 'Go online and search', 'output': "Action: search\nAction_input: {'query': {'title': 'Query', 'type': 'string'}}\n\n"}
The agent worked exactly as we expected.
Conclusion
LangChain is a very powerful framework that makes it easy to interact with and harness the power of LLMs. This article has given you a basic overview of LangChain. In the following articles we will use LangChain to solve specific problems, which promise to be very interesting.
Source: https://viblo.asia/p/langchain-la-gi-WR5JRBPQJGv