Natural Language Processing from Beginner to Application — LangChain: Indexes [Fundamentals]

An index is a way of structuring documents so that an LLM can interact with them more effectively. Within chains, indexes are most often used in the "retrieval" step, which means returning the documents most relevant to a user's query:

  • An index can be used for purposes other than retrieval
  • Retrieval can use logic other than an index to find relevant documents

We therefore have the notion of a Retriever interface, which is the interface most chains work against. When we talk about indexing and retrieval, we generally mean indexing and retrieving unstructured data such as text documents. For interacting with structured data (e.g., SQL tables) or with APIs, please see the corresponding use-case sections for links to the relevant functionality.

LangChain focuses primarily on building indexes, with the goal of using them as retrievers. To understand what that means, it is worth looking at the base retriever interface itself. LangChain's BaseRetriever class is as follows:

from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document
 
class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.
 
        Args:
            query: string to find relevant texts for
 
        Returns:
            List of relevant documents
        """

The get_relevant_documents method above can be implemented in whatever way we see fit (a toy example is sketched after the install command below). Of course, LangChain also provides retrievers that it considers broadly useful. The main type we focus on is the vector-store retriever, and it is what the rest of this article concentrates on. To understand what a vector-store retriever is, it is important to first understand what a vector store is. By default, LangChain uses Chroma as the vector store to index and search embeddings. To run the code below, we first need to install chromadb:

pip install chromadb
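Before moving on, here is a minimal sketch of what a custom retriever could look like, simply subclassing the BaseRetriever interface defined in the snippet above. The KeywordRetriever name and its naive substring matching are purely illustrative assumptions, not part of LangChain:

from typing import List
from langchain.schema import Document

class KeywordRetriever(BaseRetriever):
    """Toy retriever (hypothetical): naive substring matching over an in-memory list of documents."""

    def __init__(self, docs: List[Document]):
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        # Return every document whose text contains the query string.
        return [d for d in self.docs if query.lower() in d.page_content.lower()]

docs = [Document(page_content="LangChain helps you build applications with LLMs.")]
print(KeywordRetriever(docs).get_relevant_documents("langchain"))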

The following example walks through question answering over documents. We chose it as the opening example because it nicely combines many different elements (text splitters, embeddings, vector stores, and so on) and also shows how to use them in a chain. Question answering over documents involves four steps:

  1. Create an index
  2. Create a retriever from that index
  3. Create a question-answering chain
  4. Ask questions

Each step has several sub-steps and possible configurations. In this article we focus mainly on creating the index. We will first show the one-liner that does it, and then break down what is actually happening. To begin, let's import some common classes that we will need in any case:

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

Next, as part of the common setup, let's specify the document loader we want to use. The state_of_the_union.txt file can be downloaded from GitHub:

from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')

Creating the Index

To get started as quickly as possible, we can use the VectorstoreIndexCreator:

from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])

Log output:

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now that the index has been created, we can use it to ask questions about the data. Note that under the hood this is itself performing several steps, which we walk through later in this article.

query = "What did the president say about Ketanji Brown Jackson"
index.query(query)

Output:

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Input:

query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)

Output:

{'question': 'What did the president say about Ketanji Brown Jackson',
 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
 'sources': '../state_of_the_union.txt'}

What VectorstoreIndexCreator returns is a VectorStoreIndexWrapper, which provides these convenient query and query_with_sources methods. If we just want to access the vector store directly, we can do that too:

index.vectorstore

Output:

<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
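With the vector store in hand, we can also query it directly, for example with a similarity search. A small sketch (the query text and k value here are just for illustration):

# Fetch the 2 chunks most similar to the query straight from the Chroma store.
docs = index.vectorstore.similarity_search(
    "What did the president say about Ketanji Brown Jackson", k=2
)
print(docs[0].page_content[:200])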

If we want to access the VectorStoreRetriever, we can use:

index.vectorstore.as_retriever()

Output:

VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
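That retriever exposes the get_relevant_documents method from the interface shown earlier, so it can also be used on its own. A minimal sketch:

retriever = index.vectorstore.as_retriever()
# Returns the Document chunks most relevant to the query.
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)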

Walkthrough

After loading the documents, VectorstoreIndexCreator performs three main steps:

  • Split the documents into chunks
  • Create an embedding for each chunk
  • Store the chunks and their embeddings in a vector store

Let's walk through this in code. First we load the documents:

documents = loader.load()

Next, we split the documents into chunks:

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
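It can be useful to sanity-check what the splitter produced, for example:

# How many chunks did we get, and roughly how long is the first one?
print(len(texts), len(texts[0].page_content))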

Then we choose the embeddings we want to use:

from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
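To get a feel for what an embedding is, we can embed a single string; it comes back as one vector of floats. A small sketch (the exact dimensionality depends on the embedding model, e.g. 1536 for OpenAI's default ada-002 embeddings):

# Embed one query string and inspect the vector's dimensionality.
vector = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")
print(len(vector))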

Now we create the vector store that will serve as our index:

from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)

Log output:

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

That is all it takes to create the index. We then expose the index through a retriever interface:

retriever = db.as_retriever()
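The retriever can also be configured when it is created; for example, we could control how many chunks are fetched per query. A sketch, assuming the default similarity search:

# Retrieve the 4 most similar chunks for each query.
retriever = db.as_retriever(search_kwargs={"k": 4})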

Then, as before, we create a chain and use it to answer questions:

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

Output:

" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."

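If we also want to see which chunks the answer was based on, RetrievalQA can be asked to return the source documents as well. Roughly like this (a sketch):

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,  # also return the retrieved chunks
)
result = qa({"query": query})
print(result["result"])
print(result["source_documents"][0].page_content[:200])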
VectorstoreIndexCreator is simply a wrapper around all of this logic. It can be configured with the text splitter, embeddings, and vector store it should use. For example, we can configure it as follows:

index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma, 
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
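The configured creator is then used exactly as before, for example:

# Build the index with the customized splitter, embeddings, and vector store.
index = index_creator.from_loaders([loader])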
