RAG in Practice with LLMs (Part 24) | LlamaIndex Advanced Retrieval (3): Sentence Window Retrieval

This is the third article in this series on advanced retrieval techniques. The previous two covered building a basic RAG pipeline and parent document retrieval. In this article we take a deep dive into sentence window retrieval: I will show how to set it up, measure its performance with TruLens, and compare it against the techniques covered in the earlier articles.

1. Introduction to Sentence Window Retrieval

In sentence window retrieval, we retrieve over small pieces of a document, then return the retrieved relevant sentence together with several sentences surrounding it; the LLM then generates its response from that relevant sentence plus the window of sentences above and below it. As shown below:

[Figure 1: sentence window retrieval]

In the image above, the relevant sentence is shown in red; that sentence, together with the window of sentences above and below it, is passed to the LLM to generate its response (the generation part of RAG). We can control the size of the sentence window around the relevant sentence. So why would we want to do this?

Embedding-based retrieval works best on smaller pieces of text, such as individual sentences. So with sentence-based retrieval we essentially decouple the small chunks used for searching for relevant content from the final text passed to the LLM for synthesis. Let's implement a sentence window retriever.
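Before turning to LlamaIndex, here is a toy, pure-Python sketch of that decoupling. This is illustration only, not LlamaIndex code, and the word-overlap score function is a crude stand-in for embedding similarity:

from collections import Counter

sentences = [
    "I love programming.",
    "Python is my most favorite language.",
    "I love LLMs.",
    "I love LlamaIndex.",
]

def score(query: str, sentence: str) -> float:
    # crude stand-in for embedding similarity: count overlapping words
    q, s = Counter(query.lower().split()), Counter(sentence.lower().split())
    return sum((q & s).values())

def retrieve_with_window(query: str, window_size: int = 1) -> str:
    # search against the SMALL unit: a single sentence
    best = max(range(len(sentences)), key=lambda i: score(query, sentences[i]))
    # but return the LARGER window around the hit for the LLM to synthesize from
    lo, hi = max(0, best - window_size), min(len(sentences), best + window_size + 1)
    return " ".join(sentences[lo:hi])

print(retrieve_with_window("most favorite language", window_size=1))
# prints the matched sentence plus one sentence on either side

Search happens against the single best-matching sentence, but the text handed onward for synthesis is the wider window around it; that is the whole trick.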

2. Loading the Documents

The first step is to load the documents. We will again use the State of the Union speech data used in previous articles of this series. Here is the code to load the documents:

from llama_index import (
    SimpleDirectoryReader,
)

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

print(len(documents))

Run this code and take note of the output.

From the output we can see that we have only a single page or document, since the length of documents is 1. If you are working with a document such as a PDF file with multiple pages, merging all the pages into one document helps split the document accurately into chunks, or what LlamaIndex calls "nodes".

Here is how to merge multiple documents (pages) into a single document:

from llama_index import (
    SimpleDirectoryReader,
    Document
)

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))

print(document)

In our case this step isn't strictly necessary, but it's worth knowing for anyone working with multi-page PDF documents.

3. Setting Up the Sentence Window Retriever

First, consider how to set up a SentenceWindowNodeParser, which breaks a document into individual sentences and then augments each sentence with the surrounding sentences within the window size, creating a larger context. This can be hard to picture, so let me walk through an example:

from llama_index import Document
from llama_index.node_parser import SentenceWindowNodeParser

# create the sentence window node parser
node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=2,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

# Toy example to play around with
text = "I love programming. Python is my most favorite language. I love LLMs. I love LlamaIndex."

# Get nodes
nodes = node_parser.get_nodes_from_documents([Document(text=text)])

# Print out individual nodes
print([x.text for x in nodes])

# Print out the window around the second node
print(nodes[1].metadata["window"])

Here is the code output in a Jupyter notebook:

[Figure 2: Jupyter notebook output of the toy example]

You can see the window around the original sentence ("Python is my most favorite language."): with window_size=2, up to two sentences on either side of it are included, bounded by the start and end of the text. The following explanation comes from the official LlamaIndex documentation:

By default, the sentence window is 5 sentences on either side of the original sentence.

In this case, chunk size settings are not used, in favor of following the window settings.

4. Building the Index

Let's move on to building the index. We need two things: first an LLM, for which we will use OpenAI's gpt-3.5-turbo, and second a service context to specify the embedding model, the LLM, and the node parser (the sentence window parser we created above).

For the embedding model I will use the OpenAIEmbedding model provided in LlamaIndex, but you can use any other embedding model you prefer.

# creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
embed_model = OpenAIEmbedding()

# creating the service context
sentence_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
    node_parser=node_parser,
)

Because we pass the node_parser in as a SentenceWindowNodeParser, it does the work behind the scenes: it takes each sentence, attaches the surrounding sentences to it as its window, creates an embedding, and stores it in the vector store. Look at the image below for one example: the red text is the original sentence, and the white text around it is the augmenting context. This is repeated for every sentence, each with a different window.

[Figure 3: an original sentence (red) with its augmenting window (white)]

We also need to set up a vector store index and persist it, meaning the created embeddings are stored in the vector store to avoid the duplication and cost of creating new embeddings every time the application runs. To do this, we check whether a stored index already exists on disk; if not, we create one, otherwise we load the existing index.

import os

from llama_index import (
    SimpleDirectoryReader,
    Document,
    StorageContext,
    load_index_from_storage
)
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding
from llama_index import ServiceContext
from llama_index import VectorStoreIndex
from decouple import config

# set env variables
os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))

node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

# creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
embed_model = OpenAIEmbedding()

# creating the service context
sentence_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
    node_parser=node_parser,
)

if not os.path.exists("./storage"):
    # creating the vector store index
    index = VectorStoreIndex.from_documents(
        [document], service_context=sentence_context
    )

    # make vector store persistent
    index.storage_context.persist(persist_dir="./storage")
else:
    # load vector store index if it exists
    index = load_index_from_storage(
        StorageContext.from_defaults(persist_dir="./storage"),
        service_context=sentence_context
    )

Run this code and make sure it works without errors. It will create a new folder, named storage, in the project directory where your Python file lives.

5. Creating the Metadata Replacement Post-Processor

The MetadataReplacementPostProcessor kicks in after we have retrieved the relevant chunks: it replaces the text of each retrieved node with the actual surrounding text that falls within the sentence window, which is stored in the node's metadata. In effect, the metadata replacement post-processor produces a result like this:

[Figure 4: the retrieved sentence (red) replaced by its full window (white)]

The red text is the relevant, retrieved text; the white text is the surrounding context that the metadata replacement post-processor has put in place. To clarify things further, take a look at the following code:

[Figure 5: notebook cells 44-46]

You can see from the image above that in cell 44 we retrieved the original sentence "Python is my most favorite language.", and in cells 45 and 46 we applied the metadata post-processor and saw how the surrounding sentences get spliced in around the original one.
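Since that notebook output is only shown as an image, here is a sketch of roughly what cells 44-46 do, reusing the nodes from the toy SentenceWindowNodeParser example in section 3 (the exact notebook code may differ slightly):

from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.schema import NodeWithScore

postproc = MetadataReplacementPostProcessor(target_metadata_key="window")

# wrap the second toy node as if it had just been retrieved
scored = [NodeWithScore(node=nodes[1], score=1.0)]
print(scored[0].node.text)    # "Python is my most favorite language."

# the post-processor swaps the node text for the window stored in its metadata
replaced = postproc.postprocess_nodes(scored)
print(replaced[0].node.text)  # the original sentence plus its surrounding window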

6. Adding a Reranker

As the name suggests, what a reranker does is reorder retrieved sentences according to their relevance. We will use BAAI/bge-reranker-base to perform the reranking; this model is available on Huggingface.

So why do we need reranking? Take a look at this image:

[Figure 6: reranking example, notebook cell 48]

The image of cell 48 isn't very clear, so here is the code from that cell:

from llama_index import QueryBundle
from llama_index.schema import TextNode, NodeWithScore

query = QueryBundle("I love Python programming")

scored_nodes = [
    NodeWithScore(node=TextNode(text="Programming can be boring, bugs all day"), score=0.6),
    NodeWithScore(node=TextNode(text="Python is my most favorite programming language"), score=0.4),
]

You can see the original query is "I love Python programming", and we have two scored nodes with manually assigned scores of 0.6 and 0.4. By human judgment, the second sentence is clearly more relevant to the user's query, yet the first was manually given the higher score. Reranking lets the model fix this by recomputing the ranking scores. Based on your own judgment, which sentence do you think will rank higher after reranking against the query? The second one, right?

Yes, the second sentence is more relevant than the first, and therefore should receive the higher score. You can see that this is exactly what the reranking model does (refer to the Jupyter notebook image above).

In general, we use reranking to match the query against the candidate nodes and surface the most relevant ones, as sketched below.
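Here is a sketch of applying the reranker to the toy example, continuing from the scored_nodes and query defined above; this mirrors what the notebook screenshot shows:

from llama_index.indices.postprocessor import SentenceTransformerRerank

rerank = SentenceTransformerRerank(top_n=2, model="BAAI/bge-reranker-base")

# rerank the two manually scored nodes against the query
reranked_nodes = rerank.postprocess_nodes(scored_nodes, query_bundle=query)

for node in reranked_nodes:
    print(node.score, "->", node.node.text)
# the Python sentence should now come out on top with the higher score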

Here is the code that adds the metadata replacement post-processor and the reranker to the pipeline:

import os

from llama_index import (
    SimpleDirectoryReader,
    Document,
    StorageContext,
    load_index_from_storage
)
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding
from llama_index import ServiceContext
from llama_index import VectorStoreIndex
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.indices.postprocessor import SentenceTransformerRerank
from decouple import config

# set env variables
os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))

node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

# creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
embed_model = OpenAIEmbedding()

# creating the service context
sentence_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
    node_parser=node_parser,
)

if not os.path.exists("./storage"):
    # creating the vector store index
    index = VectorStoreIndex.from_documents(
        [document], service_context=sentence_context
    )

    # make vector store persistent
    index.storage_context.persist(persist_dir="./storage")
else:
    # load vector store index if it exists
    index = load_index_from_storage(
        StorageContext.from_defaults(persist_dir="./storage"),
        service_context=sentence_context
    )

# add metadata replacement post-processor
postproc = MetadataReplacementPostProcessor(
    target_metadata_key="window"
)

# link: https://huggingface.co/BAAI/bge-reranker-base
rerank = SentenceTransformerRerank(
    top_n=2, model="BAAI/bge-reranker-base"
)

Running this code may fail with errors about missing libraries. Make sure to install all the missing libraries.

[Figure 7: missing-library error output]

As the image shows, we need to run pip install torch sentence-transformers. Once installed, run the code again; this time some model files will be downloaded automatically, which may take a while depending on your network speed.

Once the download completes, we can add the query engine and test it out. Here is the final code:

import os

from llama_index import (
    SimpleDirectoryReader,
    Document,
    StorageContext,
    load_index_from_storage
)
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding
from llama_index import ServiceContext
from llama_index import VectorStoreIndex
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.indices.postprocessor import SentenceTransformerRerank
from decouple import config

# set env variables
os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))

node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

# creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
embed_model = OpenAIEmbedding()

# creating the service context
sentence_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
    node_parser=node_parser,
)

if not os.path.exists("./storage"):
    # creating the vector store index
    index = VectorStoreIndex.from_documents(
        [document], service_context=sentence_context
    )

    # make vector store persistent
    index.storage_context.persist(persist_dir="./storage")
else:
    # load vector store index if it exists
    index = load_index_from_storage(
        StorageContext.from_defaults(persist_dir="./storage"),
        service_context=sentence_context
    )

# add metadata replacement post-processor
postproc = MetadataReplacementPostProcessor(
    target_metadata_key="window"
)

# link: https://huggingface.co/BAAI/bge-reranker-base
rerank = SentenceTransformerRerank(
    top_n=2, model="BAAI/bge-reranker-base"
)

# query engine
sentence_window_engine = index.as_query_engine(
    similarity_top_k=5, node_postprocessors=[postproc, rerank]
)

# test it out
response = sentence_window_engine.query(
    "What did the president say about covid-19?"
)

print(response)

You can now build a sentence window retriever, an advanced RAG technique. Let's move on to evaluation: what is the best sentence window size to use? How does it affect relevance and groundedness? How does the window size affect cost? And how does sentence window retrieval perform relative to the basic RAG pipeline and parent document retrieval? Let's start finding answers to these questions.

7. RAG Evaluation

In the evaluation phase, there are several questions we want to answer:

  1. What is the optimal sentence window size?

  2. What is the trade-off between sentence window size and the groundedness of responses (hallucination)?

  3. How does sentence window size relate to response relevance?

  4. How does context relevance relate to groundedness?

  5. How does cost relate to sentence window size?

7.1 The trade-off between sentence window size and groundedness of responses (hallucination)

As the sentence window grows, groundedness tends to increase as well, because the LLM has more retrieved context to base its response on, rather than falling back on hallucination or its training data. So why do I say window size and groundedness rise together? Let me explain.

When the sentence window is small, the responses the LLM generates will have low groundedness, because the context doesn't give the LLM enough information, so it starts drawing on existing knowledge from its training data, which is what we call hallucination.

Conversely, if the window size is too large, the LLM is handed so much information to ground its final response in that it ends up drifting away from the provided context: there is simply too much of it to work into a single response.

Take a look at this chart. It is only a sketch of what I explained above; it is not based on any data.

[Figure 8: sketch of groundedness vs. sentence window size]

7.2 The relationship between sentence window size and response relevance

As the sentence window size increases, the relevance of the generated responses also tends to increase. Why?

Does more context mean more relevant answers? With too much context, the LLM may or may not get distracted, fall back on its own training data, and start hallucinating. With too little context, the LLM also starts hallucinating; relevance drops, and groundedness drops with it. In some cases relevance can stay high while groundedness falls: perhaps the training data happens to contain information that answers the user's specific question, but only perhaps.

An increase in relevance also tends to mean an increase in groundedness, up to a point, after which relevance starts to flatten out or decline as the context window (sentence window) keeps growing.

[Figure 9: sketch of response relevance vs. sentence window size]

7.3 The relationship between cost and sentence window size

As the sentence window size increases, the price increases too, because more and more tokens are consumed making the request and receiving the response. The larger the window, the more tokens, and the higher the cost.

[Figure 10: sketch of cost vs. sentence window size]
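As a back-of-the-envelope illustration of that linear relationship (the per-token price below is a placeholder, not actual OpenAI pricing):

PRICE_PER_1K_INPUT_TOKENS = 0.001  # placeholder rate, not real pricing

def estimated_context_cost(tokens_per_sentence: int, window_size: int, top_k: int) -> float:
    # each retrieved hit expands to (2 * window_size + 1) sentences of context
    context_tokens = tokens_per_sentence * (2 * window_size + 1) * top_k
    return context_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

for w in (1, 3, 6):
    print(f"window_size={w}: ~${estimated_context_cost(20, w, top_k=5):.5f} per query")

Doubling the window roughly doubles the context tokens sent per retrieved hit, so cost scales linearly with window size for a fixed top_k.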

Let's put this to the test. To do that, I'll convert the code we have so far into a set of functions that we can call with different parameters for testing and tuning. We'll turn the code into two main functions: one to create the index and one to create the query engine. Here is the code after this refactor:

import os

from llama_index import (
    SimpleDirectoryReader,
    Document,
    StorageContext,
    load_index_from_storage
)
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding
from llama_index import ServiceContext
from llama_index import VectorStoreIndex
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.indices.postprocessor import SentenceTransformerRerank
from decouple import config

# set env variables
os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))


def create_indexes(
    documents: Document,
    index_save_dir: str,
    window_size: int = 4,
    llm_model: str = "gpt-3.5-turbo",
    temperature: float = 0.1
):
    node_parser = SentenceWindowNodeParser.from_defaults(
        window_size=window_size,
        window_metadata_key="window",
        original_text_metadata_key="original_text",
    )

    # creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
    llm = OpenAI(model=llm_model, temperature=temperature)
    embed_model = OpenAIEmbedding()

    # creating the service context
    sentence_context = ServiceContext.from_defaults(
        llm=llm,
        embed_model=embed_model,
        node_parser=node_parser,
    )

    if not os.path.exists(index_save_dir):
        # creating the vector store index
        index = VectorStoreIndex.from_documents(
            [document], service_context=sentence_context
        )

        # make vector store persistent
        index.storage_context.persist(persist_dir=index_save_dir)
    else:
        # load vector store index if it exists
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_save_dir),
            service_context=sentence_context
        )

    return index


def create_query_engine(
    sentence_index: VectorStoreIndex,
    similarity_top_k: int = 6,
    rerank_top_n: int = 5,
    rerank_model: str = "BAAI/bge-reranker-base",
):
    # add metadata replacement post-processor
    postproc = MetadataReplacementPostProcessor(
        target_metadata_key="window"
    )

    # link: https://huggingface.co/BAAI/bge-reranker-base
    rerank = SentenceTransformerRerank(
        top_n=rerank_top_n,
        model=rerank_model
    )

    sentence_window_engine = sentence_index.as_query_engine(
        similarity_top_k=similarity_top_k,
        node_postprocessors=[postproc, rerank]
    )

    return sentence_window_engine


# create index
index = create_indexes(
    documents=documents,
    index_save_dir="./storage",
    window_size=3,
    llm_model="gpt-3.5-turbo",
    temperature=0.1
)

# create query engine
sentence_window_engine = create_query_engine(
    sentence_index=index,
    similarity_top_k=5,
    rerank_top_n=2,
)

response = sentence_window_engine.query(
    "What did the president say about covid-19?"
)

print(response)

Now that we have this, let's move on to the evaluation. The first thing we need is a set of questions; we can use the following list:

  1. What measures did the speaker announce to support Ukraine in the conflict mentioned?

  2. How does the speaker propose to address the challenges faced by the United States in the face of global conflicts, specifically mentioning Russia’s actions?

  3. What is the speaker’s plan to combat inflation and its impact on American families?

  4. How does the speaker suggest the United States will support the Ukrainian people beyond just military assistance?

  5. What is the significance of the speaker’s reference to the NATO alliance in the context of recent global events?

  6. Can you detail the economic sanctions mentioned by the speaker that are being enforced against Russia?

  7. What actions have been taken by the U.S. Department of Justice in response to the crimes of Russian oligarchs as mentioned in the speech?

  8. How does the speaker describe the American response to COVID-19 and the current state of the pandemic in the country?

  9. What are the four common-sense steps the speaker mentions for moving forward safely in the context of COVID-19?

  10. How does the speaker address the economic issues such as job creation, infrastructure, and the manufacturing sector in the United States?

Copy and paste these questions into a text file named eval_questions.txt (or write the file from a small Python snippet, shown after the screenshot below).

[Figure 11: the eval_questions.txt file]
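If you prefer, a small script can write the file instead of copy-pasting; the list below is abbreviated, so fill in all ten questions from the list above:

eval_questions = [
    "What measures did the speaker announce to support Ukraine in the conflict mentioned?",
    # ... the remaining nine questions from the list above ...
]

# one question per line, matching how the evaluation code reads the file
with open("./eval_questions.txt", "w") as f:
    f.write("\n".join(eval_questions))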

We will read these questions and pass them to TruLens in a for loop to run the evaluation. If you have been following this series for a while, go into the ParentDocumentRetrieval folder, copy the default.sqlite database, and move it into the SentenceWindowRetrieval folder (a copy snippet is sketched below). This database holds the records for all the techniques we have evaluated so far, which lets us track experiments across them.

[Figure 12: copying the default.sqlite database between project folders]
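For reference, the copy step can also be done from Python; the paths assume the folder layout described above, so adjust them to your own setup:

import shutil

# copy the TruLens results database from the previous article's folder
shutil.copy(
    "../ParentDocumentRetrieval/default.sqlite",
    "./default.sqlite",  # into the SentenceWindowRetrieval folder
)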

If you haven't been following along, skip the database copying step. You can also download the database from [1]. Once downloaded, you can point to it like this:

tru = Tru(database_file="/default.sqlite")

7.4 Setting up TruLens

Let's set up TruLens for the evaluation. Ideally you would do this in a separate file, which is the cleaner approach, but to keep things simple here I'll put everything in the same main.py file.

7.4.1 Sentence window size of 3

Here is the code to evaluate a sentence window size of 3:

import os
from typing import List

from llama_index import (
    SimpleDirectoryReader,
    Document,
    StorageContext,
    load_index_from_storage
)
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding
from llama_index import ServiceContext
from llama_index import VectorStoreIndex
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.indices.postprocessor import SentenceTransformerRerank

# for loading environment variables
from decouple import config

from trulens_eval import Feedback, Tru, TruLlama
from trulens_eval.feedback import Groundedness
from trulens_eval.feedback.provider.openai import OpenAI as OpenAITruLens

import numpy as np

# set env variables
os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))


def create_indexes(
    documents: Document,
    index_save_dir: str,
    window_size: int = 4,
    llm_model: str = "gpt-3.5-turbo",
    temperature: float = 0.1
):
    node_parser = SentenceWindowNodeParser.from_defaults(
        window_size=window_size,
        window_metadata_key="window",
        original_text_metadata_key="original_text",
    )

    # creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
    llm = OpenAI(model=llm_model, temperature=temperature)
    embed_model = OpenAIEmbedding()

    # creating the service context
    sentence_context = ServiceContext.from_defaults(
        llm=llm,
        embed_model=embed_model,
        node_parser=node_parser,
    )

    if not os.path.exists(index_save_dir):
        # creating the vector store index
        index = VectorStoreIndex.from_documents(
            [document], service_context=sentence_context
        )

        # make vector store persistent
        index.storage_context.persist(persist_dir=index_save_dir)
    else:
        # load vector store index if it exists
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_save_dir),
            service_context=sentence_context
        )

    return index


def create_query_engine(
    sentence_index: VectorStoreIndex,
    similarity_top_k: int = 6,
    rerank_top_n: int = 5,
    rerank_model: str = "BAAI/bge-reranker-base",
):
    # add metadata replacement post-processor
    postproc = MetadataReplacementPostProcessor(
        target_metadata_key="window"
    )

    # link: https://huggingface.co/BAAI/bge-reranker-base
    rerank = SentenceTransformerRerank(
        top_n=rerank_top_n,
        model=rerank_model
    )

    sentence_window_engine = sentence_index.as_query_engine(
        similarity_top_k=similarity_top_k,
        node_postprocessors=[postproc, rerank]
    )

    return sentence_window_engine


# create index
index = create_indexes(
    documents=documents,
    index_save_dir="./storage",
    window_size=3,
    llm_model="gpt-3.5-turbo",
    temperature=0.1
)

# create query engine
sentence_window_engine = create_query_engine(
    sentence_index=index,
    similarity_top_k=5,
    rerank_top_n=2,
)

# RAG pipeline evals
tru = Tru()

openai = OpenAITruLens()

grounded = Groundedness(groundedness_provider=OpenAITruLens())

# Define a groundedness feedback function
f_groundedness = Feedback(grounded.groundedness_measure_with_cot_reasons).on(
    TruLlama.select_source_nodes().node.text
).on_output(
).aggregate(grounded.grounded_statements_aggregator)

# Question/answer relevance between overall question and answer.
f_qa_relevance = Feedback(openai.relevance).on_input_output()

# Question/statement relevance between question and each context chunk.
f_qs_relevance = Feedback(openai.qs_relevance).on_input().on(
    TruLlama.select_source_nodes().node.text
).aggregate(np.mean)

tru_query_engine_recorder = TruLlama(
    sentence_window_engine,
    app_id='sentence_window_size_3',
    feedbacks=[f_groundedness, f_qa_relevance, f_qs_relevance]
)

eval_questions = []

with open("./eval_questions.txt", "r") as eval_qn:
    for qn in eval_qn:
        qn_stripped = qn.strip()
        eval_questions.append(qn_stripped)


def run_eval(eval_questions: List[str]):
    for qn in eval_questions:
        # eval using context window
        with tru_query_engine_recorder as recording:
            sentence_window_engine.query(qn)


run_eval(eval_questions=eval_questions)

# run dashboard
tru.run_dashboard()

7.4.2 Sentence window size of 6

Now let's change the window size to 6. Note that I have changed the app_id in TruLlama to sentence_window_size_6, and the index save directory to sentence_window_size_6_index.

import os
from typing import List

from llama_index import (
    SimpleDirectoryReader,
    Document,
    StorageContext,
    load_index_from_storage
)
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding
from llama_index import ServiceContext
from llama_index import VectorStoreIndex
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.indices.postprocessor import SentenceTransformerRerank

# for loading environment variables
from decouple import config

from trulens_eval import Feedback, Tru, TruLlama
from trulens_eval.feedback import Groundedness
from trulens_eval.feedback.provider.openai import OpenAI as OpenAITruLens

import numpy as np

# set env variables
os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")

# load document
documents = SimpleDirectoryReader(
    input_dir="../dataFiles/"
).load_data(show_progress=True)

# merge pages into one
document = Document(text="\n\n".join([doc.text for doc in documents]))


def create_indexes(
    documents: Document,
    index_save_dir: str,
    window_size: int = 4,
    llm_model: str = "gpt-3.5-turbo",
    temperature: float = 0.1
):
    node_parser = SentenceWindowNodeParser.from_defaults(
        window_size=window_size,
        window_metadata_key="window",
        original_text_metadata_key="original_text",
    )

    # creating OpenAI gpt-3.5-turbo LLM and OpenAIEmbedding model
    llm = OpenAI(model=llm_model, temperature=temperature)
    embed_model = OpenAIEmbedding()

    # creating the service context
    sentence_context = ServiceContext.from_defaults(
        llm=llm,
        embed_model=embed_model,
        node_parser=node_parser,
    )

    if not os.path.exists(index_save_dir):
        # creating the vector store index
        index = VectorStoreIndex.from_documents(
            [document], service_context=sentence_context
        )

        # make vector store persistent
        index.storage_context.persist(persist_dir=index_save_dir)
    else:
        # load vector store index if it exists
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_save_dir),
            service_context=sentence_context
        )

    return index


def create_query_engine(
    sentence_index: VectorStoreIndex,
    similarity_top_k: int = 6,
    rerank_top_n: int = 5,
    rerank_model: str = "BAAI/bge-reranker-base",
):
    # add metadata replacement post-processor
    postproc = MetadataReplacementPostProcessor(
        target_metadata_key="window"
    )

    # link: https://huggingface.co/BAAI/bge-reranker-base
    rerank = SentenceTransformerRerank(
        top_n=rerank_top_n,
        model=rerank_model
    )

    sentence_window_engine = sentence_index.as_query_engine(
        similarity_top_k=similarity_top_k,
        node_postprocessors=[postproc, rerank]
    )

    return sentence_window_engine


# create index
index = create_indexes(
    documents=documents,
    index_save_dir="./sentence_window_size_6_index",
    window_size=6,
    llm_model="gpt-3.5-turbo",
    temperature=0.1
)

# create query engine
sentence_window_engine = create_query_engine(
    sentence_index=index,
    similarity_top_k=5,
    rerank_top_n=2,
)

# RAG pipeline evals
tru = Tru()

openai = OpenAITruLens()

grounded = Groundedness(groundedness_provider=OpenAITruLens())

# Define a groundedness feedback function
f_groundedness = Feedback(grounded.groundedness_measure_with_cot_reasons).on(
    TruLlama.select_source_nodes().node.text
).on_output(
).aggregate(grounded.grounded_statements_aggregator)

# Question/answer relevance between overall question and answer.
f_qa_relevance = Feedback(openai.relevance).on_input_output()

# Question/statement relevance between question and each context chunk.
f_qs_relevance = Feedback(openai.qs_relevance).on_input().on(
    TruLlama.select_source_nodes().node.text
).aggregate(np.mean)

tru_query_engine_recorder = TruLlama(
    sentence_window_engine,
    app_id='sentence_window_size_6',
    feedbacks=[f_groundedness, f_qa_relevance, f_qs_relevance]
)

eval_questions = []

with open("./eval_questions.txt", "r") as eval_qn:
    for qn in eval_qn:
        qn_stripped = qn.strip()
        eval_questions.append(qn_stripped)


def run_eval(eval_questions: List[str]):
    for qn in eval_questions:
        # eval using context window
        with tru_query_engine_recorder as recording:
            sentence_window_engine.query(qn)


run_eval(eval_questions=eval_questions)

# run dashboard
tru.run_dashboard()

[Figure 13: TruLens dashboard results for the two window sizes]

As we discussed above, you can see the trends across context size, relevance, and groundedness; the data is clear enough that I'll let it speak for itself rather than over-interpret it for you. I encourage you to experiment with more context window sizes, different embedding models, and even different LLMs, to find what works best for your RAG pipeline's use case; a sketch of such a sweep follows below.
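As a starting point for that experiment, here is a sketch of a sweep over several window sizes, reusing the create_indexes and create_query_engine helpers plus the TruLens feedbacks defined above; the window sizes and naming scheme are just examples:

# sweep several window sizes, recording each run under its own app_id
for window_size in (1, 3, 6, 10):
    index = create_indexes(
        documents=documents,
        index_save_dir=f"./sentence_window_size_{window_size}_index",
        window_size=window_size,
    )
    engine = create_query_engine(
        sentence_index=index,
        similarity_top_k=5,
        rerank_top_n=2,
    )
    recorder = TruLlama(
        engine,
        app_id=f"sentence_window_size_{window_size}",
        feedbacks=[f_groundedness, f_qa_relevance, f_qs_relevance],
    )
    for qn in eval_questions:
        with recorder as recording:
            engine.query(qn)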

Likewise, go back to the other pipelines we built and run them with the same set of questions (the tests here used 10 questions), because so far the other pipelines, the basic RAG and parent document retrieval, were each run with only a single question. Comparing them against the sentence window pipeline we built here would otherwise be unfair.

The code for this exercise can be found at [2]. Here are some screenshots of the results:

[Figure 14: comparison results across pipelines]

[Figure 15: token usage and cost comparison]

With sentence window retrieval we used far fewer tokens, almost a quarter as many, at correspondingly lower cost. Better still, answer relevance, context relevance, and groundedness all scored well.

References:

[1] https://github.com/Princekrampah/AdvancedRAGTechniques_LlamaIndex

[2] https://github.com/Princekrampah/AdvancedRAGTechniques_LlamaIndex

[3] https://ai.gopubby.com/advance-retrieval-techniques-in-rag-part-03-sentence-window-retrieval-9f246cffa07b
