Better Search Results Through Intelligent Chunking and Metadata Integration
Typically, the knowledge bases behind LLM-based retrieval applications contain large amounts of data in a wide variety of formats. To give the LLM the most relevant context for answering questions about a specific part of the knowledge base, we rely on chunking the text in the knowledge base and storing the chunks where they can be retrieved efficiently.
Chunking
Chunking is the process of splitting text into meaningful units to improve information retrieval. By ensuring that each chunk represents one focused idea or point, chunking helps preserve the contextual integrity of the content.
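As a minimal illustration (the helper name sentence_chunks is ours, not from any library), sentence-level chunking can be as simple as splitting after sentence-ending punctuation:

```python
import re

def sentence_chunks(text):
    # Split after '.', '!' or '?' followed by whitespace, dropping empty pieces
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

sample = "AI is transforming healthcare. It raises ethical questions! Regulation matters."
print(sentence_chunks(sample))
# → ['AI is transforming healthcare.', 'It raises ethical questions!', 'Regulation matters.']
```

Each resulting chunk is one complete sentence, which is exactly the property the "good chunking" strategy below relies on.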
In this article, we will discuss three aspects of chunking:
· How poor chunking leads to less relevant results
· How good chunking leads to better results
· How good chunking combined with metadata yields results with rich context
To demonstrate the importance of chunking effectively, we will take the same passage of text, apply three different chunking approaches to it, and examine how information is retrieved for a given query.
Chunking and Storing in Qdrant
Let's look at the code below, which shows three different ways to chunk the same text.
Python
import re

import qdrant_client
from qdrant_client.models import PointStruct, Distance, VectorParams
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    print(f"Generating embedding for: '{text[:50]}'...")  # Show a snippet of the text being embedded
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    embedding = response.data[0].embedding  # Access via the attribute, not as a dictionary
    print(f"Generated embedding of length {len(embedding)}.")  # Confirm embedding generation
    return embedding

# Create a collection if it doesn't exist
def create_collection_if_not_exists(collection_name, vector_size):
    collections = client.get_collections().collections
    if collection_name not in [collection.name for collection in collections]:
        client.create_collection(
            collection_name=collection_name,
            vectors_config=VectorParams(size=vector_size, distance=Distance.COSINE)
        )
        print(f"Created collection: {collection_name} with vector size: {vector_size}")
    else:
        print(f"Collection {collection_name} already exists.")

# Sample text to be chunked (used purely for illustration)
text = """
Artificial intelligence is transforming industries across the globe. One of the key areas where AI is making a significant impact is healthcare. AI is being used to develop new drugs, personalize treatment plans, and even predict patient outcomes. Despite these advancements, there are challenges that must be addressed. The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues. As AI continues to evolve, it is crucial that these challenges are not overlooked. By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
"""
# Poor chunking strategy: fixed-width 40-character slices
def poor_chunking(text, chunk_size=40):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    print(f"Poor Chunking produced {len(chunks)} chunks: {chunks}")  # Show chunks produced
    return chunks

# Good chunking strategy: split at sentence boundaries
def good_chunking(text):
    sentences = re.split(r'(?<=[.!?]) +', text)
    print(f"Good Chunking produced {len(sentences)} chunks: {sentences}")  # Show chunks produced
    return sentences

# Good chunking with metadata: tag each sentence-level chunk
def good_chunking_with_metadata(text):
    chunks = good_chunking(text)
    metadata_chunks = []
    for chunk in chunks:
        if "healthcare" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Healthcare Section", "topic": "AI in Healthcare"})
        elif "ethical implications" in chunk or "data privacy" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Challenges Section", "topic": "AI Challenges"})
        else:
            metadata_chunks.append({"text": chunk, "source": "General", "topic": "AI Overview"})
    print(f"Good Chunking with Metadata produced {len(metadata_chunks)} chunks: {metadata_chunks}")  # Show chunks produced
    return metadata_chunks

# Store chunks in Qdrant
def store_chunks(chunks, collection_name):
    if len(chunks) == 0:
        print(f"No chunks were generated for the collection '{collection_name}'.")
        return
    # Generate embedding for the first chunk to determine vector size
    sample_text = chunks[0] if isinstance(chunks[0], str) else chunks[0]["text"]
    sample_embedding = embed_text(sample_text)
    vector_size = len(sample_embedding)
    create_collection_if_not_exists(collection_name, vector_size)
    for idx, chunk in enumerate(chunks):
        chunk_text = chunk if isinstance(chunk, str) else chunk["text"]
        embedding = embed_text(chunk_text)
        payload = chunk if isinstance(chunk, dict) else {"text": chunk_text}  # Always ensure there's text in the payload
        client.upsert(collection_name=collection_name, points=[
            PointStruct(id=idx, vector=embedding, payload=payload)
        ])
    print(f"Chunks successfully stored in the collection '{collection_name}'.")

# Execute chunking and storing separately for each strategy
print("Starting poor_chunking...")
store_chunks(poor_chunking(text), "poor_chunking")
print("Starting good_chunking...")
store_chunks(good_chunking(text), "good_chunking")
print("Starting good_chunking_with_metadata...")
store_chunks(good_chunking_with_metadata(text), "good_chunking_with_metadata")
The code above does the following:
· The embed_text function takes text, generates an embedding with the OpenAI embeddings model, and returns the resulting embedding.
· Initializes the text string used for chunking and later retrieval
· Poor chunking strategy: splits the text into fixed 40-character chunks
· Good chunking strategy: splits the text at sentence boundaries for more meaningful context
· Good chunking strategy with metadata: attaches appropriate metadata to each sentence-level chunk
· Once embeddings have been generated for the chunks, they are stored in the corresponding collections in Qdrant Cloud.
Keep in mind that the poor chunking collection was created only to demonstrate how bad chunking affects retrieval.
Below is a screenshot of the chunks from Qdrant Cloud, where you can see the metadata added to the sentence-level chunks to indicate the source and topic.
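To see concretely why fixed-width chunking hurts, here is a tiny standalone demonstration (the sentence is ours, chosen for illustration) of how 40-character slices cut words and sentences at arbitrary points:

```python
text = "The ethical implications of AI in healthcare are critical."
chunk_size = 40

# Slice the string every 40 characters with no regard for word boundaries
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
print(chunks)
# → ['The ethical implications of AI in health', 'care are critical.']
```

The word "healthcare" is split across two chunks, so neither chunk embeds cleanly as a statement about healthcare, which is exactly the degradation visible in the poor_chunking results later.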
Retrieval Results by Chunking Strategy
Now let's write some code to retrieve content from the Qdrant vector DB based on a query.
Python
import qdrant_client
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    return response.data[0].embedding

# Search a collection for the vectors closest to the query embedding
# and print the payload of the top matches
def retrieve_and_print(collection_name, query, query_embedding, top_k=3):
    results = client.search(
        collection_name=collection_name,
        query_vector=query_embedding,
        limit=top_k
    )
    print(f"Results from '{collection_name}' collection for the query: '{query}':")
    for i, result in enumerate(results, start=1):
        payload = result.payload or {}
        print(f"Result {i}:")
        print(f"Text: {payload.get('text', 'N/A')}")
        print(f"Source: {payload.get('source', 'N/A')}")
        print(f"Topic: {payload.get('topic', 'N/A')}")

# Define the query and generate its embedding
query = "ethical implications of AI in healthcare"
query_embedding = embed_text(query)

# Retrieve from each of the three collections created earlier
for collection_name in ["poor_chunking", "good_chunking", "good_chunking_with_metadata"]:
    retrieve_and_print(collection_name, query, query_embedding)
The code above does the following:
· Defines the query and generates an embedding for it
· The search query is set to "ethical implications of AI in healthcare".
· The retrieve_and_print function searches a given Qdrant collection and retrieves the top 3 vectors closest to the query embedding.
Now let's look at the output:
python retrieval_test.py
Results from 'poor_chunking' collection for the query: 'ethical implications of AI in healthcare':
Result 1:
Text: . The ethical implications of AI in heal
Source: N/A
Topic: N/A
Result 2:
Text: ant impact is healthcare. AI is being us
Source: N/A
Topic: N/A
Result 3:
Text:
Artificial intelligence is transforming
Source: N/A
Topic: N/A
Results from 'good_chunking' collection for the query: 'ethical implications of AI in healthcare':
Result 1:
Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.
Source: N/A
Topic: N/A
Result 2:
Text: One of the key areas where AI is making a significant impact is healthcare.
Source: N/A
Topic: N/A
Result 3:
Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
Source: N/A
Topic: N/A
Results from 'good_chunking_with_metadata' collection for the query: 'ethical implications of AI in healthcare':
Result 1:
Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.
Source: Healthcare Section
Topic: AI in Healthcare
Result 2:
Text: One of the key areas where AI is making a significant impact is healthcare.
Source: Healthcare Section
Topic: AI in Healthcare
Result 3:
Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
Source: General
Topic: AI Overview
The output for the same search query differs depending on the chunking strategy used.
· Poor chunking strategy: you can see that the results here are less relevant, because the text was split into arbitrary small chunks.
· Good chunking strategy: the results here are more relevant, because the text was split into sentences that preserve semantic meaning.
· Good chunking strategy with metadata: the results here are the most useful, because the text was chunked thoughtfully and enriched with metadata.
Inferences Drawn from the Experiment
· Chunking requires a deliberate strategy, and chunks should be neither too small nor too large.
· Examples of bad chunking include chunks that are so small they cut sentences off at unnatural points, or so large that multiple topics end up in the same chunk, which makes retrieval very noisy.
· The whole idea of chunking revolves around providing better context to the LLM.
· Metadata greatly enhances well-structured chunks by providing an additional layer of context. For example, we added the source and topic as metadata elements to our chunks.
· Retrieval systems benefit from this additional information. For example, if the metadata indicates that a chunk belongs to the "Healthcare Section", the system can prioritize those chunks for healthcare-related queries.
· With improved chunking, results can be structured and categorized. If a query matches multiple contexts within the same text, we can determine which context or section a piece of information belongs to by looking at the chunk's metadata.
Keep these strategies in mind, and happy chunking in your LLM-based search applications.