
Typically, the knowledge base for an LLM-based retrieval application contains large amounts of data in a variety of formats. To give the LLM the most relevant context for answering questions about a specific part of the knowledge base, we rely on chunking the text in the knowledge base and storing it where it can be conveniently retrieved.

Chunking

Chunking is the process of splitting text into meaningful units to improve information retrieval. By ensuring that each chunk represents a single focused idea or point, chunking helps preserve the contextual integrity of the content.

In this article, we will look at three aspects of chunking:

· How poor chunking leads to less relevant results

· How good chunking leads to better results

· How good chunking combined with metadata leads to results with good context

To demonstrate the importance of chunking effectively, we will take the same piece of text, apply three different chunking methods to it, and examine how information is retrieved for a given query.

分塊并存儲至 Qdrant

Let's look at the code below, which shows three different ways of chunking the same text.

Python

import qdrant_client
from qdrant_client.models import PointStruct, Distance, VectorParams
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    print(f"Generating embedding for: '{text[:50]}'...")  # Show a snippet of the text being embedded
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    embedding = response.data[0].embedding  # Access using the attribute, not as a dictionary
    print(f"Generated embedding of length {len(embedding)}.")  # Confirm embedding generation
    return embedding

# Function to create a collection if it doesn't exist
def create_collection_if_not_exists(collection_name, vector_size):
    collections = client.get_collections().collections
    if collection_name not in [collection.name for collection in collections]:
        client.create_collection(
            collection_name=collection_name,
            vectors_config=VectorParams(size=vector_size, distance=Distance.COSINE)
        )
        print(f"Created collection: {collection_name} with vector size: {vector_size}")
    else:
        print(f"Collection {collection_name} already exists.")

# Sample text to be chunked (flagged by AI/plagiarism detectors, but used here purely for illustration)
text = """
Artificial intelligence is transforming industries across the globe. One of the key areas where AI is making a significant impact is healthcare. AI is being used to develop new drugs, personalize treatment plans, and even predict patient outcomes. Despite these advancements, there are challenges that must be addressed. The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues. As AI continues to evolve, it is crucial that these challenges are not overlooked. By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
"""

# Poor Chunking Strategy: fixed 40-character chunks, cutting across sentences
def poor_chunking(text, chunk_size=40):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    print(f"Poor Chunking produced {len(chunks)} chunks: {chunks}")
    return chunks

# Good Chunking Strategy: sentence-level chunks
def good_chunking(text):
    import re
    sentences = re.split(r'(?<=[.!?]) +', text)
    print(f"Good Chunking produced {len(sentences)} chunks: {sentences}")
    return sentences

# Good Chunking with Metadata: tag each sentence-level chunk with a source and topic
def good_chunking_with_metadata(text):
    chunks = good_chunking(text)
    metadata_chunks = []
    for chunk in chunks:
        if "healthcare" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Healthcare Section", "topic": "AI in Healthcare"})
        elif "ethical implications" in chunk or "data privacy" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Challenges Section", "topic": "AI Challenges"})
        else:
            metadata_chunks.append({"text": chunk, "source": "General", "topic": "AI Overview"})
    print(f"Good Chunking with Metadata produced {len(metadata_chunks)} chunks: {metadata_chunks}")
    return metadata_chunks

# Store chunks in Qdrant
def store_chunks(chunks, collection_name):
    if len(chunks) == 0:
        print(f"No chunks were generated for the collection '{collection_name}'.")
        return
    # Generate embedding for the first chunk to determine vector size
    sample_text = chunks[0] if isinstance(chunks[0], str) else chunks[0]["text"]
    sample_embedding = embed_text(sample_text)
    vector_size = len(sample_embedding)
    create_collection_if_not_exists(collection_name, vector_size)
    for idx, chunk in enumerate(chunks):
        chunk_text = chunk if isinstance(chunk, str) else chunk["text"]
        embedding = embed_text(chunk_text)
        payload = chunk if isinstance(chunk, dict) else {"text": chunk_text}  # Always ensure there's text in the payload
        client.upsert(collection_name=collection_name, points=[
            PointStruct(id=idx, vector=embedding, payload=payload)
        ])
    print(f"Chunks successfully stored in the collection '{collection_name}'.")

# Execute chunking and storing separately for each strategy
print("Starting poor_chunking...")
store_chunks(poor_chunking(text), "poor_chunking")

print("Starting good_chunking...")
store_chunks(good_chunking(text), "good_chunking")

print("Starting good_chunking_with_metadata...")
store_chunks(good_chunking_with_metadata(text), "good_chunking_with_metadata")

The code above does the following:

· The embed_text method takes a piece of text, generates an embedding using the OpenAI embedding model, and returns the generated embedding.

· Initializes the text string used for chunking and subsequent content retrieval

· Poor chunking strategy: splits the text into fixed chunks of 40 characters each

· Good chunking strategy: splits the text by sentence for more meaningful context

· Good chunking strategy with metadata: adds appropriate metadata to the sentence-level chunks

· Once embeddings are generated for the chunks, they are stored in their corresponding collections in Qdrant Cloud.

Keep in mind that the poor chunking is created purely to show how bad chunking affects retrieval.

Below is a screenshot of the chunks from Qdrant Cloud, where you can see that metadata has been added to the sentence-level chunks to indicate the source and topic.

Retrieval Results by Chunking Strategy

Now let's write some code (retrieval_test.py) to retrieve content from the Qdrant vector DB based on a query. The script below reuses the same config loading and embed_text helper as the storage script.

Python

import qdrant_client
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    return response.data[0].embedding

def retrieve_and_print(collection_name, query_embedding, query):
    print(f"\nResults from '{collection_name}' collection for the query: '{query}':\n")
    # Retrieve the top 3 vectors closest to the query embedding
    results = client.search(
        collection_name=collection_name,
        query_vector=query_embedding,
        limit=3
    )
    for i, result in enumerate(results, start=1):
        payload = result.payload or {}
        print(f"Result {i}:")
        print(f"Text: {payload.get('text', 'N/A')}")
        print(f"Source: {payload.get('source', 'N/A')}")
        print(f"Topic: {payload.get('topic', 'N/A')}\n")

# Define the query and generate its embedding
query = "ethical implications of AI in healthcare"
query_embedding = embed_text(query)

# Search each of the three collections created by the chunking script
for collection_name in ["poor_chunking", "good_chunking", "good_chunking_with_metadata"]:
    retrieve_and_print(collection_name, query_embedding, query)

The code above does the following:

· Defines the query and generates an embedding for it

· The search query is set to "ethical implications of AI in healthcare".

· The retrieve_and_print function searches a specific Qdrant collection and retrieves the top 3 vectors closest to the query embedding.

Now let's look at the output:

python retrieval_test.py

Results from 'poor_chunking' collection for the query: 'ethical implications of AI in healthcare':

Result 1:
Text: . The ethical implications of AI in heal
Source: N/A
Topic: N/A

Result 2:
Text: ant impact is healthcare. AI is being us
Source: N/A
Topic: N/A

Result 3:
Text:
Artificial intelligence is transforming
Source: N/A
Topic: N/A

Results from 'good_chunking' collection for the query: 'ethical implications of AI in healthcare':

Result 1:
Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.
Source: N/A
Topic: N/A

Result 2:
Text: One of the key areas where AI is making a significant impact is healthcare.
Source: N/A
Topic: N/A

Result 3:
Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
Source: N/A
Topic: N/A

Results from 'good_chunking_with_metadata' collection for the query: 'ethical implications of AI in healthcare':

Result 1:
Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.
Source: Healthcare Section
Topic: AI in Healthcare

Result 2:
Text: One of the key areas where AI is making a significant impact is healthcare.
Source: Healthcare Section
Topic: AI in Healthcare

Result 3:
Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
Source: General
Topic: AI Overview

The output for the same search query differs depending on the chunking strategy used:

· Poor chunking strategy: Notice that the results here are less relevant, because the text was split into arbitrary small chunks.

· Good chunking strategy: The results here are more relevant, because the text was split into sentences, preserving semantic meaning.

· Good chunking strategy with metadata: The results here are the most accurate, because the text was chunked thoughtfully and enriched with metadata.

Inferences From the Experiment

· Chunking needs a carefully crafted strategy, and chunks should be neither too small nor too large (the first sketch after this list shows one way to bound chunk size).

· An example of bad chunking is chunks so small that sentences are cut off at unnatural places, or so large that multiple topics land in the same chunk, which makes retrieval very confusing.

· The whole idea of chunking revolves around providing better context to the LLM.

· Metadata greatly enhances well-structured chunks by providing an additional layer of context. For example, we added source and topic as metadata elements to our chunks.

· Retrieval systems benefit from this additional information. For example, if the metadata indicates that a chunk belongs to the "Healthcare Section", the system can prioritize those chunks for healthcare-related queries (see the second sketch after this list).

· With improved chunking, results can be structured and categorized. If a query matches multiple contexts within the same text, we can determine which context or section a piece of information belongs to by looking at the chunk's metadata.
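As a rough illustration of the first point, here is a minimal sketch (not part of the original scripts) of a size-bounded chunker: it reuses the sentence splitting from good_chunking but merges sentences until a chunk would exceed a hypothetical max_chars limit, so sentences are never cut mid-way and chunks don't balloon across topics.

Python

import re

# A minimal sketch, assuming sentence-level splitting as in good_chunking above.
# max_chars is a hypothetical tuning knob, not a value from the original scripts.
def size_bounded_chunking(text, max_chars=200):
    sentences = re.split(r'(?<=[.!?]) +', text)
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the size bound
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks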
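And to make the metadata point concrete: Qdrant lets you pass a payload filter alongside the query vector, so a healthcare-related query can be restricted to chunks tagged with the "Healthcare Section" source. The sketch below reuses the client and embed_text defined earlier; the filtering call itself is an illustration, not part of the original scripts.

Python

from qdrant_client.models import Filter, FieldCondition, MatchValue

# Only consider chunks whose payload has source == "Healthcare Section"
healthcare_filter = Filter(
    must=[FieldCondition(key="source", match=MatchValue(value="Healthcare Section"))]
)

results = client.search(
    collection_name="good_chunking_with_metadata",
    query_vector=embed_text("ethical implications of AI in healthcare"),
    query_filter=healthcare_filter,
    limit=3
)

for result in results:
    print(result.payload.get("text"), "|", result.payload.get("topic"))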

Keep these strategies in mind, and may your chunking bring success to your LLM-based search applications.
