From b2dfd1b9776b6405e6531d7a43857065e10ce3bc Mon Sep 17 00:00:00 2001
From: Liam Cavanagh
Date: Mon, 13 Mar 2023 08:04:15 -0700
Subject: [PATCH] Update README.md

---
 README.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/README.md b/README.md
index bd280a0f37..ca8136c6f5 100644
--- a/README.md
+++ b/README.md
@@ -62,3 +62,10 @@ Once in the web app:
 
 ### Note
 >Note: The PDF documents used in this demo contain information generated using a language model (Azure OpenAI Service). The information contained in these documents is only for demonstration purposes and does not reflect the opinions or beliefs of Microsoft. Microsoft makes no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the information contained in this document. All rights reserved to Microsoft.
+
+### FAQ
+
+***Question***: Why do we need to break up (chunk) the PDFs when Azure Cognitive Search supports searching large documents?
+
+***Answer***: Chunking allows us to limit the amount of information we send to OpenAI because of its token limits. Breaking up the content also makes it easier to find the most relevant chunks of text to inject into the OpenAI prompt. The chunking method we use leverages a sliding window of text, so that sentences which end one chunk also start the next. This reduces the chance of losing context at chunk boundaries.
+
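The sliding-window chunking described in the answer can be illustrated with a minimal Python sketch. This is not the repo's actual implementation; the function name, the character-based size limit, and the one-sentence overlap are illustrative assumptions, and it assumes plain text already extracted from the PDFs.

```python
import re


def chunk_text(text, max_chars=1000, overlap_sentences=1):
    """Split `text` into chunks of roughly `max_chars` characters.

    Sliding-window idea: the last `overlap_sentences` sentences of each
    chunk are repeated at the start of the next chunk, so context that
    straddles a chunk boundary is not lost.
    """
    # Naive sentence splitter; a real pipeline would use a proper tokenizer.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    chunks = []
    current = []
    current_len = 0
    for sentence in sentences:
        if current and current_len + len(sentence) > max_chars:
            chunks.append(" ".join(current))
            # Carry the tail of the finished chunk into the next one.
            current = current[-overlap_sentences:]
            current_len = sum(len(s) for s in current)
        current.append(sentence)
        current_len += len(sentence)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk stays within a size budget small enough to fit, together with the prompt, under the OpenAI token limit, while the overlapping sentence preserves continuity between neighboring chunks.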