Javith Abbas

Chunking

Chunking is a fundamental technique in generative AI, particularly in Retrieval Augmented Generation (RAG). It involves breaking down large documents or text sources into smaller, more manageable pieces, known as "chunks." These chunks are crucial for optimizing the information retrieval process by making it easier to search for relevant information and generate responses.


Why is Chunking Important?

  • Improved Retrieval Accuracy: Chunking enhances the precision of information retrieval. By dividing the text into smaller units, RAG models can concentrate on specific sections that are more likely to have pertinent information. This targeted approach results in more accurate retrieval compared to scanning an entire document.

  • Reduced Computational Cost: Processing data in smaller segments significantly cuts down the computational load. This efficiency is vital when managing large datasets or complex queries, making the retrieval process more manageable and less resource intensive.

  • Enhanced Contextual Understanding: By analysing data in chunks, models can better capture the context within each segment. This improved contextual awareness helps build a deeper understanding of the relationships between different pieces of information, aiding in generating coherent and contextually appropriate responses.

Breaking down large documents into smaller sections, or "chunks," can be essential when working with embedding models that have a cap on input size. Take the Azure OpenAI embedding models as an example, which accept a maximum of 8,191 tokens per input. Considering that each token usually represents about four characters in typical OpenAI models, this limit roughly translates to about 6,000 words. Therefore, if you are using these models to create embeddings, it's crucial to ensure that your input text does not exceed this token limit to avoid errors and optimize performance.
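
Before embedding, it is worth checking token counts rather than character counts. Below is a minimal sketch, assuming the tiktoken package and the cl100k_base encoding used by recent OpenAI embedding models; the encoding for your particular deployment may differ, and the function name is illustrative.

```python
import tiktoken

MAX_TOKENS = 8191  # input limit of the Azure OpenAI embedding models mentioned above

def fits_in_embedding_window(text: str, limit: int = MAX_TOKENS) -> bool:
    """Return True if the text stays within the embedding model's token limit."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text)) <= limit

print(fits_in_embedding_window("The quick brown fox jumps over the lazy dog"))  # True
```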



Chunking Strategies

Fixed-Size

This is the most basic form of chunking, where text is divided into segments of a fixed number of characters. If we chunk the sentence "The quick brown fox jumps over the lazy dog" with a chunk size of 10, we get: 'The quick ', 'brown fox ', 'jumps over', ' the lazy ', 'dog'. Fixed-size chunking is simple and computationally cheap, but because it ignores word and sentence boundaries it can cut straight through words or ideas, so the resulting chunks may not be self-contained.
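
A fixed-size splitter needs only a few lines of plain Python. The sketch below is illustrative; the function name and default chunk size are not taken from any particular library.

```python
def fixed_size_chunks(text: str, chunk_size: int = 10) -> list[str]:
    """Split text into consecutive chunks of at most chunk_size characters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

print(fixed_size_chunks("The quick brown fox jumps over the lazy dog"))
# ['The quick ', 'brown fox ', 'jumps over', ' the lazy ', 'dog']
```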


Recursive Character

Recursive Character Chunking is a more dynamic approach to splitting text. It first splits on natural separators such as paragraphs, sentences, and words, and then recursively re-splits any chunk that is still too large, aiming to preserve logical text units. Using the same sentence but with recursive character chunking, we might get: 'The quick', 'brown fox', 'jumps', 'over the', 'lazy dog', with each chunk more likely to contain a complete thought.
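
One way to sketch the idea in plain Python is to split on progressively finer separators and then greedily merge small pieces back up to the size limit. This mirrors the behaviour of splitters such as LangChain's RecursiveCharacterTextSplitter, but the code below is a simplified illustration rather than that library's implementation; the function names, separators, and chunk size are assumptions, and the exact chunks depend on the parameters you choose.

```python
def _merge(pieces, chunk_size, sep):
    """Greedily merge adjacent pieces while staying within chunk_size characters."""
    chunks, current = [], ""
    for piece in pieces:
        candidate = (current + sep + piece) if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = piece
    if current:
        chunks.append(current)
    return chunks

def recursive_chunks(text, chunk_size=10, separators=("\n\n", ". ", " ", "")):
    """Split on the coarsest separator first, recurse on oversized pieces with
    finer separators, then merge small pieces back up to chunk_size."""
    if len(text) <= chunk_size:
        return [text] if text else []
    sep, *rest = separators
    if sep == "":  # last resort: hard character cut
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    pieces = []
    for piece in text.split(sep):
        if len(piece) <= chunk_size:
            pieces.append(piece)
        else:
            pieces.extend(recursive_chunks(piece, chunk_size, tuple(rest)))
    return _merge(pieces, chunk_size, sep)

print(recursive_chunks("The quick brown fox jumps over the lazy dog"))
# ['The quick', 'brown fox', 'jumps over', 'the lazy', 'dog']
```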


Sentence

This strategy splits text on sentence boundaries, which is great for ensuring that full sentences are preserved in each chunk. "The quick brown fox jumps over the lazy dog. It was a sunny day." would yield two chunks: 'The quick brown fox jumps over the lazy dog.' and 'It was a sunny day.'
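
A sentence splitter can be as simple as a regular expression on end-of-sentence punctuation; libraries such as NLTK or spaCy handle abbreviations and other edge cases more robustly. The regex below is a simplified sketch.

```python
import re

def sentence_chunks(text: str) -> list[str]:
    """Split text into sentences on '.', '!' or '?' followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

print(sentence_chunks("The quick brown fox jumps over the lazy dog. It was a sunny day."))
# ['The quick brown fox jumps over the lazy dog.', 'It was a sunny day.']
```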


Semantic

Semantic Chunking is an advanced text segmentation approach that goes beyond superficial features like character or sentence boundaries. Instead, it relies on the inherent meaning within the text, using natural language understanding techniques to group together text segments that share similar themes or topics. Consider this example: "The quick brown fox jumped over the lazy dog. A swift creature, it moved with elegance. Nearby, the trees swayed gently in the breeze, and the flowers bloomed, heralding the arrival of spring."

If we apply Semantic Chunking to this passage, it would be split according to the underlying semantic content:

  1. Chunk 1: "The quick brown fox jumped over the lazy dog. A swift creature, it moved with elegance." (This chunk groups together the sentences about the fox)

  2. Chunk 2: "Nearby, the trees swayed gently in the breeze, and the flowers bloomed, heralding the arrival of spring." (This chunk captures the sentences related to the environment)
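
A common way to implement semantic chunking is to embed each sentence and start a new chunk whenever the similarity between adjacent sentences drops. The sketch below assumes the sentence-transformers package and the all-MiniLM-L6-v2 model; the similarity threshold of 0.3 is an illustrative value you would tune for your own data.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

def semantic_chunks(sentences: list[str], threshold: float = 0.3) -> list[str]:
    """Group adjacent sentences while their embeddings stay similar;
    start a new chunk when cosine similarity drops below the threshold."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(sentences)
    chunks, current = [], [sentences[0]]
    for prev, nxt, sentence in zip(embeddings, embeddings[1:], sentences[1:]):
        similarity = np.dot(prev, nxt) / (np.linalg.norm(prev) * np.linalg.norm(nxt))
        if similarity < threshold:  # likely topic shift: close the current chunk
            chunks.append(" ".join(current))
            current = []
        current.append(sentence)
    chunks.append(" ".join(current))
    return chunks

sentences = [
    "The quick brown fox jumped over the lazy dog.",
    "A swift creature, it moved with elegance.",
    "Nearby, the trees swayed gently in the breeze, and the flowers bloomed, heralding the arrival of spring.",
]
print(semantic_chunks(sentences))
```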

Chunk Size

Chunk size refers to the amount of text, counted in characters, words, or sentences, that you place in a single chunk. Chunk size can significantly affect the performance of a language model. Smaller chunks suit tasks that require a granular focus, such as sentiment analysis at the sentence level, while larger chunks capture broader themes and context, which is ideal for understanding the overall sentiment of a longer body of text.


Chunk Overlap

Chunk overlap refers to the technique of allowing some text to appear in multiple chunks, so that no contextual information is lost at the boundaries between chunks. Take the sentence "The quick brown fox jumps over the lazy dog" with a chunk size of 10 characters and an overlap of 5, so each chunk starts 5 characters after the previous one. Here's how the chunks would look:

  1. "The quick "

  2. "uick brown"

  3. "brown fox "

  4. " fox jumps"

  5. "jumps over"

  6. " over the "

  7. " the lazy "

  8. "lazy dog"

Start by testing different sizes of overlap. A small overlap might not preserve enough context, while a large overlap could reduce efficiency.
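
A minimal sliding-window sketch of overlapping chunks in plain Python; the function name and defaults are illustrative. It reproduces the chunks listed above.

```python
def overlapping_chunks(text: str, chunk_size: int = 10, overlap: int = 5) -> list[str]:
    """Slide a window of chunk_size characters across the text,
    stepping forward by (chunk_size - overlap) characters each time."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):  # the window already reaches the end
            break
    return chunks

for chunk in overlapping_chunks("The quick brown fox jumps over the lazy dog"):
    print(repr(chunk))
```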


In the context of RAG and other AI applications, chunking is a critical tool for handling extensive data sets in a practical and effective way. Tokenization breaks text down into its most basic units (words and punctuation), whereas chunking groups those units into meaningful segments. Each strategy has its own strengths and caters to different requirements, and mastering chunk strategy and size in RAG takes practice and fine-tuning to achieve the best performance.

 
