Enable recursive summarization for large inputs
- Updated Feb 1, 2024
- Washington DC
- AI Experiences
Use recursive summarization to break down requests to large language models (LLMs) into smaller pieces so that context is maintained for generative AI capabilities.
Before you begin
Role required: admin
About this task
LLMs can process only a maximum number of tokens in a single request. Certain fields, such as activity fields, can contain more information than fits within that limit. Recursive summarization breaks the information given to an LLM into chunks, summarizes each chunk individually, and then processes the original request with the summarized chunks. The chunks overlap so that context is retained across every piece.
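The chunk-and-summarize loop described above can be sketched as follows. This is a minimal illustration, not the platform's implementation: `summarize` is a hypothetical placeholder for the LLM call, and the chunk size, overlap, and token limit are made-up values for demonstration.

```python
def chunk_text(text, chunk_size, overlap):
    """Split text into overlapping chunks so context carries across pieces."""
    chunks = []
    step = chunk_size - overlap  # each chunk starts `overlap` chars before the previous one ends
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks


def summarize(text):
    # Placeholder for an LLM summarization request; here it simply
    # truncates the text to simulate a shorter summary.
    return text[: max(1, len(text) // 2)]


def recursive_summarize(text, max_tokens=200, chunk_size=100, overlap=20):
    """Repeatedly summarize chunks until the combined text fits the limit."""
    while len(text) > max_tokens:
        chunks = chunk_text(text, chunk_size, overlap)
        text = " ".join(summarize(c) for c in chunks)
    return text
```

Because consecutive chunks share the trailing `overlap` characters of the previous chunk, a sentence split at a chunk boundary still appears whole in one of the two summaries, which is how context is retained across pieces.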
Procedure
Result
Recursive summarization is enabled on the OneExtend Capability for the fields that you specified in this procedure.