All Classes and Interfaces
Class
Description
CompressionQueryTransformer
Uses a large language model to compress a conversation history and a follow-up query
into a standalone query that captures the essence of the conversation.
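A minimal sketch of how such a transformer might be wired up, assuming Spring AI's builder-style API and a `ChatClient.Builder` configured elsewhere (e.g. injected by Spring); package paths and method names follow the Spring AI conventions but are not shown in this index, so treat them as assumptions:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.messages.AssistantMessage;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.rag.Query;
import org.springframework.ai.rag.preretrieval.query.transformation.CompressionQueryTransformer;
import org.springframework.ai.rag.preretrieval.query.transformation.QueryTransformer;

class CompressionExample {

    // chatClientBuilder is assumed to be configured elsewhere.
    Query compress(ChatClient.Builder chatClientBuilder) {
        QueryTransformer transformer = CompressionQueryTransformer.builder()
                .chatClientBuilder(chatClientBuilder)
                .build();

        // A query carrying the prior conversation turns plus the follow-up question.
        Query query = Query.builder()
                .text("And what is its second largest city?")
                .history(new UserMessage("What is the capital of Denmark?"),
                        new AssistantMessage("Copenhagen is the capital of Denmark."))
                .build();

        // The model compresses history + follow-up into a standalone query,
        // such as "What is the second largest city in Denmark?".
        return transformer.transform(query);
    }
}
```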
ConcatenationDocumentJoiner
Combines documents retrieved based on multiple queries and from multiple data sources
by concatenating them into a single collection of documents.
ContextualQueryAugmenter
Augments the user query with contextual data from the content of the provided
documents.
DocumentJoiner
A component for combining documents retrieved based on multiple queries and from
multiple data sources into a single collection of documents.
DocumentPostProcessor
A component for post-processing retrieved documents based on a query, addressing
challenges such as "lost-in-the-middle", context length restrictions from the model,
and the need to reduce noise and redundancy in the retrieved information.
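As an illustration, a post-processor that simply trims the retrieved list could look like the following sketch. It assumes the interface exposes a `process(Query, List<Document>)` method and the package path shown, both taken from Spring AI conventions rather than from this index:

```java
import org.springframework.ai.document.Document;
import org.springframework.ai.rag.Query;
import org.springframework.ai.rag.postretrieval.document.DocumentPostProcessor;

import java.util.List;

// A trivial post-processor that keeps only the first maxDocuments results,
// one way to work within model context length restrictions.
class TruncatingDocumentPostProcessor implements DocumentPostProcessor {

    private final int maxDocuments;

    TruncatingDocumentPostProcessor(int maxDocuments) {
        this.maxDocuments = maxDocuments;
    }

    @Override
    public List<Document> process(Query query, List<Document> documents) {
        return documents.stream().limit(maxDocuments).toList();
    }
}
```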
DocumentRetriever
Component responsible for retrieving Documents from an underlying data source,
such as a search engine, a vector store, a database, or a knowledge graph.
MultiQueryExpander
Uses a large language model to expand a query into multiple semantically diverse
variations to capture different perspectives, useful for retrieving additional
contextual information and increasing the chances of finding relevant results.
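A sketch of query expansion, assuming the Spring AI builder API and a preconfigured `ChatClient.Builder` (the `numberOfQueries` option and package path are assumptions based on Spring AI conventions):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.rag.Query;
import org.springframework.ai.rag.preretrieval.query.expansion.MultiQueryExpander;

import java.util.List;

class ExpansionExample {

    List<Query> expand(ChatClient.Builder chatClientBuilder) {
        MultiQueryExpander expander = MultiQueryExpander.builder()
                .chatClientBuilder(chatClientBuilder)
                .numberOfQueries(3) // how many variations to generate
                .build();

        // Returns semantically diverse reformulations of the input query.
        return expander.expand(new Query("How do I tune JVM garbage collection?"));
    }
}
```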
PromptAssert
Assertion utility class that assists in validating arguments for prompt-related
operations.
Query
Represents a query in the context of a Retrieval Augmented Generation (RAG) flow.
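Constructing a query can be sketched as follows, assuming Spring AI's `Query` type supports both a simple text constructor and a builder:

```java
import org.springframework.ai.rag.Query;

class QueryExample {

    void build() {
        // Simplest form: just the query text.
        Query simple = new Query("What is Retrieval Augmented Generation?");

        // Builder form, which can additionally carry conversation
        // history and arbitrary context for downstream components.
        Query rich = Query.builder()
                .text("What is Retrieval Augmented Generation?")
                .build();
    }
}
```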
QueryAugmenter
A component for augmenting an input query with additional data, useful for providing a
large language model with the necessary context to answer the user query.
QueryExpander
A component for expanding the input query into a list of queries, addressing challenges
such as poorly formed queries by providing alternative query formulations, or by
breaking down complex problems into simpler sub-queries.
QueryTransformer
A component for transforming the input query to make it more effective for retrieval
tasks, addressing challenges such as poorly formed queries, ambiguous terms, complex
vocabulary, or unsupported languages.
RetrievalAugmentationAdvisor
Advisor that implements common Retrieval Augmented Generation (RAG) flows using the
building blocks defined in the org.springframework.ai.rag package and following
the Modular RAG Architecture.
RewriteQueryTransformer
Uses a large language model to rewrite a user query to provide better results when
querying a target system, such as a vector store or a web search engine.
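The two components above can be combined into a complete flow. The sketch below assumes Spring AI's builder APIs, a configured `ChatClient.Builder`, and a `VectorStore`; package locations follow Spring AI conventions and are assumptions:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.rag.advisor.RetrievalAugmentationAdvisor;
import org.springframework.ai.rag.preretrieval.query.transformation.RewriteQueryTransformer;
import org.springframework.ai.rag.retrieval.search.VectorStoreDocumentRetriever;
import org.springframework.ai.vectorstore.VectorStore;

class RagFlowExample {

    String answer(ChatClient.Builder chatClientBuilder, VectorStore vectorStore, String userText) {
        var advisor = RetrievalAugmentationAdvisor.builder()
                // Rewrite the raw user query for better retrieval.
                .queryTransformers(RewriteQueryTransformer.builder()
                        .chatClientBuilder(chatClientBuilder)
                        .build())
                // Fetch semantically similar documents from the vector store.
                .documentRetriever(VectorStoreDocumentRetriever.builder()
                        .vectorStore(vectorStore)
                        .build())
                .build();

        // The advisor transforms the query, retrieves documents, and augments
        // the prompt before the model is called.
        return chatClientBuilder.build()
                .prompt()
                .advisors(advisor)
                .user(userText)
                .call()
                .content();
    }
}
```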
TranslationQueryTransformer
Uses a large language model to translate a query to a target language that is supported
by the embedding model used to generate the document embeddings.
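A sketch of translation before retrieval, assuming the builder exposes a `targetLanguage` option (an assumption based on Spring AI conventions):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.rag.Query;
import org.springframework.ai.rag.preretrieval.query.transformation.TranslationQueryTransformer;

class TranslationExample {

    Query translate(ChatClient.Builder chatClientBuilder) {
        var transformer = TranslationQueryTransformer.builder()
                .chatClientBuilder(chatClientBuilder)
                .targetLanguage("english") // language supported by the embedding model
                .build();

        // A Danish query is rewritten in English before retrieval.
        return transformer.transform(new Query("Hvad er Danmarks hovedstad?"));
    }
}
```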
VectorStoreDocumentRetriever
Retrieves documents from a vector store that are semantically similar to the input
query.
VectorStoreDocumentRetriever.Builder
Builder for VectorStoreDocumentRetriever.
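Typical use of the builder might look like the following sketch; the `similarityThreshold` and `topK` options and the package path are assumptions drawn from Spring AI conventions, and a configured `VectorStore` is assumed to exist:

```java
import org.springframework.ai.document.Document;
import org.springframework.ai.rag.Query;
import org.springframework.ai.rag.retrieval.search.VectorStoreDocumentRetriever;
import org.springframework.ai.vectorstore.VectorStore;

import java.util.List;

class RetrieverExample {

    List<Document> retrieve(VectorStore vectorStore) {
        var retriever = VectorStoreDocumentRetriever.builder()
                .vectorStore(vectorStore)  // the underlying data source (required)
                .similarityThreshold(0.7)  // drop weak matches
                .topK(5)                   // cap the number of results
                .build();

        return retriever.retrieve(new Query("spring ai rag components"));
    }
}
```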