Glossary

This section describes terminology commonly used with vector databases.

Cosine similarity

Cosine similarity measures the cosine of the angle between two vectors. In document embeddings, each dimension can represent a word's frequency or TF-IDF weight. Two documents of different lengths can have drastically different word frequencies yet the same word distribution, which places them in similar directions in vector space but at dissimilar distances. Because cosine similarity compares direction and ignores magnitude, it is a good choice for such comparisons.
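
A minimal sketch in Python with NumPy (the vectors are illustrative, not drawn from any real corpus):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between vectors a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two "documents" with the same word distribution but different lengths:
doc_a = np.array([2.0, 1.0, 0.0, 3.0])
doc_b = np.array([4.0, 2.0, 0.0, 6.0])  # doc_a scaled by 2

print(cosine_similarity(doc_a, doc_b))  # 1.0: same direction, length ignored
```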

Database

A database is a collection of related data stored and accessed electronically using a Database Management System (DBMS). Databases are typically hosted on computer clusters or in cloud storage.

Database Management System (DBMS)

The DBMS provides various functions that allow entry, storage, and retrieval of large quantities of information, as well as ways to manage how that information is organized.

Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.

Dense vector

A dense vector is a type of vector where most of the elements are non-zero. Dense vectors are information-rich, with densely-packed data in every dimension. By providing a compact yet expressive way to represent data, dense vectors are valuable for various tasks in machine learning.

Dot product

The dot product measures the similarity or alignment between two vectors: it quantifies how much the vectors point in the same direction. For two vectors A and B in n-dimensional space, the dot product is calculated as the sum of the products of their corresponding elements. The dot product can be positive (if the vectors point in the same general direction), negative (if they point in opposite directions), or zero (if they are orthogonal). This metric uses both the magnitude and the direction of the vectors.
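
A small NumPy example covering each sign case (values chosen purely for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

# Sum of products of corresponding elements: 1*4 + 2*(-5) + 3*6 = 12
print(np.dot(a, b))    # 12.0  (positive: broadly aligned)
print(np.dot(a, -a))   # -14.0 (negative: opposite directions)
print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0 (orthogonal)
```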

Euclidean vector

A Euclidean vector is a geometric object that has magnitude, or length, and direction.

Euclidean distance

Euclidean distance is a measure of the straight-line distance between two points in Euclidean space. In the context of vectors, Euclidean distance is calculated as the square root of the sum of squared differences between corresponding elements of two vectors. This metric uses both the magnitude and the direction of the vectors.
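
A one-liner with NumPy, with the arithmetic spelled out in the comment:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

# sqrt((4-1)^2 + (6-2)^2 + (3-3)^2) = sqrt(9 + 16 + 0) = 5.0
print(np.linalg.norm(a - b))  # 5.0
```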

Filter

Filter parameters are used for applying custom filtering to your database query. This allows you to include or exclude certain data from your query and can improve the search response time.
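
Filter syntax is specific to each database's query API; as a database-agnostic sketch, the idea reduces to applying a metadata predicate that shrinks the search space before scoring (the records and fields below are purely illustrative):

```python
# Illustrative records: each vector carries a metadata dict.
records = [
    {"vector": [0.1, 0.9], "metadata": {"category": "sunset", "year": 2023}},
    {"vector": [0.8, 0.2], "metadata": {"category": "sky", "year": 2021}},
]

# Include only records matching the filter; fewer candidates to score
# means a faster search response.
candidates = [r for r in records if r["metadata"]["year"] >= 2022]
```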

Flat search

A flat search performs an exhaustive search against all vectors in the search space, and it can be configured with a number of distance metrics. Because the search is exhaustive, it finds the exact nearest neighbors without approximation.
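
A minimal brute-force implementation in NumPy, here configured with Euclidean distance (any of the metrics above could be substituted):

```python
import numpy as np

def flat_search(query: np.ndarray, vectors: np.ndarray, k: int) -> np.ndarray:
    """Exhaustive, exact k-nearest-neighbor search."""
    distances = np.linalg.norm(vectors - query, axis=1)  # distance to every vector
    return np.argsort(distances)[:k]                     # indices of the k closest

vectors = np.random.rand(10_000, 64).astype(np.float32)
query = np.random.rand(64).astype(np.float32)
print(flat_search(query, vectors, k=5))
```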

Inverted File (IVF)

When using a Flat Inverted File (IVF) search, you first train the index on a set of points that are used to generate cluster centroids using a k-means algorithm. The data is then partitioned into clusters based on centroid distance. The search is performed by running a flat search against only the most relevant clusters. As only a subset of the data is searched, the results are returned much more quickly but, as a consequence, can be less accurate.
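
A rough sketch of the two phases, using scikit-learn's k-means (the cluster count is an arbitrary illustrative value):

```python
import numpy as np
from sklearn.cluster import KMeans

vectors = np.random.rand(10_000, 64).astype(np.float32)

# Train: learn centroids, then partition the data by nearest centroid.
kmeans = KMeans(n_clusters=100, n_init=10).fit(vectors)
assignments = kmeans.predict(vectors)

# Search: run a flat search, but only within the most relevant cluster.
query = np.random.rand(1, 64).astype(np.float32)
nearest_cluster = kmeans.predict(query)[0]
candidates = np.flatnonzero(assignments == nearest_cluster)
distances = np.linalg.norm(vectors[candidates] - query, axis=1)
top5 = candidates[np.argsort(distances)[:5]]
```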

Hierarchical Navigable Small Worlds (HNSW)

At a fundamental level, HNSW incrementally builds a hierarchical, multi-layer graph structure. A search greedily navigates down through the layers of the graph, moving at each step towards data increasingly similar to the query. This approach is extremely efficient, with search performance depending on the complexity of the graph.
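
As a usage-level sketch, the open-source hnswlib package implements HNSW; the parameter values below are illustrative, not recommendations:

```python
import numpy as np
import hnswlib

data = np.random.rand(10_000, 64).astype(np.float32)

# Build the multi-layer graph; M and ef_construction trade build cost for recall.
index = hnswlib.Index(space="l2", dim=64)
index.init_index(max_elements=10_000, M=16, ef_construction=200)
index.add_items(data)

# The query greedily descends the layers towards increasingly similar vectors.
labels, distances = index.knn_query(data[:1], k=5)
```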

Hybrid search

Hybrid search is a specialized vector search that increases the relevancy of search results by combining two search methods: the precision of keyword-based sparse vector search and the contextual understanding of semantic dense vector search.
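
One common way to combine the two result sets is a weighted sum of normalized scores. This sketch assumes both searches have already scored the same documents; the 0.5 weight is an arbitrary illustrative choice:

```python
import numpy as np

def hybrid_scores(sparse: np.ndarray, dense: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend keyword (sparse) and semantic (dense) scores for the same documents."""
    def normalize(s: np.ndarray) -> np.ndarray:
        return (s - s.min()) / (s.max() - s.min() + 1e-9)  # scale to [0, 1]
    return alpha * normalize(sparse) + (1 - alpha) * normalize(dense)

sparse = np.array([12.1, 3.4, 7.8])    # e.g. BM25 keyword scores
dense = np.array([0.91, 0.42, 0.88])   # e.g. cosine similarities
print(np.argsort(-hybrid_scores(sparse, dense)))  # documents ranked best-first
```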

Index

Vector databases rely on a crucial element known as the index to process data. The index is created by applying an algorithm to the vector embeddings stored within the database; this algorithm maps the vectors to a specialized data structure that facilitates rapid search. Searches are more efficient this way because the index is a condensed representation of the original vector data, which reduces memory requirements and speeds up access compared with searching the raw embeddings.

Inverted File with Product Quantization (IVFPQ)

IVFPQ (Inverted File with Product Quantization) is a data structure and algorithm, used in the Faiss library, for efficient approximate nearest neighbor search in high-dimensional spaces. It combines an inverted file index with product quantization to accelerate the search process.

The IVFPQ algorithm divides the vector space into a set of Voronoi cells (partitions of the vector space) using a coarse quantizer, and compresses the vectors within each cell using the Product Quantization method. Each cell is associated with an inverted list that stores the identifiers of the vectors falling into that cell in an inverted file structure. This IVF structure provides a mapping between each Voronoi cell and the vectors associated with it.

During a search, the query vector is assigned to a specific Voronoi cell (or a small number of nearby cells), and only the vectors in those cells are considered for the nearest neighbor search. This significantly reduces the search space and improves the efficiency of the search process.

To find the nearest neighbors, the algorithm computes the distances between the query vector and the vectors in the assigned Voronoi cell using a distance metric (e.g., Euclidean distance or cosine similarity).

The search algorithm efficiently traverses the inverted file and retrieves the vectors with the closest distances to the query vector.
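
The canonical Faiss setup for this index looks roughly as follows; the numbers of lists, sub-quantizers, and probed cells are illustrative:

```python
import numpy as np
import faiss

d, nlist, m = 64, 100, 8                 # dimension, Voronoi cells, PQ sub-vectors
xb = np.random.rand(10_000, d).astype(np.float32)

quantizer = faiss.IndexFlatL2(d)          # coarse quantizer defining the cells
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, 8)  # 8 bits per sub-vector code

index.train(xb)                           # learn centroids and PQ codebooks
index.add(xb)                             # fill the inverted lists
index.nprobe = 10                         # search only the 10 nearest cells
distances, ids = index.search(xb[:1], 5)  # 5 approximate nearest neighbors
```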

LangChain

LangChain is an open-source framework that facilitates the creation of large language model (LLM) based applications and chatbots. It provides a standard interface for interacting with LLMs, as well as a number of features that make it easier to build complex applications.

Large Language Models (LLMs)

Large language models are algorithmic predictors of text that are able to process enormous amounts of text data. These models come in various forms, including: Autoencoder-Based, Sequence-to-Sequence, Recurrent Neural Network-Based, and the most well-known, Transformer-Based Models.

Metadata

Metadata is the additional information associated with the vector embeddings stored in a vector database. It enriches the vector entries by providing context, labels, and other relevant attributes. For example, for image vectors, metadata could include labels like "sky," "sunset," or "clouds."

Query

A query is used to extract data from a database and present it in a readable form. In the case of a vector database, you first need to process the queries into the same vector space as the database. For text-based search, this means that plain human text needs to be converted into the same format as the embeddings themselves.

The process involves creating a “query vector”: the input text is processed through your embedding model (along with any other feature engineering steps applied to your other embeddings), producing a vector that lands in the appropriate part of the vector space. Besides text, inputs such as code, images, video, and audio can be used, but all still generate a numerical query vector.

The query vector should be generated using the same embedding model that was used to generate the content embeddings stored in the database. This is important because the query and the stored content must share the same embedding format, and live in the same vector space, for similarity calculations to be meaningful.
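
For text, a sketch using the sentence-transformers package (the model name is one common choice, not a requirement); the key point is that the same model encodes both the stored content and the query:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Content embeddings and the query vector must come from the same model.
content = model.encode(["a red sunset over the sea", "quarterly market report"])
query_vector = model.encode("evening sky photos")

# Cosine similarity between the query vector and each content embedding.
scores = content @ query_vector / (
    np.linalg.norm(content, axis=1) * np.linalg.norm(query_vector)
)
print(scores.argmax())  # index of the most similar stored embedding
```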

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation, known as RAG, is a framework that optimizes the way LLMs operate. LLMs operate within the static knowledge snapshot captured during their training, so they face significant challenges staying up to date with recent world events. RAG enables these language models to access relevant data from external knowledge bases, enriching their responses with current and contextually accurate information.
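
Structurally, RAG is retrieve-then-generate. This sketch hides both steps behind hypothetical retrieve() and llm() callables, since the real ones depend on your vector database and model stack:

```python
def rag_answer(question: str, retrieve, llm, k: int = 3) -> str:
    """retrieve() and llm() are hypothetical stand-ins for a vector search
    over a knowledge base and a call to a language model."""
    # 1. Retrieve: fetch the k most relevant passages for the question.
    passages = retrieve(question, k=k)
    # 2. Augment: inject the passages into the prompt as grounding context.
    context = "\n".join(passages)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # 3. Generate: the LLM answers with current, external knowledge.
    return llm(prompt)
```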

Sparse vector

Given a fixed vector space basis, a vector is sparse if it can be represented by a linear combination of a small subset of the basis vectors. Typically, the size of the subset is a small fraction of the dimension of the vector space, but no absolute threshold is imposed.
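
SciPy's sparse formats make the contrast with dense vectors concrete, storing only the non-zero coefficients and their positions:

```python
import numpy as np
from scipy.sparse import csr_array

# A 10,000-dimensional vector with only three non-zero basis coefficients.
dense = np.zeros(10_000)
dense[[7, 4_096, 9_999]] = [0.5, 1.2, 3.0]

sparse = csr_array(dense.reshape(1, -1))
print(sparse.nnz)  # 3 stored values instead of 10,000
```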

Time series data

Time series data is a collection of data points indexed or listed in chronological order. Time series data allows us to observe how variables change over time. In other words, time is a crucial variable, revealing both how the data points evolve and the outcomes they arrive at.

Vector database

In a vector database, data is automatically arranged spatially by content similarity, and that similarity is based on content meaning rather than just keywords. With advances in machine learning, machines are now able to understand the content we provide to them.

What’s more, vector databases allow a high degree of granularity in those similarities. Books in a library, for example, could be searched by an author's writing style or by story plot, all within the same storage structure, without manually labeling or tagging the content for exact lookups that lack contextual grounding.