Input type prefixes for external use, e.g. with NewEmbeddingFuncCohere.
const (
	InputTypeCohereSearchDocumentPrefix string = "search_document: "
	InputTypeCohereSearchQueryPrefix    string = "search_query: "
	InputTypeCohereClassificationPrefix string = "classification: "
	InputTypeCohereClusteringPrefix     string = "clustering: "
)
const BaseURLOpenAI = "https://api.openai.com/v1"
Collection represents a collection of documents. It also has a configured embedding function, which is used when adding documents that don't have embeddings yet.
type Collection struct {
	Name string
	// contains filtered or unexported fields
}
func (c *Collection) Add(ctx context.Context, ids []string, embeddings [][]float32, metadatas []map[string]string, contents []string) error
Add embeddings to the datastore.
This is a Chroma-like method. For a more Go-idiomatic one, see [AddDocuments].
func (c *Collection) AddConcurrently(ctx context.Context, ids []string, embeddings [][]float32, metadatas []map[string]string, contents []string, concurrency int) error
AddConcurrently is like Add, but adds embeddings concurrently. This is mostly useful when you don't pass any embeddings, so they have to be created. Upon error, concurrently running operations are canceled and the error is returned.
This is a Chroma-like method. For a more Go-idiomatic one, see [AddDocuments].
func (c *Collection) AddDocument(ctx context.Context, doc Document) error
AddDocument adds a document to the collection. If the document doesn't have an embedding, it will be created using the collection's embedding function.
func (c *Collection) AddDocuments(ctx context.Context, documents []Document, concurrency int) error
AddDocuments adds documents to the collection with the specified concurrency. If the documents don't have embeddings, they will be created using the collection's embedding function. Upon error, concurrently running operations are canceled and the error is returned.
func (c *Collection) Count() int
Count returns the number of documents in the collection.
func (c *Collection) Delete(_ context.Context, where, whereDocument map[string]string, ids ...string) error
Delete removes document(s) from the collection.
func (c *Collection) Query(ctx context.Context, queryText string, nResults int, where, whereDocument map[string]string) ([]Result, error)
Query performs an exhaustive nearest neighbor search on the collection. The query text is embedded with the collection's embedding function.
func (c *Collection) QueryEmbedding(ctx context.Context, queryEmbedding []float32, nResults int, where, whereDocument map[string]string) ([]Result, error)
QueryEmbedding performs an exhaustive nearest neighbor search on the collection, using the provided query embedding directly.
DB is the chromem-go database. It holds collections, which hold documents.
	+----+    1-n    +------------+    n-n    +----------+
	| DB |-----------| Collection |-----------| Document |
	+----+           +------------+           +----------+
type DB struct {
// contains filtered or unexported fields
}
func NewDB() *DB
NewDB creates a new in-memory chromem-go DB. While it doesn't write files when you add collections and documents, you can still use DB.ExportToFile and DB.ImportFromFile to export and import the entire DB to/from a file.
func NewPersistentDB(path string, compress bool) (*DB, error)
NewPersistentDB creates a new persistent chromem-go DB. If the path is empty, it defaults to "./chromem-go". If compress is true, the files are compressed with gzip.
The persistence covers the collections (including their documents) and the metadata. However, it doesn't cover the EmbeddingFunc, as functions can't be serialized. When some data is persisted, and you create a new persistent DB with the same path, you'll have to provide the same EmbeddingFunc as before when getting an existing collection and adding more documents to it.
Currently, the persistence is done synchronously on each write operation, and each document addition leads to a new file, encoded as gob. In the future we will make this configurable (encoding, async writes, WAL-based writes, etc.).
In addition to the persistence of each added collection and document, you can use DB.ExportToFile and DB.ImportFromFile to export and import the entire DB to/from a file, which also works for the pure in-memory DB.
func (db *DB) CreateCollection(name string, metadata map[string]string, embeddingFunc EmbeddingFunc) (*Collection, error)
CreateCollection creates a new collection with the given name and metadata.
func (db *DB) DeleteCollection(name string) error
DeleteCollection deletes the collection with the given name. If the collection doesn't exist, this is a no-op. If the DB is persistent, it also removes the collection's directory. You shouldn't hold any references to the collection after calling this method.
func (db *DB) Export(filePath string, compress bool, encryptionKey string) error
Export exports the DB to a file at the given path. The file is encoded as gob, optionally compressed with flate (as gzip) and optionally encrypted with AES-GCM. This works for both the in-memory and persistent DBs. If the file exists, it's overwritten, otherwise created.
Deprecated: Use DB.ExportToFile instead.
func (db *DB) ExportToFile(filePath string, compress bool, encryptionKey string) error
ExportToFile exports the DB to a file at the given path. The file is encoded as gob, optionally compressed with flate (as gzip) and optionally encrypted with AES-GCM. This works for both the in-memory and persistent DBs. If the file exists, it's overwritten, otherwise created.
func (db *DB) ExportToWriter(writer io.Writer, compress bool, encryptionKey string) error
ExportToWriter exports the DB to a writer. The stream is encoded as gob, optionally compressed with flate (as gzip) and optionally encrypted with AES-GCM. This works for both the in-memory and persistent DBs. If the writer has to be closed, it's the caller's responsibility.
func (db *DB) GetCollection(name string, embeddingFunc EmbeddingFunc) *Collection
GetCollection returns the collection with the given name. The embeddingFunc param is only used if the DB is persistent and was just loaded from storage, in which case no embedding func is set yet (funcs are not (de-)serializable). It can be nil, in which case the default one will be used. The returned collection is a reference to the original collection, so any methods on the collection like Add() will be reflected on the DB's collection. Those operations are concurrency-safe. If the collection doesn't exist, this returns nil.
func (db *DB) GetOrCreateCollection(name string, metadata map[string]string, embeddingFunc EmbeddingFunc) (*Collection, error)
GetOrCreateCollection returns the collection with the given name if it exists in the DB, or otherwise creates it. When creating, the given metadata and embeddingFunc are used in the same way as in DB.CreateCollection.
func (db *DB) Import(filePath string, encryptionKey string) error
Import imports the DB from a file at the given path. The file must be encoded as gob and can optionally be compressed with flate (as gzip) and encrypted with AES-GCM. This works for both the in-memory and persistent DBs. Existing collections are overwritten.
- filePath: Mandatory, must not be empty
- encryptionKey: Optional, must be 32 bytes long if provided
Deprecated: Use DB.ImportFromFile instead.
func (db *DB) ImportFromFile(filePath string, encryptionKey string) error
ImportFromFile imports the DB from a file at the given path. The file must be encoded as gob and can optionally be compressed with flate (as gzip) and encrypted with AES-GCM. This works for both the in-memory and persistent DBs. Existing collections are overwritten.
- filePath: Mandatory, must not be empty
- encryptionKey: Optional, must be 32 bytes long if provided
func (db *DB) ImportFromReader(reader io.ReadSeeker, encryptionKey string) error
ImportFromReader imports the DB from a reader. The stream must be encoded as gob and can optionally be compressed with flate (as gzip) and encrypted with AES-GCM. This works for both the in-memory and persistent DBs. Existing collections are overwritten. If the reader has to be closed, it's the caller's responsibility.
- reader: An implementation of io.ReadSeeker
- encryptionKey: Optional, must be 32 bytes long if provided
func (db *DB) ListCollections() map[string]*Collection
ListCollections returns all collections in the DB, mapping name->Collection. The returned map is a copy of the internal map, so it's safe to directly modify the map itself. Direct modifications of the map won't reflect on the DB's map. To do that use the DB's methods like CreateCollection() and DeleteCollection(). The map is not an entirely deep clone, so the collections themselves are still the original ones. Any methods on the collections like Add() for adding documents will be reflected on the DB's collections and are concurrency-safe.
func (db *DB) Reset() error
Reset removes all collections from the DB. If the DB is persistent, it also removes all contents of the DB directory. You shouldn't hold any references to old collections after calling this method.
Document represents a single document.
type Document struct {
	ID        string
	Metadata  map[string]string
	Embedding []float32
	Content   string
}
func NewDocument(ctx context.Context, id string, metadata map[string]string, embedding []float32, content string, embeddingFunc EmbeddingFunc) (Document, error)
NewDocument creates a new document, including its embeddings. Metadata is optional. If the embeddings are not provided, they are created using the embedding function. You can leave the content empty if you only want to store embeddings. If embeddingFunc is nil, the default embedding function is used.
If you want to create a document without embeddings, for example to let Collection.AddDocuments create them concurrently, you can create a document with `chromem.Document{...}` instead of using this constructor.
EmbeddingFunc is a function that creates embeddings for a given text. chromem-go will use OpenAI's "text-embedding-3-small" model by default, but you can provide your own function, using any model you like. The function must return a *normalized* vector, i.e. the length of the vector must be 1. OpenAI's and Mistral's embedding models do this by default. Some others like Nomic's "nomic-embed-text-v1.5" don't.
type EmbeddingFunc func(ctx context.Context, text string) ([]float32, error)
func NewEmbeddingFuncAzureOpenAI(apiKey string, deploymentURL string, apiVersion string, model string) EmbeddingFunc
NewEmbeddingFuncAzureOpenAI returns a function that creates embeddings for a text using the Azure OpenAI API. The `deploymentURL` is the URL of the deployed model, e.g. "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME". See https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=console#how-to-get-embeddings
func NewEmbeddingFuncCohere(apiKey string, model EmbeddingModelCohere) EmbeddingFunc
NewEmbeddingFuncCohere returns a function that creates embeddings for a text using Cohere's API. One important difference to OpenAI's and others' APIs is that Cohere differentiates between document embeddings and search/query embeddings. For this embedding func to make that distinction, you have to prepend the text with either the "search_document: " or "search_query: " prefix (see the InputTypeCohere* constants). The prefix is cut off before the document/query body is sent to the API; it's only used to choose the right "input type", as Cohere calls it.
When you set up a chromem-go collection with this embedding function, you might want to create the document separately with NewDocument and then cut off the prefix before adding the document to the collection. Otherwise, when you query the collection, the returned documents will still have the prefix in their content.
	cohereFunc := chromem.NewEmbeddingFuncCohere(cohereApiKey, chromem.EmbeddingModelCohereEnglishV3)

	content := "The sky is blue because of Rayleigh scattering."

	// Create the document with the prefix.
	contentWithPrefix := chromem.InputTypeCohereSearchDocumentPrefix + content
	doc, _ := NewDocument(ctx, id, metadata, nil, contentWithPrefix, cohereFunc)

	// Remove the prefix so that later query results don't have it.
	doc.Content = content

	_ = collection.AddDocument(ctx, doc)
This is not necessary if you don't keep the content in the documents, as chromem-go also works when documents only have embeddings. You can also keep the prefix in the document, and only remove it after querying.
We plan to improve this in the future.
func NewEmbeddingFuncDefault() EmbeddingFunc
NewEmbeddingFuncDefault returns a function that creates embeddings for a text using OpenAI's "text-embedding-3-small" model via their API. The model supports a maximum text length of 8191 tokens. The API key is read from the environment variable "OPENAI_API_KEY".
func NewEmbeddingFuncJina(apiKey string, model EmbeddingModelJina) EmbeddingFunc
NewEmbeddingFuncJina returns a function that creates embeddings for a text using the Jina API.
func NewEmbeddingFuncLocalAI(model string) EmbeddingFunc
NewEmbeddingFuncLocalAI returns a function that creates embeddings for a text using the LocalAI API. You can start a LocalAI instance like this:
docker run -it -p 127.0.0.1:8080:8080 localai/localai:v2.7.0-ffmpeg-core bert-cpp
And then call this constructor with model "bert-cpp-minilm-v6". But other embedding models are supported as well. See the LocalAI documentation for details.
func NewEmbeddingFuncMistral(apiKey string) EmbeddingFunc
NewEmbeddingFuncMistral returns a function that creates embeddings for a text using the Mistral API.
func NewEmbeddingFuncMixedbread(apiKey string, model EmbeddingModelMixedbread) EmbeddingFunc
NewEmbeddingFuncMixedbread returns a function that creates embeddings for a text using the mixedbread.ai API.
func NewEmbeddingFuncOllama(model string, baseURLOllama string) EmbeddingFunc
NewEmbeddingFuncOllama returns a function that creates embeddings for a text using Ollama's embedding API. You can pass any model that Ollama supports and that supports embeddings. A good one as of 2024-03-02 is "nomic-embed-text". See https://ollama.com/library/nomic-embed-text

baseURLOllama is the base URL of the Ollama API. If it's empty, "http://localhost:11434/api" is used.
func NewEmbeddingFuncOpenAI(apiKey string, model EmbeddingModelOpenAI) EmbeddingFunc
NewEmbeddingFuncOpenAI returns a function that creates embeddings for a text using the OpenAI API.
func NewEmbeddingFuncOpenAICompat(baseURL, apiKey, model string, normalized *bool) EmbeddingFunc
NewEmbeddingFuncOpenAICompat returns a function that creates embeddings for a text using an OpenAI-compatible API, such as the ones exposed by Azure OpenAI, LocalAI, or Ollama.
The `normalized` parameter indicates whether the vectors returned by the embedding model are already normalized, as is the case for OpenAI's and Mistral's models. The flag is optional. If it's nil, it will be autodetected on the first request (which bears a small risk that the vector just happens to have a length of 1).
type EmbeddingModelCohere string
const (
	EmbeddingModelCohereMultilingualV2      EmbeddingModelCohere = "embed-multilingual-v2.0"
	EmbeddingModelCohereEnglishLightV2      EmbeddingModelCohere = "embed-english-light-v2.0"
	EmbeddingModelCohereEnglishV2           EmbeddingModelCohere = "embed-english-v2.0"
	EmbeddingModelCohereMultilingualLightV3 EmbeddingModelCohere = "embed-multilingual-light-v3.0"
	EmbeddingModelCohereEnglishLightV3      EmbeddingModelCohere = "embed-english-light-v3.0"
	EmbeddingModelCohereMultilingualV3      EmbeddingModelCohere = "embed-multilingual-v3.0"
	EmbeddingModelCohereEnglishV3           EmbeddingModelCohere = "embed-english-v3.0"
)
type EmbeddingModelJina string
const (
	EmbeddingModelJina2BaseEN   EmbeddingModelJina = "jina-embeddings-v2-base-en"
	EmbeddingModelJina2BaseDE   EmbeddingModelJina = "jina-embeddings-v2-base-de"
	EmbeddingModelJina2BaseCode EmbeddingModelJina = "jina-embeddings-v2-base-code"
	EmbeddingModelJina2BaseZH   EmbeddingModelJina = "jina-embeddings-v2-base-zh"
)
type EmbeddingModelMixedbread string
const (
	EmbeddingModelMixedbreadUAELargeV1          EmbeddingModelMixedbread = "UAE-Large-V1"
	EmbeddingModelMixedbreadBGELargeENV15       EmbeddingModelMixedbread = "bge-large-en-v1.5"
	EmbeddingModelMixedbreadGTELarge            EmbeddingModelMixedbread = "gte-large"
	EmbeddingModelMixedbreadE5LargeV2           EmbeddingModelMixedbread = "e5-large-v2"
	EmbeddingModelMixedbreadMultilingualE5Large EmbeddingModelMixedbread = "multilingual-e5-large"
	EmbeddingModelMixedbreadMultilingualE5Base  EmbeddingModelMixedbread = "multilingual-e5-base"
	EmbeddingModelMixedbreadAllMiniLML6V2       EmbeddingModelMixedbread = "all-MiniLM-L6-v2"
	EmbeddingModelMixedbreadGTELargeZh          EmbeddingModelMixedbread = "gte-large-zh"
)
type EmbeddingModelOpenAI string
const (
	EmbeddingModelOpenAI2Ada   EmbeddingModelOpenAI = "text-embedding-ada-002"
	EmbeddingModelOpenAI3Small EmbeddingModelOpenAI = "text-embedding-3-small"
	EmbeddingModelOpenAI3Large EmbeddingModelOpenAI = "text-embedding-3-large"
)
Result represents a single result from a query.
type Result struct {
	ID        string
	Metadata  map[string]string
	Embedding []float32
	Content   string

	// The cosine similarity between the query and the document.
	// The higher the value, the more similar the document is to the query.
	// The value is in the range [-1, 1].
	Similarity float32
}
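Because chromem-go requires normalized embeddings, the cosine similarity reported in a Result reduces to a plain dot product. A self-contained sketch of that relationship:

```go
package main

import "fmt"

// cosineSim computes cosine similarity for unit-length vectors,
// which is simply their dot product.
func cosineSim(a, b []float32) float32 {
	var dot float32
	for i := range a {
		dot += a[i] * b[i]
	}
	return dot
}

func main() {
	fmt.Println(cosineSim([]float32{1, 0, 0}, []float32{1, 0, 0})) // 1
	fmt.Println(cosineSim([]float32{1, 0, 0}, []float32{0, 1, 0})) // 0
}
```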