Serverless AI Context Database Creator

AI service that loads and processes text data into vector embeddings for semantic search and retrieval.

Endpoint

Uploads and processes text content into vector embeddings for a specified collection. A PUT request overwrites existing data.

POST | PUT https://api.xtartapp.com/data/memory

Authentication

Authenticate with the Bearer token provided by the dashboard, passed in the Authorization header.

Request body

| Name | Type | Required | Max | Description |
| --- | --- | --- | --- | --- |
| collection | string | Yes | - | The name of the collection to store the data in. Used as part of the collection identifier. |
| document | string | Yes | 200 | Unique identifier for the document. Must be at most 200 characters and contain only letters, numbers, underscores, and hyphens. |
| content | string | Yes | 50000 | The text content to process and store. It is automatically chunked into smaller pieces for vector embedding. |
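A minimal client sketch in Python using only the standard library. The endpoint URL, field names, and constraints come from the tables above; the token value and the helper names (`build_memory_payload`, `upload_memory`) are illustrative, not part of the API.

```python
import json
import re
import urllib.request

API_URL = "https://api.xtartapp.com/data/memory"


def build_memory_payload(collection: str, document: str, content: str) -> dict:
    """Validate the documented constraints client-side and build the request body."""
    # document: at most 200 chars; letters, numbers, underscores, hyphens only
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,200}", document):
        raise ValueError("invalid document identifier")
    # content: non-empty, at most 50000 characters
    if not content or len(content) > 50000:
        raise ValueError("content must be 1-50000 characters")
    return {"collection": collection, "document": document, "content": content}


def upload_memory(token: str, payload: dict, method: str = "POST") -> dict:
    """Send the payload; use method="PUT" to overwrite existing data."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # token from the dashboard
            "Content-Type": "application/json",
        },
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Validating locally before sending avoids spending a request on a predictable 400 INVALID_PAYLOAD response.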

Response

| Name | Type | Description |
| --- | --- | --- |
| data | object | The data of the upload process |
| data.size | number | Number of document chunks created and stored from the uploaded content |
| data.document | string | The document identifier used for storage |
| data.collection | string | The name of the collection where the data was stored |
| metadata | object | Metadata about the data loader upload process |
| metadata.cost | number | The cost of the data loader upload request (approximately $0.000001 per document chunk) |
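A sketch of consuming a response with the shape documented above. The sample values are made up for illustration; only the field names and the approximate per-chunk rate come from the table.

```python
import json

COST_PER_CHUNK = 0.000001  # approximate rate, per the docs

# Illustrative response body matching the documented shape (values invented).
raw = """
{
  "data": {"size": 12, "document": "faq_v1", "collection": "docs"},
  "metadata": {"cost": 0.000012}
}
"""


def summarize_upload(body: dict) -> str:
    """Render the documented response fields as a one-line summary."""
    data = body["data"]
    return (
        f"stored {data['size']} chunks of '{data['document']}' "
        f"in '{data['collection']}' for ~${body['metadata']['cost']:.6f}"
    )


resp = json.loads(raw)
print(summarize_upload(resp))
```

With the documented rate, `metadata.cost` should be roughly `data.size * 0.000001`, which is a useful sanity check when reconciling billing.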

Errors

```json
{
  "error": "string",
  "code": "string"
}
```
| HTTP Status | Error Code | Message |
| --- | --- | --- |
| 400 | INVALID_PAYLOAD | The payload is invalid. |
| 400 | INVALID_REQUEST | The request is invalid. |
| 400 | DATA_LOADER_ERROR | Could not load data, or a parsing error occurred. |
| 400 | HARM_CONTENT | The content is harmful and cannot be processed. |
| 400 | INSUFFICIENT_BALANCE | The user has insufficient balance. |
| 401 | UNAUTHORIZED | The token is invalid. |
| 429 | TOO_MANY_REQUESTS | The request limit has been reached. |
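A sketch of coarse client-side error handling based on the codes above. The split into "retry later" versus "fix the request" is an interpretation (429 being the natural retry candidate), not part of the API contract.

```python
import json

# Codes where waiting and retrying may succeed (interpretation, not contract).
RETRYABLE = {"TOO_MANY_REQUESTS"}
# Codes that require changing the request, token, or account state first.
USER_FIXABLE = {
    "INVALID_PAYLOAD",
    "INVALID_REQUEST",
    "DATA_LOADER_ERROR",
    "HARM_CONTENT",
    "INSUFFICIENT_BALANCE",
    "UNAUTHORIZED",
}


def classify_error(body_text: str) -> str:
    """Map an error response body to a coarse handling strategy."""
    body = json.loads(body_text)  # documented shape: {"error": ..., "code": ...}
    code = body.get("code", "")
    if code in RETRYABLE:
        return "retry-later"
    if code in USER_FIXABLE:
        return "fix-request"
    return "unknown"
```

For example, a 429 body such as `{"error": "The request limit has been reached.", "code": "TOO_MANY_REQUESTS"}` classifies as "retry-later".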