Vector Ingestion Platform
OmniVec
Any source. Any format. Any model. Any destination.
Ingest, embed, and search across everything — one platform, zero glue code.
Real-time pipelines
Any vector store
AI-powered agents



Dashboard

Sources: 0 connected
Destinations: 0 vector stores
Pipelines: 0 active
Embedded / Total: 0 (today: 0)
Failed: 0 (today: 0)
Throughput: — docs/sec
Avg Latency: —

Active Pipelines

Pipeline | Source | Listener | Status
No active pipelines

Sources

Name | Type | Status | Health

No sources configured

Add a source to start ingesting documents

Destinations

Name | Type | Endpoint | Status | Health

No destinations configured

Add a destination to store your vectors

Pipelines

Name | Flow | Status | Health | Source Docs | Embedded | Progress

No pipelines created

Create a pipeline to connect sources to destinations

Models

Model | Source | Type | Status | Health | Resources | Actions

Transform Pipelines

Transform | Source | Applies To | Stages | Version | Description

DocGrok Health


DocGrok Deployments


Metrics


Agent

AI ops assistant — diagnose pipelines, query state, tail logs. Read-only.

Vector Search Playground

Select indexes...

Enter a search query

Your query will be embedded using Azure OpenAI and matched against stored vectors

Health Checks


Deployments


Import / Export

Back up or migrate a deployment

Export sources, vector stores, pipelines, models, and assistants as a JSON bundle, then import them back here or into another OmniVec deployment. You choose which resource types to include; pipeline and source checkpoints can be bundled so in-progress runs resume after restore.

Tips

  • Secrets (connection strings, API keys, passwords) are redacted as "***" by default — toggle Include secrets to embed them.
  • Enable Include checkpoints to carry over processing progress so imported pipelines can resume where they left off.
  • Pipeline filter restricts the bundle to just the listed pipelines and the sources / vector stores / models / assistants they reference.
  • Imports land pipelines in paused state by default — review and resume explicitly.
  • On conflict, choose skip (safe), overwrite (replace), or rename (copy with new IDs).
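As an illustration of the redaction rule above, here is a minimal Python sketch. The field names (`connection_string`, `api_key`, `password`) and the `redact` helper are hypothetical, not OmniVec's actual bundle schema:

```python
# Hypothetical sketch: which fields count as secrets is an assumption.
SECRET_KEYS = {"connection_string", "api_key", "password"}

def redact(resource: dict, include_secrets: bool = False) -> dict:
    """Replace secret-bearing fields with '***' unless secrets are included."""
    if include_secrets:
        return dict(resource)
    return {k: ("***" if k in SECRET_KEYS else v) for k, v in resource.items()}

source = {"name": "docs-blob", "type": "azure-blob", "connection_string": "Endpoint=..."}
print(redact(source))
# connection_string becomes "***"; non-secret fields pass through unchanged
```

With Include secrets toggled on (here, `include_secrets=True`), the bundle would carry the real values, so treat such exports as sensitive.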

Source Details

Export Deployment

Options

Pick exactly which items go into the bundle. A section with nothing selected is omitted entirely.

Secrets are redacted as "***" when unchecked.

Import Deployment

Options

Imported pipelines always land in paused state.


Select a bundle file to preview its contents and choose which items to import.

Destination Details

Add Source

Configuration
Real-Time Processing
Changes are detected and processed automatically
Authentication & Access

Add Vector Destination

Vector Store Configuration
Authentication & Access

Pipeline Details

Create Pipeline

General
Source
Transform
Destination
Options
Configuration
Authentication
Multi-step transform pipeline defined in DocGrok
Directly embed content using a single model — no separate pipeline needed
Note: The managed identity needs Cosmos DB Built-in Data Contributor role (read + write) on the source container.
Select which vector embedding field to use for this pipeline
⚠ No vector policies found on destination. Test the connection or check vector indexing configuration.
✓ Source and destination are the same container (inline mode recommended)
Vector Store Configuration
Authentication
If enabled, all existing documents in the source will be processed immediately
Truncate embeds the full text as a single vector. Chunk splits the text and creates one vector document per chunk.
Chunk Settings
Each chunk document will include the chunk text alongside the embedding vector
Variables: {source} (filename), {chunk} (index), {source_hash}, {pipeline}
Queue mode: the change feed creates jobs and a pipeline worker processes them. Inline mode: the change feed patches documents directly (faster when source and destination are the same container).
Target doc name in destination. Variables: {source} (filename), {source_hash}, {pipeline}, {job}
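The naming variables above can be sketched as plain placeholder substitution — an assumption about OmniVec's templating; the `expand` helper and the pipeline/file names are illustrative only:

```python
def expand(template: str, **vars) -> str:
    """Fill the documented placeholders ({source}, {chunk}, {source_hash},
    {pipeline}, {job}); plain string replacement is assumed."""
    out = template
    for name, value in vars.items():
        out = out.replace("{" + name + "}", str(value))
    return out

# Chunk mode: each chunk becomes its own vector document, so the
# template should include {chunk} to keep names unique.
template = "{pipeline}/{source}-{chunk}"
names = [expand(template, pipeline="kb-sync", source="report.pdf", chunk=i)
         for i in range(3)]
print(names)  # ['kb-sync/report.pdf-0', 'kb-sync/report.pdf-1', 'kb-sync/report.pdf-2']
```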

Add External Model

Model Configuration
Must match the deployment name in Azure OpenAI exactly
Authentication
No key needed

OmniVec authenticates to Azure OpenAI with the AKS cluster's workload identity, so no API key is required. Grant the managed identity access to your Azure OpenAI resource:

Azure Portal:

  1. Open your Azure OpenAI resource → Access Control (IAM)
  2. Click Add role assignment
  3. Select role: Cognitive Services OpenAI User
  4. Assign to: your AKS cluster's managed identity

Or via CLI:

Bash:
az role assignment create \
  --assignee <aks-managed-identity-client-id> \
  --role "Cognitive Services OpenAI User" \
  --scope <azure-openai-resource-id>

PowerShell:

az role assignment create `
  --assignee <aks-managed-identity-client-id> `
  --role "Cognitive Services OpenAI User" `
  --scope <azure-openai-resource-id>

Model Details

Create Transform Pipeline


Lowercase letters, digits and dashes. Must not collide with a built-in. Cannot be changed after creation.
Comma-separated list of file extensions; each must start with a dot. The dispatcher picks the first transform whose extension list matches the input file's extension.
The last stage must be of type embed. Stage order = execution order.
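The dispatch rule above can be sketched as a first-match lookup. The transform names and extension lists below are invented for illustration; only the matching logic reflects the documented behavior:

```python
import os

# Hypothetical registry: each transform declares the extensions it handles,
# each starting with a dot, as described above.
transforms = [
    ("pdf-extract", [".pdf"]),
    ("office-docs", [".docx", ".pptx"]),
    ("plain-text", [".txt", ".md"]),
]

def dispatch(filename):
    """Return the first transform whose extension list matches the input file."""
    ext = os.path.splitext(filename)[1].lower()
    for name, exts in transforms:
        if ext in exts:
            return name
    return None  # no transform claims this extension

print(dispatch("report.pdf"))  # pdf-extract
```

Because the first match wins, registration order matters when two transforms claim the same extension.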