Your files. Your intelligence.

A local embedding engine that understands every file on your machine — text, images, audio, video — without ever leaving your device.

Not search. Not sort. Understand.


Every file tells a story. EmbedCore reads them all — extracting meaning, building connections, and making your entire library searchable by concept, not just by name.

The Basics

What is a vector embedding?

It sounds technical, but the idea is simple. Think of it as giving every file a set of coordinates — not for where it is, but for what it means.

Your file

A photo, a document, a song — any file on your machine that contains some kind of meaning.

becomes

A point in meaning-space

A list of numbers (a "vector") that captures the essence of that file — its topics, mood, content, and context.

01

Files have meaning

A photo of a sunset, a jazz song, and an essay about nature are all different files — but they share something in common. Traditional search can't see that connection because it only looks at filenames and keywords.

02

AI reads the meaning

An embedding model (a small AI that runs on your machine) looks at each file and converts its meaning into a list of numbers. Think of it like GPS coordinates, but instead of marking a place on a map, they mark a place in "meaning space."

03

Similar things land nearby

Files with similar meanings get similar coordinates. That sunset photo, the jazz song, and the nature essay all end up in the same neighbourhood — so when you search for one, you can find the others.

That's it. A vector embedding is just a way to turn "what something means" into numbers — so a computer can finally search by meaning, not just by name.
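The three steps above can be sketched in a few lines. The file names and three-dimensional vectors here are made up for illustration — real embedding models produce hundreds of dimensions — but the comparison works the same way: files whose vectors point in similar directions are neighbours in meaning-space.

```python
import math

def cosine_similarity(a, b):
    """How close two points in meaning-space are (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (illustrative values, not model output).
embeddings = {
    "sunset.jpg":  [0.9, 0.8, 0.1],  # warm, natural imagery
    "nature.md":   [0.8, 0.9, 0.2],  # essay about nature
    "invoice.pdf": [0.1, 0.0, 0.9],  # unrelated business document
}

# Searching "near" the sunset photo surfaces the nature essay,
# while the invoice stays far away.
query = embeddings["sunset.jpg"]
for name, vec in embeddings.items():
    print(name, round(cosine_similarity(query, vec), 2))
```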

Privacy-First

Your data never leaves your machine.

Every computation runs locally. No API keys, no cloud sync, no telemetry. EmbedCore processes your files on your hardware — your embeddings, your metadata, your control.

0 Cloud dependencies
0 Network calls
100% On-device

Architecture

Built to be extended.

Modules are standalone executables that communicate via a simple JSON protocol. Write them in any language — Rust, Python, Go, C++ — with zero framework lock-in.

Language Agnostic

Write modules in any language. If it can read stdin and write stdout, it can be an EmbedCore module.

Self-Contained

Each module bundles its own models and dependencies. Install or remove modules without affecting anything else.

Zero Lock-In

No framework assumptions. Modules choose their own ML runtime — ONNX, PyTorch, TensorFlow, or pure DSP.

module_protocol.json
{
  "type": "handshake",
  "module_name": "audio-features",
  "version": "1.0.0",
  "capabilities": {
    "embedding_layers": [
      { "name": "timbral", "dimensions": 128 }
    ],
    "supported_extensions": [".mp3", ".flac", ".wav", ".ogg"]
  }
}
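A module really is just a process that speaks JSON over stdin and stdout. The sketch below sends the handshake shown above, then answers one request per line. Only the handshake shape comes from the protocol example — the request/response fields ("result", "path", "embedding") are assumptions for illustration, not EmbedCore's actual message format.

```python
import json
import sys

def build_handshake():
    # The handshake from the protocol example above, as a Python dict.
    return {
        "type": "handshake",
        "module_name": "audio-features",
        "version": "1.0.0",
        "capabilities": {
            "embedding_layers": [{"name": "timbral", "dimensions": 128}],
            "supported_extensions": [".mp3", ".flac", ".wav", ".ogg"],
        },
    }

def handle_request(request):
    # Illustrative response shape: a real module would compute an
    # actual embedding for request["path"] here.
    return {"type": "result", "path": request.get("path"), "embedding": [0.0] * 128}

def run_module(stdin=sys.stdin, stdout=sys.stdout):
    # Announce capabilities first, then emit one JSON response per request line.
    print(json.dumps(build_handshake()), file=stdout, flush=True)
    for line in stdin:
        print(json.dumps(handle_request(json.loads(line))), file=stdout, flush=True)
```

A real module script would call `run_module()` at startup; because the protocol is plain line-delimited JSON, the same shape ports directly to Rust, Go, or C++.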
Multi-Modal

Every format. One search.

From documents to photos to music to video — EmbedCore modules can understand and index any file type, enabling searches that cross modality boundaries.

01

File Detection

Filesystem watcher detects new and changed files in real time

02

Change Verification

Two-tier detection: fast mtime check, then SHA-256 hash for accuracy

03

Priority Queuing

User requests first, then new files, then reprocessing jobs

04

Dependency Resolution

Topological sort ensures modules execute in the right order

05

Concurrent Execution

Independent modules process files in parallel for maximum throughput
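The two-tier check from step 02 can be sketched as follows. The function and argument names are illustrative, not EmbedCore's actual API; the idea is simply that a timestamp comparison is cheap, so the SHA-256 hash only runs when the timestamp has moved.

```python
import hashlib
import os

def file_changed(path, last_mtime_ns, last_sha256):
    """Two-tier change check: cheap mtime comparison first, hash only when needed."""
    mtime_ns = os.stat(path).st_mtime_ns
    if mtime_ns == last_mtime_ns:
        return False  # fast path: timestamp unchanged, assume content unchanged
    # Slow path: the timestamp moved, so verify with a content hash to avoid
    # reprocessing files that were merely touched, copied, or restored.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() != last_sha256
```

Hashing in 1 MiB chunks keeps memory flat even for multi-gigabyte video files.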

Pipeline

Intelligent from ingestion to index.

The processing pipeline handles everything — from detecting file changes to orchestrating module execution — so your library stays indexed without you thinking about it.
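The three-tier ordering described in the steps above (user requests first, then new files, then reprocessing jobs) can be modeled with a standard min-heap. The tier names come from the pipeline description; the class itself is an illustrative sketch, not EmbedCore's internals.

```python
import heapq
import itertools

# Lower number = higher priority, matching the ordering above.
USER_REQUEST, NEW_FILE, REPROCESS = 0, 1, 2

class JobQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: FIFO within a tier

    def push(self, priority, path):
        heapq.heappush(self._heap, (priority, next(self._counter), path))

    def pop(self):
        return heapq.heappop(self._heap)[2]

queue = JobQueue()
queue.push(REPROCESS, "old_scan.pdf")
queue.push(NEW_FILE, "fresh_photo.jpg")
queue.push(USER_REQUEST, "urgent_doc.md")
# Pops in priority order: urgent_doc.md, fresh_photo.jpg, old_scan.pdf
```

The counter matters: without it, two jobs in the same tier would be compared by path, breaking arrival order.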

Real-Time Watching

Filesystem events trigger processing automatically. Add files to a watched folder and they're indexed within moments.

Cycle Detection

A depth-first search over the dependency graph catches circular module chains before any processing begins.
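A minimal sketch of that check, assuming the dependency graph maps each module name to the modules it depends on (the module names are illustrative):

```python
def find_cycle(graph):
    """Depth-first search for a cycle in a module dependency graph.

    graph maps each module name to a list of modules it depends on.
    Returns some module on a cycle, or None if the graph is acyclic.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for dep in graph.get(node, ()):
            if color.get(dep, WHITE) == GRAY:
                return dep  # back edge: dep is already on the current path
            if color.get(dep, WHITE) == WHITE:
                hit = dfs(dep)
                if hit is not None:
                    return hit
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            hit = dfs(node)
            if hit is not None:
                return hit
    return None

# An acyclic chain passes; a loop is rejected before any processing begins.
ok = {"audio-features": ["file-hash"], "file-hash": []}
bad = {"a": ["b"], "b": ["a"]}
```

The same GRAY/BLACK bookkeeping doubles as the basis for the topological sort used to order module execution.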

All On Your Hardware

Every computation — from hashing to embedding — runs on your CPU and GPU. Nothing is offloaded to external services.

Platform

One binary. Every desktop.

Built with Tauri and Rust, EmbedCore ships as a single native binary with no runtime dependencies. Lightweight, fast, and truly cross-platform.

macOS

Native .app bundle with Apple Silicon and Intel support. Lightweight, no Electron overhead.

Linux

AppImage and .deb packages. Runs on any modern distribution with minimal dependencies.

Windows

Native .exe installer. Leverages WebView2 for a snappy, memory-efficient desktop experience.

Single Binary. Zero Dependencies.

No runtime installations, no system libraries, no configuration. Download, run, and start indexing. Powered by Tauri's lean native architecture — a fraction of the memory footprint of Electron-based apps.

Coming Soon

The future of local search.

EmbedCore is being built for people who believe their files — and the intelligence extracted from them — should stay on their machine. Be the first to know when it launches.

No spam. One email when we launch.