# Storage API Reference

The `storage` crate provides the memory persistence layer for Soul Kernel.
## Overview

The storage system implements a hybrid approach, combining SQLite for reliable persistence with vector storage for semantic search capabilities.
## Core Types

### MemoryEvent

The fundamental unit of memory storage.

```rust
pub struct MemoryEvent {
    pub id: Uuid,
    pub timestamp: DateTime<Utc>,
    pub author: String,
    pub event_type: MemoryEventType,
    pub content: String,
    pub embedding: Vec<f32>,
    pub metadata: serde_json::Value,
}
```
Fields:

- `id` - Unique identifier (UUID v4)
- `timestamp` - UTC timestamp of event creation
- `author` - Device or source identifier
- `event_type` - Type of memory event
- `content` - Human-readable content
- `embedding` - Vector representation for similarity search
- `metadata` - Additional JSON metadata
### MemoryEventType

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum MemoryEventType {
    Observation, // External perceptions
    Interaction, // User interactions
    System,      // System events
}
```
### MemoryQuery

Parameters for vector similarity search.

```rust
pub struct MemoryQuery {
    pub embedding: Vec<f32>,
    pub top_k: usize,
    pub score_threshold: Option<f32>,
    pub filter: Option<MemoryFilter>,
}
```
Fields:

- `embedding` - Query vector for similarity search
- `top_k` - Maximum number of results to return
- `score_threshold` - Minimum similarity score (0.0 to 1.0)
- `filter` - Optional filters to apply
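How `top_k` and `score_threshold` interact can be illustrated with a minimal, self-contained sketch; the crate performs this selection inside the vector store, so the function below (and its `(score, id)` tuples) are illustrative stand-ins, not part of the API:

```rust
/// Keep only results meeting the optional score threshold, then return
/// at most `top_k` of them, best score first.
fn select_results(
    mut scored: Vec<(f32, u32)>,
    top_k: usize,
    threshold: Option<f32>,
) -> Vec<(f32, u32)> {
    // Drop results below the threshold, if one is set.
    if let Some(min) = threshold {
        scored.retain(|(score, _)| *score >= min);
    }
    // Sort by descending score and truncate to `top_k`.
    scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    scored.truncate(top_k);
    scored
}

fn main() {
    let scored = vec![(0.92, 1), (0.40, 2), (0.75, 3), (0.81, 4)];
    // With top_k = 2 and threshold 0.7, only the two best survivors remain.
    println!("{:?}", select_results(scored, 2, Some(0.7)));
}
```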
### MemoryFilter

```rust
pub struct MemoryFilter {
    pub event_types: Option<Vec<MemoryEventType>>,
    pub authors: Option<Vec<String>>,
    pub after: Option<DateTime<Utc>>,
    pub before: Option<DateTime<Utc>>,
}
```
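The intended filter semantics (assumed here: a `None` field imposes no constraint, all populated fields must match, and the time bounds are exclusive) can be sketched with simplified stand-in types, using `i64` Unix timestamps in place of `DateTime<Utc>` so the example is self-contained:

```rust
// Stand-ins for the crate's MemoryEventType and MemoryFilter.
#[derive(Debug, Clone, PartialEq)]
enum EventType { Observation, Interaction, System }

struct Filter {
    event_types: Option<Vec<EventType>>,
    authors: Option<Vec<String>>,
    after: Option<i64>,
    before: Option<i64>,
}

impl Filter {
    /// An event matches when every populated field accepts it;
    /// a `None` field imposes no constraint.
    fn matches(&self, event_type: &EventType, author: &str, ts: i64) -> bool {
        self.event_types.as_ref().map_or(true, |allowed| allowed.contains(event_type))
            && self.authors.as_ref().map_or(true, |a| a.iter().any(|x| x.as_str() == author))
            && self.after.map_or(true, |t| ts > t)
            && self.before.map_or(true, |t| ts < t)
    }
}

fn main() {
    let f = Filter {
        event_types: Some(vec![EventType::Observation]),
        authors: None,           // any author
        after: Some(1_000),      // strictly newer than t = 1000
        before: None,            // no upper bound
    };
    assert!(f.matches(&EventType::Observation, "device_1", 2_000));
    assert!(!f.matches(&EventType::System, "device_1", 2_000));
}
```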
## Traits

### MemoryStore

The core trait that all storage implementations must provide.

```rust
#[async_trait]
pub trait MemoryStore: Send + Sync {
    async fn insert_event(&self, event: &MemoryEvent) -> Result<Uuid>;
    async fn query_embeddings(&self, query: &MemoryQuery) -> Result<QueryResult>;
    async fn get_event(&self, id: &Uuid) -> Result<Option<MemoryEvent>>;
    async fn get_events_since(&self, timestamp: i64, limit: usize) -> Result<Vec<MemoryEvent>>;
    async fn migrate(&self) -> Result<()>;
    async fn compact(&self) -> Result<()>;
}
```
## Implementations

### HybridMemoryStore

The recommended implementation, combining SQLite and vector search.

```rust
impl HybridMemoryStore {
    /// Create with a SQLite file and optional Qdrant URL
    pub async fn new<P: AsRef<Path>>(
        db_path: P,
        qdrant_url: Option<&str>,
    ) -> Result<Self>

    /// Create an in-memory store (for testing)
    pub async fn in_memory() -> Result<Self>
}
```
### SqliteMemoryStore

Direct SQLite implementation for simple use cases.

```rust
impl SqliteMemoryStore {
    pub fn new<P: AsRef<Path>>(path: P) -> Result<Self>
    pub fn in_memory() -> Result<Self>
}
```
## Error Handling

### StorageError

```rust
#[derive(Error, Debug)]
pub enum StorageError {
    #[error("Database error: {0}")]
    Database(#[from] rusqlite::Error),

    #[error("Vector store error: {0}")]
    VectorStore(String),

    #[error("Serialization error: {0}")]
    Serialization(#[from] serde_json::Error),

    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),

    #[error("Not found: {0}")]
    NotFound(String),

    #[error("Invalid configuration: {0}")]
    InvalidConfig(String),

    #[error("Migration error: {0}")]
    Migration(String),
}
```
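Callers typically branch on these variants, for example to separate transient failures from permanent ones. A self-contained sketch, using a trimmed stand-in enum (the real `StorageError` derives `thiserror::Error` and has more variants; the retry policy below is an assumed example, not crate behavior):

```rust
use std::fmt;

// Stand-in for StorageError, trimmed to three variants.
#[derive(Debug)]
enum StorageError {
    NotFound(String),
    VectorStore(String),
    InvalidConfig(String),
}

impl fmt::Display for StorageError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            StorageError::NotFound(what) => write!(f, "Not found: {}", what),
            StorageError::VectorStore(msg) => write!(f, "Vector store error: {}", msg),
            StorageError::InvalidConfig(msg) => write!(f, "Invalid configuration: {}", msg),
        }
    }
}

/// Example policy: a vector-store hiccup may be transient and worth
/// retrying; a missing record or bad configuration will not improve
/// on retry.
fn is_retryable(err: &StorageError) -> bool {
    matches!(err, StorageError::VectorStore(_))
}

fn main() {
    let err = StorageError::NotFound("event 42".into());
    println!("{} (retryable: {})", err, is_retryable(&err));
}
```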
## Usage Examples

### Basic Usage

```rust
use storage::{HybridMemoryStore, MemoryStore, MemoryEvent, MemoryEventType};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize storage and run schema migrations
    let store = HybridMemoryStore::new("memories.db", None).await?;
    store.migrate().await?;

    // Create a memory
    let event = MemoryEvent::new(
        "device_1".to_string(),
        MemoryEventType::Observation,
        "Saw a beautiful sunset".to_string(),
        vec![0.1, 0.2, 0.3, 0.4], // Embedding from an embedding model
    );

    // Store it
    let id = store.insert_event(&event).await?;
    println!("Stored memory: {}", id);

    Ok(())
}
```
### Vector Search

```rust
use storage::{MemoryQuery, MemoryFilter, MemoryEventType};

// Search for similar memories
let query = MemoryQuery {
    embedding: vec![0.15, 0.25, 0.35, 0.45],
    top_k: 5,
    score_threshold: Some(0.7),
    filter: Some(MemoryFilter {
        event_types: Some(vec![MemoryEventType::Observation]),
        authors: None,
        after: None,
        before: None,
    }),
};

let results = store.query_embeddings(&query).await?;
for result in results.events {
    println!("Score: {:.3} - {}", result.score, result.event.content);
}
```
### Synchronization

```rust
// Get events created since the last sync point
let last_sync = chrono::Utc::now().timestamp() - 3600; // 1 hour ago
let events = store.get_events_since(last_sync, 100).await?;

for event in events {
    // Process for synchronization
    sync_to_cloud(&event).await?;
}
```
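Because `get_events_since` takes a `limit`, a full sync pages through the backlog in chunks, advancing the cursor to the last timestamp seen until a short page signals the end. A self-contained sketch of that loop, with `(timestamp, id)` tuples standing in for events and the log assumed sorted by timestamp:

```rust
// Stand-in for `get_events_since(timestamp, limit)` over a sorted log.
fn events_since(log: &[(i64, u32)], since: i64, limit: usize) -> Vec<(i64, u32)> {
    log.iter().copied().filter(|(ts, _)| *ts > since).take(limit).collect()
}

/// Page through everything newer than `cursor`, `limit` events at a time.
fn sync_all(log: &[(i64, u32)], mut cursor: i64, limit: usize) -> Vec<u32> {
    let mut synced = Vec::new();
    loop {
        let page = events_since(log, cursor, limit);
        for (ts, id) in &page {
            synced.push(*id); // stand-in for sync_to_cloud(&event)
            cursor = *ts;     // advance the cursor past this event
        }
        if page.len() < limit {
            break; // short page: nothing left to fetch
        }
    }
    synced
}

fn main() {
    let log = [(10, 1), (20, 2), (30, 3), (40, 4), (50, 5)];
    // Everything after t = 15, fetched two events per page.
    println!("{:?}", sync_all(&log, 15, 2));
}
```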
## Performance

| Operation | Performance |
|---|---|
| Single Insert | ~65μs |
| Query (1k events) | ~2.1ms |
| Bulk Insert (10k) | ~92ms |
| Get by ID | ~100μs |
## Best Practices

- Batch Inserts: Use transactions for bulk operations
- Embedding Size: Keep embeddings reasonably small (384 dimensions recommended)
- Indexing: Add custom indexes for frequent query patterns
- Compaction: Run `compact()` during maintenance windows
- Error Handling: Always handle `StorageError` appropriately
## Thread Safety

All storage implementations are thread-safe and can be shared across async tasks using `Arc`.

```rust
use std::sync::Arc;

let store = Arc::new(HybridMemoryStore::new("memories.db", None).await?);

// Share across tasks
let store_clone = store.clone();
tokio::spawn(async move {
    // Use store_clone safely
});
```
## Migration Support

The storage layer includes automatic schema migrations:

```rust
// Migrations run automatically on first use
store.migrate().await?;
```

Current schema version: 1
## Configuration

### SQLite Settings

The SQLite adapter uses these optimizations:

- WAL mode for better concurrency
- Normal synchronous mode
- 64MB cache size
- In-memory temp store
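These settings map onto standard SQLite pragmas. A sketch of what the adapter might issue on connection setup, via `rusqlite` (which the crate already depends on); the exact values, in particular the cache size, are assumptions here rather than confirmed crate internals:

```rust
// Apply the optimizations above in one batch. SQLite interprets a
// negative cache_size as a size in KiB, so -64000 is roughly 64 MB.
conn.execute_batch(
    "PRAGMA journal_mode = WAL;
     PRAGMA synchronous = NORMAL;
     PRAGMA cache_size = -64000;
     PRAGMA temp_store = MEMORY;",
)?;
```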
### Vector Store Settings

When using Qdrant (currently mocked):

- HNSW index with M=16, ef=64
- Cosine similarity metric
- Automatic collection creation
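Cosine similarity, the metric listed above, compares embedding direction rather than magnitude. A minimal self-contained implementation for reference (not part of the crate's API; the vector store computes this internally):

```rust
/// Cosine similarity between two equal-length embeddings:
/// dot(a, b) / (|a| * |b|), ranging from -1.0 to 1.0, where
/// 1.0 means the vectors point in the same direction.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let a = [0.1, 0.2, 0.3, 0.4];
    let b = [0.15, 0.25, 0.35, 0.45];
    // Vectors pointing in nearly the same direction score close to 1.0.
    println!("{:.3}", cosine_similarity(&a, &b));
}
```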
## Change Log

- 2025-06-13: Initial API documentation created