# gRPC API Reference

The Soul Kernel exposes its core functionality through a gRPC API, enabling language-agnostic communication between the Rust kernel and various shells (iOS, Unity, Physical AI).
## Overview

The gRPC API provides three core operations:
- Init: Initialize a new Soul session
- Ask: Process queries with streaming responses
- Remember: Store memories in the Soul’s memory graph
## Service Definition

```proto
service SoulKernel {
  rpc Init(InitRequest) returns (InitResponse);
  rpc Ask(AskRequest) returns (stream AskResponse);
  rpc Remember(RememberRequest) returns (RememberResponse);
}
```
## Endpoints

### Init - Create a Soul Session

Initialize a new Soul with a name and optional metadata.
Request:

```proto
message InitRequest {
  string soul_name = 1;             // Name for the Soul
  map<string, string> metadata = 2; // Optional metadata
}
```
Response:

```proto
message InitResponse {
  string soul_id = 1;       // Unique Soul identifier
  string session_token = 2; // Session authentication token
  int64 created_at = 3;     // Unix timestamp
}
```
Example:

```rust
let request = InitRequest {
    soul_name: "Assistant".to_string(),
    // map<string, string> fields are HashMap<String, String> in the generated code
    metadata: HashMap::from([
        ("platform".to_string(), "ios".to_string()),
        ("version".to_string(), "1.0.0".to_string()),
    ]),
};
```
### Ask - Query the Soul

Send queries to the Soul and receive streaming responses. Supports progressive text generation, thinking status, and skill execution feedback.
Request:

```proto
message AskRequest {
  string soul_id = 1;              // Soul ID from Init
  string session_token = 2;        // Session token from Init
  string query = 3;                // User's query
  repeated string skill_hints = 4; // Optional skill suggestions
}
```
Response (Streaming):

```proto
message AskResponse {
  oneof content {
    string text_chunk = 1;       // Partial text response
    ThinkingStatus thinking = 2; // Processing status
    SkillExecution skill = 3;    // Skill execution info
  }
  ResponseMetadata metadata = 10; // Response metadata
}
```
Response Types:

- Text Chunks: Progressive text generation

  ```
  content: TextChunk("Hello, ")
  content: TextChunk("how can ")
  content: TextChunk("I help?")
  ```

- Thinking Status: Processing feedback (see the rendering sketch below)

  ```proto
  message ThinkingStatus {
    string status = 1;  // Status message
    float progress = 2; // Progress (0.0-1.0)
  }
  ```

- Skill Execution: Skill activity feedback

  ```proto
  message SkillExecution {
    string skill_name = 1;
    string status = 2;
    map<string, string> results = 3;
  }
  ```
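As an illustration, a shell might render thinking progress as a simple bar. This is a hypothetical sketch, assuming the prost-generated `ThinkingStatus` type from the message above:

```rust
use soul_kernel::v1::ThinkingStatus;

// Render a 20-cell progress bar from a ThinkingStatus update.
fn render_thinking(status: &ThinkingStatus) {
    let filled = (status.progress.clamp(0.0, 1.0) * 20.0).round() as usize;
    println!("[{}{}] {}", "#".repeat(filled), "-".repeat(20 - filled), status.status);
}
```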
### Remember - Store Memories

Store new memories in the Soul’s memory graph.
Request:

```proto
message RememberRequest {
  string soul_id = 1;
  string session_token = 2;
  Memory memory = 3;
}

message Memory {
  string content = 1;               // Memory content
  repeated float embedding = 2;     // Vector embedding
  map<string, string> metadata = 3; // Memory metadata
}
```
Response:

```proto
message RememberResponse {
  string memory_id = 1; // Unique memory identifier
  bool stored = 2;      // Storage success
  int64 stored_at = 3;  // Unix timestamp
}
```
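A minimal sketch of calling Remember with the generated Rust client from the Client Examples section below. The memory content, embedding values, and metadata are illustrative placeholders; note that with prost the nested `Memory` field is an `Option`:

```rust
use soul_kernel::v1::{Memory, RememberRequest};

let resp = client.remember(RememberRequest {
    soul_id: init_resp.soul_id.clone(),
    session_token: init_resp.session_token.clone(),
    memory: Some(Memory {
        content: "User prefers metric units".to_string(), // placeholder content
        embedding: vec![0.12, -0.34, 0.56],               // placeholder vector
        metadata: HashMap::from([("source".to_string(), "chat".to_string())]),
    }),
}).await?.into_inner();

println!("stored={} id={} at={}", resp.stored, resp.memory_id, resp.stored_at);
```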
## Common Types

All responses include metadata for observability:

```proto
message ResponseMetadata {
  string correlation_id = 1; // Request correlation ID
  int64 timestamp = 2;       // Response timestamp
  int32 latency_ms = 3;      // Processing latency
  string kernel_version = 4; // Kernel version
}
```
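For example, a client can surface the correlation ID and latency for tracing. A sketch against a streamed `AskResponse` (the field is an `Option` in the prost-generated type):

```rust
// Log tracing metadata from each streamed response, if present.
if let Some(meta) = &response.metadata {
    eprintln!(
        "correlation_id={} latency_ms={} kernel={}",
        meta.correlation_id, meta.latency_ms, meta.kernel_version
    );
}
```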
## Client Examples

### Rust Client
```rust
use std::collections::HashMap;

use soul_kernel::v1::soul_kernel_client::SoulKernelClient;
use soul_kernel::v1::{ask_response::Content, AskRequest, InitRequest};

// Connect to the server
let mut client = SoulKernelClient::connect("http://localhost:50051").await?;

// Initialize a Soul
let init_resp = client.init(InitRequest {
    soul_name: "MyCompanion".to_string(),
    metadata: HashMap::new(),
}).await?.into_inner();

// Ask a question
let mut stream = client.ask(AskRequest {
    soul_id: init_resp.soul_id.clone(),
    session_token: init_resp.session_token.clone(),
    query: "What's the weather like?".to_string(),
    skill_hints: vec!["weather".to_string()],
}).await?.into_inner();

// Process the streaming response
while let Some(response) = stream.message().await? {
    match response.content {
        Some(Content::TextChunk(text)) => print!("{}", text),
        Some(Content::Thinking(status)) => println!("🤔 {}", status.status),
        Some(Content::Skill(exec)) => println!("⚡ {}: {}", exec.skill_name, exec.status),
        None => {}
    }
}
```
### Python Client

```python
import grpc
import soul_kernel_pb2 as sk
import soul_kernel_pb2_grpc as sk_grpc

# Connect to the server
channel = grpc.insecure_channel('localhost:50051')
client = sk_grpc.SoulKernelStub(channel)

# Initialize a Soul
init_resp = client.Init(sk.InitRequest(
    soul_name="PyCompanion",
    metadata={"lang": "python"}
))

# Ask a question
responses = client.Ask(sk.AskRequest(
    soul_id=init_resp.soul_id,
    session_token=init_resp.session_token,
    query="Tell me a joke"
))

# Process the streaming response
for response in responses:
    if response.HasField('text_chunk'):
        print(response.text_chunk, end='')
```
### JavaScript/TypeScript Client

```typescript
// Assumes a client generated from soul_kernel.proto (e.g. via grpc-js tooling).
const client = new SoulKernelClient('localhost:50051');

// Initialize a Soul
const { soulId, sessionToken } = await client.init({
  soulName: 'JSCompanion',
  metadata: { platform: 'web' }
});

// Ask with streaming
const stream = client.ask({
  soulId,
  sessionToken,
  query: 'What can you do?'
});

for await (const response of stream) {
  if (response.textChunk) {
    process.stdout.write(response.textChunk);
  }
}
```
## Error Handling

The gRPC API uses standard gRPC status codes:

| Code | Meaning | Example |
|---|---|---|
| `OK` | Success | Normal response |
| `INVALID_ARGUMENT` | Bad request | Missing `soul_id` |
| `UNAUTHENTICATED` | Invalid session | Expired token |
| `NOT_FOUND` | Soul not found | Unknown `soul_id` |
| `RESOURCE_EXHAUSTED` | Rate limited | Too many requests |
| `INTERNAL` | Server error | Database failure |
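With a tonic-generated Rust client these surface as `tonic::Status`. A sketch of distinguishing an expired session, reusing the `AskRequest` from the Rust example above (recovery logic is application-specific):

```rust
use tonic::Code;

match client.ask(request).await {
    Ok(resp) => {
        let _stream = resp.into_inner();
        // ...consume the stream as in the Rust client example above
    }
    Err(status) if status.code() == Code::Unauthenticated => {
        // Expired session token: re-run Init to obtain a fresh one.
    }
    Err(status) => eprintln!("gRPC error {:?}: {}", status.code(), status.message()),
}
```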
## Performance

- Latency: p95 < 1 s (online), < 6 s (offline)
- Streaming: first chunk typically arrives within 100-200 ms
- Throughput: supports 1000+ concurrent connections
- Binary size: server binary < 20 MB
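To check first-chunk latency against these targets in your own deployment, a small timing sketch reusing the Rust client above:

```rust
use std::time::Instant;

let start = Instant::now();
let mut stream = client.ask(request).await?.into_inner();
if stream.message().await?.is_some() {
    println!("first chunk after {} ms", start.elapsed().as_millis());
}
```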
## Security

- Session tokens expire after 24 hours
- All responses include correlation IDs for tracing
- TLS encryption recommended for production
- Rate limiting per soul_id
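For example, with tonic (assuming the `tls` feature is enabled; the endpoint URL is a placeholder) a TLS channel can be set up like this:

```rust
use tonic::transport::{Channel, ClientTlsConfig};

// Placeholder endpoint; substitute your deployment's address.
let channel = Channel::from_static("https://soul-kernel.example.com:50051")
    .tls_config(ClientTlsConfig::new())?
    .connect()
    .await?;
let mut client = SoulKernelClient::new(channel);
```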
## Proto File Location

The complete protobuf definition is located at `kernel/proto/soul_kernel.proto`.

To generate client code for your language:

```bash
protoc --proto_path=kernel/proto \
  --<lang>_out=. \
  --<lang>-grpc_out=. \
  kernel/proto/soul_kernel.proto
```
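For Rust, a build.rs using tonic-build is a common alternative to invoking protoc directly (a sketch; the repository's actual build setup may differ):

```rust
// build.rs
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("kernel/proto/soul_kernel.proto")?;
    Ok(())
}
```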
## Change Log

| Date | Version | Changes |
|---|---|---|
| 2025-06-12 | 0.1.0 | Initial gRPC API implementation with Init/Ask/Remember |