Architecture
Cerulion Core is built on a dual-transport architecture that optimizes for both local and network communication. This page explains how the system is designed and how its components work together.
System Overview
Cerulion Core provides a unified pub/sub API that automatically selects the best transport for each communication path.
Dual Transport System
Local Transport (iceoryx2)
Purpose: Ultra-low latency communication between processes on the same machine.
How it works:
- Publisher allocates a shared memory region
- Data is written directly to shared memory (no serialization)
- Subscriber reads from the same memory location
- Zero-copy means no data copying occurs
Characteristics:
- Latency: < 1 μs for small messages
- Throughput: Limited by memory bandwidth
- Range: Same machine only
- Serialization: None (raw bytes)
The zero-copy design means that for local communication, Cerulion Core is essentially just a shared memory allocator. There’s no serialization overhead, making it ideal for high-frequency sensor data or real-time control loops.
Network Transport (Zenoh)
Purpose: Reliable communication across machines and networks.
How it works:
- Publisher sends the message to a background thread (non-blocking)
- Background thread serializes data to bytes
- Serialized bytes sent over Zenoh network
- Subscriber receives and deserializes
Characteristics:
- Latency: 1-10 ms (network dependent)
- Throughput: Network bandwidth limited
- Range: Any machine on network
- Serialization: Automatic (raw bytes for `Copy` types)
Network publishing happens asynchronously on a background thread. This means local publishing never blocks waiting for network I/O, ensuring high availability even when the network is slow.
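The hand-off to the background thread can be sketched with a plain channel and worker thread. Everything here is illustrative, not Cerulion Core's actual API: the `Imu` type is a stand-in message, and a short sleep stands in for Zenoh serialization and network I/O.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical fixed-size message; any Copy type would work the same way.
#[derive(Clone, Copy, Debug)]
struct Imu {
    accel_z: f32,
}

// Background worker: drains the channel and "sends" each message over the
// network (simulated by a delay). Returns how many messages it sent.
fn spawn_network_thread(rx: mpsc::Receiver<Imu>) -> thread::JoinHandle<usize> {
    thread::spawn(move || {
        let mut sent = 0;
        for msg in rx {
            thread::sleep(Duration::from_millis(1)); // serialize + network I/O
            let _ = msg;
            sent += 1;
        }
        sent
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = spawn_network_thread(rx);
    // The publisher only pushes into the channel; it never waits on the network.
    for i in 0..5 {
        tx.send(Imu { accel_z: i as f32 }).expect("network thread alive");
    }
    drop(tx); // closing the channel lets the background thread exit cleanly
    assert_eq!(handle.join().unwrap(), 5);
}
```

The publisher's cost per message is just a channel send, regardless of how slow the simulated network is.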
Component Architecture
Publisher
The Publisher component manages both local and network publishing.
Key features:
- Dual-path publishing: Sends to both local and network transports simultaneously
- Non-blocking network: Network sends don’t block local publishing
- Latest-message semantics: Only the most recent message is queued for network
- Automatic thread management: Background thread starts on create, shuts down on drop
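The latest-message semantics above amount to a single-slot queue: a new publish overwrites any message the network thread has not picked up yet. A minimal sketch, assuming a shared slot rather than Cerulion Core's actual internal type:

```rust
use std::sync::{Arc, Mutex};

// Single-slot "latest message" queue: publishing overwrites, taking drains.
struct LatestSlot<T> {
    slot: Arc<Mutex<Option<T>>>,
}

impl<T> LatestSlot<T> {
    fn new() -> Self {
        Self { slot: Arc::new(Mutex::new(None)) }
    }

    // Publisher side: never blocks on the consumer, just replaces the slot.
    fn publish(&self, msg: T) {
        *self.slot.lock().unwrap() = Some(msg);
    }

    // Network-thread side: takes the newest message, if any.
    fn take(&self) -> Option<T> {
        self.slot.lock().unwrap().take()
    }
}

fn main() {
    let q = LatestSlot::new();
    q.publish(1u32);
    q.publish(2u32); // overwrites 1 before the consumer sees it
    assert_eq!(q.take(), Some(2));
    assert_eq!(q.take(), None); // slot is drained after a take
}
```

Dropping stale messages is a deliberate trade-off: a slow network degrades freshness, not publisher throughput.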
Subscriber
The Subscriber component automatically selects the best transport.
Key features:
- Auto-detection: Tries local transport first, falls back to network
- Network-only mode: Can force network transport with `Some(true)`
- Non-blocking receive: Returns immediately if no message is available
- Type safety: Validates message type at compile time (Rust) or runtime (Python/C++)
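The auto-detection behavior can be sketched by modeling both transports as non-blocking receivers and preferring the local one. The channels here stand in for iceoryx2 and Zenoh; the function name `recv_auto` is illustrative:

```rust
use std::sync::mpsc::{channel, Receiver};

// Try the local transport first; if it has nothing, poll the network.
// Neither path blocks: with no message anywhere, None is returned immediately.
fn recv_auto<T>(local: &Receiver<T>, network: &Receiver<T>) -> Option<T> {
    match local.try_recv() {
        Ok(msg) => Some(msg),           // local message wins
        Err(_) => network.try_recv().ok(),
    }
}

fn main() {
    let (ltx, lrx) = channel();
    let (ntx, nrx) = channel();

    // Nothing published yet: returns immediately with None.
    assert_eq!(recv_auto(&lrx, &nrx), None);

    ltx.send(1).unwrap();
    ntx.send(2).unwrap();
    // With both available, the local message is preferred...
    assert_eq!(recv_auto(&lrx, &nrx), Some(1));
    // ...and the network message is the fallback.
    assert_eq!(recv_auto(&lrx, &nrx), Some(2));
}
```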
TopicManager
Centralized management for publishers and subscribers.
Key features:
- Shared session: Single Zenoh session for all pub/sub pairs (reduces overhead)
- Automatic discovery: Subscribers announce themselves, publishers auto-enable network
- Type safety: Validates that topics use consistent message types
- Keep-alive tracking: Automatically disables network when subscribers disconnect
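The keep-alive tracking can be sketched as a map from subscriber IDs to last-seen timestamps, with the network path enabled only while at least one subscriber is fresh. The type name, subscriber IDs, and timeout value are illustrative assumptions:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Tracks when each subscriber last announced itself.
struct KeepAlive {
    timeout: Duration,
    last_seen: HashMap<String, Instant>,
}

impl KeepAlive {
    fn new(timeout: Duration) -> Self {
        Self { timeout, last_seen: HashMap::new() }
    }

    // Record a subscriber announcement (or periodic heartbeat).
    fn heartbeat(&mut self, id: &str) {
        self.last_seen.insert(id.to_string(), Instant::now());
    }

    // Drop stale subscribers; network publishing stays enabled
    // only while a fresh subscriber remains.
    fn network_enabled(&mut self) -> bool {
        let timeout = self.timeout;
        self.last_seen.retain(|_, t| t.elapsed() < timeout);
        !self.last_seen.is_empty()
    }
}

fn main() {
    let mut ka = KeepAlive::new(Duration::from_millis(10));
    assert!(!ka.network_enabled()); // no subscribers yet
    ka.heartbeat("viz-node");
    assert!(ka.network_enabled()); // fresh subscriber -> network on
    std::thread::sleep(Duration::from_millis(30));
    assert!(!ka.network_enabled()); // timed out -> network off again
}
```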
Serialization System
Automatic Serialization
Cerulion Core automatically implements serialization for any `Copy` type.
In Rust, the serialization trait is implemented automatically for `Copy` types. In Python and C++, you implement `to_bytes()` and `from_bytes()` methods directly on your message classes/structs.
- For local transport: Data is sent as raw bytes (zero-copy)
- For network transport: Data is serialized to bytes using the `#[repr(C)]` layout
- Deserialization reconstructs the struct from the bytes
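The raw-byte path can be sketched for a `#[repr(C)]` `Copy` struct: because the layout is deterministic, serialization is just viewing the struct as bytes and deserialization is an unaligned read back. The `Pose` type and free functions are illustrative; Cerulion Core's actual trait surface may differ.

```rust
use std::mem::size_of;

// Fixed-size, Copy message with a C-compatible, deterministic layout.
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
struct Pose {
    x: f64,
    y: f64,
    theta: f64,
}

// "Serialize": view the struct as its underlying bytes. Sound for Copy +
// #[repr(C)] types whose fields are plain numbers, as here.
fn to_bytes(msg: &Pose) -> Vec<u8> {
    let ptr = msg as *const Pose as *const u8;
    unsafe { std::slice::from_raw_parts(ptr, size_of::<Pose>()) }.to_vec()
}

// "Deserialize": length-check, then read the struct back out. The read is
// unaligned because the byte buffer may not be 8-byte aligned.
fn from_bytes(bytes: &[u8]) -> Option<Pose> {
    if bytes.len() != size_of::<Pose>() {
        return None;
    }
    Some(unsafe { (bytes.as_ptr() as *const Pose).read_unaligned() })
}

fn main() {
    let pose = Pose { x: 1.0, y: -2.5, theta: 0.25 };
    let bytes = to_bytes(&pose);
    assert_eq!(bytes.len(), 24); // three f64 fields, no padding
    assert_eq!(from_bytes(&bytes), Some(pose));
    assert_eq!(from_bytes(&bytes[..8]), None); // wrong length is rejected
}
```

Note this encoding assumes both ends agree on field layout and endianness, which is why it suits same-language, same-architecture communication and protobuf suits everything else.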
Optional Protobuf Support
For cross-language compatibility or schema evolution, you can use protobuf. However, the default `SerializableMessage` implementation still uses raw bytes for `Copy` types, which is faster for same-language communication.
Multi-Language Design
Cerulion Core is designed for multi-language support from the ground up.
Current status:
- ✅ Rust: Fully working with automatic code generation
- ⏳ Python: Design complete, implementation pending
- ⏳ C++: Design complete, implementation pending
Each language specifies a C-compatible memory layout for message types:
- Rust: `#[repr(C)]` attribute
- Python: `struct.pack` format strings
- C++: `__attribute__((packed))`
Data Flow
Local Communication Flow
Steps:
- Publisher allocates a shared memory region via iceoryx2
- Publisher writes data directly to shared memory
- Subscriber reads from the same memory location
- No serialization, no copying, minimal latency
Network Communication Flow
Steps:
- Publisher sends data to the background thread (non-blocking)
- Background thread serializes data to bytes
- Serialized bytes sent over Zenoh network
- Network subscriber receives and deserializes
- Local publishing never waits for network I/O
Performance Characteristics
| Operation | Transport | Latency | Blocking |
|---|---|---|---|
| Local send | iceoryx2 | < 1 μs | Minimal (memory alloc) |
| Network send (serialization) | Background thread | N/A | Non-blocking |
| Network send (zenoh) | Background thread | 1-10 ms | Non-blocking |
| Local receive | iceoryx2 | < 1 μs | Non-blocking |
| Network receive | Zenoh | 1-10 ms | Non-blocking |
The non-blocking network design ensures that local communication always has priority. Network latency or failures don’t affect local pub/sub performance.