
Architecture

Cerulion Core is built on a dual-transport architecture that optimizes for both local and network communication. This page explains how the system is designed and how components work together.

System Overview

Cerulion Core provides a unified pub/sub API that automatically selects the best transport for each communication path.

Dual Transport System

Local Transport (iceoryx2)

Purpose: Ultra-low latency communication between processes on the same machine.
How it works:
  1. Publisher allocates shared memory region
  2. Data is written directly to shared memory (no serialization)
  3. Subscriber reads from the same memory location
  4. The data is never copied between publisher and subscriber (zero-copy)
Characteristics:
  • Latency: < 1 μs for small messages
  • Throughput: Limited by memory bandwidth
  • Range: Same machine only
  • Serialization: None (raw bytes)
The zero-copy design means that for local communication, Cerulion Core is essentially just a shared memory allocator. There’s no serialization overhead, making it ideal for high-frequency sensor data or real-time control loops.
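For example, a small fixed-size message type like the hypothetical SensorReading below (illustrative only, not part of the Cerulion Core API) is exactly the kind of data that benefits from this path:
// Hypothetical example message: fixed-size, no heap data, so its in-memory
// representation can be shared directly through shared memory.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct SensorReading {
    pub timestamp: u64,    // nanoseconds since epoch
    pub temperature: f32,  // degrees Celsius
}
Because the struct is Copy and uses #[repr(C)], the same in-memory bytes also work for the raw-byte network serialization described below.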

Network Transport (Zenoh)

Purpose: Reliable communication across machines and networks.
How it works:
  1. Publisher sends message to background thread (non-blocking)
  2. Background thread serializes data to bytes
  3. Serialized bytes sent over Zenoh network
  4. Subscriber receives and deserializes
Characteristics:
  • Latency: 1-10 ms (network dependent)
  • Throughput: Network bandwidth limited
  • Range: Any machine on network
  • Serialization: Automatic (raw bytes for Copy types)
Network publishing happens asynchronously on a background thread. This means local publishing never blocks waiting for network I/O, ensuring high availability even when the network is slow.
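As a rough sketch of this hand-off pattern (illustrative only, not Cerulion Core's actual implementation, which is covered on the Async Design page), a publisher can push messages into a channel that a background thread drains, serializes, and sends:
use std::sync::mpsc;
use std::thread;

fn main() {
    // Channel hands fully-owned messages to the background network thread.
    let (tx, rx) = mpsc::channel::<[f32; 4]>();

    // Background thread: serializes and "sends" without ever blocking the publisher.
    let net_thread = thread::spawn(move || {
        for msg in rx {
            // Serialization happens here, off the publisher's critical path.
            let bytes: Vec<u8> = msg.iter().flat_map(|v| v.to_le_bytes()).collect();
            // ... hand `bytes` to the network layer (Zenoh in Cerulion Core) ...
            println!("would send {} bytes", bytes.len());
        }
    });

    // Publisher side: `send` only enqueues; it does not wait for network I/O.
    tx.send([1.0, 2.0, 3.0, 4.0]).unwrap();

    drop(tx);                  // closing the channel lets the thread exit
    net_thread.join().unwrap();
}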

Component Architecture

Publisher

The Publisher component manages both local and network publishing.
Key features:
  • Dual-path publishing: Sends to both local and network simultaneously
  • Non-blocking network: Network sends don’t block local publishing
  • Latest-message semantics: Only the most recent message is queued for the network (see the sketch below)
  • Automatic thread management: The background thread starts on creation and shuts down on drop
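The latest-message queue can be pictured as a single slot that each publish overwrites. A minimal sketch of that idea (not the library's internals) looks like this:
use std::sync::{Arc, Mutex};

/// Single-slot "queue": a new message replaces any message still waiting.
#[derive(Default)]
struct LatestSlot<T> {
    slot: Mutex<Option<T>>,
}

impl<T> LatestSlot<T> {
    /// Publisher side: overwrite whatever is pending (never waits on the network).
    fn publish(&self, msg: T) {
        *self.slot.lock().unwrap() = Some(msg);
    }

    /// Background network thread: take the newest pending message, if any.
    fn take_latest(&self) -> Option<T> {
        self.slot.lock().unwrap().take()
    }
}

fn main() {
    // Arc so the same slot could be shared with a background network thread.
    let slot = Arc::new(LatestSlot::default());
    slot.publish(1u64);
    slot.publish(2u64); // replaces 1 before it was ever sent
    assert_eq!(slot.take_latest(), Some(2));
}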

Subscriber

The Subscriber component automatically selects the best transport.
Key features:
  • Auto-detection: Tries local first, falls back to network (see the sketch below)
  • Network-only mode: Can force network transport with Some(true)
  • Non-blocking receive: Returns immediately if no message available
  • Type safety: Validates message type at compile time (Rust) or runtime (Python/C++)
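The selection behavior can be summarized roughly as follows; the function and parameter names are illustrative, not the actual API:
#[derive(Debug, PartialEq)]
enum Transport {
    Local,
    Network,
}

/// Illustrative transport selection: local is preferred when a local peer exists,
/// unless the caller explicitly forces network-only mode with `Some(true)`.
fn select_transport(local_available: bool, network_only: Option<bool>) -> Transport {
    match (network_only, local_available) {
        (Some(true), _) => Transport::Network, // forced network-only mode
        (_, true) => Transport::Local,         // local peer found: zero-copy path
        (_, false) => Transport::Network,      // no local peer: fall back to network
    }
}

fn main() {
    assert_eq!(select_transport(true, None), Transport::Local);
    assert_eq!(select_transport(false, None), Transport::Network);
    assert_eq!(select_transport(true, Some(true)), Transport::Network);
}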

TopicManager

Centralized management for publishers and subscribers.
Key features:
  • Shared session: Single Zenoh session for all pub/sub pairs (reduces overhead)
  • Automatic discovery: Subscribers announce themselves, publishers auto-enable network
  • Type safety: Validates that topics use consistent message types
  • Keep-alive tracking: Automatically disables network publishing when subscribers disconnect (see the sketch below)
See the Topic Manager guide for details.
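As an illustration of the keep-alive idea (not the actual TopicManager code), the tracker can be thought of as a map from topic to the time a subscriber was last heard from:
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Illustrative keep-alive tracker: network publishing for a topic stays enabled
/// only while at least one remote subscriber has announced itself recently.
struct KeepAliveTracker {
    last_seen: HashMap<String, Instant>,
    timeout: Duration,
}

impl KeepAliveTracker {
    fn new(timeout: Duration) -> Self {
        Self { last_seen: HashMap::new(), timeout }
    }

    /// Called when a subscriber announcement / keep-alive arrives for a topic.
    fn heartbeat(&mut self, topic: &str) {
        self.last_seen.insert(topic.to_string(), Instant::now());
    }

    /// Publishers check this to decide whether the network path should be active.
    fn network_enabled(&self, topic: &str) -> bool {
        self.last_seen
            .get(topic)
            .map_or(false, |t| t.elapsed() < self.timeout)
    }
}

fn main() {
    let mut tracker = KeepAliveTracker::new(Duration::from_secs(5));
    tracker.heartbeat("robot/imu");
    assert!(tracker.network_enabled("robot/imu"));
    assert!(!tracker.network_enabled("robot/camera")); // never announced
}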

Serialization System

Automatic Serialization

Cerulion Core automatically implements serialization for any Copy type:
use std::error::Error;

pub trait SerializableMessage: Send + Sync + 'static {
    fn to_bytes(&self) -> Result<Vec<u8>, Box<dyn Error>>;
    fn from_bytes(bytes: &[u8]) -> Result<Self, Box<dyn Error>>;
}

// Blanket implementation for Copy types
impl<T> SerializableMessage for T
where
    T: Copy + Send + Sync + 'static,
{
    fn to_bytes(&self) -> Result<Vec<u8>, Box<dyn Error>> {
        Ok(raw::struct_to_bytes(self))
    }
    // ...
}
The Rust trait is automatically implemented for Copy types. In Python and C++, you implement to_bytes() and from_bytes() methods directly on your message classes/structs.
How it works:
  1. For local transport: Data is sent as raw bytes (zero-copy)
  2. For network transport: Data is serialized to bytes using #[repr(C)] layout
  3. Deserialization reconstructs the struct from bytes
Raw byte serialization is safe for primitive types and #[repr(C)] structs containing only primitives. Avoid using it for types with pointers, references, or heap-allocated data.
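As a counter-example (a hypothetical type, for illustration only), a struct with heap-allocated data must not be raw-serialized:
// Hypothetical counter-example: NOT safe for raw-byte serialization.
// `String` stores a pointer, length, and capacity; the pointer is only
// meaningful inside the process that allocated it.
pub struct LogRecord {
    pub timestamp: u64,
    pub message: String, // heap-allocated -> not Copy, no blanket impl
}
Because LogRecord is not Copy, it does not receive the blanket SerializableMessage implementation; types like this need an explicit serialization scheme, such as the protobuf option below.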

Optional Protobuf Support

For cross-language compatibility or schema evolution, you can use protobuf:
use prost::Message;

#[derive(Clone, PartialEq, Message)]
pub struct ProtoSensorData {
    #[prost(uint64, tag = "1")]
    pub timestamp: u64,
    #[prost(float, tag = "2")]
    pub temperature: f32,
}
However, the default SerializableMessage implementation still uses raw bytes for Copy types, which is faster for same-language communication. Protobuf is useful for cross-language compatibility and schema evolution.
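A minimal round-trip with prost looks like this (reusing the ProtoSensorData type defined above):
use prost::Message;

fn main() -> Result<(), prost::DecodeError> {
    let original = ProtoSensorData { timestamp: 1_700_000_000, temperature: 21.5 };

    // Serialize to a byte buffer.
    let bytes = original.encode_to_vec();

    // Reconstruct the message on the receiving side (possibly another language).
    let decoded = ProtoSensorData::decode(bytes.as_slice())?;
    assert_eq!(original, decoded);
    Ok(())
}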

Multi-Language Design

Cerulion Core is designed for multi-language support from the ground up.
Current status:
  • Rust: Fully working with automatic code generation
  • Python: Design complete, implementation pending
  • C++: Design complete, implementation pending
Binary compatibility: All languages produce identical memory layouts using:
  • Rust: #[repr(C)] attribute
  • Python: struct.pack format strings
  • C++: __attribute__((packed))
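As a small illustration (the Pose type below is hypothetical), a padding-free #[repr(C)] layout can even be guarded with a compile-time size check, which helps keep the Rust, Python, and C++ definitions in agreement:
// Hypothetical message whose #[repr(C)] layout has no padding: three f64 fields.
// Python can produce the same 24 bytes with struct.pack("<ddd", x, y, yaw),
// and a C/C++ struct with three doubles matches as well.
#[repr(C)]
#[derive(Clone, Copy)]
pub struct Pose {
    pub x: f64,
    pub y: f64,
    pub yaw: f64,
}

// Compile-time guard: the build fails if the size ever drifts from 24 bytes.
const _: () = assert!(std::mem::size_of::<Pose>() == 24);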
See the Multi-Language Support page for details.

Data Flow

Local Communication Flow

Steps:
  1. Publisher allocates shared memory region via iceoryx2
  2. Publisher writes data directly to shared memory
  3. Subscriber reads from the same memory location
  4. No serialization, no copying, minimal latency

Network Communication Flow

Steps:
  1. Publisher sends data to background thread (non-blocking)
  2. Background thread serializes data to bytes
  3. Serialized bytes sent over Zenoh network
  4. Network subscriber receives and deserializes
  5. Local publishing never waits for network I/O
See the Async Design page for implementation details.

Performance Characteristics

| Operation | Transport | Latency | Blocking |
| --- | --- | --- | --- |
| Local send | iceoryx2 | < 1 μs | Minimal (memory alloc) |
| Network send (serialization) | Background thread | N/A | Non-blocking |
| Network send (Zenoh) | Background thread | 1-10 ms | Non-blocking |
| Local receive | iceoryx2 | < 1 μs | Non-blocking |
| Network receive | Zenoh | 1-10 ms | Non-blocking |
The non-blocking network design ensures that local communication always has priority. Network latency or failures don’t affect local pub/sub performance.

Design Principles

1. Zero-Copy First

Local communication uses zero-copy shared memory whenever possible. Serialization only occurs for network transport.

2. High Availability

Network failures don’t block local communication. The system prioritizes availability over guaranteed network delivery.

3. Type Safety

Compile-time guarantees in Rust, runtime validation in Python/C++. Prevents message compatibility errors.

4. Automatic Everything

Serialization, transport selection, and code generation are all automatic. Developers focus on application logic.

5. Latest-Message Semantics

Network transport uses latest-message semantics. If the network is slow, newer messages replace older ones in the queue.

Next Steps