
Introduction

High-bandwidth data streams like video and LiDAR are the backbone of modern robotics—but they’re also where traditional middleware breaks down. Cerulion is designed from the ground up to handle these demanding workloads with predictable performance.

• Sane Presets: Stream-optimized modes instead of 30 QoS knobs
• True Backpressure: Predictable behavior when subscribers can’t keep up
• Zero-Copy Paths: Minimize serialization and memory copies
• Observability: Per-topic throughput, drop reasons, and latency histograms

Stream Presets

Instead of tuning 30 QoS parameters, Cerulion provides presets that match real robotics workloads:

Live Video Streams

Optimized for: Camera feeds, compressed video, visual SLAM
Setting | Value | Why
History | Latest-only | Stale frames are useless
Latency | Bounded | Fixed max delay, then drop
Reliability | Lossy OK | Missing frame beats frozen stream
#[node(period_ms = 33, preset = "live_video")]
pub fn camera() -> (#[out("frame")] Image) {
    // 30 FPS camera feed
    (capture_frame())
}
LiDAR Streams

Optimized for: Point clouds, laser scans, depth sensors
Setting | Value | Why
History | Latest-only | Old scans create phantom obstacles
Redundancy | Optional | Critical safety data can be sent twice
Latency | Bounded | Stale data is dangerous
#[node(period_ms = 100, preset = "live_lidar")]
pub fn lidar() -> (#[out("scan")] PointCloud2) {
    // 10 Hz LiDAR scan
    (read_lidar())
}
Presets configure multiple settings at once. You can override individual parameters when needed, but the defaults are tuned for real-world robotics use cases.
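For example, a preset can be combined with a per-output override. The sketch below reuses the overflow attribute from the backpressure section and assumes it composes with a preset this way; treat the exact override syntax as illustrative.
#[node(period_ms = 33, preset = "live_video")]
pub fn camera() -> (
    // Assumed override: keep the live_video preset but force drop-oldest overflow
    #[out("frame", overflow = "drop_oldest")] Image
) {
    (capture_frame())
}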

True Backpressure

When a subscriber can’t keep up with incoming data, traditional middleware queues grow silently until the system becomes unresponsive. Cerulion provides true end-to-end backpressure with explicit, predictable behavior.

How It Works

  1. Publisher learns subscriber capacity and paces transmission accordingly
  2. When the queue fills, an explicit policy determines what happens
  3. Clear metrics distinguish between drop reasons

Backpressure Policies

Drop Oldest

Keep the latest data, discard old messages. Best for sensor streams where only current state matters.

Drop New

Reject new messages when full. Best when every message must be processed in order.

Block

Publisher waits for queue space. Best for reliable data that can tolerate latency.
// Configure backpressure policy per output
#[node(period_ms = 33)]
pub fn camera() -> (
    #[out("frame", overflow = "drop_oldest")] Image
) {
    (capture_frame())
}
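The three policies differ only in what happens once the queue is already full. A minimal sketch of that decision, using a plain VecDeque rather than Cerulion's actual transport, with Block reduced to a "publisher must wait" signal:
use std::collections::VecDeque;

// Conceptual model of the overflow policies; not Cerulion's implementation.
enum Overflow { DropOldest, DropNew, Block }

struct BoundedQueue<T> {
    buf: VecDeque<T>,
    cap: usize,
    policy: Overflow,
}

impl<T> BoundedQueue<T> {
    fn push(&mut self, item: T) -> Result<(), &'static str> {
        if self.buf.len() < self.cap {
            self.buf.push_back(item);
            return Ok(());
        }
        match self.policy {
            // Keep the newest data: evict the oldest message to make room.
            Overflow::DropOldest => {
                self.buf.pop_front();
                self.buf.push_back(item);
                Ok(())
            }
            // Preserve order: reject the incoming message and report the drop.
            Overflow::DropNew => Err("queue full: new message dropped"),
            // Reliable delivery: tell the publisher to wait for free space.
            Overflow::Block => Err("queue full: publisher must wait"),
        }
    }
}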
“Mystery lag” happens when queues grow silently until everything explodes. Cerulion’s explicit policies and metrics eliminate this failure mode entirely.

Minimizing Copies and Serialization

High-bandwidth streams can’t afford unnecessary memory copies or serialization overhead. Cerulion provides multiple strategies:

Serialization Formats

Zero read-time deserialization — access fields directly from the wire format.
// Data is accessed directly, no parsing step
let width = frame.width();  // Direct memory access
let data = frame.data();    // No copy, just pointer
Best for: Large messages read partially, cross-language compatibility
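To make “zero read-time deserialization” concrete, the sketch below reads fields at fixed byte offsets; the layout (little-endian width and height followed by pixel data) is invented for illustration and is not Cerulion’s actual wire format.
// Hypothetical wire layout, for illustration only:
//   bytes 0..4  -> width  (u32, little-endian)
//   bytes 4..8  -> height (u32, little-endian)
//   bytes 8..   -> pixel data
fn width(wire: &[u8]) -> u32 {
    u32::from_le_bytes(wire[0..4].try_into().unwrap())
}

fn data(wire: &[u8]) -> &[u8] {
    // No parse step and no copy: the payload is a slice into the original buffer.
    &wire[8..]
}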

Copy Reduction

Scenario | Traditional | Cerulion
Local pub/sub | Serialize → Copy → Deserialize | Zero-copy (shared memory)
Network send | Copy → Serialize → Copy → Send | Serialize in-place → Send
Large fields (Image::data) | Multiple allocations | Single contiguous buffer
For local communication, Cerulion uses shared memory (iceoryx2) with true zero-copy semantics. The publisher writes directly to memory that the subscriber reads—no serialization, no copies.
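The flow behind this is loan, write in place, publish. A rough single-process sketch, using Arc<Vec<u8>> to stand in for a shared-memory segment (the real transport shares the buffer across processes, which this sketch does not attempt):
use std::sync::Arc;

// Producer (e.g. a camera driver) writes pixels directly into the loaned buffer
// instead of into a private allocation that would later need to be copied.
fn capture_into(buf: &mut [u8]) {
    buf.fill(0x2a);
}

fn main() {
    // Publisher side: loan a buffer and fill it in place.
    let mut frame = vec![0u8; 640 * 480];
    capture_into(&mut frame);

    // Publishing shares the same allocation; cloning the Arc copies a pointer,
    // never the pixel data.
    let published = Arc::new(frame);
    let subscriber_view = Arc::clone(&published);

    // Subscriber side: reads exactly the bytes the publisher wrote.
    assert_eq!(subscriber_view.len(), 640 * 480);
}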

Transport Selection

Not all data benefits from the same transport. Cerulion lets you choose the right tool for each stream:

UDP vs TCP for Streaming

Transport | Strengths | Weaknesses
UDP | Low latency for small messages | Chunking overhead for large data, no ordering
TCP | Efficient streaming, ordered delivery | Connection overhead, head-of-line blocking
UDP chunking adds significant overhead for video streaming. A 1 MB frame split into UDP packets requires reassembly logic, timeout handling, and duplicate detection—all of which TCP handles natively.
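The scale of that reassembly work is easy to estimate. A back-of-the-envelope sketch, assuming a standard 1500-byte Ethernet MTU and IPv4/UDP header sizes:
fn main() {
    let frame_bytes = 1_000_000;              // ~1 MB video frame
    let mtu = 1500;                           // typical Ethernet MTU
    let headers = 20 + 8;                     // IPv4 + UDP headers per datagram
    let payload_per_packet = mtu - headers;   // 1472 bytes of frame data each

    // Ceiling division: datagrams the frame must be split into and reassembled from.
    let packets = (frame_bytes + payload_per_packet - 1) / payload_per_packet;
    println!("{packets} datagrams per frame"); // ~680; losing any one stalls or
                                               // invalidates the whole frame
}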

Structured Environments

For intra-robot communication where network topology is known:
  • Skip discovery overhead with explicit endpoints
  • Use TCP for reliable streaming
  • Video compression support (coming soon)
# Graph configuration with explicit transport
nodes:
  - id: camera
    type: camera_node
    transport:
      type: tcp
      endpoint: "192.168.1.10:5000"

Bandwidth Observability

You can’t optimize what you can’t measure. Cerulion exposes detailed metrics for every topic:

Per-Topic Throughput

Monitor publish and subscribe rates independently:
cerulion topic hz /camera/image
# Output: pub: 30.1 Hz, sub: 29.8 Hz, dropped: 0.3 Hz
Drop Reasons

Distinguish between failure modes:
Reason | Meaning | Action
Queue overflow | Subscriber too slow | Increase queue or optimize subscriber
Network loss | Packets dropped in transit | Check network, add redundancy
Fragment timeout | Large message reassembly failed | Reduce message size or switch transport
Latency Histograms

End-to-end latency distribution from publish to subscriber callback:
cerulion topic latency /camera/image
# p50: 1.2ms, p95: 3.4ms, p99: 8.1ms, max: 12.3ms
Buffer Levels

Track queue fill levels over time to detect backpressure before drops occur:
cerulion topic buffer /camera/image
# current: 3/10, avg: 4.2/10, peak: 9/10
Use cerulion tui for a real-time dashboard showing throughput, latency, and buffer status across all topics.

Next Steps