A 1080p camera frame is ~6 MB. Traditional middleware serializes it on every publish. At 30 Hz, that's 180 MB/s of serialization overhead, per topic. Latency scales linearly with payload size, eating into your frame budget at exactly the moment you need it most. Cerulion's zero-copy transport is payload-agnostic: a 6 MB image has the same transport overhead as a 24-byte pose message, because neither gets serialized or copied for local subscribers. Payload size doesn't affect local latency. Period.
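The idea can be illustrated in plain Rust (a sketch of the zero-copy principle, not Cerulion's actual transport; `publish` is an illustrative name): a local publish hands each subscriber a shared reference to the same buffer, so delivery cost is independent of payload size.

```rust
use std::sync::Arc;

/// "Publish" a frame to N local subscribers by sharing one allocation.
fn publish(frame: Arc<Vec<u8>>, subscribers: usize) -> Vec<Arc<Vec<u8>>> {
    // Cloning an Arc copies a pointer and bumps a refcount;
    // the multi-megabyte payload itself is never touched.
    (0..subscribers).map(|_| Arc::clone(&frame)).collect()
}

fn main() {
    let frame = Arc::new(vec![0u8; 6 * 1024 * 1024]); // ~6 MB 1080p frame
    let subs = publish(Arc::clone(&frame), 3);
    // All three subscribers see the exact same buffer: zero copies.
    assert!(subs.iter().all(|s| Arc::ptr_eq(s, &frame)));
    println!("subscribers sharing one buffer: {}", subs.len());
}
```

Handing out a 24-byte pose or a 6 MB image costs the same here, which is the property the paragraph above describes.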

Stream Presets

Coming Soon: Instead of tuning 30 QoS parameters, Cerulion will provide presets that match real robotics workloads:

Live Video Streams

Optimized for: Camera feeds, compressed video, visual SLAM

| Setting | Value | Why |
| --- | --- | --- |
| History | Latest-only | Stale frames are useless |
| Latency | Bounded | Fixed max delay, then drop |
| Reliability | Lossy OK | A missing frame beats a frozen stream |
#[node(period_ms = 33, preset = "live_video")]
pub fn camera() -> (#[out("frame")] Image) {
    // 30 FPS camera feed
    // Preset handles QoS: latest-only, bounded latency, lossy OK
    (capture_frame())
}
Live LiDAR

Optimized for: Point clouds, laser scans, depth sensors

| Setting | Value | Why |
| --- | --- | --- |
| History | Latest-only | Old scans create phantom obstacles |
| Redundancy | Optional | Critical safety data can be sent twice |
| Latency | Bounded | Stale data is dangerous |
#[node(period_ms = 100, preset = "live_lidar")]
pub fn lidar() -> (#[out("scan")] PointCloud2) {
    // 10 Hz LiDAR scan
    // Preset: latest-only, bounded latency, optional redundancy
    (read_lidar())
}
Control Commands

Optimized for: Twist commands, joint targets, waypoints

| Setting | Value | Why |
| --- | --- | --- |
| History | Latest-only | Stale commands cause oscillation |
| Reliability | Best-effort | Speed over guaranteed delivery |
| Latency | Minimal | Control loops are latency-sensitive |
#[node(period_ms = 10, preset = "control")]
pub fn planner() -> (#[out("cmd_vel")] Twist) {
    // 100 Hz control output
    // Preset: minimal latency, latest-only, best-effort
    (compute_velocity())
}
When presets ship, they will configure multiple settings at once. You'll be able to override individual parameters when needed, and the defaults will be tuned for real-world robotics workloads, not synthetic benchmarks.

Backpressure

Coming Soon: When a subscriber can't keep up, traditional middleware queues grow silently until the system becomes unresponsive. Cerulion will provide explicit, predictable backpressure. The publisher knows subscriber capacity and paces accordingly. When the queue fills, an explicit policy determines what happens.

Drop Oldest

Keep latest data, discard old. Best for sensor streams where only current state matters.

Drop New

Reject new messages when full. Best when every message must be processed in order.

Block

Publisher waits for queue space. Best for reliable data that can tolerate latency.
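The drop-oldest and drop-new policies can be sketched over a bounded queue in plain Rust (an illustration of the semantics, not Cerulion's implementation; `BoundedQueue` and `Overflow` are hypothetical names, and the blocking policy would instead wait for space, e.g. via a bounded channel):

```rust
use std::collections::VecDeque;

#[derive(Clone, Copy)]
enum Overflow {
    DropOldest,
    DropNew,
}

struct BoundedQueue<T> {
    buf: VecDeque<T>,
    cap: usize,
    policy: Overflow,
}

impl<T> BoundedQueue<T> {
    fn new(cap: usize, policy: Overflow) -> Self {
        Self { buf: VecDeque::new(), cap, policy }
    }

    /// Returns false if the message was rejected (DropNew on a full queue).
    fn push(&mut self, msg: T) -> bool {
        if self.buf.len() == self.cap {
            match self.policy {
                Overflow::DropOldest => {
                    self.buf.pop_front(); // keep only the latest data
                }
                Overflow::DropNew => return false, // preserve in-order backlog
            }
        }
        self.buf.push_back(msg);
        true
    }
}

fn main() {
    // Sensor stream: only current state matters.
    let mut q = BoundedQueue::new(2, Overflow::DropOldest);
    for frame in 1..=4 {
        q.push(frame);
    }
    assert_eq!(Vec::from(q.buf), vec![3, 4]); // oldest frames were dropped

    // Ordered stream: every accepted message is processed in order.
    let mut q = BoundedQueue::new(2, Overflow::DropNew);
    for cmd in 1..=4 {
        q.push(cmd);
    }
    assert_eq!(Vec::from(q.buf), vec![1, 2]); // new messages were rejected
}
```

Either way the behavior is explicit: the queue can never grow past `cap`, and the caller sees exactly what happened to each message.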
// Configure backpressure policy per output
#[node(period_ms = 33)]
pub fn camera() -> (
    #[out("frame", overflow = "drop_oldest")] Image
) {
    // If the subscriber falls behind, old frames are dropped
    // No silent queue growth, no mystery lag
    (capture_frame())
}
โ€œMystery lagโ€ happens when queues grow silently until everything explodes. When backpressure ships, Cerulionโ€™s explicit policies will eliminate this failure mode by making queue behavior visible and predictable.

Minimizing Copies

High-bandwidth streams can't afford unnecessary memory copies or serialization overhead:

| Scenario | Traditional Middleware | Cerulion |
| --- | --- | --- |
| 🏠 Local pub/sub | Serialize → Copy → Deserialize | Zero-copy (shared memory) |
| 🌐 Network send | Copy → Serialize → Copy → Send | Serialize in-place → Send |
| 📏 Large fields (Image::data) | Multiple allocations | Single contiguous buffer |

Cerulion is optimized for Rust-to-Rust communication, with minimal overhead even for dynamically sized fields. Image data is stored in a single contiguous buffer: no intermediate allocations, no fragmentation. When network deserialization is needed, it is a single-pass read from that buffer directly into the typed struct. Best for: maximum throughput, Rust-native pipelines.
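The single-pass read can be sketched in plain Rust (an illustrative layout, not Cerulion's wire format): a small fixed header followed by pixel data, parsed out of one contiguous buffer with the payload borrowed in place rather than copied.

```rust
/// Illustrative layout: [width: u32 LE][height: u32 LE][pixel data ...]
struct Image<'a> {
    width: u32,
    height: u32,
    data: &'a [u8],
}

fn parse(buf: &[u8]) -> Option<Image<'_>> {
    if buf.len() < 8 {
        return None;
    }
    let width = u32::from_le_bytes(buf[0..4].try_into().ok()?);
    let height = u32::from_le_bytes(buf[4..8].try_into().ok()?);
    // `data` borrows directly from the original buffer:
    // no intermediate allocation, no second copy.
    Some(Image { width, height, data: &buf[8..] })
}

fn main() {
    // Build a 4x2 grayscale "frame" in a single contiguous buffer.
    let mut buf = Vec::new();
    buf.extend_from_slice(&4u32.to_le_bytes());
    buf.extend_from_slice(&2u32.to_le_bytes());
    buf.extend_from_slice(&[0u8; 8]);

    let img = parse(&buf).unwrap();
    assert_eq!((img.width, img.height, img.data.len()), (4, 2, 8));
    println!("parsed {}x{} image in one pass", img.width, img.height);
}
```

The borrow in `data` is the whole trick: the typed view and the transport buffer are the same bytes.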

Observability

You can't optimize what you can't measure. Cerulion exposes detailed metrics for every topic:

Per-topic throughput

Monitor publish rate with cerulion topic hz:
cerulion topic hz camera/image
# Subscribed to: camera/image
# Average rate: 30.1 Hz
#   Min: 29.8 Hz
#   Max: 30.4 Hz
#   Std Dev: 0.1 Hz
# Window: 100 messages
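The statistics in that report can be reproduced from message arrival times; a minimal sketch of the rate math (not the CLI's actual implementation):

```rust
/// Per-interval publish rates (Hz) from arrival timestamps in seconds.
fn rates(timestamps: &[f64]) -> Vec<f64> {
    timestamps.windows(2).map(|w| 1.0 / (w[1] - w[0])).collect()
}

/// (average, min, max, std dev) over a window of rates.
fn stats(rates: &[f64]) -> (f64, f64, f64, f64) {
    let n = rates.len() as f64;
    let avg = rates.iter().sum::<f64>() / n;
    let min = rates.iter().cloned().fold(f64::INFINITY, f64::min);
    let max = rates.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let var = rates.iter().map(|r| (r - avg).powi(2)).sum::<f64>() / n;
    (avg, min, max, var.sqrt())
}

fn main() {
    // Simulated 30 Hz feed: one message every ~33.3 ms over a 100-message window.
    let ts: Vec<f64> = (0..100).map(|i| i as f64 / 30.0).collect();
    let (avg, min, max, sd) = stats(&rates(&ts));
    println!("avg {avg:.1} Hz, min {min:.1}, max {max:.1}, std dev {sd:.2}");
    assert!((avg - 30.0).abs() < 0.5);
}
```

A real feed jitters, which is exactly what the min/max/std-dev columns surface.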
Distinguish between failure modes:

| Reason | Meaning | Action |
| --- | --- | --- |
| Queue overflow | Subscriber too slow | Increase queue depth or optimize the subscriber |
| Network loss | Packets dropped in transit | Check the network, add redundancy |
| Fragment timeout | Large-message reassembly failed | Reduce message size or switch transport |
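Tallying drops by reason is what makes the table actionable; a sketch of that bookkeeping (the `DropReason` enum here is illustrative, not Cerulion's API):

```rust
use std::collections::HashMap;

/// Illustrative drop reasons matching the failure modes above.
#[derive(Hash, PartialEq, Eq, Debug, Clone, Copy)]
enum DropReason {
    QueueOverflow,
    NetworkLoss,
    FragmentTimeout,
}

/// Count drops per reason so the dominant failure mode stands out.
fn tally(drops: &[DropReason]) -> HashMap<DropReason, usize> {
    let mut counts = HashMap::new();
    for &d in drops {
        *counts.entry(d).or_insert(0) += 1;
    }
    counts
}

fn main() {
    use DropReason::*;
    let drops = [QueueOverflow, QueueOverflow, NetworkLoss, QueueOverflow];
    let counts = tally(&drops);
    // Three queue overflows vs one network loss:
    // the subscriber is too slow; the network is mostly fine.
    assert_eq!(counts[&QueueOverflow], 3);
    println!("{counts:?}");
}
```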
Coming Soon: End-to-end latency distribution from publish to subscriber callback, with p50, p95, p99, and max latency per topic.
Coming Soon: Track queue fill levels over time to detect backpressure before drops occur, with current, average, and peak occupancy per topic.
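Percentile latencies like the ones promised above are just sorted-sample lookups; a sketch using the nearest-rank method (not Cerulion's implementation):

```rust
/// Nearest-rank percentile over a window of latency samples (milliseconds).
/// Sorts the slice in place.
fn percentile(samples: &mut [f64], p: f64) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    samples[rank.saturating_sub(1).min(samples.len() - 1)]
}

fn main() {
    // Simulated publish-to-callback latencies: 1..=100 ms.
    let mut lat: Vec<f64> = (1..=100).map(|ms| ms as f64).collect();
    let p50 = percentile(&mut lat, 50.0);
    let p95 = percentile(&mut lat, 95.0);
    let p99 = percentile(&mut lat, 99.0);
    assert_eq!((p50, p95, p99), (50.0, 95.0, 99.0));
    println!("p50 {p50} ms, p95 {p95} ms, p99 {p99} ms");
}
```

The tail (p99, max) is what matters for control loops: average latency can look fine while the worst frames blow the budget.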
Use cerulion tui for real-time node status, topic flow, and throughput. Monitor per-topic publish rate now with cerulion topic hz. Latency histograms and buffer occupancy metrics are coming soon.
Ready to try Cerulion? Start with the quickstart and have a working camera-to-display pipeline in 5 minutes, or schedule a 15-min call with the team.