Introduction
High-bandwidth data streams like video and LiDAR are the backbone of modern robotics—but they’re also where traditional middleware breaks down. Cerulion is designed from the ground up to handle these demanding workloads with predictable performance.
- Sane Presets: Stream-optimized modes instead of 30 QoS knobs
- True Backpressure: Predictable behavior when subscribers can’t keep up
- Zero-Copy Paths: Minimize serialization and memory copies
- Observability: Per-topic throughput, drop reasons, and latency histograms
Stream Presets
Instead of tuning 30 QoS parameters, Cerulion provides presets that match real robotics workloads:
Live Video Streams
Optimized for: Camera feeds, compressed video, visual SLAM
| Setting | Value | Why |
|---|---|---|
| History | Latest-only | Stale frames are useless |
| Latency | Bounded | Fixed max delay, then drop |
| Reliability | Lossy OK | Missing frame beats frozen stream |
Live LiDAR Streams
Optimized for: Point clouds, laser scans, depth sensors
| Setting | Value | Why |
|---|---|---|
| History | Latest-only | Old scans create phantom obstacles |
| Redundancy | Optional | Critical safety data can be sent twice |
| Latency | Bounded | Stale data is dangerous |
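To make the presets concrete, here is a rough sketch of what a preset might boil down to as configuration. The type and field names (`StreamPreset`, `History`, `max_latency`, and so on) are invented for illustration and are not Cerulion’s actual API; the values are representative of the tables above, not Cerulion’s real defaults.

```rust
// Hypothetical sketch only: these types, fields, and values are illustrative,
// not Cerulion's actual API or defaults.
use std::time::Duration;

enum History { LatestOnly, KeepAll }
enum Reliability { LossyOk, Reliable }

struct StreamPreset {
    history: History,
    max_latency: Duration, // bounded latency: drop once this deadline passes
    reliability: Reliability,
    redundancy: u8,        // extra copies sent (the LiDAR safety option)
}

impl StreamPreset {
    /// Camera feeds, compressed video, visual SLAM.
    fn live_video() -> Self {
        StreamPreset {
            history: History::LatestOnly,
            max_latency: Duration::from_millis(50),
            reliability: Reliability::LossyOk,
            redundancy: 0,
        }
    }

    /// Point clouds, laser scans, depth sensors.
    fn live_lidar() -> Self {
        StreamPreset {
            history: History::LatestOnly,
            max_latency: Duration::from_millis(100),
            reliability: Reliability::LossyOk,
            redundancy: 1, // optional second copy for critical safety data
        }
    }
}

fn main() {
    let video = StreamPreset::live_video();
    assert!(matches!(video.history, History::LatestOnly));
    assert_eq!(StreamPreset::live_lidar().redundancy, 1);
}
```

The point of a preset is that these values ship as a named bundle, so application code picks a workload rather than tuning individual knobs.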
True Backpressure
When a subscriber can’t keep up with incoming data, traditional middleware queues grow silently until the system becomes unresponsive. Cerulion provides true end-to-end backpressure with explicit, predictable behavior.
How It Works
- Publisher learns subscriber capacity and paces transmission accordingly
- When the queue fills, an explicit policy determines what happens
- Clear metrics distinguish between drop reasons
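The pacing step can be pictured as credit-based flow control: the subscriber advertises how much it can absorb, and the publisher stops sending when that budget is spent. The sketch below is illustrative only; `PacedPublisher` and its methods are invented names, not Cerulion’s API.

```rust
// Illustrative credit-based pacing sketch; not Cerulion's actual API.
struct PacedPublisher {
    credits: u32, // capacity advertised by the subscriber
}

impl PacedPublisher {
    /// Subscriber grants more credits as it drains its queue.
    fn grant(&mut self, n: u32) {
        self.credits += n;
    }

    /// Returns false when the subscriber has no capacity left,
    /// so the caller can apply its backpressure policy instead of sending.
    fn try_send(&mut self, _msg: &[u8]) -> bool {
        if self.credits == 0 {
            return false; // out of credits: pace, drop, or block per policy
        }
        self.credits -= 1;
        // ... transmit over the wire here ...
        true
    }
}

fn main() {
    let mut publisher = PacedPublisher { credits: 2 };
    assert!(publisher.try_send(b"frame-1"));
    assert!(publisher.try_send(b"frame-2"));
    assert!(!publisher.try_send(b"frame-3")); // paced: subscriber is full
    publisher.grant(1); // subscriber processed a message
    assert!(publisher.try_send(b"frame-3"));
}
```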
Backpressure Policies
Drop Oldest
Keep the latest data, discard old messages. Best for sensor streams where only current state matters.
Drop New
Reject new messages when full. Best when every message must be processed in order.
Block
Publisher waits for queue space. Best for reliable data that can tolerate latency.
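A minimal sketch of how the three policies differ at the queue, assuming a simple bounded queue; the types below are invented for illustration, not Cerulion’s API.

```rust
// Illustrative sketch of the three policies on a bounded queue;
// type and method names are made up for this example, not Cerulion's API.
use std::collections::VecDeque;

enum Policy {
    DropOldest, // keep the latest data, discard old messages
    DropNew,    // reject new messages when full
    Block,      // caller waits for space (signalled here by returning false)
}

struct BoundedQueue<T> {
    buf: VecDeque<T>,
    capacity: usize,
    policy: Policy,
}

impl<T> BoundedQueue<T> {
    fn push(&mut self, item: T) -> bool {
        if self.buf.len() < self.capacity {
            self.buf.push_back(item);
            return true;
        }
        match self.policy {
            Policy::DropOldest => {
                self.buf.pop_front(); // discard the stalest message
                self.buf.push_back(item);
                true
            }
            Policy::DropNew => false, // reject the incoming message
            Policy::Block => false,   // a real impl would park until space frees up
        }
    }
}

fn main() {
    let mut q = BoundedQueue { buf: VecDeque::new(), capacity: 2, policy: Policy::DropOldest };
    q.push(1);
    q.push(2);
    q.push(3); // evicts 1
    assert_eq!(q.buf.iter().copied().collect::<Vec<_>>(), vec![2, 3]);
}
```

In practice, Block needs a wakeup mechanism (a condition variable or async notify); the sketch only reports that the caller must wait.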
Minimizing Copies and Serialization
High-bandwidth streams can’t afford unnecessary memory copies or serialization overhead. Cerulion provides multiple strategies:
Serialization Formats
- FlatBuffers
- Cerulion Wire Format

Zero read-time deserialization — access fields directly from the wire format.
Best for: Large messages read partially, cross-language compatibility
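To illustrate what zero read-time deserialization means in practice, the sketch below reads fields straight out of a received buffer at fixed offsets instead of decoding the whole message first. The layout is invented for this example and is not the actual Cerulion wire format (or FlatBuffers).

```rust
// Conceptual sketch of zero read-time deserialization: fields are read straight
// out of the received buffer at known offsets. The layout below (little-endian
// width at byte 0, height at byte 4, pixel data from byte 8) is invented for
// this example and is not the actual Cerulion wire format.
fn image_width(wire: &[u8]) -> u32 {
    u32::from_le_bytes(wire[0..4].try_into().unwrap())
}

fn image_height(wire: &[u8]) -> u32 {
    u32::from_le_bytes(wire[4..8].try_into().unwrap())
}

/// Borrow the pixel payload without copying it out of the receive buffer.
fn image_data(wire: &[u8]) -> &[u8] {
    &wire[8..]
}

fn main() {
    // Pretend this buffer arrived from the transport.
    let mut wire = Vec::new();
    wire.extend_from_slice(&640u32.to_le_bytes());
    wire.extend_from_slice(&480u32.to_le_bytes());
    wire.extend_from_slice(&[0u8; 16]); // stand-in for pixel data
    assert_eq!(image_width(&wire), 640);
    assert_eq!(image_height(&wire), 480);
    assert_eq!(image_data(&wire).len(), 16);
}
```

The payoff is in the "read partially" case: a consumer that only needs the header never touches the megabytes of pixel data behind it.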
Copy Reduction
| Scenario | Traditional | Cerulion |
|---|---|---|
| Local pub/sub | Serialize → Copy → Deserialize | Zero-copy (shared memory) |
| Network send | Copy → Serialize → Copy → Send | Serialize in-place → Send |
| Large fields (Image::data) | Multiple allocations | Single contiguous buffer |
For local communication, Cerulion uses shared memory (iceoryx2) with true zero-copy semantics. The publisher writes directly to memory that the subscriber reads—no serialization, no copies.
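Conceptually, shared-memory publishing follows a loan → write → send pattern: the publisher borrows a preallocated slot, fills it in place, and hands the subscriber a reference rather than a copy. The sketch below imitates that pattern inside a single process purely for illustration; it is not the iceoryx2 API, and a real transport would place the slots in shared memory.

```rust
// Minimal in-process illustration of the loan → write → send pattern used by
// shared-memory transports. Everything lives in one process here purely to show
// that no payload bytes are copied or serialized; this is not the iceoryx2 API.
struct SlotPool {
    slots: Vec<Vec<u8>>, // preallocated, fixed-size sample slots
    free: Vec<usize>,
}

impl SlotPool {
    fn new(count: usize, slot_size: usize) -> Self {
        SlotPool {
            slots: vec![vec![0u8; slot_size]; count],
            free: (0..count).collect(),
        }
    }

    /// Publisher loans a slot and writes the payload directly into it.
    fn loan(&mut self) -> Option<usize> {
        self.free.pop()
    }

    fn write(&mut self, slot: usize, producer: impl FnOnce(&mut [u8])) {
        producer(self.slots[slot].as_mut_slice()); // write in place: no intermediate buffer
    }

    /// "Publishing" transfers only the slot index, never the payload bytes.
    fn read(&self, slot: usize) -> &[u8] {
        self.slots[slot].as_slice()
    }
}

fn main() {
    let mut pool = SlotPool::new(4, 1024);
    let slot = pool.loan().expect("pool exhausted");
    pool.write(slot, |buf| buf[..5].copy_from_slice(b"frame"));
    assert_eq!(&pool.read(slot)[..5], b"frame"); // subscriber reads the same memory
}
```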
Transport Selection
Not all data benefits from the same transport. Cerulion lets you choose the right tool for each stream:
UDP vs TCP for Streaming
| Transport | Strengths | Weaknesses |
|---|---|---|
| UDP | Low latency for small messages | Chunking overhead for large data, no ordering |
| TCP | Efficient streaming, ordered delivery | Connection overhead, head-of-line blocking |
UDP chunking adds significant overhead for video streaming. A 1 MB frame split into UDP packets requires reassembly logic, timeout handling, and duplicate detection—all of which TCP handles natively.
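The overhead is easy to quantify. A rough calculation, using an invented fragment header and a typical ~1400-byte usable UDP payload:

```rust
// Back-of-the-envelope sketch of UDP chunking for a 1 MB frame. The fragment
// header fields are invented for illustration; they are not a Cerulion format.
struct FragmentHeader {
    frame_id: u32,       // which frame this chunk belongs to
    fragment_index: u16, // position for reassembly
    fragment_count: u16, // so the receiver knows when the frame is complete
}

fn main() {
    let frame_bytes = 1_000_000usize;         // ~1 MB compressed video frame
    let mtu_payload = 1400usize;              // typical usable UDP payload per packet
    let header_bytes = std::mem::size_of::<FragmentHeader>();
    let payload_per_packet = mtu_payload - header_bytes;
    let packets = frame_bytes.div_ceil(payload_per_packet); // ceiling division
    println!("{packets} datagrams per frame"); // ~719 fragments to track, reorder,
                                               // deduplicate, and time out per frame
}
```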
Structured Environments
For intra-robot communication where network topology is known:
- Skip discovery overhead with explicit endpoints (see the sketch after this list)
- Use TCP for reliable streaming
- Video compression support (coming soon)
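For illustration, skipping discovery can be as simple as connecting to a known address. The endpoint and the length-prefixed framing below are placeholders, not Cerulion configuration.

```rust
// Sketch of discovery-free streaming inside one robot: the endpoint is known
// ahead of time, so the subscriber just connects. The address and the framing
// are placeholders, not Cerulion configuration.
use std::io::Read;
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Known intra-robot endpoint, e.g. the camera driver process.
    let mut stream = TcpStream::connect("127.0.0.1:9000")?;
    loop {
        // Length-prefixed frames: 4-byte little-endian size, then the payload.
        let mut len = [0u8; 4];
        stream.read_exact(&mut len)?;
        let mut frame = vec![0u8; u32::from_le_bytes(len) as usize];
        stream.read_exact(&mut frame)?;
        // ... hand `frame` to the decoder / SLAM pipeline ...
    }
}
```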
Bandwidth Observability
You can’t optimize what you can’t measure. Cerulion exposes detailed metrics for every topic:
Per-Topic Throughput
Monitor publish and subscribe rates independently:
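As an illustration of what such a metric tracks, here is a minimal windowed rate counter; the `Throughput` type and its methods are invented, not Cerulion’s metrics API.

```rust
// Illustrative per-topic rate counter; names and structure are invented,
// not Cerulion's metrics API.
use std::time::{Duration, Instant};

struct Throughput {
    window_start: Instant,
    messages: u64,
    bytes: u64,
}

impl Throughput {
    fn record(&mut self, payload_len: usize) {
        self.messages += 1;
        self.bytes += payload_len as u64;
    }

    /// Report msg/s and MB/s over the elapsed window, then reset it.
    fn sample(&mut self) -> (f64, f64) {
        let secs = self.window_start.elapsed().as_secs_f64().max(1e-9);
        let rates = (self.messages as f64 / secs, self.bytes as f64 / 1e6 / secs);
        *self = Throughput { window_start: Instant::now(), messages: 0, bytes: 0 };
        rates
    }
}

fn main() {
    // One counter per topic and per direction (publish vs subscribe).
    let mut publish_rate = Throughput { window_start: Instant::now(), messages: 0, bytes: 0 };
    publish_rate.record(1_000_000); // one 1 MB frame
    std::thread::sleep(Duration::from_millis(10));
    let (msgs_per_s, mb_per_s) = publish_rate.sample();
    println!("publish: {msgs_per_s:.0} msg/s, {mb_per_s:.1} MB/s");
}
```

A gap between the publish-side and subscribe-side rates for the same topic is the first sign that data is being dropped or queued somewhere in between.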
Drop Reasons
Distinguish between failure modes:
| Reason | Meaning | Action |
|---|---|---|
| Queue overflow | Subscriber too slow | Increase queue or optimize subscriber |
| Network loss | Packets dropped in transit | Check network, add redundancy |
| Fragment timeout | Large message reassembly failed | Reduce message size or switch transport |
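A sketch of drop accounting that keeps these reasons separate, so each one can drive a different action; the enum and counters are invented for illustration, not Cerulion’s metrics API.

```rust
// Illustrative drop accounting keyed by the reasons in the table above;
// the enum and counter layout are invented, not Cerulion's metrics API.
use std::collections::HashMap;

#[derive(Debug, PartialEq, Eq, Hash, Clone, Copy)]
enum DropReason {
    QueueOverflow,   // subscriber too slow
    NetworkLoss,     // packets dropped in transit
    FragmentTimeout, // large-message reassembly failed
}

fn main() {
    let mut drops: HashMap<DropReason, u64> = HashMap::new();
    *drops.entry(DropReason::QueueOverflow).or_insert(0) += 1;
    *drops.entry(DropReason::QueueOverflow).or_insert(0) += 1;
    *drops.entry(DropReason::NetworkLoss).or_insert(0) += 1;
    // Alerting on QueueOverflow points at the subscriber, not the network.
    for (reason, count) in &drops {
        println!("{reason:?}: {count}");
    }
}
```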
Latency Histogram
End-to-end latency distribution from publish to subscriber callback:
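For illustration, a minimal fixed-bucket histogram of publish-to-callback delays; the bucket edges and names are invented, not Cerulion’s metrics API.

```rust
// Illustrative latency histogram: bucket publish-to-callback delays into fixed
// bands. Bucket edges and names are invented, not Cerulion's metrics API.
use std::time::Duration;

const BUCKETS_MS: [u64; 5] = [1, 5, 10, 50, 100]; // upper edges; anything above is overflow

fn bucket_index(latency: Duration) -> usize {
    let ms = latency.as_millis() as u64;
    BUCKETS_MS.iter().position(|&edge| ms <= edge).unwrap_or(BUCKETS_MS.len())
}

fn main() {
    let mut histogram = [0u64; BUCKETS_MS.len() + 1];
    for sample_ms in [2, 3, 7, 40, 250] {
        histogram[bucket_index(Duration::from_millis(sample_ms))] += 1;
    }
    // A fat tail here usually means queueing, not transport latency.
    println!("{histogram:?}"); // [0, 2, 1, 1, 0, 1]
}
```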
Buffer Occupancy
Track queue fill levels over time to detect backpressure before drops occur:
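As an illustration, a simple occupancy gauge sampled on a timer; the names are invented, not Cerulion’s metrics API. A fill ratio trending toward 1.0 is the early warning that a backpressure policy is about to engage.

```rust
// Illustrative occupancy gauge: sample the queue fill level periodically so
// rising occupancy is visible before the drop counters move. Names are invented.
struct OccupancyGauge {
    capacity: usize,
    samples: Vec<f32>, // fill ratio per sampling tick
}

impl OccupancyGauge {
    fn sample(&mut self, queue_len: usize) {
        self.samples.push(queue_len as f32 / self.capacity as f32);
    }

    /// Sustained occupancy near 1.0 means backpressure is about to kick in.
    fn high_watermark(&self) -> f32 {
        self.samples.iter().copied().fold(0.0, f32::max)
    }
}

fn main() {
    let mut gauge = OccupancyGauge { capacity: 16, samples: Vec::new() };
    for queue_len in [2, 5, 9, 14, 15] { // queue creeping toward capacity
        gauge.sample(queue_len);
    }
    assert!(gauge.high_watermark() > 0.9); // alert before the first drop
}
```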