Three components. Dual-path transport. Zero boilerplate. Cerulion's pub/sub system is built from three components: the TopicManager, which owns the session; the Publisher, which sends to the local and network paths simultaneously; and the Subscriber, which auto-selects the fastest available route. You interact with one API; the transport details are invisible.
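Here is what that one API looks like in use. This is a hypothetical sketch: the page names the components and send(), but every constructor and method signature below (TopicManager::new, publisher, subscriber, try_recv) is an assumption, not the confirmed Cerulion API.

```rust
// Hypothetical usage sketch; all names are assumptions based on the
// component names on this page, not confirmed Cerulion signatures.
use cerulion::TopicManager; // assumed crate path

fn main() {
    // One TopicManager, one shared Zenoh session.
    let manager = TopicManager::new().expect("session");

    // Pub/sub pairs are created by topic name and message type.
    let publisher = manager.publisher::<u64>("sensor/ticks").expect("pub");
    let subscriber = manager.subscriber::<u64>("sensor/ticks").expect("sub");

    // send() writes the local path synchronously and hands the message
    // to the background network path.
    publisher.send(&42).expect("send");

    // Non-blocking receive: Some(msg) if one is waiting, else None.
    if let Some(tick) = subscriber.try_recv() {
        println!("got {tick}");
    }
}
```

Transport selection, serialization, and session management all happen behind those three calls.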

TopicManager

The TopicManager is the central coordinator: it owns a single Zenoh session shared across all pub/sub pairs. One session, many pub/sub pairs, which reduces memory use and eliminates per-topic connection overhead.
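A rough structural sketch, with a plain struct standing in for the real Zenoh session (TopicState and the field names are illustrative assumptions):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

struct Session; // stand-in for a zenoh::Session

// Illustrative per-topic bookkeeping.
struct TopicState {
    remote_subscribers: usize,
}

// One session, many topics: publishers and subscribers created by this
// manager clone the same Arc<Session> instead of opening their own.
struct TopicManager {
    session: Arc<Session>,
    topics: Mutex<HashMap<String, TopicState>>,
}

impl TopicManager {
    fn new() -> Self {
        TopicManager {
            session: Arc::new(Session),
            topics: Mutex::new(HashMap::new()),
        }
    }
}

fn main() {
    let manager = TopicManager::new();
    // A new publisher would clone the shared session, not open one.
    let _session_for_new_publisher = Arc::clone(&manager.session);
    manager.topics.lock().unwrap().insert(
        "sensor/ticks".into(),
        TopicState { remote_subscribers: 0 },
    );
}
```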

Shared session

A single Zenoh session serves all publishers and subscribers. In ROS2, each node often manages its own DDS participant — multiplying discovery traffic and memory usage. Cerulion shares one session across everything.
Subscribers announce themselves on the network. When a remote subscriber appears, the TopicManager auto-enables network publishing for that topic. When the last subscriber disconnects, network publishing stops — no wasted serialization.
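One way to implement that switch is a per-topic count of announced remote subscribers; network serialization runs exactly while the count is nonzero. A std-only sketch (all names assumed):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative: network publishing is on exactly while the count of
// announced remote subscribers is nonzero.
struct TopicNetState {
    remote_subs: AtomicUsize,
}

impl TopicNetState {
    // Called when a remote subscriber announces itself.
    fn remote_subscriber_joined(&self) {
        self.remote_subs.fetch_add(1, Ordering::SeqCst);
    }

    // Called when a remote subscriber disconnects (or times out).
    fn remote_subscriber_left(&self) {
        self.remote_subs.fetch_sub(1, Ordering::SeqCst);
    }

    // The publisher checks this before serializing for the network.
    fn network_enabled(&self) -> bool {
        self.remote_subs.load(Ordering::SeqCst) > 0
    }
}

fn main() {
    let state = TopicNetState { remote_subs: AtomicUsize::new(0) };
    assert!(!state.network_enabled()); // nobody listening: skip serialization
    state.remote_subscriber_joined();
    assert!(state.network_enabled()); // first remote sub: start publishing
    state.remote_subscriber_left();
    assert!(!state.network_enabled()); // last one gone: stop again
}
```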
The TopicManager validates that publishers and subscribers on the same topic use the same message type. Mismatches are caught at startup, not at runtime.
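Rust's TypeId makes that startup check cheap; a minimal sketch of such a registry (the shape is an assumption):

```rust
use std::any::TypeId;
use std::collections::HashMap;

// Illustrative registry: the first publisher or subscriber on a topic
// records its message type; later registrations must match it.
struct TypeRegistry {
    types: HashMap<String, TypeId>,
}

impl TypeRegistry {
    fn register<T: 'static>(&mut self, topic: &str) -> Result<(), String> {
        let id = TypeId::of::<T>();
        match self.types.get(topic) {
            Some(existing) if *existing != id => {
                Err(format!("type mismatch on topic '{topic}'"))
            }
            _ => {
                self.types.insert(topic.to_string(), id);
                Ok(())
            }
        }
    }
}

fn main() {
    let mut reg = TypeRegistry { types: HashMap::new() };
    reg.register::<u64>("camera/frame_id").unwrap();
    // A second party using a different type is rejected at startup:
    assert!(reg.register::<String>("camera/frame_id").is_err());
}
```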
Remote subscribers send periodic keep-alive signals. If a subscriber goes silent, the TopicManager disables network publishing for that topic — saving CPU cycles that would otherwise be spent serializing data nobody receives.
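A sketch of the liveness check, assuming each keep-alive stamps an Instant that is compared against a timeout (the 2-second value is an arbitrary illustration, not Cerulion's actual timeout):

```rust
use std::time::{Duration, Instant};

// Arbitrary illustrative timeout; the real value is not documented here.
const KEEPALIVE_TIMEOUT: Duration = Duration::from_secs(2);

struct RemoteSub {
    last_keepalive: Instant,
}

impl RemoteSub {
    // Called whenever a keep-alive arrives from this subscriber.
    fn touch(&mut self) {
        self.last_keepalive = Instant::now();
    }

    // Checked periodically by the TopicManager: a silent subscriber is
    // treated as gone, and network publishing can be disabled.
    fn is_alive(&self) -> bool {
        self.last_keepalive.elapsed() < KEEPALIVE_TIMEOUT
    }
}

fn main() {
    let mut sub = RemoteSub { last_keepalive: Instant::now() };
    sub.touch();
    assert!(sub.is_alive());
}
```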

Publisher

The Publisher uses a dual-path architecture: local publishing is synchronous and fast, network publishing happens on a background thread. Local path: ~60 ns, and it never waits for network I/O. Network path: asynchronous, non-blocking. Both fire on every publish.

Dual-path publishing

Every send() writes to shared memory (local subscribers) and queues for the network thread (remote subscribers) — simultaneously. Local subscribers see the data in nanoseconds. Remote subscribers get it as fast as the network allows.
The network background thread serializes and transmits independently. If the network is slow or congested, local publishing continues at full speed. The publisher never blocks.
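In sketch form, send() is a synchronous local write plus a non-blocking hand-off to the network thread. The shared-memory region is stood in for by a Mutex-protected slot, and all names are assumptions:

```rust
use std::sync::{Arc, Condvar, Mutex};

// Illustrative publisher: the real shared-memory transport is stood in
// for by a plain Mutex-protected slot.
struct Publisher<T: Clone> {
    shared_mem: Arc<Mutex<Option<T>>>,          // local path (shm stand-in)
    net_slot: Arc<(Mutex<Option<T>>, Condvar)>, // hand-off to network thread
}

impl<T: Clone> Publisher<T> {
    fn send(&self, msg: T) {
        // Local path: synchronous write, visible to local subscribers
        // immediately. Never waits on network I/O.
        *self.shared_mem.lock().unwrap() = Some(msg.clone());

        // Network path: replace the slot and wake the background thread.
        // This is a notify, not a blocking transmit.
        let (slot, cv) = &*self.net_slot;
        *slot.lock().unwrap() = Some(msg);
        cv.notify_one();
    }
}

fn main() {
    let p = Publisher {
        shared_mem: Arc::new(Mutex::new(None)),
        net_slot: Arc::new((Mutex::new(None), Condvar::new())),
    };
    p.send(7u32);
}
```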
Only the most recent message is queued for network transmission. If the publisher sends faster than the network can deliver, stale messages are replaced — not queued. For real-time systems, current data always beats old data.
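The consuming side of that hand-off slot is what makes replace-not-queue concrete: between transmissions, any number of send() calls collapse into the single latest value. A sketch under the same assumptions:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    // Latest-value slot shared between publisher and network thread.
    let slot: Arc<(Mutex<Option<u32>>, Condvar)> =
        Arc::new((Mutex::new(None), Condvar::new()));

    let consumer = Arc::clone(&slot);
    let net_thread = thread::spawn(move || {
        let (lock, cv) = &*consumer;
        let mut guard = lock.lock().unwrap();
        // Wait until a message is present, then take it, leaving None.
        while guard.is_none() {
            guard = cv.wait(guard).unwrap();
        }
        guard.take().unwrap()
    });

    // The publisher outruns the network: three sends land before the
    // network thread can transmit once (the lock is held across all
    // three writes so the overwrite is deterministic in this demo).
    {
        let (lock, cv) = &*slot;
        let mut guard = lock.lock().unwrap();
        for msg in [1u32, 2, 3] {
            *guard = Some(msg); // overwrite, never queue
        }
        cv.notify_one();
    }

    // Only the most recent message survives to be transmitted.
    assert_eq!(net_thread.join().unwrap(), 3);
}
```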
The background thread starts on creation and shuts down on drop. No manual thread management, no cleanup code. Rust’s ownership system guarantees clean shutdown.
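That guarantee is the usual Rust RAII pattern: a shutdown flag plus a JoinHandle joined in Drop. A sketch (names assumed):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread::{self, JoinHandle};
use std::time::Duration;

// Illustrative: the background network thread lives exactly as long
// as the publisher that owns it.
struct NetworkThread {
    shutdown: Arc<AtomicBool>,
    handle: Option<JoinHandle<()>>,
}

impl NetworkThread {
    fn start() -> Self {
        let shutdown = Arc::new(AtomicBool::new(false));
        let flag = Arc::clone(&shutdown);
        let handle = thread::spawn(move || {
            while !flag.load(Ordering::Relaxed) {
                // ... serialize and transmit the latest message here ...
                thread::sleep(Duration::from_millis(1));
            }
        });
        NetworkThread { shutdown, handle: Some(handle) }
    }
}

impl Drop for NetworkThread {
    // Dropping the owner stops and joins the thread: no manual cleanup.
    fn drop(&mut self) {
        self.shutdown.store(true, Ordering::Relaxed);
        if let Some(h) = self.handle.take() {
            let _ = h.join();
        }
    }
}

fn main() {
    let _net = NetworkThread::start();
} // `_net` dropped here; the thread is signaled and joined
```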

Subscriber

The Subscriber automatically discovers topics and selects the optimal transport path. If a topic exists locally, it reads from shared memory. If not, it requests the topic over the network. Like a GPS that always picks the fastest route — local shared memory when available, network when necessary.

Topic discovery

Cerulion surfaces every available topic across every device on the network. You subscribe by name; the system handles routing.
When the topic exists on the same machine, the subscriber reads directly from the publisher’s shared memory region. No serialization, no network stack, no copies. This is why local latency is ~60 ns instead of milliseconds.
When the topic is only available on a remote machine, the subscriber requests it over Zenoh. The remote publisher starts serializing and transmitting. Latest-message semantics ensure the subscriber always gets current data.
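Put together, route selection can be as simple as an enum chosen at subscribe time. The probe below is a placeholder heuristic, and every name is an illustrative assumption:

```rust
// Illustrative transport selection; the actual Cerulion internals are
// not documented on this page.
enum Route {
    LocalShm,    // publisher on this machine: read its shm region
    RemoteZenoh, // publisher elsewhere: request the topic over Zenoh
}

// Stand-in for the real discovery check.
fn topic_is_local(topic: &str) -> bool {
    topic.starts_with("local/") // placeholder heuristic for the sketch
}

fn select_route(topic: &str) -> Route {
    if topic_is_local(topic) {
        Route::LocalShm
    } else {
        Route::RemoteZenoh
    }
}

fn main() {
    assert!(matches!(select_route("local/camera"), Route::LocalShm));
    assert!(matches!(select_route("rover2/lidar"), Route::RemoteZenoh));
}
```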
When a node runs and no message is waiting on an input, the transport layer returns immediately — no blocking, no spinning. The scheduler controls when nodes execute, not the transport layer.
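A sketch of that contract: the receive call returns an Option and never parks the caller (std-only stand-in; try_recv is an assumed name):

```rust
use std::sync::{Arc, Mutex};

// Stand-in subscriber over the same latest-value slot used in the
// publisher sketches above; illustrative only.
struct Subscriber<T> {
    slot: Arc<Mutex<Option<T>>>,
}

impl<T> Subscriber<T> {
    // Returns immediately: Some(msg) if one is waiting, None otherwise.
    // No blocking, no spinning; scheduling stays with the caller.
    fn try_recv(&self) -> Option<T> {
        self.slot.lock().unwrap().take()
    }
}

fn main() {
    let slot = Arc::new(Mutex::new(Some(9u32)));
    let sub = Subscriber { slot: Arc::clone(&slot) };
    assert_eq!(sub.try_recv(), Some(9)); // a message was waiting
    assert_eq!(sub.try_recv(), None);    // now empty: return at once
}
```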

How They Work Together

The TopicManager orchestrates everything. Publishers and subscribers don’t know or care whether they’re communicating locally or over the network.
Ready to try Cerulion? Start with the quickstart and have a working camera-to-display pipeline in 5 minutes — or schedule a 15-min call with the team.