Ultra-low-latency distributed messaging network
This project is currently under development.
FastPubSub delivers messages of up to 64 KB from Writers to many Listeners over a global relay fabric. We probe paths in real time and route around overloaded interconnects and BGP detours to reduce latency spikes and keep delivery consistent, especially for real users behind ISPs, mobile networks, and firewalls.
Engineering note: WebSocket edge (QUIC planned) · QUIC relay backbone · Online-only fanout (no persistence)
Built entirely in Rust for maximum performance, memory safety, and minimal overhead.
Our relays dynamically route packets based on real-time latency measurements (RTT), not hop count.
Uses QUIC over UDP between relays to form an overlay mesh that can route around BGP detours and congested interconnects.
Easy integration for clients via standard WebSockets. Logical channels instead of raw TCP streams.
Start coding in minutes with our lightweight SDKs for major languages. No complex configuration required.
Built on open standards: WebSocket for edge connectivity (QUIC planned for clients) and QUIC for the relay backbone.
In latency-sensitive systems—trading signals, multiplayer state, live bidding—milliseconds matter, but latency spikes matter even more. FastPubSub continuously probes network paths and routes around congested interconnects and BGP detours to keep delivery fast and consistent, especially across regions and real-world ISPs.
Stop overpaying for idle infrastructure. Our global mesh network allows you to scale instantly without the capital expenditure of building and operating your own Points of Presence (PoPs).
The public Internet changes minute to minute. FastPubSub runs on a distributed relay fabric that detects route degradation and shifts traffic to better paths within seconds, maintaining stable delivery even when links congest, peers flap, or parts of the network fail.
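The detection loop described above can be sketched as a smoothed probe per path with a degradation trigger. This is a minimal illustration, not the production logic; the `PathProbe` name, the smoothing constant, and the 1.5x threshold are assumptions:

```rust
/// Smoothed RTT estimate for one path, with a simple degradation trigger.
/// The smoothing weight and the 1.5x threshold are illustrative guesses.
struct PathProbe {
    ewma_ms: f64,     // exponentially weighted moving average of probe RTTs
    baseline_ms: f64, // RTT when the path was first selected
    alpha: f64,       // weight given to the newest sample
}

impl PathProbe {
    fn new(first_sample_ms: f64) -> Self {
        PathProbe { ewma_ms: first_sample_ms, baseline_ms: first_sample_ms, alpha: 0.2 }
    }

    /// Feed one probe result; returns true when the smoothed RTT has drifted
    /// far enough from the baseline that traffic should be shifted.
    fn observe(&mut self, sample_ms: f64) -> bool {
        self.ewma_ms = self.alpha * sample_ms + (1.0 - self.alpha) * self.ewma_ms;
        self.ewma_ms > self.baseline_ms * 1.5
    }
}

fn main() {
    let mut path = PathProbe::new(10.0);
    println!("{}", path.observe(10.5)); // steady traffic: false
    println!("{}", path.observe(40.0)); // congestion spike: true
}
```

Smoothing keeps a single noisy probe from triggering a route flap, while a sustained spike crosses the threshold within a few samples.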
BGP is built for reachability, not speed. It can pick longer or congested routes. Our relays continuously measure RTT, jitter, and loss. If a path degrades, we recompute routes in seconds and steer traffic to a better hop.
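The latency-first route choice can be illustrated with a single distance-vector step: combine our measured RTT to each neighbor with the latency that neighbor advertises to the destination, and pick the cheapest total. The relay names and the `best_next_hop` function are hypothetical, and real routing would also weigh jitter and loss:

```rust
use std::collections::HashMap;

/// One neighbor's advertisement for a destination: its id and its claimed
/// total latency (ms) from itself to that destination.
struct Advert {
    neighbor: &'static str,
    claimed_ms: f64,
}

/// Distance-vector step: our probed RTT to the neighbor plus the latency the
/// neighbor advertises onward. Hop count never enters the calculation.
fn best_next_hop(
    rtt_to_neighbor_ms: &HashMap<&str, f64>,
    adverts: &[Advert],
) -> Option<(&'static str, f64)> {
    adverts
        .iter()
        .filter_map(|a| {
            rtt_to_neighbor_ms
                .get(a.neighbor)
                .map(|rtt| (a.neighbor, rtt + a.claimed_ms))
        })
        .min_by(|x, y| x.1.partial_cmp(&y.1).unwrap())
}

fn main() {
    // Direct neighbor RTTs as measured by our probes.
    let rtts = HashMap::from([("fra", 8.0), ("lon", 4.0), ("ams", 6.0)]);
    // Each neighbor's advertised latency onward to the destination relay.
    let adverts = [
        Advert { neighbor: "fra", claimed_ms: 30.0 },
        Advert { neighbor: "lon", claimed_ms: 38.0 }, // closest neighbor, but slowest overall
        Advert { neighbor: "ams", claimed_ms: 33.0 },
    ];
    let (hop, total) = best_next_hop(&rtts, &adverts).unwrap();
    println!("{hop} {total}"); // picks fra: 8 + 30 = 38 ms total
}
```

Note how `lon`, the nearest neighbor, loses to `fra` because only the end-to-end measured latency counts.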
Pure fire-and-forget delivery. No disk persistence means fewer moving parts and no storage bottlenecks—messages flow from Writer to Listener without hitting a database.
FastPubSub is message-oriented, not stream-oriented. We don't enforce global ordering across a channel, so there is no single serialized queue that every message must wait behind. Each subscriber has independent flow control—slow listeners won't stall fast ones.
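Independent per-subscriber flow control can be sketched as one bounded outbox per listener: a full queue affects only its own listener, never the rest of the channel. The `Fanout` type, the drop-newest policy, and the capacity are assumptions for illustration:

```rust
use std::collections::{HashMap, VecDeque};

/// One bounded outbox per listener. When a listener's queue is full, the
/// newest message is dropped for *that listener only*; nobody else stalls.
struct Fanout {
    cap: usize,
    outboxes: HashMap<String, VecDeque<String>>,
}

impl Fanout {
    fn new(cap: usize) -> Self {
        Fanout { cap, outboxes: HashMap::new() }
    }

    fn subscribe(&mut self, id: &str) {
        self.outboxes.insert(id.to_string(), VecDeque::new());
    }

    /// Enqueue a message for every subscriber; returns ids that dropped it.
    fn publish(&mut self, msg: &str) -> Vec<String> {
        let mut dropped = Vec::new();
        for (id, q) in self.outboxes.iter_mut() {
            if q.len() < self.cap {
                q.push_back(msg.to_string());
            } else {
                dropped.push(id.clone());
            }
        }
        dropped
    }

    /// A listener draining its own queue (e.g. when its socket is writable).
    fn drain(&mut self, id: &str) -> usize {
        self.outboxes
            .get_mut(id)
            .map(|q| { let n = q.len(); q.clear(); n })
            .unwrap_or(0)
    }
}

fn main() {
    let mut fanout = Fanout::new(2);
    fanout.subscribe("fast");
    fanout.subscribe("slow");
    fanout.publish("m1");
    fanout.publish("m2");
    fanout.drain("fast");               // the fast listener keeps up...
    let dropped = fanout.publish("m3"); // ...the slow one hits its cap
    println!("{dropped:?}"); // ["slow"]
}
```

Because there is no shared queue, the slow listener's backlog never becomes head-of-line blocking for the fast one.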
No central broker bottleneck. Writers publish to the nearest relay, the relay mesh forwards the message along the best measured path, and the destination relay fans out to subscribed listeners. Routing stays distributed and scalable as channels and audiences grow.
Drop-in SDKs. WebSocket today, QUIC for clients planned. QUIC relay backbone. Simple Pub/Sub API. Focus on your app logic, not networking code.
Designed for many-to-many communication. Supports thousands of concurrent Writers and thousands of Listeners per channel with minimal latency penalty.
The core is operational: QUIC overlay mesh, latency-based Distance Vector routing, and access ticket authentication are all working. We are hardening these components and preparing for the first external testers.
New: dashboard.fastpubsub.com is live — a real-time monitoring dashboard showing our overlay network, latency savings, and network anomalies on a world map. We are also preparing the first SDKs for easy integration.
Clients (Writers/Listeners) connect to our relays via WebSocket. Authentication and channel logic happen here.
The message enters the network. Our relays use Distance Vector routing with latency metrics to forward data through the best measured path.
Data reaches the destination relay and is pushed to subscribed Listeners immediately.
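The three steps above can be modeled in memory for intuition. Everything here is illustrative; the relay and channel names and the type shapes are assumptions, and the real edge speaks WebSocket while the relay backbone speaks QUIC:

```rust
use std::collections::HashMap;

/// In-memory stand-in for the relay that terminates a channel's Listeners.
struct DestinationRelay {
    // channel name -> listener ids subscribed at this relay
    subscriptions: HashMap<&'static str, Vec<&'static str>>,
}

impl DestinationRelay {
    /// Step 3: push an arriving message to every Listener on the channel.
    fn fan_out(&self, channel: &str, payload: &str) -> Vec<String> {
        self.subscriptions
            .get(channel)
            .map(|listeners| {
                listeners.iter().map(|l| format!("{l} <- {payload}")).collect()
            })
            .unwrap_or_default()
    }
}

fn main() {
    let relay = DestinationRelay {
        subscriptions: HashMap::from([("prices", vec!["listener-a", "listener-b"])]),
    };
    // Step 1: a Writer publishes to its nearest relay.
    // Step 2: the mesh forwards along the best measured path (elided here).
    // Step 3: the destination relay delivers immediately, with no disk in between.
    for delivery in relay.fan_out("prices", "BTC=64000") {
        println!("{delivery}");
    }
}
```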