I’ve also seen the term “real-time web” (RTW), which I like as well, but here I want to dive a bit into the underlying bytes that go back and forth to make the real-time web happen.
The binary “message” at the heart of the real-time web is a sequence of bytes controlled by the application: JMS-style messages, XMPP (Jabber) frames, JSON objects, Java objects serialized with Hessian, packets for a Quake game, stock ticker updates, iPhone app messages, status updates for a toll-booth control panel, on-demand music streaming, auto-manufacturing overview consoles.
Because messages vary in length from tiny Quake messages, where response time is critical, to larger packets like music and video streams, the underlying protocol must handle that whole range while staying memory-efficient. It would be absurd to force a server to buffer an entire video before sending it, or even to fully serialize an XML message just to find its length.
So a sane protocol needs length-prefixed binary chunks (called “frames”) that combine into messages. “Messages” are understood by the application; “frames” are invisible to the application but are used by clients, servers, and intermediaries to manage the messages.
Bringing those requirements together, the minimal protocol looks like the following:
stream ::= message*
message ::= (non-final-frame)* final-frame
final-frame ::= final-flag length <bytes>
non-final-frame ::= non-final-flag length <bytes>
At the moment that’s a bit abstract, since I haven’t defined the encodings for the length or the final-flag, or allowed for any kind of control messages, but it’s the heart of the protocol.
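To make the grammar concrete, here’s a minimal sketch in Python. The wire encoding is my own assumption for illustration (one header byte whose high bit is the final-flag, then a 2-byte big-endian length), since the post deliberately leaves those encodings undefined:

```python
import struct

FINAL = 0x80  # high bit of the header byte marks the final frame (assumed encoding)

def encode_message(data: bytes, max_frame: int = 8 * 1024) -> bytes:
    """Split one application message into frames:
    each frame is a flag byte, a 2-byte big-endian length, then the chunk."""
    chunks = [data[i:i + max_frame] for i in range(0, len(data), max_frame)] or [b""]
    out = bytearray()
    for i, chunk in enumerate(chunks):
        flag = FINAL if i == len(chunks) - 1 else 0x00
        out += struct.pack(">BH", flag, len(chunk)) + chunk
    return bytes(out)

def decode_messages(stream: bytes) -> list[bytes]:
    """Walk a stream of frames and reassemble the application messages."""
    messages, buf, pos = [], bytearray(), 0
    while pos < len(stream):
        flag, length = struct.unpack_from(">BH", stream, pos)
        pos += 3
        buf += stream[pos:pos + length]
        pos += length
        if flag & FINAL:  # final frame completes one message
            messages.append(bytes(buf))
            buf = bytearray()
    return messages
```

A 20k message would go out as three frames (8k, 8k, and the remainder), and a reader can interleave or reassemble messages purely from the flag and length, never inspecting the payload.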
The key is sending binary data in chunks, so servers can use their own fixed-size buffering (like 8k buffers) to send arbitrary-length binary data.
Any text data is easily encoded as UTF-8 over the binary payload.
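That fixed-buffer property can be sketched as a streaming framer: it reads one chunk ahead from any file-like source (a video file, a socket), so it can set the final-flag on the last frame without ever knowing the total length up front. The header layout is the same assumed encoding as before, not anything the post specifies:

```python
import struct

FINAL = 0x80  # assumed: high bit of the header byte marks the final frame

def frame_stream(source, frame_size: int = 8 * 1024):
    """Yield frames from a file-like source using only frame_size of memory.
    Reads one chunk ahead so the last chunk can carry the final-flag."""
    chunk = source.read(frame_size)
    while True:
        nxt = source.read(frame_size)  # look ahead to detect end-of-stream
        flag = FINAL if not nxt else 0x00
        yield struct.pack(">BH", flag, len(chunk)) + chunk
        if not nxt:
            break
        chunk = nxt
```

Because the generator holds at most two chunks at a time, a server can pump a multi-gigabyte stream through it with constant memory; text payloads just pass through as UTF-8 bytes like any other data.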