Real-time communication (with a 10-second delay)

The problem

As we know, Delta Chat, and therefore its webxdc implementation, is not really suitable for real-time communication, presumably because it’s e-mail-based. You can’t send several messages per second as you would with WebSocket or WebRTC. In fact, there is currently a throttle in deltachat-core that pauses the sending of messages, limiting sustained sending to one batch every 10 seconds.

See, for reference

So, it’s not really possible to write apps such as video conferencing, or a real-time game with spectators.

Or is it? (Vsauce music starts)

The “solution”

How about embracing the delay, but still ensuring the stream-like-ness of the data communication? Taking a video stream as an example: instead of a once-per-10-seconds refresh rate, we can have a much higher effective refresh rate (as high as you like) by capturing several frames since the last update, batching them into one webxdc.sendUpdate, and then spreading the frames out over time on the receiving end.
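A minimal sketch of the sender side, assuming the standard `webxdc.sendUpdate` API. `createBatcher`, `capture`, and `grabFrame` are hypothetical names, and the frame source is stubbed out:

```javascript
// Collect frames at a high rate, then flush them as ONE webxdc update.
// `send` is injected so the logic can be tried out without webxdc itself.
function createBatcher(send) {
  let pending = [];
  return {
    // Called many times per second with whatever a "frame" is for your app.
    capture(data, now = Date.now()) {
      pending.push({ t: now, data });
    },
    // Called once per send window: everything captured since the last
    // flush goes out as a single update.
    flush() {
      if (pending.length === 0) return;
      send({ payload: { frames: pending } });
      pending = [];
    },
  };
}

// Hypothetical wiring inside a webxdc app:
// const batcher = createBatcher((u) => window.webxdc.sendUpdate(u, ""));
// setInterval(() => batcher.capture(grabFrame()), 100); // ~10 fps capture
// setInterval(() => batcher.flush(), 10_000);           // one batch per 10 s
```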

I know this is not a brand-new concept. It is widely used in audio/video communication software (it’s just called a “buffer”, I think), and the purpose is exactly the same: increase latency in exchange for a consistent, uninterrupted audio/video stream. I’m pretty sure there is a library somewhere made for this, as there is nothing specific to webxdc, or even to audio. Maybe it has even been applied in the way that I’m suggesting.


Firstly, it’s funny. Secondly, it can be useful for stream-like, not-too-interactive things. Possible examples:

  • Actual audio/video communication (terrible) or just streaming (like lectures and stuff) (ok).
  • Real-time games without direct multiplayer interaction (like, say, Tetris), where you can watch other players play.
  • Collaborative apps, like the editor we’ve made. For the editor, it could mean showing other people typing in sorta-real time, character by character rather than sentence by sentence.

Some implementation details

  • When you send a packet, you mark the time at which you send it (it can be some kind of offset from the first packet, or actual UNIX time).
  • When you receive a sendUpdate batch of packets, you apply a playback delay (probably a fixed one, so that the spacing between packets stays the same), check the timestamp of each packet, and hand them to the consumer (the video renderer, the game state) with the said delay.
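The receiving end could look something like this (a sketch; `makeScheduler` and `render` are hypothetical names). The first packet pins a mapping from sender time to local playback time, and every later packet keeps its original spacing:

```javascript
// Map sender timestamps to local playback times with a fixed delay.
// The offset is locked in on the first packet, so inter-packet spacing
// is preserved exactly, regardless of when batches actually arrive.
function makeScheduler(delayMs) {
  let clockOffset = null;
  return function playTime(senderTs, now) {
    if (clockOffset === null) clockOffset = now + delayMs - senderTs;
    return senderTs + clockOffset;
  };
}

// Hypothetical wiring: feed a received batch to the consumer.
// const playTime = makeScheduler(12_000); // delay > send period, so no starvation
// window.webxdc.setUpdateListener((update) => {
//   const now = Date.now();
//   for (const f of update.payload.frames) {
//     setTimeout(() => render(f.data), Math.max(0, playTime(f.t, now) - now));
//   }
// });
```

Note the playback delay has to be at least the send period (10 s), otherwise the buffer runs dry between batches.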

Maybe just reduce the throttle period in Delta Chat?

I don’t know. I’m not sure what’s better for mail protocols and servers: one bigger message every 10 seconds, or ten smaller ones, one per second.


Also note that messages have a delay of their own (depending on provider and conditions, one second or more).

We are also thinking of adding a p2p API to DC and the webxdc spec, so I would not bet too much on your workarounds, but they could make for fun experiments, like pushing the limits and finding out what’s possible within the current constraints.

Streaming media with subtitles (overlaid, not burned in) could also be fun, especially since you can aggressively compress and use low resolution for some content as long as you have real text subtitles.


Do you have a link to that? I’m thinking why not just allow WebRTC inside webxdc (yeah, let’s undo 3 months of work).

FYI I made a prototype of such an app, though it’s not explicitly utilizing the “batch and spread” algorithm that the original post describes.

This could also be useful for that, but purely for performance reasons (I mentioned this forum post in the issue): #7 - perf: ~0.5s stutter for high-yjs-update-rate apps (webxdc-yjs-provider). Side note: the realization that this can also be used for performance reasons led me to another realization: this might be a common algorithm.

I’m retracting this comment, because such updates are also huge in size, which we can’t afford given that all webxdc updates are stored (non-ephemeral). I made a new comment: #7 - perf: ~0.5s stutter for high-yjs-update-rate apps (webxdc-yjs-provider).
Though what if we compress the updates :thinking: :thinking:
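For reference, a sketch of what compression could look like with the built-in CompressionStream/DecompressionStream API (available in modern browsers and Node 18+); this is just an idea, and the compressed bytes would still need e.g. base64 encoding to fit into a JSON webxdc payload:

```javascript
// Deflate raw bytes (e.g. a concatenated Yjs update) before sending.
async function compress(bytes) {
  const stream = new Blob([bytes]).stream()
    .pipeThrough(new CompressionStream("deflate"));
  return new Uint8Array(await new Response(stream).arrayBuffer());
}

// Inverse on the receiving end.
async function decompress(bytes) {
  const stream = new Blob([bytes]).stream()
    .pipeThrough(new DecompressionStream("deflate"));
  return new Uint8Array(await new Response(stream).arrayBuffer());
}
```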