in reply to GNOME

it's been curious to follow from a distance how audio and video handling (on linux, and more broadly) have been converging yet never quite arriving in the same place.

It's interesting to think about the differences: video carries the heavier computational load for display, while audio is far more sensitive to timing errors (because of how human hearing works).

I read the blog post and couldn't stop wondering: "why don't they just double buffer, as simply as every audio API on every platform does?"
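
for anyone unfamiliar, the pattern I mean is the classic ping-pong scheme: while the device drains one buffer, the application fills the other, then the two swap roles. a minimal sketch in C, with a toy "device" standing in for real hardware (fill_buffer and device_consume are placeholders I made up, not any real audio API):

```c
#include <math.h>
#include <stdio.h>

#define FRAMES 256
static const double TWO_PI = 6.283185307179586;

/* app side: render the next block of audio (here, a 440 Hz sine) */
static void fill_buffer(float *buf, int frames, double *phase) {
    for (int i = 0; i < frames; i++) {
        buf[i] = (float)sin(*phase);
        *phase += TWO_PI * 440.0 / 48000.0;
    }
}

/* device side: pretend the hardware plays this block while the
 * other buffer is being filled */
static void device_consume(const float *buf, int frames) {
    (void)buf; (void)frames;
}

int main(void) {
    float bufs[2][FRAMES];
    double phase = 0.0;
    int active = 0;                               /* buffer the "device" is playing */

    fill_buffer(bufs[active], FRAMES, &phase);    /* prime the first buffer */

    for (int block = 0; block < 8; block++) {
        int idle = 1 - active;
        fill_buffer(bufs[idle], FRAMES, &phase);  /* render ahead into the idle buffer */
        device_consume(bufs[active], FRAMES);     /* "playback" of the active buffer */
        active = idle;                            /* swap roles */
        printf("block %d played, next one already queued\n", block);
    }
    return 0;
}
```

in a real audio API the swap is driven by a hardware interrupt or callback rather than a loop, but the invariant is the same: as long as the app finishes filling the idle buffer before the active one runs dry, playback never glitches.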