Been playing around more with #HTMLFragments (blog post incoming soon) and I realized the streaming implementation is slower than it should be.
https://blog.dwac.dev/posts/streamable-html-fragments/#streaming-complete-chunks
If you stream two nodes with a delay between them, the first node actually won't stream at all! They'll both appear at once.
```javascript
// `timeout` is a simple promise-based delay helper.
const timeout = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

response.write('<div id="1"></div>');
await timeout(1_000); // `#1` won't render until `#2` arrives.
response.write('<div id="2"></div>');
```
This is because we use a `MutationObserver` and detect the addition of `#2` to know that `#1` is done parsing. However, `#2` doesn't exist until 1 second after `#1` was sent. So in any realistic streaming scenario, the last element is displayed one batch later than it should be.
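For the curious, here's roughly what that detection looks like. This is just a sketch of the observer-based approach described above; `container` and `onNodeParsed` are illustrative names, not the actual implementation:
```javascript
const observer = new MutationObserver((records) => {
  for (const record of records) {
    for (const added of record.addedNodes) {
      // A new sibling arriving is the only hint that the node before
      // it has been fully parsed.
      if (added.previousSibling) onNodeParsed(added.previousSibling);
    }
  }
});
observer.observe(container, { childList: true });
```
Note the gap: the *last* node never gets a sibling after it, so the observer can't tell when it's done until the stream itself ends.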
This can theoretically be fixed by streaming the opening tag of the next batch as soon as possible, which is _probably_ feasible (the opening tag likely doesn't depend on the slow operation), but it's so unergonomic that it would likely never be done. You'd have to rewrite the totally reasonable:
```javascript
// Each record is fetched and flushed as soon as it's ready.
const text1 = await readDatabase(1);
response.write(`<div id="1">${text1}</div>`);
const text2 = await readDatabase(2);
response.write(`<div id="2">${text2}</div>`);
```
Into the totally unreasonable:
```javascript
const text1 = await readDatabase(1);
// Flush `#2`'s opening tag early so the parser can tell `#1` is complete.
response.write(`<div id="1">${text1}</div><div id="2">`);
const text2 = await readDatabase(2);
response.write(`${text2}</div>`);
```
You definitely shouldn't _have_ to do that; the HTML parser already knows when `#1` is finished. But since the platform doesn't give us a proper signal that an element is done parsing, we're stuck relying on `MutationObserver` and taking this performance hit.
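Concretely, the wish is something like this (entirely hypothetical; `parsefinished` doesn't exist in any browser today):
```javascript
// Hypothetical API: imagine the parser dispatching an event once it
// consumes an element's closing tag. Nothing like this exists yet.
el.addEventListener('parsefinished', () => {
  // Safe to render: all of `el`'s children are guaranteed to be parsed.
  renderFragment(el); // illustrative callback
});
```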
So the takeaway here is: Give me a signal that an element is done parsing, you cowards!