How accurately am I measuring time in this fractal typeset code?

I set this notebook up primarily to illustrate the cool fractal pattern that arises when, starting with x, you replace every x that you see with x_x^x and continue iteratively:
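The replacement rule can be sketched in a few lines of JavaScript. Here `fractalTex` is a hypothetical helper (not from the notebook), and the braced grouping is an assumption about how the sub/superscripts need to be nested:

```javascript
// Build the TeX source by repeatedly replacing every "x" with "x_x^x".
function fractalTex(iterations) {
  let s = "x";
  for (let i = 0; i < iterations; i++) {
    // replace() with a /g regex rewrites all current x's in one pass,
    // so each iteration triples the number of x's in the string.
    s = s.replace(/x/g, "{x_{x}^{x}}");
  }
  return s;
}
```

Since each iteration triples the count, n iterates contain 3^n x's, which is why going from 5 to 7 iterates multiplies the size by 9.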

The picture above is 5 iterates; of course, 7 iterates has 9 times as many x's and, on my first try, slowed down my computer noticeably. Thus, I thought this might make for a fun example comparing the timing of KaTeX to that of MathJax. The basic strategy looks something like this:

  s = ___crazy_long_tex_string___;
  let t = new Date();
  yield tex.block`${s}`;
  output(new Date() - t); // to another cell

So, the basic question is, if I have something like

  yield tex.block`${s}`

in between t = new Date() and result = new Date() - t, how accurate might I expect result to be? If it's not so good, are there other approaches I might take to measuring the time? Again, the specific objective is understanding the speed difference between KaTeX and MathJax.

Dominik has a great template for this: Benchmark template / Dominik Moritz / Observable


Doing fine-grained performance testing with Date isn’t going to be very accurate. You’d be better off using the window.performance set of APIs, which provide access to high-resolution timers. There are two major ways to use the API. The first is the dedicated performance.mark and performance.measure functions, which let the browser handle some of the bookkeeping when you have many measurements. The second is the simpler performance.now(), which you can use as a drop-in replacement for your current usage of Date.
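For concreteness, the two styles look roughly like this (the mark names and the timed work are placeholders; the same `performance` globals also exist in recent Node):

```javascript
// Style 1: performance.mark / performance.measure. The browser records
// named timestamps and computes the span between them for you.
performance.mark("render-start");
// ... the work you want to time ...
performance.mark("render-end");
performance.measure("render", "render-start", "render-end");
const entry = performance.getEntriesByName("render")[0];
console.log(entry.duration); // milliseconds, sub-millisecond resolution

// Style 2: performance.now(), used exactly like the Date-based version.
const t0 = performance.now();
// ... the work you want to time ...
const elapsed = performance.now() - t0;
console.log(elapsed);
```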

Note that Dominik’s notebook that Fil linked to uses performance.now(), and has a lot of other features as well. I’d recommend trying to use that if you can, but it is a little more work to integrate.


Thanks @Fil and @mythmon

I am actually using performance.now() in the notebook already. I guess that my specific question is:

should I expect the timing to genuinely account for rendering time?

I ask for two reasons:

  1. My experience in other environments (Mathematica, in particular) suggests that timing like this might account for CPU time but not GPU time, and
  2. The computed timings on Chrome, especially for MathJax SVG, seem somewhat too short compared to observation.

I also notice that the examples in Dominik’s notebook are simple computations, rather than graphic generation.

I hope that makes more sense.

Apologies, I didn’t actually look at your notebook :sweat_smile:.

For what it’s worth, the times I see reported in your notebook (in Firefox) approximately match measurements I’m taking with a stopwatch.

The notebook you have now has an interesting property: because it uses yield within the timed section, and because Observable ties yield to requestAnimationFrame, you are involving the animation loop of the browser. That means the measurement will take into account the time it takes to do any work that blocks the animation thread. Luckily this includes rendering, which is what you are trying to measure.

Overall, I think this is a valid way to measure the performance of the notebook.
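To make that pattern concrete, here is a minimal sketch, with a `setTimeout`-based Promise standing in for the requestAnimationFrame boundary that Observable’s yield introduces (`timeAcrossTick` is a hypothetical name, not part of Observable’s API):

```javascript
// Measure work whose effects land on a later turn of the event loop,
// analogous to timing around a `yield` in an Observable cell.
async function timeAcrossTick(work) {
  const t0 = performance.now();
  work();                                   // the synchronous, blocking part
  await new Promise((r) => setTimeout(r));  // give the event loop a turn,
                                            // standing in for the animation
                                            // frame Observable waits for
  return performance.now() - t0;            // includes any time the loop
                                            // was blocked in between
}

// Usage: timeAcrossTick(() => renderHugeFormula()).then((ms) => output(ms));
```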

An entirely different approach you could take is to use the browser profiling tools. I recorded a profile using Firefox’s profiling tool while rendering the 7 iteration KaTeX variant. The notebook reported 5.094 seconds.

In the recorded profile there is a lot of information, but one of the interesting things to me is a reported 5.1 seconds of event processing delay (the pink bar), representing the page not being responsive (jank), and the 4.9 seconds spent in requestAnimationFrame callbacks (the brown bar). I don’t think you can see it in the online version of that profile, but the profiler also points to nearly all the time being spent in Node.replaceChild calls originating from the Observable Worker (where the KaTeX code would be bundled).

That independent measurement seems to add weight to the idea that using performance.now() calls around a yield statement is a good way to measure Observable’s rendering performance.


Thanks @mythmon This is all good information - the Firefox profiling tool, in particular, seems awesome! I can’t believe I’m only learning of this. I have a feeling that my own workflow is going to slow to a crawl as I start profiling everything I do! :slight_smile:


Afaik performance.now() won’t give you more granular results, because the number reported is varied randomly by browsers to protect against Spectre. You’ll end up with the same resolution as Date.

Date is oriented towards absolute time, calendars, time zones, and human representations. As well as being higher overhead (since it does more), it is also not guaranteed to be monotonic. In code like

const start = new Date();
// do work
let duration = new Date() - start;

duration is not guaranteed to be non-negative. Edge cases include DST or timezone trickery, but also clocks updating automatically due to clock drift and NTP. These are small details, but worth considering. On the other hand, performance.now() is defined by the spec to be monotonic.
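The same pattern with performance.now() avoids that: because the value is monotonic, the duration cannot come out negative even if the wall clock is adjusted mid-measurement.

```javascript
// Drop-in replacement for the Date-based version above.
const start = performance.now();
// do work
const duration = performance.now() - start; // monotonic clock: always >= 0
```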

Also, the time of performance.now() is not always tampered with: Chrome seems to have relaxed it to give more precise values now, and Firefox can be configured to disable that tampering (I remember hearing Firefox’s Fission project would enable making those timers more precise again, but I don’t have the inside news anymore in that regard).
