Are all of these setTimeout and setInterval calls normal?

Hi folks,

I was paying attention to a different window of Firefox (v68.0.1) when I noticed a banner saying “A page is slowing down performance” appear in a window for a notebook I had open behind it. The notebook in question was the only tab open in that window, so I profiled its performance after refreshing it, and I was surprised by the result. Despite there being no setTimeout or setInterval calls in any of the notebook cells, the former was being called around 30 times per second, and the latter three or four times in the same time frame.

I wasn’t sure whether a module I had loaded or an errant piece of my own code was the culprit, so I profiled the five-minute introduction notebook, only to discover the exact same behavior there, even after commenting out all of the generator cells and the cells containing explicit setTimeout and setInterval calls.

It’s mildly troubling that none of the setInterval calls seem to have corresponding clearInterval calls, since I have to imagine that intervals continue to be created long after the window in which I profiled the page. I’m sincerely wondering whether this is intended behavior or something unexpected. Page responsiveness remains remarkably fluid throughout (until Firefox reports that it isn’t), but it strikes me as an unusual thing to be happening in a system that otherwise seems built entirely on the Promise and EventListener APIs.
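
For reference, the cleanup I’d expect to see is the standard pattern of holding on to the ID that setInterval returns and passing it to clearInterval once the timer is no longer needed; a minimal sketch:

```javascript
// Keep the ID that setInterval returns so the timer can be
// cancelled later, instead of running for the lifetime of the page.
let ticks = 0;
const timerId = setInterval(() => { ticks += 1; }, 50);

// ...later, when the timer is no longer needed (here, immediately):
clearInterval(timerId); // cancels the timer; the callback never fires
```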

Does anyone have any insight into what’s going on?

EDIT: by comparison, the readme page for Observable’s stdlib makes zero setTimeout or setInterval calls when I profile it, so I have to assume this is the work of a script (or scripts) loaded by every notebook.

The 5-minute intro has one cell that does:

i = {
  let i = 0;
  while (true) {
    yield ++i;
  }
}

And another cell that does:

date = {
  while (true) {
    yield new Promise(resolve => {
      setTimeout(() => resolve(new Date), 1000);
    });
  }
}
These just keep running in an infinite loop.

Yes, as I mentioned in my post, I have those cells commented.

Looks like these happen regularly (every 4–5 seconds, accompanied by a DOM event). My guess is that this is a heartbeat used for updating the notebook when it gets changed somewhere externally to the page. But it could be something else.

I think maybe the inspector’s performance tab is showing you every time the setInterval callback fires, rather than the creation of new timers piling up. I’m not an expert on browser profilers, though.
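
If that’s right, the distinction is easy to see in miniature; a small sketch (timer period shortened for illustration):

```javascript
// This registers exactly *one* interval, but a profiler timeline
// records an entry for each firing, which can read as many
// setInterval "calls" even though no new timers are being created.
let firings = 0;
const id = setInterval(() => {
  firings += 1;
  if (firings === 3) clearInterval(id); // still only one timer in total
}, 50);
```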

I don’t think this is related to whatever was causing your “A page is slowing down performance” warning banner.

Thank you for looking into the behavior on your machine. The notebook in question didn’t contain any cells that regularly re-rendered or re-ran on any kind of schedule, nor any code that would (or, I ought to say, that should) block the main thread, let alone any code that would have been expensive in terms of cycle time.

I’m not convinced that the performance banner and these apparent page change heartbeats are unrelated.

The worker process is shared between multiple notebooks:

The only interval that always runs, regardless of notebook content, is a 5s interval we use to detect infinite loops. If notebook content delays the event loop by two 5s heartbeats, we show a message telling you that the notebook is probably stuck in an infinite loop and that you’ll want to fix it in safe mode.
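
Roughly, a check like that can be sketched as follows (illustration only; the function name and exact threshold here are made up, not our actual code):

```javascript
const HEARTBEAT_MS = 5000; // the 5s watchdog period described above

// Hypothetical check: treat the event loop as blocked if a
// heartbeat arrives more than two full periods late, since a busy
// main thread delays timer callbacks.
function loopLooksBlocked(periodMs, gapMs) {
  return gapMs > periodMs * 2;
}

console.log(loopLooksBlocked(HEARTBEAT_MS, 5100));  // → false (on time)
console.log(loopLooksBlocked(HEARTBEAT_MS, 12000)); // → true (two beats missed)
```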

If you’re looking at the full window, there are some more intervals you’ll see: CodeMirror polling for input changes every 100ms and blinking the cursor every 530ms. There’s also a 30s ping that we send to make sure the notebook is still connected.
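
The ping is just another interval under the hood; a sketch, with startKeepAlive and sendPing standing in for the real transport (both names are illustrative):

```javascript
const PING_MS = 30000; // the 30s connection check described above

// Illustrative wrapper: fire the supplied ping function on a timer
// and return the interval ID so it can be cleared on disconnect.
function startKeepAlive(sendPing, periodMs = PING_MS) {
  return setInterval(sendPing, periodMs);
}

// Demo with a much shorter period so it finishes quickly:
let pings = 0;
const keepAlive = startKeepAlive(() => { pings += 1; }, 20);
setTimeout(() => clearInterval(keepAlive), 110); // stop after a few pings
```
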

But the gist of the performance question is whether any of these take enough time on the timeline to delay the page, and we haven’t identified any of them to take longer than a tick in any scenario. If you’re still seeing a performance warning on a basic notebook with all loops, yields, and timeouts commented out, and with no other notebook tabs open (because Firefox uses OOPIF in recent versions, so they say), then it’s possible that there’s something awry, which would either mean an extension or something that we’ve missed so far. In that case, if you see the timeouts, clicking on them and getting an idea of which chunk of source code they’re coming from would be helpful, because afaict there’s no sign of such a problem in my testing.