
Error monitoring for notebooks!

I think I have a good solution for keeping notebooks running. With a few tweaks, sentry.io works in userspace: it collects and summarizes where errors are occurring, records which OS/browser/device combinations things are breaking on, and captures what was happening leading up to an error. It’s a very professional solution to the problem, and provides a single place to monitor all your notebooks! I put some screenshots in the following notebook.

Best thing? For low-traffic, single-developer use cases… it’s FREE

Caveat: Only errors that are not handled by Observable’s Runtime can be logged.

I am digging into this more. So while {throw new Error()} is not reported, {eval("foo")} is. Sentry reports even handled errors if they occur in some well-known entry point like eval or requestAnimationFrame.
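My mental model of why that happens, as a runnable sketch (this simulates the behavior in plain Node; it is NOT Observable’s actual Runtime code, and the names are made up):

```javascript
// The Runtime evaluates each cell inside its own try/catch, roughly:
function runCell(body) {
  try {
    return { value: body() };
  } catch (err) {
    return { error: err }; // absorbed: never reaches window.onerror
  }
}

// A global handler (what Sentry installs via window.onerror and by
// wrapping entry points like eval/setTimeout/requestAnimationFrame):
const seenGlobally = [];
function globalHandler(err) {
  seenGlobally.push(err.message);
}

// An instrumented entry point: the Runtime's try/catch has long since
// returned by the time the callback fires, so the wrapper sees the error.
function instrumentedCallback(body) {
  try {
    body();
  } catch (err) {
    globalHandler(err); // stand-in for Sentry's reporting wrapper
  }
}

// A plain cell error is absorbed by the Runtime, invisible globally…
const result = runCell(() => { throw new Error("cell error"); });

// …but an error inside an instrumented entry point is seen.
instrumentedCallback(() => { throw new Error("callback error"); });
```

So the dividing line isn’t “handled vs unhandled” so much as “does the error escape the Runtime’s own try/catch into something Sentry has wrapped.”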

So yeah, some cases are missed, but a lot of cases, even in userspace code, are detected. Errors in promises or generators are a big category of common userspace errors.

It’s a great point though: it’s not 100% coverage, so I added a limitations section. Still, if you are looking for an easy way to catch 80% of unexpected failures, this is a good way IMHO. It’s a vast improvement over nothing and only takes a few minutes to set up. It also picks up errors on devices you might not own or browsers you might not have access to, so it’s a cheap way to do cross-platform testing, even with its limitations.

Are you certain? I don’t see how Sentry would be able to catch that. Be aware that while testing I triggered some errors from the JS console, outside of Observable’s Runtime.

That number seems far too generous. You may be able to detect unhandled rejections for internal fetch calls (i.e., where the result is processed inside the same cell that made the request), but any promise rejection that is returned directly to the Runtime should be unloggable. That includes failing imports, requires, and top-level fetches.


OK yeah, I have done more testing and I must have gotten my wires crossed with your tests last night. eval("/") does not get caught if it’s in a cell. Nor do require(…) errors.

You can manually wrap and report, though that super sucks (especially as you might need to add in some awaits), so I wonder if there is another way.

So this reports, but I don’t fancy destroying all my notebooks to do that:

{
  try {
    return await require("gibberish module that is caught");
  } catch (err) {
    Sentry.captureException(err); // report to Sentry first
    await Sentry.flush(2000);     // wait for delivery before rethrowing
    throw err;                    // then let the Runtime display the error
  }
}

Edit: I missed the code in your reply, which is basically the same as below. My recommendation would be to simply wrap the logic in a helper, so anything critical would be called as, e.g., sentry(doStuff).
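A minimal sketch of such a helper (names like `sentry` and `doStuff` are hypothetical; a stub `Sentry` object is included only so the snippet runs standalone — in a notebook you’d use the real SDK):

```javascript
// Stub standing in for the real @sentry/browser SDK (illustration only).
const captured = [];
const Sentry = {
  captureException: (err) => captured.push(err),
  flush: async () => true,
};

// Hypothetical helper: wrap any critical piece of logic so failures are
// reported before being rethrown to the Runtime.
async function sentry(fn) {
  try {
    return await fn();
  } catch (err) {
    Sentry.captureException(err); // report first…
    await Sentry.flush(2000);     // …and wait up to 2s for delivery
    throw err;                    // then let the cell fail as usual
  }
}
```

So a critical cell becomes `result = sentry(doStuff)` instead of calling `doStuff()` directly.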

Assuming that you primarily want to detect broken external resources, you could do something like:

async function logErrors(p) {
  try {
    return await p;
  }
  catch (err) {
    // Rethrow outside the Runtime's try/catch so global handlers
    // (like Sentry's) can see the error.
    setTimeout(() => { throw err; }, 0);
    throw err; // still surface the error in the cell
  }
}

and then (using an invalid package name for testing):

d3 = logErrors(require('d3-foo'))
d3 = logErrors(import('d3-foo'))

Instead of the setTimeout you could also log straight to Sentry.
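That variant might look like this (again with a stub SDK so the sketch runs standalone; in a notebook `Sentry` would be the loaded SDK):

```javascript
// Stub for illustration; replace with the real SDK object in a notebook.
const reported = [];
const Sentry = { captureException: (err) => reported.push(err) };

// logErrors variant: report straight to Sentry instead of rethrowing
// asynchronously via setTimeout.
async function logErrors(p) {
  try {
    return await p;
  } catch (err) {
    Sentry.captureException(err);
    throw err; // still surface the error in the cell
  }
}
```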


Yeah, I don’t like changing source code just to satisfy monitoring. Adding an independent cell for monitoring is OK, but having to change every unrelated cell in the notebook is too much.

So there are two major gaps in monitoring: 1. anything handled by the Runtime, and 2. anything that errors out before the SDK is loaded.

So here is a solution I will try out next that hopefully solves both gaps.

This notebook runs another notebook through a private Runtime, so it can see all the errors. It buffers them and will regurgitate them if it finds a configured Sentry runtime. It also exposes an HTTP endpoint to run the notebook, so all you need to do is plug that endpoint into a cron-like availability monitor. I am using UptimeRobot.
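The buffer-and-regurgitate half of that idea can be sketched like this (hypothetical names; the real notebook would wire `report` into the private Runtime’s error handling):

```javascript
// Buffer errors seen before the Sentry SDK is configured, then replay
// ("regurgitate") them once a reporter is attached.
const pending = [];
let reporter = null;

function report(err) {
  if (reporter) reporter(err);
  else pending.push(err); // SDK not loaded yet: hold on to the error
}

function attachReporter(fn) {
  reporter = fn;
  while (pending.length) reporter(pending.shift()); // replay buffered errors
}
```

This closes gap 2: errors thrown before the SDK loads are held in `pending` instead of being lost, and delivered the moment a configured reporter shows up.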

Better?