@tomlarkworthy I am using the tgz endpoint for my nodejs application like this:
```json
{
  "name": "observablehq",
  "version": "1.0.0",
  "description": "Observable Notebook Server",
  "main": "index.mjs",
  "scripts": {
    "start": "node index.mjs",
    "refresh": "npm update && npm start"
  },
  "author": "Bryant Richardson",
  "license": "ISC",
  "dependencies": {
    "@keystroke/observablehq": "https://api.observablehq.com/@keystroke/observablehq.tgz?v=3",
    "@observablehq/runtime": "^4.8.2",
    "cors": "^2.8.5",
    "express": "^4.17.1",
    "node-fetch": "^2.6.1",
    "puppeteer": "^8.0.0"
  }
}
```
The notebook dependency above is not version-pinned, so every time you run `npm update` it fetches the latest version of the notebook and, if the content has changed, rewrites the integrity hash in package-lock.json.
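For what it's worth, if I ever did want to pin, my understanding is that the tgz endpoint also accepts a notebook version in the URL (the `@463` below is a made-up version number, not a real one from my notebook), so the dependency would only move when I bump it:

```json
"dependencies": {
  "@keystroke/observablehq": "https://api.observablehq.com/@keystroke/observablehq@463.tgz?v=3"
}
```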
If you look back at the embed options for a notebook, one of them is “Runtime with JavaScript”, and it uses the API endpoint for the notebook content that I’m using to add the script tag to the empty page. I then use the `etag` header on that endpoint to detect when the notebook content has changed, which means I can reuse the page on subsequent requests; that should be faster than relying on the browser cache and a new page load. I think you can rely on this endpoint to the same degree you can rely on the embed endpoint for fetching notebook content. Something to consider.
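To make the etag check concrete, here's roughly the shape of it (the names and the decision helper are mine, simplified from what I'm actually running):

```javascript
// The same API endpoint the "Runtime with JavaScript" embed uses for
// notebook content; the URL here is for my notebook as an example.
const NOTEBOOK_URL =
  "https://api.observablehq.com/@keystroke/observablehq.js?v=3";

// HEAD request: we only want the etag header, not the notebook body.
async function fetchNotebookEtag(fetchImpl = fetch) {
  const res = await fetchImpl(NOTEBOOK_URL, { method: "HEAD" });
  return res.headers.get("etag");
}

// Pure decision helper: reuse the already-loaded page only when we have a
// cached etag and it matches the latest one from the server.
function shouldReusePage(cachedEtag, latestEtag) {
  return cachedEtag !== null && cachedEtag === latestEtag;
}
```

So on each request the server compares the stored etag against a fresh HEAD response, and only reloads the puppeteer page when they differ.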
Thanks for the links! I will take a look at the models and see if I can make use of those flows. I was thinking of refactoring how I’m running the nodejs notebook, as currently it’s called via a single function. I was thinking I could hook into the runtime a bit more deeply and “replace” cell values at runtime, sort of like how “import with” works. This would let me declare dependencies like express or puppeteer as their own named cells, and from nodejs I can set their values to the actual imported libraries. That way I won’t have to pipe a “context” object all the way through my code acting as a “bag of globals”; instead I can reference the cells directly. It also gives me the opportunity to put mock values in those cells as defaults, so the notebook can potentially execute directly in the web editor while I’m working on the code (instead of just defining functions that go uncalled). The main wrinkle is that I don’t want to induce errors when the actual nodejs values aren’t available: the cells can’t just reference undeclared magic variables, the way the `process` object would otherwise be referenced to read env vars.

However, this modification isn’t strictly necessary now that the code is already written, and I’m not sure the code is complex enough to benefit greatly from such a change. But I am curious to further develop the idea of implementing a “traditional” nodejs application on this platform, and how wrapping the notebook for execution and using the topological run order could work.
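Here's a toy model of what I mean (deliberately not the real `@observablehq/runtime` API, just the shape of the idea): the notebook defines a named cell with a browser-safe mock default, and the node wrapper redefines it with the real library before anything runs.

```javascript
// Toy stand-in for a notebook module: cells are just name -> value here,
// with no reactivity or topological ordering.
class ToyModule {
  constructor() {
    this.cells = new Map();
  }
  define(name, value) {
    this.cells.set(name, value);
    return this;
  }
  // Replace an existing cell's value, like "import with" swaps a cell.
  redefine(name, value) {
    if (!this.cells.has(name)) throw new Error(`no cell named ${name}`);
    this.cells.set(name, value);
    return this;
  }
  value(name) {
    return this.cells.get(name);
  }
}

// Notebook side: "express" is a named cell with a mock default, so the
// notebook can still evaluate inside the web editor.
const notebook = new ToyModule().define("express", { mock: true });

// Node side: swap in the real library (a stand-in object here).
notebook.redefine("express", { mock: false, listen: () => {} });
```

In the real runtime I believe `Module.redefine` is the analogue of this, which is what makes me think the approach is workable without forking anything.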
I think I’ll probably look at the post message interfaces you linked first. They seem like a more direct boon to what I’m writing. I still need to refine the design model for how my functions will work and what they’ll have access to in terms of the raw request/response objects. For example, if I ever want to support setting cookies I will need a way to let declared functions set them, but I would like to keep the ability to have simple functions that just take an input and return an output without needing to interact with the “lower-level” http layer. Although now that every site is asking me to accept their cookies, I’m feeling like I want to do my part and just not use them at all lol