Issue Translating Teachable Machine Code Sample to Run in Observable

The following notebook successfully loads an exported model from Google’s Teachable Machine:

However, attempting to implement the exported model's pattern in Observable throws a runtime error: `T.fromPixels is not a function`.

The usual recommendation is to use `tf.browser.fromPixels`, but the code snippet generated by the Teachable Machine export uses the `@teachablemachine/image` package, whose built-in methods consume the webcam's output.

Any suggestions on debugging this so the code runs in Observable are appreciated. The goal is a notebook anyone can fork and simply swap in their own pre-trained image-detection model generated by Teachable Machine.

Notebook: Using A Trained Teachable Machine Model in Observable / Mario Delgado | Observable

P.S. The pattern in the notebook How to build a Teachable Machine with TensorFlow.js is helpful, but it doesn't map directly onto loading a pre-trained model with its associated weights.

It seems that `@teachablemachine/image` requires tfjs 1.3.1. Loading it via Skypack also seems to bundle its own copy of tfjs, so you may want to import it via esm.run or esm.sh instead (and do the same for tfjs).
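The suggested alternative might look something like the following Observable cells. These are a hypothetical sketch: I haven't verified that these exact CDN URLs resolve or that the versions match.

```js
// Hypothetical: pull both packages from esm.sh so tfjs isn't bundled twice
tf = import("https://esm.sh/@tensorflow/tfjs@1.3.1")

tmImage = import("https://esm.sh/@teachablemachine/image@0.8.5")
```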

Resolution:

Replaced:

tf = require("@tensorflow/tfjs@latest/dist/tf.min.js")

With:

tf = require("@tensorflow/tfjs@1.3.1/dist/tf.min.js")

And left the import of tmImage as is:

tmImage = import("https://cdn.skypack.dev/@teachablemachine/image@0.8.5?min")
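With the pinned tfjs in place, the rest of the notebook can follow the standard Teachable Machine export snippet, sketched below as Observable cells. The model URL is a placeholder for your own export, and the `tmImage.load` / `tmImage.Webcam` calls are taken from the library's documented API; treat this as an unverified sketch, not the notebook's exact code.

```js
// Placeholder -- replace with the URL of your own exported model
modelURL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/"

model = tmImage.load(modelURL + "model.json", modelURL + "metadata.json")

webcam = {
  const cam = new tmImage.Webcam(200, 200, true); // width, height, flip
  await cam.setup();  // prompts for webcam access
  await cam.play();
  return cam;
}

// Generator cell: re-yields predictions as frames arrive
prediction = {
  while (true) {
    webcam.update();  // grab the latest frame onto webcam.canvas
    yield await model.predict(webcam.canvas); // [{className, probability}, ...]
  }
}
```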