Syntax error, not JavaScript?

Do cells support standard JavaScript? I get syntax errors a lot, without any further explanation.

Some examples:

k = 2
k = k + 1


console.log('a');

document.createElement("div").innerHTML = 'fef'

How does this website handle security? Allowing people to embed code is a phishing threat, right?


Please see this introduction:

Cells are typically JavaScript expressions or blocks. So to define a cell k whose value is two, you’d say:

k = 2

Or if you wanted it to have the value three, you could say:

k = {
  let k = 2;
  k = k + 1;
  return k;
}

Similarly to create your DIV, you could say:

{
  const div = document.createElement("div");
  div.innerHTML = "fef";
  return div;
}

Or more idiomatically:

html`<div>fef`

Regarding security, notebook code is sandboxed and run on a separate domain (a subdomain of observableusercontent.com) per user. Notebooks can display arbitrary content within the body of a notebook, but they can’t access private data or privileged endpoints.

Thank you!

Perhaps you could make that “syntax error” message clickable and link it to the above document, or put a clickable question mark next to it linking to the document?

Also, maybe the scratchpad should have a link to that document under “Introduction”?


More about security on this site in this thread, until the Observable devs put out a proper official CORS FAQ:

I think it’s a good approach for now: not ideal for devs trying to extend the platform, but good enough for average devs trying it out without creating too many security holes that would shoot Observable in the foot.

I’m thinking about loading large data from my Dropbox account. That could be dangerous, right?

Define “large”.

I’ve done some large data munging in this collection, loading data from GitHub (not recommended, since GitHub is mainly for code repos).

I’d be interested to learn about your large dataset and whether loading it from Dropbox integrates well with Observable’s CORS handling.

The only danger of loading large data in Observable notebooks is that your notebook will become unresponsive, and browsers like Firefox Quantum complain about that quite loudly if you don’t sanitize, trim, compress, chunk, or pipe your data properly on notebook load. I’m sure you’ve seen many of those clunky prototypes on this site.

I’d suggest you convert your data to the Apache Arrow format or a similar flat array structure rather than just dumping megabytes of CSV into a notebook and trying to load and parse it. Start here:
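
In a cell, that might look roughly like this (the URL is hypothetical, the host needs to send CORS headers, and this assumes apache-arrow 4.x, where Table.from accepts a serialized IPC buffer):

arrow = require("apache-arrow@4")

table = {
  // Fetch the serialized Arrow file and deserialize it into a Table.
  const response = await fetch("https://example.com/data.arrow"); // hypothetical URL
  const buffer = new Uint8Array(await response.arrayBuffer());
  return arrow.Table.from(buffer);
}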

Is this also you, or just some other person taking Dropbox for a spin on Observable? :slight_smile:

Large data could be, for example, a pretrained deep learning model, 300 MB or so.

I forget where I saw an example of loading large data from Dropbox; it might have been Google’s Colab or the https://iodide.io/ project.

Yeah, 300 MB is too large for any website, Observable included, unless you chunk, pipe, and stream it via HTTP/2 for data viz on the client. Don’t even go there unless you know what you are doing. It can be done; see Dask, MapD, Cassandra, Hadoop, Spark, Torch, TensorFlow, big data and such to build your back-end for that kind of massive data analytics :slight_smile:
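
If you do go there, the rough idea is to consume the response incrementally with the Streams API instead of buffering the whole file. A minimal sketch (hypothetical URL; this cell just counts bytes where you would actually aggregate or visualize each chunk):

progress = {
  const response = await fetch("https://example.com/big-file.bin"); // hypothetical URL
  const reader = response.body.getReader();
  let received = 0;
  while (true) {
    // Read the next chunk as it arrives instead of waiting for the full download.
    const {done, value} = await reader.read();
    if (done) break;
    received += value.length; // process or aggregate the chunk here
    yield received;           // the cell re-renders with the running byte count
  }
}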

I’m thinking of using IPFS or WebRTC. I really like the idea of Observable, but compared with Jupyter, it lacks raw GPU access and a filesystem for large data.

I think Observable is on par with Jupyter notebooks at this point. What you are lacking is a proper data streaming/ML back-end, and no front-end framework can give you that today, given the massive datasets you want to load, parse, and tackle. It’s best done on the server side, yes, via direct compute on the GPU, and even Python is not the best at that. Do take a look at MapD to start with, and forget Dropbox :slight_smile: Good luck!

P.S.: you can do GPU compute on the client with WebAssembly and WebGL on the front-end in Observable, but that’s mostly for rendering and light math compute.
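
For instance, here’s a tiny sketch of that kind of light GPU math in a cell: a fragment shader evaluates sin(x * y) for every pixel and the result is rendered to a canvas. Purely illustrative, not a general GPGPU pipeline:

{
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 256;
  const gl = canvas.getContext("webgl");

  // Pass-through vertex shader; all the math happens per fragment.
  const vsSource = `
    attribute vec2 position;
    void main() { gl_Position = vec4(position, 0.0, 1.0); }`;
  const fsSource = `
    precision mediump float;
    void main() {
      vec2 p = gl_FragCoord.xy / 256.0 * 8.0;
      float v = 0.5 + 0.5 * sin(p.x * p.y); // evaluated on the GPU for every pixel
      gl_FragColor = vec4(v, 0.2, 1.0 - v, 1.0);
    }`;

  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
  }

  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
  gl.linkProgram(program);
  gl.useProgram(program);

  // Two triangles covering clip space, so the fragment shader runs over the whole canvas.
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER,
    new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(program, "position");
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

  gl.drawArrays(gl.TRIANGLES, 0, 6);
  return canvas;
}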

To be honest, I’m more interested in Observable itself. I have been thinking about what a better Jupyter would look like; to me, Jupyter is not social enough.

But from a startup engineering point of view, relying on a back-end for GPU computing is very costly. Most of the time, when people browse a Jupyter notebook they don’t actually do any compute; they spend most of the time reading and consuming the text content. Yet I think both Google’s Colab and some NVIDIA solution assign a GPU-attached VM to each opened notebook, at a cost of something like $2 per hour?

I think it makes a lot of sense to move compute to the clients. Most users would have enough GPU power to do lots of things, like deep learning inference, for example.
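
For example, in-browser inference with TensorFlow.js could already run in an Observable cell; a rough sketch (the model URL is hypothetical, and the input shape is just a placeholder for whatever the model expects):

tf = require("@tensorflow/tfjs@1")

prediction = {
  // Load a layers model served as model.json plus weight shards (hypothetical URL).
  const model = await tf.loadLayersModel("https://example.com/model/model.json");
  const input = tf.zeros([1, 224, 224, 3]); // placeholder input shape
  const output = model.predict(input);
  return output.dataSync(); // copy the result back from the GPU as a typed array
}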

Perhaps a better way could be a GPU virtualization solution where one VM can be shared by many end users. I found this solution, http://www.rcuda.net/, but it doesn’t seem to be open source, and its client is native; there is no JavaScript client yet.


Good points. Yeah, I think Observable is getting better with every new release. I and other active devs on this forum who have experience with both Python and JS notebooks have been pushing to make it a better, more social and collaboration-friendly JS notebooking platform for devs, but please keep in mind it’s only for the front-end.

Google Colab and Kaggle are still the best places to run DL/ML algorithms and fine-tune things, and even though you can now embed image recognition models in Android apps and web apps with TensorFlow, Avro, and others, I think that, short term, platforms like Observable will stay point-and-click and stream heavy data-algorithm results from back-end GPU processing for years to come.

I might be wrong about that, but that’s my crystal-ball read. MapD (I think they renamed recently) is one platform that incorporates Apache Arrow and other things so you can do what you’re attempting with “large” datasets in a way that’s approachable for average devs. Cassandra is my pick for a wide-column DB; the rest is sort of up to you to swing it to the browser without killing it and waiting for megabytes of data to load. HTTP/2 has made some good progress on that front. WebAssembly can help with some final data recalibration on the front-end, but I would not push it further than that.

Hope that helps

:hugs: