(Beta) Reach insights faster with the power of AI

Supercharge your workflow with Observable’s new AI Assist. Powered by GPT-4, AI Assist helps Pro and Enterprise users write SQL and generate charts quickly, accelerating the path to new insights. All you need to get started is an OpenAI API token!

SQL Assistance

Writing SQL queries is an essential part of preparing data for visualization in Observable, but not everyone feels as comfortable writing SQL as they do D3 or Plot. AI Assist makes it easier than ever to write SQL from scratch, and to debug and explain queries on your path to visualization. Grant it access to your schema, and AI Assist can craft complex, multi-table queries from a plain-language description of the results you want. You’ll spend less time digging through online documentation, and more time actually exploring your data.

[Screenshot: the AI Assist panel]

Data Analysis Pairing

Since AI Assist is built on ChatGPT, you can ask it for help during your data analysis workflow. Not sure what type of chart you should use? Send a preview of the data to AI Assist and ask it for a recommendation. It’s like having a pairing partner with you at every step of the journey.

Experimental Features

  • JavaScript Assistance: AI Assist can help you write, debug, and explain JavaScript code. We’re still training it on Observable JavaScript, so pardon any incorrect results it may return.
  • Observable Plot Assistance: AI Assist has a basic understanding of Observable Plot, and is able to generate basic charts. It also has fundamental knowledge of D3, which can help you create more complex charts.

Tell us about your experience

AI Assist has been released as an open beta, and we value your feedback. Has AI Assist been helpful in your workflow? Let us know!

To learn more about AI Assist, visit the documentation.


If I understand it correctly, I’d need a paid OpenAI API account to get this working, and their charging model is per-token. So in practice, can you share any experience of how much it ends up costing end users for typical iterative use of AI Assist with Observable (e.g. assuming GPT-4)?

I know they list the per-token costs on their website, but it is less obvious to me how many tokens are consumed via your interface (e.g. through ‘hidden’ system prompts injected alongside user prompts).


I think that’s all correct.

But my impression is that the intended use model is as a “code assistant”: you get a sidebar where you can type queries to help with your programming. Furthermore, the responses should be Observable-specific, though I guess they’re still working on training.

I don’t think that end users who browse your notebooks generally have access to your queries. Even if they do, you can always clear your chat history. Thus, it’s not necessarily the case that you rack up expenses when end users browse your notebooks.

Of course, you could build an application that accesses ChatGPT and generates output based on ChatGPT’s response. I think you’d want to use fetch or OpenAI’s JavaScript library for that, though.
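To sketch what that might look like, here is a minimal, hedged example of calling OpenAI’s chat completions endpoint directly with fetch. The endpoint URL, payload shape, and model name follow OpenAI’s public API; the helper names (`buildChatRequest`, `askChatGPT`) are hypothetical, not part of any Observable or OpenAI library:

```javascript
// Hypothetical helper: build the URL and fetch options for a single
// chat completion request to OpenAI's public API.
function buildChatRequest(apiKey, userPrompt, model = "gpt-4") {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: userPrompt }]
      })
    }
  };
}

// Hypothetical helper: send the request and return the assistant's reply.
async function askChatGPT(apiKey, userPrompt) {
  const { url, options } = buildChatRequest(apiKey, userPrompt);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const json = await res.json();
  return json.choices[0].message.content;
}
```

In an Observable notebook you could wire `askChatGPT` up to a text input and viewer cell; the key point is that each call is billed per-token against your own OpenAI account.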

Sorry - I was ambiguous in my Q. By ‘end user’ I meant me as a programmer using Observable’s code assist. I realise that once built, anyone viewing a page I have created doesn’t use my OpenAI access.

Currently I use the ChatGPT 4 interface (i.e. not via the API), and I find it remarkably helpful in speeding up the tedious coding work. Unfortunately, although I pay for this with a flat-rate subscription, it doesn’t cover API use, which is charged on a usage basis.

I’d be interested to find out what the Observable specific trained model gives that the normal ChatGPT interface doesn’t. But don’t want this to be a costly exploration.

Thanks for your question, jwoLondon! When chatting in “notebook” mode (when a cell is not selected or when any cell except a SQL cell is selected), Observable sends approximately 2,900 tokens with the first message to help ChatGPT understand things like Observable JavaScript and the Plot API, because they are not currently included in the model’s training data. When chatting about a SQL cell, we send far less, approximately 100 tokens, which depends on the complexity of the database schema.

These numbers are just approximations and they are subject to change as we continue to fine-tune the prompts to return the most helpful responses. Also keep in mind that you can choose from GPT-4 and GPT-3.5 Turbo, which have dramatically different costs per token. The length and the nature of your conversation will determine the final cost, which could vary widely based on the use case.

To give you a ballpark, here are a couple of example prompts and their token counts based on average usage data:

A typical notebook chat:
How do I filter data in an array?
The total exchange, including the response from GPT-4, consumed roughly 3700 tokens.

A typical SQL chat:
List all the clients from California that made a purchase in the last 30 days
The total exchange, including the response from GPT-4, consumed roughly 400 tokens.
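Turning token counts like these into dollar figures is simple arithmetic. The sketch below assumes illustrative per-token rates (GPT-4 8K pricing as published by OpenAI at the time: $0.03 per 1K prompt tokens, $0.06 per 1K completion tokens); check OpenAI’s pricing page for current numbers, and note the prompt/completion split below is a guess, not measured data:

```javascript
// Estimate the USD cost of one chat exchange from its token counts.
// Default rates are assumptions based on GPT-4 8K pricing at the time.
function estimateCostUSD(promptTokens, completionTokens,
                         promptRate = 0.03, completionRate = 0.06) {
  return (promptTokens / 1000) * promptRate +
         (completionTokens / 1000) * completionRate;
}

// A ~3700-token notebook exchange, split (hypothetically) as
// 3000 prompt / 700 completion tokens:
// estimateCostUSD(3000, 700) → 0.132, i.e. about $0.13
```

By the same arithmetic, a ~400-token SQL exchange costs on the order of a cent or two, so the notebook-mode system prompt dominates the cost of short conversations.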


I tried an example on both Observable’s AI Assist and OpenAI’s web interface to gain some context.

Observable’s AI Assist

I typed the following query into Observable’s new AI Assist:

Create a cell using Observable Plot that generates a bar graph with the following data: Ohio State: 77, Michigan 3. Add the title “Halftime Score”.

Here’s the result:

You can check the notebook to see the code, which I think is reasonably good. I did have to add the return statement, though.

In my first attempt, I did not specify Observable Plot (or any specific tool), and the AI Assistant responded with D3 code, which produced a much less attractive bar plot that you can also see in the notebook.

OpenAI’s web interface

I typed the exact same queries into OpenAI’s web interface. When I typed the request without specifying Observable Plot, it responded with Python code using Matplotlib. I’ve found that it generally responds with Python code if you ask for a program without specifying the language.

When I typed the query in with the Observable Plot specification, I received the response below. This response is pretty good, but still much more than one return statement away from a complete graph.


Observable Plot is a JavaScript library used for creating simple, interactive, and stylish visualizations with minimal code. Here’s a simple implementation of a bar graph using Observable Plot for the given data:

Firstly, make sure to include the Observable Plot library in your HTML:

<script src="https://cdn.jsdelivr.net/npm/@observablehq/plot@0.2.0/dist/plot.umd.min.js"></script>

Then, the JavaScript code for the graph is as follows:

const data = [
  { team: "Ohio State", score: 77 },
  { team: "Michigan", score: 3 }
];

const figure = Plot.plot({
  y: {
    grid: true
  },
  marks: [
    Plot.barY(data, { x: "team", y: "score", fill: "team" }),
    Plot.text(data, { x: "team", y: "score", text: "score", dy: -2 }) // to position the scores just above the bars
  ],
  title: "Halftime Score"
});

document.body.appendChild(figure);

You can add this JavaScript code to your webpage, and it should generate the bar graph with the title “Halftime Score”.

If you’re working directly within ObservableHQ’s environment, you would use their cell-based system and the code might differ slightly due to the nature of how Observable works.

Super disappointed to see this is a paid feature, given how it’s implemented. ObservableHQ wants to charge me for the privilege of using a third-party service I’m already paying for??? The company had an opportunity here to justify charging people for notebooks by providing useful new features, instead of just removing previously free features and charging for them. The company should be paying for the AI interactions themselves! Add quotas for paid users to ensure the cost doesn’t run high.

You can literally do all of this for free yourself, which many folks have been playing around with. Here is a notebook where I did exactly that, having users enter an API key and calling OpenAI with it, including the function-calling features:

The notebook has browser password-manager integration to save the API key, and an implementation of an API client supporting streaming and function calling. I started messing around with different ways of composing chatbot personalities but never finished it.
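As a rough sketch of the streaming part (not the notebook’s actual code): with `"stream": true`, OpenAI’s API returns server-sent events, one JSON chunk per `data: ` line. The function names here are hypothetical, and the event format is an assumption based on OpenAI’s documented SSE shape:

```javascript
// Extract the text delta from a single SSE line, or return null for
// non-data lines and the terminating "data: [DONE]" sentinel.
function parseStreamLine(line) {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}

// Hypothetical streaming client: invokes onToken for each text fragment
// as it arrives, so the UI can render the reply incrementally.
async function streamChat(apiKey, userPrompt, onToken) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      stream: true,
      messages: [{ role: "user", content: userPrompt }]
    })
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any incomplete trailing line for next read
    for (const line of lines) {
      const token = parseStreamLine(line);
      if (token) onToken(token);
    }
  }
}
```

The buffering matters because a network read can end mid-line; only complete lines are parsed, and the partial tail is carried into the next chunk.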

Thanks for sharing your perspective and feedback. We hear you!

This release is the first step toward integrating and customizing the AI experience in Observable. Through AI Assist, we are integrating Observable context, such as code and databases, into your flow as you work. Observable engineers the prompts so that the AI returns helpful responses when chatting about Observable JavaScript and Plot, while giving you the ability to accept AI suggestions with a single click. You can target specific cells you want to chat about, and retain the chat history for that cell for several days. You can clear and start new conversations quickly, and you can share AI Assist conversations with other users.

OpenAI’s ChatGPT and other AI assist products are helping all of us move faster in our work. Our aim is a customized experience that keeps you in flow, helping you answer questions, debug errors, and get your work done more efficiently.

In future releases, we plan to improve the responses returned from AI Assist, making them more consistent and helpful while working in the notebook. We are also evaluating ways to help more users try AI Assist than the current “bring your own token” model allows.

We know we have lots to improve. We appreciate your feedback as we iterate on these first steps.


@mattbrown Did you guys look at training a fine-tuned model for the ObservableHQ platform? This would require you to have your own account though (not bring your own key):