DuckDB-Wasm out of memory error

I have been using DuckDB-Wasm for data wrangling and connecting it to a visualization pipeline. I have attached data files totalling 2.1 GB to the notebook (each under 50 MB), which I load into a DuckDB instance. It worked fine for weeks, but it has suddenly stopped working with this error:

catchDB = Error: Out of Memory Error: could not allocate block of 262144 bytes (3435773952/3435921408 used) 
Database is launched in in-memory mode and no temporary directory is specified.
Unused blocks cannot be offloaded to disk.

Launch the database with a persistent storage back-end
Or set PRAGMA temp_directory='/path/to/tmp.tmp'
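For context, the second fix the error suggests would be set as a pragma like this (the path is the placeholder from the error message, not a real location; I am not sure whether either option applies inside a browser, which is part of my question):

```sql
-- Keep the in-memory database, but point DuckDB at a spill directory
-- so blocks can be offloaded to disk instead of failing allocation.
PRAGMA temp_directory='/path/to/tmp.tmp';

-- Optionally cap memory so spilling starts before allocation fails.
PRAGMA memory_limit='2GB';
```

The first suggested fix would instead be opening the database against a file-backed store rather than in-memory mode at launch.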

I am not clear on what I need to do to get it working again. Please help.

Hey @ameyabp - thanks for reaching out, and sorry about the sudden error. Are you able to share a notebook with us so we can reproduce the error and investigate? Thanks,

Mike

It is not happening anymore. I re-ran the notebook without loading any of the attached files into the DuckDB instance, and it started working after that, even once I loaded all the files again.

Is the browser environment now supposed to have its own swap space for persistent storage? Can DuckDB not access that swap space to offload data to disk?