This is more of a conceptual question, but I figure some folks around here might be able to help.
I’m trying to wrap my head around how geospatial data is represented in a 3D space when projected onto a flat Mercator view (i.e. not a globe or sphere).
For instance, say I wanted to render the following polygon in Three.js and be able to pan and zoom around the geometry (as you would expect in a web map).
Yeah, maybe that wasn’t quite clear. I’m basically trying to understand how something like a vector map (e.g. Mapbox) renders geometries: how to render a 2D geometry (on a plane?) with the ability to pan and zoom.
To do that with something like THREE.js (or WebGL), I assume latitude and longitude still need to be converted to a 3D coordinate space, and I’m trying to understand what that space is.
Fundamentally, this is a two-dimensional problem. That is, you’ve got 2D data (lat/lon pairs) and you want a function that sends that data to a 2D rectangle. There’s a pretty simple formula that accomplishes this task for the Mercator projection. Given a point with latitude $\varphi$ and longitude $\theta$ in radian measure, it is
$$(\theta, \varphi) \longmapsto \left(\theta,\; \ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi}{2}\right)\right).$$
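That formula can be sketched in a few lines of plain JavaScript (the function name and degree-input convention are my own choices, not from any particular library):

```javascript
// Mercator projection: maps (lat, lon) in degrees to planar (x, y).
// Longitude maps linearly; latitude is stretched by ln(tan(pi/4 + phi/2)),
// which blows up toward the poles -- web maps typically clamp around ±85°.
function mercator(latDeg, lonDeg) {
  const phi = (latDeg * Math.PI) / 180;   // latitude in radians
  const theta = (lonDeg * Math.PI) / 180; // longitude in radians
  return {
    x: theta,
    y: Math.log(Math.tan(Math.PI / 4 + phi / 2)),
  };
}

// The equator maps to y ~ 0 (up to floating-point error) and the
// prime meridian to x = 0; at 45°N, y = ln(1 + sqrt(2)) ~ 0.8814.
console.log(mercator(0, 0), mercator(45, 90));
```

Once your lat/lon pairs are sent through a function like this, they are just ordinary 2D coordinates, and you can feed them to Three.js geometry (with z = 0), an SVG path, or a canvas as you would any other planar data.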
I’ve implemented exactly that formula in this map. Note that panning and zooming become a simple matter of manipulating an SVG transformation, as I’ve done with this map.
More generally, you might be interested in the topic of map projection, which is quite broad. In that context, yes, you certainly want to understand that latitudes and longitudes describe points on the globe, and a key question for any projection is how shapes on the globe are distorted when projected to the plane. There are a number of references for that topic. From a mathematical perspective, I like Timothy Feeman’s Portraits of the Earth. I relied on it a bit when I wrote these notes on the topic.