Polygon Soup – For 3D Artists
I had originally begun writing this as part of the Cross Product post, but realised it was enough of a topic to break out into its own short post. The Polygon Soup is an interesting one, especially if you’d never considered how or why 3D meshes are stored under the hood…
Mesh data is commonly referred to as Polygon Soup because there’s no explicit information about the structure of the mesh. Triangle ABC might be near triangle DEF in space, but in the mesh data they could be stored anywhere in relation to each other. This makes it all rather soupy and loosely connected. So how does our 3D software understand where to put everything, and how it’s connected?
Let’s look at what a mesh data structure is in Unity. Meshes here are a collection of lists of data (specifically arrays in C#). With the exception of Triangles, these lists all maintain the same order, so vertex[20] would correspond to color[20] – there’s a small sketch of this just after the list below. This data includes (but is not limited to):
- Vertices – A list of positions in space, one per vertex.
- Normals – A list of normalised vectors for each vertex. Unity’s documentation even notes that these are not face normals.
- Colors – The vertex colours of the mesh
- UVs – The U and V co-ordinates of each vertex in texture space; we’ll touch on this later. There can be more than one UV list (sometimes referred to as a UV channel).
- Triangles – This is the outlier here…
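Before we get to Triangles, here’s a minimal sketch of those parallel lists in action. It assumes a GameObject with a MeshFilter attached, and index 20 is just an arbitrary example vertex:

```csharp
using UnityEngine;

public class MeshSoupPeek : MonoBehaviour
{
    void Start()
    {
        // Assumes this GameObject has a MeshFilter with a mesh assigned.
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;

        Vector3[] vertices = mesh.vertices; // positions, one per vertex
        Vector3[] normals  = mesh.normals;  // vertex normals (not face normals)
        Color[]   colors   = mesh.colors;   // vertex colours (empty if none are painted)
        Vector2[] uvs      = mesh.uv;       // first UV channel

        // These lists share the same ordering, so index 20 in one
        // corresponds to index 20 in the others.
        int i = 20; // assumes the mesh has at least 21 vertices
        Debug.Log($"vertex {i}: pos {vertices[i]}, normal {normals[i]}, uv {uvs[i]}");
        if (colors.Length > 0)
            Debug.Log($"vertex {i}: colour {colors[i]}");
    }
}
```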
It is our triangles list that forms more than just a set of points in space. The triangle list in Unity is a multiple of 3 in length, as every 3 entries form one triangle, and every entry refers to a vertex index.
```
triangles[0] = 12   // first vertex index of the triangle
triangles[1] = 108  // second vertex index
triangles[2] = 249  // third vertex index
```
So in the above example, the first 3 entries in our triangles array tell us that our first triangle uses the vertex indices 12, 108, and 249. These numbers are then used to find the appropriate entries in our other arrays: vertices[12] (the 13th entry in our vertices list) is that vertex’s position, colors[12] would be its colour, and so on.
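To make that concrete, here’s a small sketch that walks the triangles array three entries at a time, pulls out the three vertex positions, and (tying back to the Cross Product post) derives each face normal from them. The class and method names are just for illustration:

```csharp
using UnityEngine;

public static class TriangleWalker
{
    // Walks a Unity triangle index list three entries at a time and derives
    // a face normal for each triangle via the cross product of two edges.
    public static void PrintFaceNormals(Mesh mesh)
    {
        Vector3[] vertices = mesh.vertices;
        int[] triangles = mesh.triangles;

        for (int i = 0; i < triangles.Length; i += 3)
        {
            // Each entry is an index into the vertex (and colour, normal, UV) arrays.
            Vector3 a = vertices[triangles[i]];
            Vector3 b = vertices[triangles[i + 1]];
            Vector3 c = vertices[triangles[i + 2]];

            // Cross product of two edges gives a vector perpendicular to the face.
            Vector3 faceNormal = Vector3.Cross(b - a, c - a).normalized;
            Debug.Log($"triangle {i / 3}: face normal {faceNormal}");
        }
    }
}
```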
Unity builds our triangles from our triangles array, and uses the vertex indices it contains to link all of our mesh data together for rendering. But if we want to perform any math like casting rays for collision or testing for intersection – ray tracing specifically being one key example – this data structure is not suited for the task.
If we wanted to ask our mesh anything about its shape, we would need to run through our triangle list (i.e. loop over it) and check every single triangle against whatever we care about.
So in the ray tracing example, if I wanted to use ray traced global illumination to work out bounced lighting in a scene, then for each ray I cast from the camera into the scene (let’s just assume it contains only our mesh for now) I would need to test every triangle in our mesh to see if the ray hits it. The ray would then need to bounce in another direction, performing the same check again.
We’d keep doing this until we hit the maximum number of bounces we allow, or the ray leaves our scene. We’d also need to do this for enough rays to give us a clear enough picture of our bounce lighting. Our Polygon Soup doesn’t give us a fast way to perform these kinds of computations, which is why spatial partitioning structures like an octree, or more suitably for this example, a Bounding Volume Hierarchy (BVH), would be used.
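For a feel of how expensive this gets, here’s a rough sketch of that brute-force check: one ray tested against every triangle in the soup, using the standard Möller–Trumbore ray/triangle intersection test. This isn’t Unity’s own raycasting, just an illustration of why the cost scales with triangle count:

```csharp
using UnityEngine;

public static class SoupRaycast
{
    // Brute-force ray test against a polygon soup: every triangle is checked,
    // so cost grows linearly with triangle count. This is the problem a BVH solves.
    public static bool Raycast(Vector3 origin, Vector3 dir, Vector3[] vertices, int[] triangles, out float hitDistance)
    {
        hitDistance = float.PositiveInfinity;
        bool hit = false;

        for (int i = 0; i < triangles.Length; i += 3)
        {
            Vector3 a = vertices[triangles[i]];
            Vector3 b = vertices[triangles[i + 1]];
            Vector3 c = vertices[triangles[i + 2]];

            // Möller–Trumbore ray/triangle intersection.
            Vector3 edge1 = b - a;
            Vector3 edge2 = c - a;
            Vector3 p = Vector3.Cross(dir, edge2);
            float det = Vector3.Dot(edge1, p);
            if (Mathf.Abs(det) < 1e-8f) continue;   // ray is parallel to the triangle plane

            float invDet = 1f / det;
            Vector3 toOrigin = origin - a;
            float u = Vector3.Dot(toOrigin, p) * invDet;
            if (u < 0f || u > 1f) continue;

            Vector3 q = Vector3.Cross(toOrigin, edge1);
            float v = Vector3.Dot(dir, q) * invDet;
            if (v < 0f || u + v > 1f) continue;

            float t = Vector3.Dot(edge2, q) * invDet;
            if (t > 1e-8f && t < hitDistance)       // keep the nearest hit
            {
                hitDistance = t;
                hit = true;
            }
        }
        return hit;
    }
}
```

Every ray pays the cost of the full loop, so with millions of triangles and millions of rays the cost explodes; a BVH lets us skip whole groups of triangles a ray can’t possibly hit.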
This concludes my brief overview of the Polygon Soup, and Unity’s version of it. I’ve started looking at some more ways to solidify this “soup”, or at least better manipulate it dynamically. Check here on Twitter/X if you want a little preview.