3D model UV unwrapping principles

Earlier this year, I wrote a tutorial for MAKE magazine on how to make a stuffed animal of a video game character. The technique takes a given 3D model of a character, along with its textures, and programmatically generates sewing patterns. I've written a general summary and uploaded the source code to GitHub; here is a more in-depth explanation of the math that makes this possible.


The goal of my project is to create a printable sewing pattern that, once stitched together, approximates the starting 3D model (in this case, a video game character). The core idea is to use the 3D model's texture image as the sewing pattern: pieces of the texture image can be joined at their UV seams to reconstruct the original 3D shape. The model's initial texture image may not be laid out with this reconstruction in mind, but that can be remedied by creating a new set of UVs for the original model, with seams placed where they make sense for sewing. Given the original UVs and the new UVs, a transformation matrix can be calculated for each face to transform the old texture image into the new, optimized one. How faithfully the stitched result reproduces the model depends on where the seams are placed and on how much distortion the UV unwrapping algorithm introduces.

As mentioned in the general summary, a 3D model consists of several different features. It has vertices, edges, and faces that define its 3D shape; a set of UV coordinates that define how the texture is projected onto each face; and texture images that define how the 3D model is shaded.

UV mapping is the process of projecting a 3D surface onto a 2D texture plane, and it is well studied in the field of computer graphics. Each face of the 3D model is mapped to a face on the UV map, and the UV map preserves the edge relationships between the model's faces. Dr. Yuki Igarashi of the University of Tsukuba recognized this property of UVs and, in her papers Plushie: An Interactive Design System for Plush Toys (SIGGRAPH 2006) and Pillow: Interactive Flattening of a 3D Model for Plush Toy Design (SIGGRAPH 2007), used UVs to create sewing patterns from dynamically created 3D models. The specific UV mapping algorithm she used is ABF++.

Since a UV map can serve as a sewing pattern, so can the texture image, because the UVs map the texture image onto the 3D model. The texture can be printed onto fabric, and a stuffed animal sewn from that pattern retains the color information of the original 3D model.

However, not every UV map is suitable for sewing pattern creation. As you can see above, the UVs are folded over each other, so only half of the head and body appear on the map. This is a popular space-saving technique in video game graphics. The head is also allotted much more UV space than the body, so it shows finer detail in-game. These optimizations are unsuitable for sewing patterns, because we want the body's proportions in 3D space to carry over roughly unchanged into 2D UV space.

Difference in final resolution, by Igarashi

The seams of the UV clusters will become the seams on the final stuffed animal. Starting from the same 3D model, the location of the seams will determine the resolution of the final sewn piece.

The initial UVs of my models were not suitable for stuffed animal creation, so I made my own UVs to optimize them for sewing. Most modern 3D graphics software has UV mapping capabilities (Maya, Blender, 3ds Max, etc.). In my project I used UVLayout, which is a specialized UV mapping tool, but as seen in the MAKE magazine article, Blender also works fine.

Part of my final UV map

With the newly created UV map, I want to create a new texture map corresponding to it and print it as my final sewing pattern. This is where linear algebra comes in handy.

Polygonal faces on the UV map are broken into triangles. Each triangular face on the original UVs is mapped to a triangle on the new UVs via their shared face on the 3D model. Since the two triangles represent the same shape but have different coordinates in UV space, the transformation matrix between them can be calculated. Triangles are used because they let us work with square matrices. This transformation matrix can then be used to carry the corresponding triangular region of the old texture over to shade the matching triangular region of the new texture. Stack Overflow has a great explanation of how to calculate a transformation matrix from the coordinates of two triangles, along with a useful code snippet that I used.

If you calculate the transformation matrix for each UV triangle and transform its corresponding texture triangle, the end result is a new texture. Applying the new texture and the new UVs to the original 3D model should produce no visible difference in its appearance.

In my implementation, the UV coordinates are first mapped to pixel coordinates on the texture image, and then the transformation matrix is calculated. This mapping (combined with floating-point inexactness) caused some rounding issues, since pixel coordinates must be integers, which occasionally produced singular matrices while solving for the transformation matrix. My hacky solution was to offset the pixel coordinates of one of the UV points by 1 pixel; a 1-pixel shift on the final printed pattern is barely noticeable.

For example:

Above is the 3D model, with the highlighted face being the face of interest.

The corresponding face on the original UV map has UV coordinates (0.7153, -0.2275), (0.78, -0.1982), (0.7519, -0.0935), (0.7207, -0.0382).
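Note that one coordinate is negative; UVs outside [0, 1] simply wrap around on a tiling texture. Converting a UV coordinate to a pixel coordinate depends on the texture resolution and on whether the image origin is at the top-left (images) or bottom-left (UV space). A sketch, where the V flip and texture size are my assumptions rather than details from the post:

```python
def uv_to_pixel(u, v, tex_w, tex_h, flip_v=True):
    """Map a UV coordinate to integer pixel coordinates.

    UVs outside [0, 1] (tiling textures, including negative values)
    are wrapped into [0, 1) first. Images usually put the origin at
    the top-left while UV space puts it at the bottom-left, hence the
    optional V flip."""
    u, v = u % 1.0, v % 1.0
    if flip_v:
        v = 1.0 - v
    return (round(u * (tex_w - 1)), round(v * (tex_h - 1)))
```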

As you can see, UVs map texture images to 3D models.

This particular UV face controls a small portion of the texture image.

The highlighted face on the 3D model also corresponds to a face on the new UV map I created.

Its coordinates are (0.046143, 0.63782), (0.133411, 0.683826), (0.09056, 0.660572), (0.108221, 0.6849).

Given the two sets of UV coordinates, I decomposed each UV quad into two triangles and calculated a transformation matrix for each.
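The split can be sketched as a simple fan triangulation (my assumption; any split of the quad works, as long as the old and new quads are split the same way so the triangles correspond):

```python
def quad_to_triangles(quad):
    """Split a quad, given as four corners in winding order, into two
    triangles sharing the diagonal p0-p2."""
    p0, p1, p2, p3 = quad
    return [(p0, p1, p2), (p0, p2, p3)]
```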

In order to calculate the transformation matrix, I set up the equation W = AZ, where W is the matrix containing the new UV coordinates, A is the transformation matrix, and Z is the matrix containing the old UV coordinates.

Due to the use of homogeneous coordinates, W and Z are 3×3 square matrices whose last row is [1 1 1], and A is also a 3×3 square matrix, with last row [0 0 1]. See Affine Transformations for more details.
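Written out in full (the symbols a through f for the unknown entries of A, and primes for the new coordinates, are my own notation), the relationship W = AZ is:

```latex
\underbrace{\begin{bmatrix}
u'_1 & u'_2 & u'_3 \\
v'_1 & v'_2 & v'_3 \\
1 & 1 & 1
\end{bmatrix}}_{W}
=
\underbrace{\begin{bmatrix}
a & b & c \\
d & e & f \\
0 & 0 & 1
\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix}
u_1 & u_2 & u_3 \\
v_1 & v_2 & v_3 \\
1 & 1 & 1
\end{bmatrix}}_{Z}
```

Each column of Z is one corner of the old triangle in homogeneous coordinates, and the matching column of W is the same corner's position in the new triangle.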

Filling the matrices with real coordinates gives the following two equations. The original UV coordinates map to pixel coordinates (384, 72), (396, 80), (401, 67), (383, 61); the new UV coordinates map to (29, 174), (23, 185), (33, 188), (35, 172). I use pixel coordinates for the conversion.
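As a sanity check, the first triangle of each quad can be plugged in and the system solved exactly. This is my own verification sketch (Cramer's rule with rational arithmetic), not the post's code; the recovered A maps every old corner exactly onto its new corner:

```python
from fractions import Fraction

def solve_affine(src, dst):
    """Solve W = A Z for the six unknowns of A via Cramer's rule,
    using exact rational arithmetic so no rounding error creeps in."""
    (x1, y1), (x2, y2), (x3, y3) = [(Fraction(x), Fraction(y)) for x, y in src]
    (u1, v1), (u2, v2), (u3, v3) = [(Fraction(x), Fraction(y)) for x, y in dst]
    det = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)

    def row(p1, p2, p3):
        # One row of A: coefficients (a, b, c) with a*x + b*y + c = p.
        return (
            (p1 * (y2 - y3) + p2 * (y3 - y1) + p3 * (y1 - y2)) / det,
            (p1 * (x3 - x2) + p2 * (x1 - x3) + p3 * (x2 - x1)) / det,
            (p1 * (x2 * y3 - x3 * y2) + p2 * (x3 * y1 - x1 * y3)
             + p3 * (x1 * y2 - x2 * y1)) / det,
        )

    return [row(u1, u2, u3), row(v1, v2, v3)]

# Pixel coordinates of the first triangle of each quad, from the text above.
old_tri = [(384, 72), (396, 80), (401, 67)]
new_tri = [(29, 174), (23, 185), (33, 188)]
A = solve_affine(old_tri, new_tri)
```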

As mentioned before, there are two equations because I split the quadrilateral into two triangles.

To solve for A, I can take the inverse of Z and multiply it by W: A = WZ⁻¹. Z is invertible because its determinant is non-zero, and its determinant is non-zero because it equals twice the signed area of the triangle whose corners fill its columns, and that triangle is non-degenerate.
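The determinant-area relationship is easy to check in code (a small helper of my own). Expanding the determinant of the homogeneous matrix along its bottom row gives the classic shoelace expression for twice the signed area:

```python
def det_z(tri):
    """Determinant of Z = [[x1, x2, x3], [y1, y2, y3], [1, 1, 1]].
    This equals twice the signed area of the triangle, so Z is
    invertible exactly when the triangle has non-zero area."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
```

A right triangle with legs 4 and 3 (area 6) gives a determinant of 12, while three collinear points give 0, which is the singular case the pixel-rounding hack has to guard against.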

However, in the real implementation I solved this more directly, by writing out the matrix multiplication between A and Z and solving the resulting system of equations for the unknowns. Read more about it here.

Applying the transform to the texture region controlled by the original UV face, I get the following converted texture fragment:

After converting each texture region, you end up with the following texture image, ready to be printed. The orange arrow indicates where the transformed texture patch fits into the whole texture image.

That's a more theoretical/mathematical explanation of how to create a sewing pattern from a 3D model.


Original link: 3D Exhibition 2D Mathematics Principles - BimAnt

Origin blog.csdn.net/shebao3333/article/details/135448535