glTF model skeletal animation


This article walks through the making of a windmill animation in detail:

Of course, this is very easy to hardcode (have two objects, one static and one rotating). However, I plan to add more animations later, so I decided to implement a proper solution.

Previously I used a crappy ad hoc binary format for models and animations, but recently I switched to glTF models for a number of reasons:

  • Easy to parse: JSON metadata + raw binary vertex data
  • It's easy to render: the model is stored in a format that maps directly to the graphics API
  • It's compact enough (the heavy stuff - vertex data - is stored in binary form)
  • It is widely supported and extensible
  • It supports skeletal animation

Using the glTF format also means that others can easily extend the game (e.g., create mods).

Unfortunately, finding a good resource on skeletal animation using glTF seems impossible. All tutorials cover older formats, and while the glTF specification is mostly very lengthy and precise, it is unusually terse about how the animation data should be interpreted. I guess this is obvious to experts, but I'm not one of them, and if you aren't either - this is the article for you :)

BTW, I ended up reverse engineering Sascha Willems' animation code in his Vulkan + glTF example renderer to figure out how to do this correctly.

Skeletal animation

If you already know what skeletal animation is, you can safely skip this section :)

Skeletal animation is by far the most popular method of animating 3D models. It's conceptually very simple: instead of animating the actual model, you animate a virtual, highly simplified skeleton of the model, to which the model itself is glued like flesh to bones. It looks roughly like this:

Here's how the model's vertices adhere to different bones (red is a lot of glue, blue is no glue):

Typically, each mesh vertex is glued to multiple bones with different weights, and the vertex's final transformation is interpolated between these bones, to make the animation smoother. If you glue each vertex to just a single bone, the transitions between different parts of the model (shoulders, elbows, and knees on a typical humanoid model) will show unpleasant artifacts when animated:

Another key part of this approach is that it is hierarchical: the bones form a tree, with child bones inheriting their parent's transformations. In this example model, the two ear bones are children of the head bone, which is the root of the skeleton. Only the head bone is explicitly rotated up and down; the ears inherit this rotation from the head.

This is the same reason most game engines use object hierarchies. When you have a man in a car on a moving transport ship with a mosquito on his helmet, defining the motion of all these objects individually is very tedious and error-prone. Instead, you define the motion of the ship and arrange the objects into a hierarchy, with child objects inheriting the motion of their parent: the mosquito is a child of the helmet, the helmet is a child of the man, and so on.

Likewise, it's much easier to specify that a person's shoulder is rotating (and that the entire arm is a child of the shoulder) than to calculate the correct rotation for each arm bone.

Advantages and disadvantages

Compared to the alternative - morph target animation, which stores all vertex positions for each animation frame - skeletal animation has some advantages:

  • It requires less storage space - the skeleton is much smaller than the model
  • It requires uploading less data per frame (just the bones, not the entire mesh; although there are ways to store the entire animation on the GPU)
  • Easier (arguably) for artists to use
  • It decouples the animation from the specific model - you can apply the same walking animation to many different models with different vertex counts
  • It's much easier to integrate with procedural animation - say, you want to prevent your character's feet from clipping through the terrain; with skeletal animation, you only need to add constraints to a few bones

However, it has several disadvantages:

  • You need to correctly parse/decode the animation format you are using (this is harder than it sounds)
  • You need to compute the transformation of every bone of every animated model each frame, which can be expensive and tricky (although you could probably offload this to compute shaders)
  • You need to transfer the bone data to the GPU somehow - it isn't a vertex attribute, and it may not fit into uniforms
  • You need to apply the bone transforms in the vertex shader, making it do roughly 4x more work than usual (still much cheaper than a typical fragment shader, though)

It's not as bad as it sounds, though. Let’s take a deeper look at how to implement it from the bottom up.

Bone transformations

So, we need to transform the mesh vertices dynamically. Each bone defines a certain transformation, usually composed of scale, rotation, and translation. Even if you don't need scaling or translation and your bones only rotate (which is reasonable for many realistic models - try moving your shoulder half a meter out of its socket!), the rotations still happen around different centers (e.g., when the arm rotates around the shoulder, the hand also rotates around the shoulder, not around the hand bone's own origin), which means you still need translations anyway.

The most common way to support all of this is to simply store a 3×4 affine transformation matrix for each bone. This transformation is usually a combination of scaling, rotation, and translation (applied in that order), expressed as a matrix in homogeneous coordinates (a mathematical trick for expressing translations as matrices, among other things).

Instead of using a matrix, we could store the transformation explicitly as a translation vector (3 floats), a rotation quaternion (4 floats), and a scale (1 float for uniform scaling, or 3 floats for non-uniform scaling), giving a total of 7, 8, or 10 floats. However, as we will see later, it is easier to pass these transformations to the shader if the total number of components is a multiple of 4. Therefore, my favorite options are translation + rotation + uniform scale (8 floats) or a full-blown matrix (12 floats).
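For illustration, here's what the 8-float option could look like as a C++ struct - a minimal sketch; the names and layout are mine, not any standard:

#include <array>

// Translation + rotation + uniform scale in 8 floats.
// Laid out this way, the transform packs neatly into two vec4's on the GPU.
struct bone_transform
{
    std::array<float, 3> translation; // T: 3 floats
    float scale;                      // S: 1 float (uniform)
    std::array<float, 4> rotation;    // R: quaternion (x, y, z, w), 4 floats
};

static_assert(sizeof(bone_transform) == 8 * sizeof(float), "expected a tightly packed 8-float layout");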

Anyway, these transformations should also take the parent's transformation into account (we'll get to that later). We call these global transformations, as opposed to local transformations that don't take the parent into account. So, we have a recursive formula:

globalTransform(bone)=globalTransform(parent)⋅localTransform(bone)


If the bone has no parent, its globalTransform is the same as its localTransform. We will discuss where these localTransforms come from later in this article.

Composing transformations

By the way, the equation above may be a bit misleading. If we store the transformations as matrices, how do we multiply two 3×4 matrices? That violates the rules of matrix multiplication! And if we store them as (translation, rotation, scale) triples, how do we compose those?

In the case of matrices, using a 3×4 matrix is actually an optimization. What we really mean is a 4×4 matrix, which is easy to multiply. As it happens, affine transformation matrices are always of the form

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & t_1 \\ a_{21} & a_{22} & a_{23} & t_2 \\ a_{31} & a_{32} & a_{33} & t_3 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

i.e. the last row is always (0, 0, 0, 1).

So it makes no sense to actually store the 4th row, but we do need to restore it when doing calculations. In fact, invertible affine transformations form a subgroup of the group of all invertible 4×4 matrices.

The recipe for multiplying two such matrices is as follows: append a (0, 0, 0, 1) row to each, multiply the resulting 4×4 matrices, then discard the last row of the result (which will again be (0, 0, 0, 1)). There are ways to do this more efficiently, e.g. by explicitly applying the left matrix to the columns of the right matrix, but the general formula stays the same.
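Here's a sketch of this recipe in C++, with a hand-rolled row-major 3×4 matrix type (mat4x3 here is my own hypothetical struct, not a library type):

// Row-major 3x4 affine matrix: m[row][col]; the implicit 4th row is (0, 0, 0, 1).
struct mat4x3
{
    float m[3][4];
};

// Multiply two affine transforms as if both had the (0, 0, 0, 1) row appended;
// the last row of the product would again be (0, 0, 0, 1), so it is never stored.
mat4x3 multiply(mat4x3 const & a, mat4x3 const & b)
{
    mat4x3 r{};
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];

        // the implicit 4th row of b contributes only to the translation column
        r.m[i][3] += a.m[i][3];
    }
    return r;
}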

Now, what about storing the transformations explicitly as translation, rotation, and scale - how do we compose those? Well, there's a formula for that, too! Let's represent our transformation as (T, R, S) - a translation vector, a rotation operator, and a uniform scale factor. The effect of this transformation on a point p is (T,R,S)⋅p = T + R⋅(S⋅p). Let's see what happens if we compose two such transformations:

(T1,R1,S1)⋅((T2,R2,S2)⋅p) = T1 + R1⋅(S1⋅(T2 + R2⋅(S2⋅p))) = (T1 + R1⋅(S1⋅T2)) + (R1⋅R2)⋅((S1⋅S2)⋅p)

Here I used the fact that uniform scaling commutes with rotation. In fact, uniform scaling commutes with anything!

Therefore, the formula for composing two transformations in this form is

(T1,R1,S1)⋅(T2,R2,S2) = (T1 + R1⋅(S1⋅T2), R1⋅R2, S1⋅S2)

Note that R here is a rotation operator, not a rotation quaternion. For rotation quaternions Q, the composition formula doesn't change (it's still quaternion multiplication), but the way a quaternion acts on a vector does: p ↦ Q⋅p⋅Q⁻¹.

Note also that this trick does not work with non-uniform scaling: essentially, if R is a rotation and S is a non-uniform scaling, there is no way to express the product S⋅R as R′⋅S′ for some other rotation R′ and non-uniform scaling S′. In this case, it is simpler to just use matrices.
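As a sketch, here's the TRS composition in C++ using GLM quaternions (in GLM, q * v rotates the vector v by the quaternion q; the struct and function names are my own):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// (T, R, S): translation vector, rotation quaternion, uniform scale factor.
struct trs
{
    glm::vec3 translation;
    glm::quat rotation;
    float     scale;
};

// Apply the transform to a point: p -> T + R*(S*p).
glm::vec3 apply(trs const & t, glm::vec3 const & p)
{
    return t.translation + t.rotation * (t.scale * p);
}

// Compose so that apply(compose(a, b), p) == apply(a, apply(b, p)).
trs compose(trs const & a, trs const & b)
{
    return {
        a.translation + a.rotation * (a.scale * b.translation), // T1 + R1*(S1*T2)
        a.rotation * b.rotation,                                // R1*R2
        a.scale * b.scale,                                      // S1*S2
    };
}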

vertex shader

That's enough theory! Let's look at some real code, specifically the vertex shader. I'll be using GLSL, but the specific language or graphics API isn't important here.

Let's assume we've somehow passed the per-bone global transformations to the shader (we'll talk about that in a minute). We also need some way of telling which vertex is glued to which bones, and with what weights. This is usually done with two extra vertex attributes: one for the bone IDs and one for the bone weights. Typically you don't need more than 256 bones per model, and the weights don't need that much precision, so you can use uint8 integer attributes for the IDs and normalized uint8 attributes for the weights. Since attributes are at most 4-dimensional in most graphics APIs, we usually only allow a vertex to be glued to 4 or fewer bones. If a vertex is glued to only 2 bones, we just append two arbitrary bone IDs with weights of zero and call it a day.
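As an aside, here's roughly what setting up these two attributes might look like in OpenGL, assuming a hypothetical interleaved vertex layout (and that a GL function loader such as glad is already included); the attribute locations match the shader below:

#include <cstddef>
#include <cstdint>

// A hypothetical interleaved vertex layout.
struct vertex
{
    float   position[3];
    // ...other attributes...
    uint8_t bone_ids[4];     // which bones this vertex is glued to
    uint8_t bone_weights[4]; // normalized: 0..255 maps to 0.0..1.0
};

void setup_skinning_attributes()
{
    // location = 4: bone IDs as integers (note the 'I' in glVertexAttribIPointer)
    glEnableVertexAttribArray(4);
    glVertexAttribIPointer(4, 4, GL_UNSIGNED_BYTE, sizeof(vertex),
        (void *)offsetof(vertex, bone_ids));

    // location = 5: bone weights, normalized to [0, 1]
    glEnableVertexAttribArray(5);
    glVertexAttribPointer(5, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(vertex),
        (void *)offsetof(vertex, bone_weights));
}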

Enough said - here's the shader:

// somewhere: mat4x3 globalBoneTransform[]

uniform mat4 uModelViewProjection;

layout (location = 0) in  vec3 vPosition;
// ...other attributes...
layout (location = 4) in ivec4 vBoneIDs;
layout (location = 5) in  vec4 vWeights;

void main() {
    vec3 position = vec3(0.0);
    for (int i = 0; i < 4; ++i) {
        mat4x3 boneTransform = globalBoneTransform[vBoneIDs[i]];
        position += vWeights[i] * (boneTransform * vec4(vPosition, 1.0));
    }

    gl_Position = uModelViewProjection * vec4(position, 1.0);
}

In GLSL, for a vector v, v[0] is the same as v.x, v[1] the same as v.y, and so on.

What we do here is:

  1. Iterate over the 4 bones the vertex is attached to
  2. Read the bone's ID from vBoneIDs[i] and fetch its global transformation
  3. Apply the transformation to the vertex position in homogeneous coordinates vec4(vPosition, 1.0)
  4. Add the weighted result to the final position
  5. Apply the usual MVP matrix to the result

The entire process is also called skinning, or more specifically, linear blend skinning.

In GLSL, matNxM means N columns and M rows, so mat4x3 is actually a 3x4 matrix. I like standards.

If you're not sure your weights sum to 1, you can also divide by their sum at the end (although you'd better make sure they sum to 1!):

    position /= dot(vWeights, vec4(1.0));

If the sum of the weights does not equal 1, you will get distortions: effectively, your vertices will move closer to or further from the model origin (depending on whether the sum is < 1 or > 1). This has to do with perspective projection and the fact that affine transformations do not form a linear space, but they do form an affine space.

If we also have normals, they need to be transformed as well. The only difference is that a position is a point, while a normal is a vector, so it has a different representation in homogeneous coordinates: we append 0 as the w-coordinate instead of 1. We might also want to normalize it afterwards to account for scaling:

// somewhere: mat4x3 globalBoneTransform[]

uniform mat4 uModelViewProjection;

layout (location = 0) in  vec3 vPosition;
layout (location = 1) in  vec3 vNormal;
// ...other attributes...
layout (location = 4) in ivec4 vBoneIDs;
layout (location = 5) in  vec4 vWeights;

vec3 applyBoneTransform(vec4 p) {
    vec3 result = vec3(0.0);
    for (int i = 0; i < 4; ++i) {
        mat4x3 boneTransform = globalBoneTransform[vBoneIDs[i]];
        result += vWeights[i] * (boneTransform * p);
    }
    return result;
}

void main() {
    vec3 position = applyBoneTransform(vec4(vPosition, 1.0));
    vec3 normal = normalize(applyBoneTransform(vec4(vNormal, 0.0)));

    // ...
}

Note that if you use non-uniform scaling, or want to do eye space lighting, things get a little more complicated.

Passing the transforms to the shader

I primarily use OpenGL 3.3, so the details in this section are specific to OpenGL, but I believe the general concepts apply to any graphics API.

Most skeletal animation tutorials recommend using uniform arrays for the bone transformations. This works, but it can be a bit problematic:

  • OpenGL limits the number of uniforms. OpenGL 3.0 guarantees at least 1024 uniform components, which loosely means individual floats. With each mat4x3 taking 12 components, that limits us to about 1024/12 ≈ 85 bones per model. That's a lot, so it might actually be enough, and in practice implementations typically provide 4096 to 16384 components. However, many uniforms are already used for other things (matrices, textures, etc.), so not all of them are free.
  • We have to update the uniform array for each animated model, which means a lot of OpenGL calls and no instancing.

These problems can be solved to some extent by using uniform buffers:

  • They have more memory available, though still not that much - typically 64 KB per buffer.
  • Instead of uploading each model's bone transformations to uniforms separately, we can upload the transformations of all models to one buffer at once. But we still have to call glBindBufferRange for each model to specify where that model's bone data lives, so still no instancing.

If you are using OpenGL 4.3 or newer, you can simply store all transformations in a shader storage buffer, which is essentially unlimited in size. Otherwise, you can use a buffer texture - a way to access an arbitrary data buffer by disguising it as a 1D texture. The buffer texture itself doesn't store anything; it just references an existing buffer. It works like this:

  1. We create an ordinary OpenGL buffer and populate it with the skeletal transforms of all models for the current frame, stored e.g. as row-by-row matrices (12 floats) or as TRS triples with uniform scaling (8 floats)
  2. We create a GL_TEXTURE_BUFFER texture and call glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, bufferID); RGBA32F is the pixel format of this texture - 4 floats (16 bytes) per pixel (so, 3 pixels per matrix, or 2 pixels per TRS triple)
  3. We bind the texture to a samplerBuffer uniform in the shader
  4. In the shader, we read the corresponding pixels using texelFetch and assemble them into bone transforms
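A minimal C++ sketch of steps 1-2, assuming the transforms are already flattened into an array of floats (12 per bone, matrix rows one after another), and assuming a GL loader is included:

#include <vector>

// Upload all bone transforms and expose them to shaders as a buffer texture.
GLuint create_bone_transform_texture(std::vector<float> const & transforms)
{
    GLuint buffer;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_TEXTURE_BUFFER, buffer);
    glBufferData(GL_TEXTURE_BUFFER, transforms.size() * sizeof(float),
        transforms.data(), GL_STREAM_DRAW); // re-uploaded every frame

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_BUFFER, texture);
    // The texture has no storage of its own; it just views the buffer as RGBA32F pixels
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buffer);
    return texture;
}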

For instanced rendering, this shader might look like this:

uniform samplerBuffer uBoneTransformTexture;
uniform int uBoneCount;

mat4x3 getBoneTransform(int instanceID, int boneID) {
    int offset = (instanceID * uBoneCount + boneID) * 3;
    mat3x4 result;
    result[0] = texelFetch(uBoneTransformTexture, offset + 0);
    result[1] = texelFetch(uBoneTransformTexture, offset + 1);
    result[2] = texelFetch(uBoneTransformTexture, offset + 2);
    return transpose(result);
}

Note that we assemble the matrix as a GLSL mat3x4: we read the rows of our 3×4 matrix from the texture, write them into the columns of the mat3x4, and then transpose it, switching rows and columns. This is simply because GLSL matrices are column-major.

review

Let’s review:

  • To animate the model we attach each vertex to up to 4 bones of the virtual skeleton, with 4 different weights
  • Each bone defines a global transformation that needs to be applied to vertices
  • For each vertex, we apply the transformation of the 4 bones it is attached to and average the results using the weights
  • We store transformations as TRS triples or 3x4 affine transformation matrices
  • We store transformations in uniform arrays, uniform buffers, buffer textures or shader storage buffers

What we are left with is where these global transformations come from.

global transformations

Well, actually, we already know where global transformations come from: they are calculated from local transformations :

globalTransform(bone) = globalTransform(parent) ⋅ localTransform(bone)

A naive way of computing this is literally a recursive function:

mat4 globalTransform(int boneID) {
    if (int parentID = parent[boneID]; parentID != -1)
        return globalTransform(parentID) * localTransform[boneID];
    else
        return localTransform[boneID];
}

Or the same thing, but manually unrolling the tail recursion:

for (int boneID = 0; boneID < nodeCount; ++boneID) {
    globalTransform[boneID] = identityTransform();
    int current = boneID;
    while (current != -1) {
        globalTransform[boneID] = localTransform[current] * globalTransform[boneID];
        current = parent[current];
    }
}

Both methods are fine, but they compute many more matrix multiplications than necessary. Remember, we should do this on every frame and on every animated model!

A better way to compute the global transforms is from parents to children: if the parent's global transform has already been computed, all we need is one matrix multiplication per bone.

// ... somehow make sure parent transform is already computed
if (int parentID = parent[boneID]; parentID != -1)
    globalTransform[boneID] = globalTransform[parentID] * localTransform[boneID];
else
    globalTransform[boneID] = localTransform[boneID];

To ensure that a parent is computed before its children, you could run a DFS over the bone tree every frame to order the bones correctly. An arguably simpler solution is to compute a topological ordering of the bone tree (an enumeration of the bones such that parents come before children) ahead of time and reuse it every frame (computing a topological ordering is done with a DFS anyway). An even simpler solution is to ensure that the bone IDs themselves form a topological ordering, i.e. that parent[boneID] < boneID always holds. This can be done by reordering the bones (and the bone ID vertex attributes of the mesh!) when loading the model, or by asking the artist to order the bones this way :) Well, the bones of their model, that is.

In the latter case, the implementation is simplest (and fastest):

for (int boneID = 0; boneID < nodeCount; ++boneID) {
    if (int parentID = parent[boneID]; parentID != -1)
        globalTransform[boneID] = globalTransform[parentID] * localTransform[boneID];
    else
        globalTransform[boneID] = localTransform[boneID];
}

But where do the local transformations come from?

local transformations

This is where things get a little weird (as if they weren't already). You see, it is often convenient to specify local bone transformations in some special coordinate system instead of world coordinates. If I rotate an arm, I want the origin of the local coordinate system to be at the center of rotation, not somewhere beneath my feet, so that I don't have to account for this translation explicitly. And if I rotate it up and down, like waving to someone far away, I really want this to be a rotation around some fixed axis in the bone's local space (say, the X axis), regardless of how the model is oriented in model space or world space.

What I want to say is that we want each bone to have a special coordinate system (CS), and we want to use this coordinate system to describe the local transformation of the bone.

However, the vertices of the model are specified in the model's coordinate system (that's essentially the definition of that coordinate system). Therefore, we need a way to first transform the vertices into the bone's local coordinate system. The matrix that does this is called the inverse bind matrix, because it sounds really cool.

Okay, so we've transformed the vertices into the bone's local CS and applied the animation transform in this local CS (we'll get to those later). Is that all? Remember, the next step is to compose this with the parent bone's transformation, which is expressed in the parent's own coordinate system! So we need one more thing: transforming the vertices from the bone's local CS into the parent's local CS. Incidentally, this can be done using the inverse bind matrices: transform the vertices from the bone's local CS back to the model CS, then from the model CS into the parent's local CS:

convertToParentCS(node) = inverseBindMatrix(parent) ⋅ inverseBindMatrix(node)⁻¹

We can also think of it this way: a bone transforms the vertices into its local CS, applies its animation, and transforms them back; then its parent transforms the vertices into its own local CS, applies its own animation, and transforms them back; and so on up the hierarchy.

In fact, we won't need to use this convertToParentCS transform explicitly with glTF, but it's a useful way to think about what's going on.

And one more thing. Sometimes it is convenient (for artists, or for the 3D modeling software) to attach the vertices to the bones not in the model's default state, but in some transformed state, called the bind pose. In that case we'd need yet another transformation that, for each bone, moves the vertices to where that bone expects them to be. I know this sounds confusing, but bear with me: we won't actually need this transformation :)


Blender uses world space vertex positions as binding poses. If the model is 20 units from the origin along the X-axis, its original vertex position will be around X=20, and the inverse binding matrix will compensate for this. This effectively makes animated models exported from Blender unusable without animation.

Transformations recap

Overall, we have the following series of transformations applied to the vertices:

  1. Transform them into the model's bind pose
  2. Transform them into the bone's local CS (the inverse bind matrix)
  3. Apply the actual damn animation (specified in the local CS)
  4. Transform them back from the bone's local CS
  5. If the bone has a parent, repeat steps 2-5 for the parent bone

Now, the problem is that each model format defines its own way of specifying these transformations. Some of them may not even exist explicitly - they may be baked into the other transformations.

Finally, let’s talk about glTF.

glTF model 101

glTF is a very cool 3D scene format developed by the Khronos Group - the folks behind OpenGL, OpenCL, Vulkan, WebGL and SPIR-V, among others. I already said at the beginning of the article why I think it's a cool format, so let's talk a little more about the details.

This is the specification for glTF 2.0. It's pretty good: you can learn the format just by reading the spec.

A glTF scene is made up of nodes - an abstraction that can mean many things. A node can be a rendered mesh, a camera, a light, a skeleton bone, or simply an aggregating parent of other nodes. Each node has its own affine transformation that defines its position, rotation, and scale relative to its parent node (or to the world origin, if it has no parent).

glTF describes all binary data via accessors - basically, references into some binary buffer, interpreted as an array of elements of a specified type (with possibly non-zero gaps between elements): e.g. a contiguous array of 100 vec4's with float32 components, located at byte 6340 of a particular binary buffer, etc.

If a mesh node uses skeletal animation, it has a list of IDs of the glTF nodes that serve as the skeleton's bones. (Actually, the mesh references a skin, which in turn contains a list of joints - the bone nodes.) These joints form a hierarchy: they are still glTF nodes, so they can have parents and children. Note that there is no special skeleton node type - there are only bone (joint) nodes; likewise, the animated mesh is not a child of the bone nodes, but refers to them indirectly (and although exporting software may add an artificial skeleton parent node, as Blender does, glTF does not require this).

Each vertex attribute of a mesh (actually, of a mesh primitive) - positions, normals, UVs, etc. - is a separate accessor. When the mesh uses skeletal animation, it also has bone ID and bone weight attributes, which are accessors as well. The actual bone animations are also stored in accessors.

In addition to the description of the skinned mesh, a glTF model may also contain the actual animations - essentially, instructions on how to change the third transformation in the list above.

glTF transforms

Here's how glTF stores the transformations 1-5 from the list above:

  1. The model's bind pose should already be applied to the model, or pre-multiplied into the inverse bind matrices. In other words, just forget about the bind pose in glTF.
  2. The per-bone inverse bind matrices are specified by another accessor - an array of 4x4 matrices (required to be affine transformations, so only the first 3 rows are interesting).
  3. The actual animation can be defined externally (e.g. procedural animation) or stored as keyframed splines for each bone's rotation, translation, and scale. The important thing here is that these transformations...
  4. ...already include the convertToParentCS transition from the bone's local CS to the parent's local CS. In other words, the animation transforms come pre-composed with convertToParentCS.
  5. The parent is defined by the node hierarchy, and since convertToParentCS is already included, we don't need the parent's inverse bind matrix - we just repeat steps 3-5 for the parent, if there is one.

So, when using glTF, the global transformation of a bone looks like this:

globalTransform(bone) = animationTransform(root) ⋅ … ⋅ animationTransform(parent(bone)) ⋅ animationTransform(bone) ⋅ inverseBindMatrix(bone)

In code, this would be:

// assuming parent[boneID] < boneID holds

// somehow compute the per-bone local animations
// (including the bone-CS-to-parent-CS transform)
for (int boneID = 0; boneID < boneCount; ++boneID) {
    transform[boneID] = ???;
}

// combine the transforms with the parent's transforms
for (int boneID = 0; boneID < boneCount; ++boneID) {
    if (int parentID = parent[boneID]; parentID != -1) {
        transform[boneID] = transform[parentID] * transform[boneID];
    }
}

// pre-multiply with inverse bind matrices
for (int boneID = 0; boneID < boneCount; ++boneID) {
    transform[boneID] = transform[boneID] * inverseBind[boneID];
}

This transform[] array is the globalBoneTransform[] array from the vertex shader above.

It’s not that complicated after all! Just need to figure out the right order to multiply a bunch of seemingly random matrices :)

glTF animation

Finally, let's talk about the animations stored in the glTF model itself. They are specified as keyframed splines for each bone's rotation, scale, and translation.

Each individual spline is called a channel. It defines:

  • Which node it applies to (i.e. which bone)
  • Which parameter it affects (rotation, translation, or scale)
  • An accessor for the keyframe timestamps
  • An accessor for the keyframe values (a vec4 quaternion for rotation, a vec3 for translation or scale)
  • The interpolation method - STEP, LINEAR, or CUBICSPLINE

For rotations, LINEAR actually means spherical linear interpolation (slerp). For CUBICSPLINE interpolation, 3 values are stored per keyframe - the spline value and two tangent vectors.
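For reference, the CUBICSPLINE case is cubic Hermite interpolation. If I read the spec correctly: for keyframes at times $t_k$ and $t_{k+1}$ with values $v_k, v_{k+1}$, out-tangent $b_k$, in-tangent $a_{k+1}$, and $t_d = t_{k+1} - t_k$, $s = (t - t_k)/t_d$, the interpolated value is

$$v(s) = (2s^3 - 3s^2 + 1)\,v_k + t_d\,(s^3 - 2s^2 + s)\,b_k + (-2s^3 + 3s^2)\,v_{k+1} + t_d\,(s^3 - s^2)\,a_{k+1}$$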

So, the way we build the local transformation of a bone is:

  • Sample the bone's rotation, translation, and scale splines at the current time
  • Combine the results into a single local transformation matrix

For a translation by a vector (x, y, z), the corresponding matrix is

$$\begin{pmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

For a non-uniform scaling by a vector (x, y, z), the matrix is

$$\begin{pmatrix} x & 0 & 0 & 0 \\ 0 & y & 0 & 0 \\ 0 & 0 & z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

For a rotation quaternion, you can find the corresponding 3x3 matrix in the Wikipedia article on quaternions and spatial rotation; you put it into the top-left corner of a 4x4 matrix, like this:

$$\begin{pmatrix} R_{11} & R_{12} & R_{13} & 0 \\ R_{21} & R_{22} & R_{23} & 0 \\ R_{31} & R_{32} & R_{33} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

As we discussed before, these matrices are 4x4, but they are affine transformations, so all the interesting things happen in the first 3 rows.
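Putting this together, here's a sketch of building a bone's local transform with GLM; the sample_* helpers are hypothetical stand-ins for the spline sampling described below:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical spline-sampling helpers (see the samplers below):
glm::vec3 sample_translation(int boneID, float time);
glm::quat sample_rotation(int boneID, float time);
glm::vec3 sample_scale(int boneID, float time);

// Build a bone's local transform from the sampled spline values:
// scale first, then rotate, then translate, i.e. M = T * R * S.
glm::mat4 local_transform(int boneID, float time)
{
    return glm::translate(glm::mat4(1.f), sample_translation(boneID, time))
         * glm::mat4_cast(sample_rotation(boneID, time))
         * glm::scale(glm::mat4(1.f), sample_scale(boneID, time));
}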

Sampling animation splines

To address the first point - efficiently sampling the animation splines - we can wrap each spline in a class like this:

template <typename T>
struct animation_spline {

    // ...some methods...

private:
    std::vector<float> timestamps_;
    std::vector<T> values_;
};

Now, an obvious API decision is to create a method that returns the spline value at a specific time:

template <typename T>
T value(float time) const {
    assert(!timestamps_.empty());

    // clamp to the first/last keyframe outside the animation's time range
    if (time <= timestamps_.front())
        return values_.front();

    if (time >= timestamps_.back())
        return values_.back();

    // find the first keyframe after the playhead and interpolate
    for (size_t i = 1; i < timestamps_.size(); ++i) {
        if (time <= timestamps_[i]) {
            float t = (time - timestamps_[i - 1]) / (timestamps_[i] - timestamps_[i - 1]);
            return lerp(values_[i - 1], values_[i], t);
        }
    }

    return values_.back(); // unreachable
}

The lerp call should change depending on the interpolation type, and on whether this is a rotation (rotations need slerp).

This works, but we can improve it in two ways. First, our keyframe timestamps are guaranteed to be sorted, so instead of a linear search, we can do a binary search:

template <typename T>
T value(float time) const {
    auto it = std::lower_bound(timestamps_.begin(), timestamps_.end(), time);
    if (it == timestamps_.begin())
        return values_.front();
    if (it == timestamps_.end())
        return values_.back();

    int i = it - timestamps_.begin();

    float t = (time - timestamps_[i - 1]) / (timestamps_[i] - timestamps_[i - 1]);
    return lerp(values_[i - 1], values_[i], t);
}

Secondly, when playing an animation, we always linearly traverse it from beginning to end, so we can optimize it further by storing the current keyframe index. However, this is not a property of the animation itself, so let's create another class:

template <typename T>
struct animation_sampler;

template <typename T>
struct animation_spline {

    // ...

private:
    std::vector<float> timestamps_;
    std::vector<T> values_;

    friend struct animation_sampler<T>;
};

template <typename T>
struct animation_sampler {
    animation_spline<T> const & animation;
    int current_index = 0;

    T sample(float time) {
        auto const & timestamps = animation.timestamps_;
        auto const & values = animation.values_;

        // advance the current keyframe while the playhead has passed it
        while (current_index + 1 < (int)timestamps.size() && time > timestamps[current_index + 1])
            ++current_index;

        // ran past the last keyframe - assume the animation has looped
        if (current_index + 1 >= (int)timestamps.size())
            current_index = 0;

        float t = (time - timestamps[current_index]) / (timestamps[current_index + 1] - timestamps[current_index]);
        return lerp(values[current_index], values[current_index + 1], t);
    }
};
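Usage could look something like this (global_time, animation_duration, and translation_spline are assumed to exist elsewhere, along with some vec3 type that has a suitable lerp() overload):

#include <cmath>

// One sampler per (bone, parameter) channel of the currently playing animation.
animation_sampler<vec3> sampler{translation_spline, 0};

// Every frame: loop the playhead and sample the channel.
float time = std::fmod(global_time, animation_duration);
vec3 translation = sampler.sample(time);

Note that if you loop the animation by wrapping time around like this, the sampler's current_index should also be reset to 0 on wrap-around, since the playhead jumps backwards.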

Original link: glTF model skeleton animation (mvrlink.com)
