Welcome to my daily learning log. This is a raw journal of what I do and learn each day — less polished than my weekly projects, but great for tracking progress and reflection.

Mini renderer project

  • Start by learning OpenGL properly.
  • A shift in learning strategy: I need to actually learn it instead of shipping without understanding what I am doing.
  • Study OpenGL
    • Understand objects, VBOs, the rendering pipeline, and the difference between the tessellation shader and the geometry shader.
      • An object is a description of some discrete piece of state in the OpenGL state machine.
      • A VBO is an OpenGL buffer object storing vertex data (positions, colors, normals, texture coordinates, etc.).
        • Data is transferred from CPU (RAM) to GPU memory via glBufferData().
        • We can specify how frequently the data will be updated (GL_STATIC_DRAW, GL_DYNAMIC_DRAW, GL_STREAM_DRAW), and the GPU will place it in the corresponding memory so it can fetch the data efficiently.
      • The tessellation shader subdivides existing primitives, adding vertices to create a more detailed/smoother result.
      • The geometry shader emits new vertices and primitives from existing ones, e.g. for particle or flame effects.

Mini renderer project

Mini renderer project is totally doable, but we need some workarounds:

  • Metal’s (and Objective-C’s) learning curve is too steep for me: there is too little study material, and even the official documentation is not beginner friendly.
  • Xcode’s Vim mode is notoriously bad; I’ll have to work with VS Code + Xcode.
  • Instead, we use OpenGL to quickly build the foundation.

2025-06-23 re-planning the learning road map

Mini renderer project

I want to do something that is really related to GPU. It’s where my passion resides.

Start by learning what Metal is, since I have a MacBook.

Resource: https://developer.apple.com/videos/play/wwdc2025/205/

Tomorrow I will try to understand what Metal is, and then understand the scope of my W6 learning.


2025-06-22 start inspecting asset validator code

Scope: Material Property Parsing in glTF via Assimp

Expected Input:

A .gltf or .glb file loaded via Assimp. Assimp gives you a scene graph: aiScene → aiMaterial[], aiMesh[], etc. You focus on the materials linked to each mesh: aiMesh::mMaterialIndex → aiMaterial. Assimp abstracts glTF’s PBR materials into a semi-standard material model. You typically extract:

  • Base Color (Albedo) texture
  • Normal Map
  • Metallic-Roughness Map
  • Metallic Factor, Roughness Factor
  • Sometimes: Emissive Color, Occlusion, Alpha Mode

Expected Output: A structured material descriptor that your renderer understands. This may be a Material struct or class that includes:

struct Material {
    std::string name;
    glm::vec4 baseColorFactor;
    std::string baseColorTexturePath;

    float metallicFactor;
    float roughnessFactor;
    std::string metallicRoughnessTexturePath;

    std::string normalTexturePath;

    // Optional
    std::string occlusionTexturePath;
    std::string emissiveTexturePath;
    glm::vec3 emissiveFactor;

    // For transparency
    std::string alphaMode; // OPAQUE, MASK, BLEND
    float alphaCutoff;
};

How to Extract from Assimp

Here’s what you’ll typically do with Assimp’s aiMaterial:

Example: Base Color

aiColor4D baseColor;
if (AI_SUCCESS == aiGetMaterialColor(mat, AI_MATKEY_BASE_COLOR, &baseColor)) {
    material.baseColorFactor = glm::vec4(baseColor.r, baseColor.g, baseColor.b, baseColor.a);
}

Example: Base Color Texture

aiString texPath;
if (mat->GetTexture(aiTextureType_BASE_COLOR, 0, &texPath) == AI_SUCCESS) {
    material.baseColorTexturePath = texPath.C_Str();  // relative to glTF file
}

Todo

  • I need to understand the difference between the base color factor and the base color texture, and how they are used in PBR.

2025-06-20 initial check-in of asset loader/validator code
  • Finished the glTF file format deep dive.

  • Initial check-in of AI-generated week 5 scaffolding code.
    • First, I need to understand the request.
    • Then, I will check the code to make sure all of the requested features are implemented.
    • Currently, the generated code is not complete. I will co-work with GPT to fill in the gaps.
  • Lesson learned: as an engineer, shipping should be prioritized. Learning comes along with deployment and iteration. Perfectionism is what stops us from growing.

2025-06-19 gltf file format deepdive part 1
  • Halfway through the video of the glTF file format deep dive.

  • Updated more math details in the blog post on normal transformation.


2025-06-18 debug AssetLoader
  • Wrapped up week 3 post.
  • minor tweak on website.
  • Understand the linear algebra of normal transformation (see blog post).

2025-06-17 debug AssetLoader
  • Debug and make the rendering work
  • Start wrapping up week 3’s post. Currently working on fully understanding the transform matrix.

2025-06-16 VAO, VBO, EBO setup
  • VAO, VBO, EBO setup done, currently working on linking the shader code.
  • Fixing the undesired indentation with the markdown code block on blogpost.
/* Aggressively fixing the indentation*/
.highlight pre code {
  white-space: pre-wrap;
  display: block;
  padding: 0;
  margin: 0;
  background: transparent;
  border: none;
  text-indent: 0;
  line-height: inherit;
}

/* ensure consistent formatting: */
.highlight pre {
  white-space: pre-wrap;
  word-wrap: break-word;
  overflow-wrap: break-word;
}

2025-06-15 Understanding indices, VBO and EBO

What is an Index?

  • A vertex defines a point in 3D space (plus optional data like normals, UVs).
  • A face defines a polygon, typically a triangle or quad, by referencing vertex indices.

In Assimp:

aiFace face = mesh->mFaces[i];

Each aiFace is a polygon — in our case, a triangle — that stores:

face.mNumIndices // = 3, for triangle
face.mIndices[j] // = index into mesh->mVertices

So:

for (unsigned int i = 0; i < mesh->mNumFaces; ++i) {
    aiFace face = mesh->mFaces[i];
    for (unsigned int j = 0; j < face.mNumIndices; ++j)
        indices.push_back(face.mIndices[j]);
}

Does the following:

  • Loops over each triangle in the mesh
  • For each triangle (aiFace), gets its 3 indices
  • Appends them to a global indices vector

That indices vector becomes your index buffer.


Visual analogy:

mesh->mVertices = [
  v0 = (x0, y0, z0),
  v1 = (x1, y1, z1),
  v2 = (x2, y2, z2),
  v3 = (x3, y3, z3)
]

mesh->mFaces[0].mIndices = [0, 1, 2]
mesh->mFaces[1].mIndices = [2, 3, 0]

Then:

indices = [0, 1, 2, 2, 3, 0]

This tells OpenGL: “draw triangle with v0-v1-v2, then v2-v3-v0”.


What is VBO? What is EBO?

1. VBO (Vertex Buffer Object)

  • Stores the actual vertex data (positions, normals, UVs).

  • This is an array of your Vertex struct:

    struct Vertex {
        glm::vec3 Position;
        glm::vec3 Normal;
        glm::vec2 TexCoords;
    };
    
  • You upload it with:

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &vertices[0], GL_STATIC_DRAW);
    

2. EBO (Element Buffer Object) a.k.a Index Buffer

  • Stores an array of indices, which refer to vertex positions in the VBO.
  • Lets you reuse the same vertex multiple times (e.g., for adjacent triangles).
  • You upload it with:

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), &indices[0], GL_STATIC_DRAW);
    

Why use an EBO?

Instead of duplicating shared vertices, you can just reference them:

Without EBO:

Triangle 1: v0, v1, v2  
Triangle 2: v2, v3, v0  // v0 and v2 are repeated

With EBO:

Vertices: v0, v1, v2, v3
Indices:  [0, 1, 2, 2, 3, 0]

This saves memory and improves cache efficiency.


2025-06-13 web dark theme

Toggle light/dark theme

  • Today I mainly worked on web light/dark theme toggling

  • The reference web layout style is from “The Book of Shaders”.

  • Also, I detached the GitHub repo from the original forked one, just to keep my own repo clean. The license remains the same.

  • For the CG project, I removed the broken imgui git submodule tracking and added an include path for Assimp: Homebrew has a corresponding formula, so we can reference it directly in CMakeLists.txt.


2025-06-12 Loading asset

Start working on asset validation tool

  • Today I reused week 2’s window creation and week 3’s camera control & shader loading code to make the skeleton for week 4’s project.
  • I used Assimp to load glTF files.
  • I plan to build an asset validation tool, which can read a glTF file and show the model information.
  • The code is working:
     ./AssetPipeline
    Creating window...
    OpenGL version: 4.1 Metal - 89.4
    === ASSET LOADING TEST ===
    File: ../assets/test_models/Box.gltf
    ✅ Loaded successfully
    Meshes: 1
    Materials: 2
    Validation: ✅ PASSED
    

2025-06-11 understanding buffer objects

Buffer objects

Trying to understand the code from the glTF viewer tutorial

GLuint bufferObject = 0; // 0 represents a "null" buffer object; it does not exist yet
glGenBuffers(1, &bufferObject); // Ask OpenGL to reserve an identifier for our buffer object and store it in bufferObject.
// At this point we should have bufferObject > 0.
// We generally don't test for that; if it happens our program is likely to fail anyway.
glBindBuffer(GL_ARRAY_BUFFER, bufferObject); // A buffer must be bound to be used
glBufferStorage(GL_ARRAY_BUFFER, data.size(), data.data(), 0); // Size is in bytes; assuming data is a std::vector of bytes
glBindBuffer(GL_ARRAY_BUFFER, 0); // Generally a good idea to clean up the binding point after
GLuint bufferObject = 0;

This declares a variable to hold a buffer object identifier. In OpenGL, buffer objects are referenced by unsigned integer IDs. Starting with 0 means “no buffer” - it’s like a null pointer.

glGenBuffers(1, &bufferObject);

This asks OpenGL to generate 1 new buffer object name/ID and store it in bufferObject. After this call, bufferObject will contain a unique positive integer that represents your buffer in OpenGL’s internal tables. The buffer exists as an ID but has no memory allocated yet.

glBindBuffer(GL_ARRAY_BUFFER, bufferObject);

This is where “binding” happens. Binding means “make this buffer the currently active buffer for the specified target.”

  • GL_ARRAY_BUFFER is a binding target - it’s like a slot where you plug in a buffer
  • Think of it like inserting a USB drive into a USB port - the port is GL_ARRAY_BUFFER, the USB drive is your bufferObject
  • Once bound, any buffer operations on the GL_ARRAY_BUFFER target will affect this specific buffer

glBufferStorage(GL_ARRAY_BUFFER, data.size(), data.data(), 0);

This allocates actual GPU memory for the buffer and uploads your data to it. It operates on whatever buffer is currently bound to GL_ARRAY_BUFFER (which is bufferObject from the previous line). The parameters are: target, size in bytes, pointer to data, and flags.

glBindBuffer(GL_ARRAY_BUFFER, 0);

This unbinds the buffer by binding buffer ID 0 (which means “no buffer”) to the GL_ARRAY_BUFFER target. It’s like unplugging the USB drive from the port. This is good practice to avoid accidentally modifying the buffer later.

What binding accomplishes: OpenGL is a state machine. Instead of passing buffer IDs to every function, you bind a buffer to a target, then all subsequent operations on that target affect the bound buffer. It’s like saying “I’m working with this buffer now” rather than specifying it every time.


2025-06-10 gltf format overview

Understand gltf data


2025-06-09 load a glTF model

gltf-viewer tutorial

Link to tutorial

Cloned the repo from GitLab (gltf-viewer-tutorial/gltf-viewer).

Hit CMake version error. Resolved by updating cmake_minimum_required(VERSION 3.5) to a newer version.

Built the project using helper bash functions: cmake_prepare; cmake_build; cmake_install.

Ran ./build/bin/gltf-viewer info, encountered Unable to open window runtime crash.

Fixed by debugging OpenGL context creation failure.

Encountered missing glDebugOutputCallback symbol.

Implemented the debug callback manually, confirmed fallback worked: [Debug Output] OpenGL context has no debug flag.

Confirmed OpenGL 4.1 context and viewer executable runs.


2025-06-08 Wrap up for week 3

Wrap up for week 3

  • See the blog post for week 3: we discussed light attenuation for point lights, and the link between the Lambertian BRDF model in RTR4 Chapter 9.3 and the lighting model in Chapter 5.2.

  • The code is still WIP; for now only a directional light is used.

  • We plan to provide a switch for different lighting models.


2025-06-07 Normal Transformation and Web clean up

Why does the normal only require the model transform? Why not the view transform & perspective transform?

  • Because our light is in world coordinates, we need to transform the normal into the same coordinate system as the light.
  • $n$ and $l$ are calculated together in Lambertian model.

Misc: Website clean up

  • Clean up template files. Migrate daily log from GitHub wiki to website.
    • add support for math in LaTex form as well.

    MathJax parses LaTeX math, but we have to tell it which delimiters (e.g. $...$, \(...\), $$...$$, \[...\]) should trigger parsing.

    MathJax v2 supported $ ... $ by default.

    MathJax v3 does not support $ ... $ unless you explicitly configure it.

      <script>
      window.MathJax = {
        tex: {
          inlineMath: [['$', '$'], ['\\(', '\\)']],
          displayMath: [['$$', '$$'], ['\\[', '\\]']]
        }
      };
    </script>
    

2025-06-06 Normal Transformation

Why We Use $M^{-1T}$ for Normal Transformation

Chatting with Claude about transforming normals.

The Core Problem

Normals must remain perpendicular to surfaces after transformation. If you just apply the same transformation matrix $M$ to normals, they can lose this perpendicular property.

Visual Example

Let me show you with an example:

Imagine a simple 2D case:

  • You have a horizontal line (surface)
  • The normal to this line points straight up (vertical)
  • Now you apply a transformation that stretches everything horizontally by factor 2, but keeps vertical unchanged

What happens?

  • The line stays horizontal (just gets longer)
  • If you apply the same transformation to the normal, it would still point straight up
  • So far so good…

The Problem with Non-Uniform Scaling

Now imagine you stretch horizontally by 2 and vertically by 0.5:

  • A diagonal line gets transformed
  • If you apply the same transformation to its normal, the normal is no longer perpendicular to the transformed line!

The Mathematical Reason

The key insight is that normals live in what’s called the “dual space.” When you transform geometry, the relationship between a surface and its normal changes in the opposite way.

Think of it like this:

  • If you stretch a surface in one direction, the normal should actually get “compressed” in that direction to maintain perpendicularity
  • This “opposite” transformation is precisely what $M^{-1T}$ provides

Why Specifically $M^{-1T}$?

Here’s the mathematical proof:

  1. A surface can be defined by equation: $\mathbf{n} \cdot \mathbf{p} = d$ (where $\mathbf{n}$ is normal, $\mathbf{p}$ is point on surface)

  2. After transforming points by $M$: $\mathbf{p}' = M\mathbf{p}$

  3. We want the same relationship: $\mathbf{n}' \cdot \mathbf{p}' = d$

  4. Substituting: $\mathbf{n}' \cdot (M\mathbf{p}) = \mathbf{n} \cdot \mathbf{p}$

  5. This means: $(\mathbf{n}'^T M)\mathbf{p} = \mathbf{n}^T \mathbf{p}$

  6. Therefore: $\mathbf{n}'^T M = \mathbf{n}^T$

  7. Taking transpose: $M^T \mathbf{n}' = \mathbf{n}$

  8. Solving for $\mathbf{n}'$: $\mathbf{n}' = (M^T)^{-1} \mathbf{n} = (M^{-1})^T \mathbf{n}$

Intuitive Summary

  • Points transform “forward” with $M$
  • Normals transform “backward” with $M^{-1T}$
  • This ensures they remain perpendicular after transformation
  • It’s like they live in “opposite worlds” that balance each other out

The $M^{-1T}$ transformation essentially “undoes” the distortion that would break the perpendicular relationship.


2025-06-05 BRDF Study

Good intro materials that I’ve studied today

Intro to Steradian/ solid angles (very intuitive)

Youtube Link

Intro to BRDF (watch it before reading the paper below)

Youtube Link

Intro to BRDF(must read)

PDF File


2025-06-04 RTR4 Study

What I did

Read RTR4 chapter 5

What I learned

The reflection vector calculation. TODO: notes screenshot

The shading calculation for Gooch model.

What’s next

Keep reading RTR4 chapter 5, it’s sooo fun!