The first thing I wanted to add to my engine was lights. I had implemented lights in OpenGL before, but only in 2D. I had some knowledge of how light works, but implementing it is a different story.
Fortunately there are some great OpenGL articles explaining how lighting works. Understanding and adding Blinn-Phong lighting to my engine was a fun experience. Seeing the light move in the scene and across the object was very rewarding after a couple of days of trying to understand how the math works.
The unfortunate part was Vulkan. Because of how buffers can be split across descriptor sets and bindings, I had to rewrite the shaders and the pipelines multiple times to make everything work. It took me a while to understand how descriptor sets actually work, but once I did, everything was a lot easier to implement.
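As a small illustration of what that involves, here's a minimal sketch of declaring a light buffer as one binding in a descriptor set layout. The binding index, stage flags, and variable names are assumptions for this example, not my actual setup:

// Sketch: one uniform buffer binding for the lights, visible to the fragment stage.
// Binding 0 and the names below are illustrative assumptions.
VkDescriptorSetLayoutBinding lightBinding{};
lightBinding.binding         = 0; // matches layout(set = N, binding = 0) in the shader
lightBinding.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
lightBinding.descriptorCount = 1;
lightBinding.stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;

VkDescriptorSetLayoutCreateInfo layoutInfo{};
layoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.bindingCount = 1;
layoutInfo.pBindings    = &lightBinding;

VkDescriptorSetLayout lightSetLayout;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &lightSetLayout);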
Light System
My lights are managed by a Light System object which is part of the world. Whenever I want a new light in the world I can call the function CreateLight.
The light system is responsible for updating the light buffer in the shaders. I only added support for 3 types of lights:
Directional light
Point light
Spot light
The CommonLightData struct passed to CreateLight is used to define the settings of the light. Each light type has its own struct, but they all inherit from the common one.
Here are the 3 struct definitions:
struct CommonLightData
{
    Color color;
    float intensity;
};

struct PointLightData : public CommonLightData
{
    float range;
};

struct SpotLightData : public CommonLightData
{
    float range;
    float angle;
};
They're only slightly different; the reason I use 3 separate structs is to make it easier to differentiate between light types during light creation.
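To make that concrete, here's a rough sketch of what calling CreateLight could look like. The exact signature (one overload per data struct returning a handle), the LightHandle type, and the world.GetLightSystem() accessor are all assumptions for illustration:

// Sketch only: assumed CreateLight overloads, one per light data struct.
PointLightData pointData;
pointData.color     = Color{1.0f, 1.0f, 0.0f}; // yellow
pointData.intensity = 25.0f;
pointData.range     = 50.0f;
LightHandle point = world.GetLightSystem().CreateLight(pointData);

SpotLightData spotData;
spotData.color     = Color{1.0f, 1.0f, 1.0f};
spotData.intensity = 25.0f;
spotData.range     = 125.0f;
spotData.angle     = 60.0f;
LightHandle spot = world.GetLightSystem().CreateLight(spotData);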
The light instance itself is a bit different. It contains all the data the light needs in a C++-friendly layout like this:
struct LightInstance
{
    glm::vec3 position;
    glm::vec3 eulers;
    glm::vec3 color;
    uint32_t type = 0;
    float intensity = 1.0f;
    float range = 0.0f;
    float angle = 0.0f;
};
The GPU buffer needs to be properly aligned, so I made a GPU-specific layout struct which I fill in from the LightInstance:
struct LightBufferLayout
{
    // w => type
    alignas(16) glm::vec4 position = glm::vec4(0.0f);
    alignas(16) glm::vec4 direction = glm::vec4(0.0f);
    // a => intensity
    alignas(16) glm::vec4 color = glm::vec4(0.0f);
    // x => range (for point/spot), y => inner cone, z => outer cone
    alignas(16) glm::vec4 params = glm::vec4(0.0f);
};
I'm using vec4s so I don't waste any space on padding. It's a bit harder to figure out which component is which, so I added the comments.
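To illustrate the conversion, here's a sketch of a packing function that could turn a LightInstance into the GPU layout. The EulersToDirection helper is hypothetical, and the inner/outer cone mapping from the single angle field is an assumption:

// Sketch: pack a C++-side LightInstance into the 16-byte-aligned GPU layout.
// EulersToDirection is a hypothetical helper turning euler angles into a unit vector.
LightBufferLayout Pack(const LightInstance& light)
{
    LightBufferLayout out{};
    out.position  = glm::vec4(light.position, static_cast<float>(light.type)); // w => type
    out.direction = glm::vec4(EulersToDirection(light.eulers), 0.0f);
    out.color     = glm::vec4(light.color, light.intensity);                   // a => intensity
    out.params.x  = light.range;                                               // range for point/spot
    out.params.y  = glm::cos(glm::radians(light.angle * 0.5f));                // assumed inner cone cosine
    out.params.z  = glm::cos(glm::radians(light.angle));                       // assumed outer cone cosine
    return out;
}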
So with all that data I can now create 3 types of lights in my world. Here's a showcase:
Directional Light
I increased the intensity to 20 so it's very visible.
Point Light
The light is at the base of the model, which is why the intensity is stronger there. The intensity of the light is 25 and its color is yellow.
Spot Light
I set the position of the spot light to (0, 20, 0), the intensity to 25, the range to 125, and the angle to 60.
Shadows
Implementing shadows was pretty complicated with Vulkan. I learned a lot about how framebuffers work and how to use different passes for different calculations. I had to create a separate render pass and render pipeline for the shadows. It runs a depth-only pass over the scene and writes the result to a texture; I then transition the texture to the right layout and use it in the final rendering to display the shadows.
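The layout transition was the part that tripped me up the most, so here's a minimal sketch of the barrier involved, assuming the shadow map was written as a depth attachment and is sampled in the fragment shader afterwards. The shadowDepthImage and cmd handles are illustrative, and a render pass finalLayout can also do this transition implicitly:

// Sketch: transition the shadow depth image from depth-attachment output
// to shader-read-only so the main pass can sample it.
VkImageMemoryBarrier barrier{};
barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.oldLayout           = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image               = shadowDepthImage; // assumed handle to the shadow map
barrier.subresourceRange    = { VK_IMAGE_ASPECT_DEPTH_BIT, 0, 1, 0, 1 };
barrier.srcAccessMask       = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;

vkCmdPipelineBarrier(cmd,
    VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT, // after depth writes finish
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,     // before the sampling fragment shader
    0, 0, nullptr, 0, nullptr, 1, &barrier);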
I added a specific pipeline for debugging shadow passes. It takes the image from the depth pass and renders it on a plane in a smaller viewport. I can press the 1 key to show/hide this window.
One more thing: I only added shadows for directional lights. Point lights and spot lights don't support shadows yet. The reason is that I wanted to learn more about CSM (cascaded shadow mapping), so I started by implementing that.
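For reference, the split scheme commonly used for CSM blends a logarithmic and a uniform distribution of the view frustum. Here's a sketch of that computation; the function name and the lambda default are illustrative, not my exact code:

#include <cmath>
#include <vector>

// Sketch: practical split scheme blending logarithmic and uniform splits.
// lambda = 1.0 is fully logarithmic, 0.0 fully uniform; 0.95 is a common choice.
std::vector<float> ComputeCascadeSplits(float nearPlane, float farPlane,
                                        int cascadeCount, float lambda = 0.95f)
{
    std::vector<float> splits(cascadeCount);
    for (int i = 0; i < cascadeCount; ++i)
    {
        float p        = (i + 1) / static_cast<float>(cascadeCount);
        float logSplit = nearPlane * std::pow(farPlane / nearPlane, p);
        float uniSplit = nearPlane + (farPlane - nearPlane) * p;
        splits[i]      = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits; // far distance of each cascade, in view space
}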
Here's a showcase of the CSM.
The window on the right is the depth texture. There's a small bug I haven't figured out yet: the floor is not displaying in the buffer test when changing the cascade index. I'm still looking into this.