The jump from OpenGL to Vulkan was pretty rough. The amount of boilerplate required just to get Vulkan running is insane. What surprised me even more was realizing how much abstraction OpenGL was actually handling behind the scenes.
For me, the hardest part was wrapping my head around synchronization in Vulkan. With multi-threading in the mix, it was tough to even know where to start. In traditional apps, I could set breakpoints—even across different threads—and piece together what was going on. But Vulkan’s command submission model, which separates the Host (CPU) and Device (GPU), makes that approach nearly impossible. So how did I deal with all of that?
Debugging
My main debugging tool early on was the validation layers. I expected them to be similar to OpenGL's debug output, but they're actually much better. A single Vulkan validation message is easily ten times more helpful than what OpenGL gave me, where I often had to set multiple breakpoints just to figure out where an error started. A typical message tells you what went wrong, which object is causing the issue, and even the command buffer it was recorded in.
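If you haven't set this up before, enabling validation is just a matter of turning on the VK_LAYER_KHRONOS_validation layer when creating the instance. Here's a minimal sketch (error checking, layer-availability queries, and the debug messenger callback are omitted):

// Minimal sketch: turn on the Khronos validation layer at instance creation.
const char* layers[] = { "VK_LAYER_KHRONOS_validation" };
const char* extensions[] = { VK_EXT_DEBUG_UTILS_EXTENSION_NAME };

VkInstanceCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.enabledLayerCount = 1;
createInfo.ppEnabledLayerNames = layers;
createInfo.enabledExtensionCount = 1;            // debug utils lets you name objects
createInfo.ppEnabledExtensionNames = extensions; // and receive messages via a callback

VkInstance instance;
vkCreateInstance(&createInfo, nullptr, &instance);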
Another essential tool for me has been RenderDoc. Being able to assign debug names to Vulkan buffers makes it so much easier to track resources in RenderDoc’s interface. I’ve lost count of how many times I found out I hadn’t set the right dynamic offset in a buffer just by inspecting a captured frame. There are other great debugging tools like Nsight and PIX, but RenderDoc remains my favorite.
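Those debug names come from the VK_EXT_debug_utils extension. Here's a minimal sketch of labeling a buffer, where instance, device, and lightBuffer are placeholder handles assumed to exist already:

// Minimal sketch: give a VkBuffer a human-readable name for RenderDoc.
// Requires VK_EXT_debug_utils to be enabled on the instance.
auto setObjectName = (PFN_vkSetDebugUtilsObjectNameEXT)
    vkGetInstanceProcAddr(instance, "vkSetDebugUtilsObjectNameEXT");

VkDebugUtilsObjectNameInfoEXT nameInfo{};
nameInfo.sType = VK_STRUCTURE_TYPE_DEBUG_UTILS_OBJECT_NAME_INFO_EXT;
nameInfo.objectType = VK_OBJECT_TYPE_BUFFER;
nameInfo.objectHandle = (uint64_t)lightBuffer; // hypothetical buffer handle
nameInfo.pObjectName = "LightDataBuffer";
setObjectName(device, &nameInfo);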
To make my life easier, I’m not using multi-threading yet, but I plan to add it in the future. For example, I might load materials on a separate thread while waiting for Vulkan to finish its current work.
Pipelines
I really like how Vulkan lets you define your own pipeline. In OpenGL, the pipeline is defined for you, so you don't get to decide which stages it should have or which descriptor sets and bindings it should use.
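To give an idea of how explicit this is, here's a minimal sketch (with made-up names) of declaring a single uniform-buffer binding and baking it into a pipeline layout; in OpenGL, none of this is spelled out by you:

// Minimal sketch: one uniform buffer at set 0, binding 0, visible to the vertex stage.
VkDescriptorSetLayoutBinding binding{};
binding.binding = 0;
binding.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
binding.descriptorCount = 1;
binding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;

VkDescriptorSetLayoutCreateInfo setInfo{};
setInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
setInfo.bindingCount = 1;
setInfo.pBindings = &binding;

VkDescriptorSetLayout setLayout;
vkCreateDescriptorSetLayout(device, &setInfo, nullptr, &setLayout);

// The pipeline layout is what ties the descriptor sets to a pipeline.
VkPipelineLayoutCreateInfo layoutInfo{};
layoutInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.setLayoutCount = 1;
layoutInfo.pSetLayouts = &setLayout;

VkPipelineLayout pipelineLayout;
vkCreatePipelineLayout(device, &layoutInfo, nullptr, &pipelineLayout);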
Another cool feature Vulkan has is specialization constants: constants you can use in the shader whose values are set at pipeline creation. So, for example, if you have a MAX_LIGHTS constant in your shader that you use to size the light buffer, like so:
layout(constant_id = 0) const int MAX_LIGHTS = 8; // default used if the app doesn't override it

layout(set = 1, binding = 0) readonly buffer LightData {
    Light lights[MAX_LIGHTS];
} lightData;
You can declare it as a specialization constant and set its value from your app when you create the pipeline. That way, if you ever decide to change it in the app, you won't have to go and change the shader.
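On the app side, that looks roughly like this (a sketch; fragmentShaderModule is an assumed handle created elsewhere, and constantID must match the constant_id in the shader):

// Minimal sketch: override MAX_LIGHTS (constant_id = 0) at pipeline creation.
const int maxLights = 64;

VkSpecializationMapEntry entry{};
entry.constantID = 0;        // matches layout(constant_id = 0) in the shader
entry.offset = 0;            // where this constant sits in pData
entry.size = sizeof(int);

VkSpecializationInfo specInfo{};
specInfo.mapEntryCount = 1;
specInfo.pMapEntries = &entry;
specInfo.dataSize = sizeof(maxLights);
specInfo.pData = &maxLights;

VkPipelineShaderStageCreateInfo stageInfo{};
stageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
stageInfo.stage = VK_SHADER_STAGE_FRAGMENT_BIT;
stageInfo.module = fragmentShaderModule;   // assumed to be created elsewhere
stageInfo.pName = "main";
stageInfo.pSpecializationInfo = &specInfo; // the compiler folds the value in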
Render pass
To me, the concept of render passes in Vulkan and OpenGL is pretty similar. In both, you work with a framebuffer that has one or more textures attached to it. You render into those images—whether it’s just depth for shadow maps or both depth and color for the final scene. Then, you present the rendered image to the screen. The main difference lies in how each API structures and manages this process. The underlying logic, though, stays the same.
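To make the comparison concrete, here's a minimal sketch of a render pass with one color and one depth attachment, which is more or less the Vulkan spelling of an OpenGL FBO with a color and a depth texture attached (the formats are assumptions, and subpass dependencies are omitted):

// Minimal sketch: one color attachment (presented to the screen) plus depth.
VkAttachmentDescription color{};
color.format = VK_FORMAT_B8G8R8A8_SRGB;           // assumed swapchain format
color.samples = VK_SAMPLE_COUNT_1_BIT;
color.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
color.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
color.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
color.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
color.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
color.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

VkAttachmentDescription depth = color;            // start from the same defaults
depth.format = VK_FORMAT_D32_SFLOAT;
depth.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE; // depth isn't needed afterwards
depth.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

VkAttachmentReference colorRef{0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
VkAttachmentReference depthRef{1, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL};

VkSubpassDescription subpass{};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments = &colorRef;
subpass.pDepthStencilAttachment = &depthRef;

VkAttachmentDescription attachments[] = { color, depth };
VkRenderPassCreateInfo info{};
info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
info.attachmentCount = 2;
info.pAttachments = attachments;
info.subpassCount = 1;
info.pSubpasses = &subpass;

VkRenderPass renderPass;
vkCreateRenderPass(device, &info, nullptr, &renderPass);

For a shadow map you'd drop the color attachment and keep only depth, but the shape of the code stays the same.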
Shaders
When I first started writing shaders, I tried to keep them compatible with both OpenGL and Vulkan. But as the codebase grew, it became clear that maintaining support for both was getting messy. I had to add Vulkan-specific code in several places, and the shaders quickly became hard to manage. Eventually, I decided to drop OpenGL support entirely and focus solely on Vulkan.
One thing I really like about Vulkan is that shaders are compiled into SPIR-V before being used in your application. To streamline that, I wrote a small script to automatically compile all my shaders whenever I make changes.
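For reference, the application side of "use SPIR-V" is pretty small. Here's a simplified sketch (not the exact code from my engine) that loads a .spv file, such as one produced by the SDK's glslc compiler, into a VkShaderModule:

#include <fstream>
#include <vector>
#include <vulkan/vulkan.h>

// Minimal sketch: read a compiled SPIR-V binary and wrap it in a shader module.
VkShaderModule loadShader(VkDevice device, const char* path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    std::vector<char> code(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(code.data(), code.size());

    VkShaderModuleCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = code.size(); // in bytes
    info.pCode = reinterpret_cast<const uint32_t*>(code.data());

    VkShaderModule module;
    vkCreateShaderModule(device, &info, nullptr, &module);
    return module;
}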
Conclusion
Working with both OpenGL and Vulkan made it clear to me how different they really are. While OpenGL hides a lot of complexity, Vulkan exposes everything—which can be overwhelming at first, but also incredibly rewarding. Diving into Vulkan helped me understand just how challenging rendering programming can be, but also how fun it becomes once you start to grasp how it all fits together.