This feels like the answer to a problem I’ve had all week.
I’ve been using PixiJS (WebGL) for a project, and the Chrome profiler says the GPU is in use for almost 100% of every frame. And nvtop says the GPU is 35% utilized by that (well-isolated) Chrome tab. But it’s a single quad being rendered at 60fps on an RTX 4060, so that’s obviously misleading… I hope.
> Chrome profiler says the GPU is in use almost 100% of every frame. And nvtop says the GPU is 35% utilized by that (well isolated) chrome tab
Is that a problem? That the GPU is used for 100% of the frame generation is OK, is it not? And 35% utilization sounds maybe a tad high for a simple web visualization, but for a full game it sounds normal.
Maybe I misunderstand what you see as the problem here?
Because it’s more like 1%. The GPU doesn’t even warm up like it does when playing Half-Life 1 with VSYNC on. There’s some measurement error going on in what it considers “utilized”.
The trick we use at work is to note the instantaneous power consumed: the more power it’s drawing, the more compute it’s doing. As far as macro indicators go, I’m not sure you can do better than that without profiling.
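If you want that power signal programmatically rather than eyeballing a dashboard, one option is to sample `nvidia-smi`’s power readout and average it. A minimal sketch, assuming an NVIDIA GPU and the CSV output of `nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits` polled in a loop (e.g. with `-lms 100`); the sample values below are made-up illustrations, not real measurements:

```python
# Average instantaneous power draw from nvidia-smi CSV samples.
# Assumes the output of:
#   nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits
# i.e. one power reading in watts per line.

def average_power_watts(csv_lines: str) -> float:
    """Parse one power.draw reading (watts) per line and return the mean."""
    readings = [float(line) for line in csv_lines.splitlines() if line.strip()]
    return sum(readings) / len(readings)

# Hypothetical sample output from four polls:
samples = "34.12\n35.07\n36.40\n35.01\n"
print(f"avg draw: {average_power_watts(samples):.2f} W")  # → avg draw: 35.15 W
```

Comparing that average against the card’s TDP gives a rough “how busy is it really” figure that sidesteps the utilization-counter ambiguity.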
You can saturate the entire power budget on pretty much any GPU just by moving data in and out of HBM. No compute is needed at all to do this, and bandwidth-bound workloads are extremely common in the scientific computing space.
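A quick roofline-style calculation shows why: a kernel is bandwidth-bound whenever its arithmetic intensity (FLOPs per byte moved) falls below the hardware’s ridge point (peak FLOP/s divided by peak memory bandwidth). A sketch with illustrative peak numbers in roughly A100 territory (the peak figures and the SAXPY traffic model are assumptions for the example, not measurements of any specific card):

```python
# Roofline ridge point: below this arithmetic intensity, memory bandwidth,
# not compute, limits throughput.
PEAK_FLOPS = 19.5e12   # assumed peak FP32 throughput, FLOP/s
PEAK_BW = 1.555e12     # assumed peak HBM bandwidth, bytes/s

ridge = PEAK_FLOPS / PEAK_BW  # FLOP per byte of memory traffic

# SAXPY (y = a*x + y, fp32): 2 FLOPs per element, 12 bytes of traffic
# (read x, read y, write y; 4 bytes each).
saxpy_intensity = 2 / 12

print(f"ridge point: {ridge:.1f} FLOP/byte")      # ~12.5
print(f"SAXPY intensity: {saxpy_intensity:.3f} FLOP/byte")  # ~0.167
```

At ~0.17 FLOP/byte against a ridge of ~12.5, SAXPY spends essentially all its time waiting on memory, so the card can sit near its power budget while doing a tiny fraction of its peak compute.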