Boux wrote: I found something interesting while running the MSI Afterburner monitoring tool.
The test was done at the location Chris was using (9 light-years from Earth, stars rendering only, 45° FOV).
Here is the graph:
You can see that GPU load starts dropping at Lim Mag 3 and reaches a minimum at Lim Mag 10.
At Lim Mag 12, GPU load starts increasing again, up to a maximum at Lim Mag 15.09.
There is something happening here. It could be the CPU load reaching a peak, with the GPUs starting to wait for data.
The GPU temperature variations are consistent with the load.
This is a CrossFire setup; the actual drop on a single GPU would likely be more in the 50-60% range.
Here's what I think is happening...
* When the limiting magnitude is low (i.e. few stars are drawn), there's not much work for the CPU to do. The GPU is kept busy repeatedly clearing the framebuffer at an extremely high frame rate.
* As the limiting magnitude increases, the CPU has to process more and more stars. The GPU has only slightly more work to do, as the number of stars drawn isn't that large and they only cover a few pixels.
* When the limiting magnitude gets very large, the stars become bright and cover a lot of pixels. This requires a lot of pixel shader processing and graphics memory bandwidth, and the GPU becomes the bottleneck again (see the sketch after this list).
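To make the asymmetry concrete, here's a rough standalone sketch of a CPU-side star submission loop (my own illustration, not Celestia's actual code; `Star`, `submitStars`, and the point-size formula are made up). The CPU touches every catalog star brighter than the limiting magnitude, while the GPU's cost depends mainly on how many pixels each drawn point covers:

```cpp
#include <algorithm>
#include <vector>

struct Star
{
    float x, y, z;    // position
    float magnitude;  // apparent magnitude (larger = fainter)
};

// Hypothetical per-frame submission loop. CPU cost grows with the number
// of stars passing the limiting-magnitude test; GPU fill cost grows with
// the total number of pixels covered, which stays small until bright,
// multi-pixel stars appear at high limiting magnitudes.
void submitStars(const std::vector<Star>& catalog, float limMag)
{
    for (const Star& s : catalog)
    {
        if (s.magnitude > limMag)
            continue;  // CPU-side cull: one test per catalog star

        // Brighter stars (relative to the limit) get larger points; the
        // GPU's fill cost is roughly pointSize^2 pixels per star.
        float pointSize = std::max(1.0f, 0.5f * (limMag - s.magnitude));
        // drawPoint(s.x, s.y, s.z, pointSize);  // submit one GL point
    }
}
```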
It's important not to get too caught up in performance right now; correctness is more important at this stage. Also, the new star rendering is already faster except when very large stars are rendered.

With the new code, more work is offloaded from the CPU to the GPU. This is desirable because the shaders always run in parallel on GPU cores, and the typical GPU has a lot more cores than a CPU. It's not too hard to modify the code so that the CPU is doing even less work. Another very useful optimization would be switching the star rendering to use vertex buffers, so that the CPU and GPU can better operate in parallel with each other (a rough sketch follows below).
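For the vertex buffer idea, here's a minimal sketch using plain OpenGL (not Celestia's actual renderer; `StarVertex`, `createStarBuffer`, and `drawStars` are names I've invented for illustration, and it assumes an OpenGL 1.5+ context where the buffer object entry points are available). The star data is uploaded to GPU memory once; each frame the CPU issues a single draw call and can then move on while the GPU fetches the vertices itself:

```cpp
#include <GL/gl.h>
#include <cstddef>
#include <vector>

struct StarVertex
{
    float x, y, z;             // position
    unsigned char r, g, b, a;  // color encodes star brightness/tint
};

GLuint createStarBuffer(const std::vector<StarVertex>& stars)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW: the star data is written once and drawn many times.
    glBufferData(GL_ARRAY_BUFFER,
                 stars.size() * sizeof(StarVertex),
                 stars.data(),
                 GL_STATIC_DRAW);
    return vbo;
}

void drawStars(GLuint vbo, GLsizei starCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(StarVertex), (void*)0);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(StarVertex),
                   (void*)offsetof(StarVertex, r));
    // One call submits every star; the driver can pipeline this while
    // the CPU starts preparing the next frame.
    glDrawArrays(GL_POINTS, 0, starCount);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Varying the point size per star on top of this would need a vertex shader that writes gl_PointSize, which is exactly the kind of work that moves from the CPU to the GPU.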