On the other hand, the 2500k, when paired with the 1070, gives 85-90fps, and it's not even maxed out, since I have it running at only 4.3GHz, the same clock as my 8600k with multi-core enhancement. On the 8600k, however, this hindrance rarely appears, thanks to its PCIe 3.0 bus and overall faster performance. My guess is that the combination of the 970's 3.5GB of VRAM and the 2500k's PCIe 2.0 bus forces the system into many delays sending data back and forth over the bus.
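To put that guess in rough numbers (these are my own back-of-the-envelope figures, using theoretical x16 bus bandwidths, not anything measured from these systems):

```python
# Rough illustration: time to shuffle a chunk of texture data over the
# PCIe bus when the game's working set spills past VRAM.
PCIE2_X16_GBPS = 8.0    # theoretical PCIe 2.0 x16 bandwidth, GB/s
PCIE3_X16_GBPS = 15.75  # theoretical PCIe 3.0 x16 bandwidth, GB/s

def transfer_ms(size_gb, bus_gbps):
    """Milliseconds to move size_gb across a bus at bus_gbps."""
    return size_gb / bus_gbps * 1000.0

# Hypothetical 0.5GB spilling past the 970's fast 3.5GB segment:
print(round(transfer_ms(0.5, PCIE2_X16_GBPS), 1))  # 62.5 ms on PCIe 2.0
print(round(transfer_ms(0.5, PCIE3_X16_GBPS), 1))  # 31.7 ms on PCIe 3.0
```

At 85fps the frame budget is under 12ms, so even a fraction of a transfer like that showing up mid-frame would read as a visible hitch, and PCIe 3.0 roughly halves the damage.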
The 2500k also performs much better with the 1070 than it did with the 970, meaning the system wasn't severely CPU-limited before. The 970 now performs quite a bit better with the 8600k, reaching ~75fps with fewer drops.
Some articles go into more depth like this but not too many; Digital Foundry sometimes gets some good stuff too, especially if the developers are around to answer the tech questions directly, but that's not always possible.

Following my 2500k+970+1070 test videos above, I did two new ones if anyone cares. (3840x2160, being 4x 1920x1080, renders a lot more per frame from the pixel count alone, plus all the extra work the GPU has to do for that.)

EDIT: Although again, someone would have to capture a frame the game renders and break down each step that goes on there: how it works, whether it's optimized, what issues there might be, and then the additions over the console version and all that.
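The pixel-count aside checks out; a quick sanity calculation (my own, just restating the arithmetic):

```python
# Pixels rendered per frame at common output resolutions.
def pixels(width, height):
    return width * height

p1080 = pixels(1920, 1080)  # 2,073,600 pixels
p2160 = pixels(3840, 2160)  # 8,294,400 pixels

# 4K shades exactly 4x the pixels of 1080p every frame.
print(p2160 / p1080)  # 4.0
```

That factor of 4 only covers the per-pixel work; bandwidth and memory pressure grow alongside it, which is why 4K results don't simply divide 1080p framerates by four.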
Someone would have to break it down more and do a deeper analysis. What some of the newer game engines are actually doing can get pretty clever and pretty complex: how they handle all this data of various sorts (shaders, geometry, lighting, the various passes), what gets drawn and when, all the ways this can completely break or need adjusting to minimize the performance cost or maximize the GPU workload, plus recommended practices, including many from NVIDIA and AMD, on how to best utilize their GPUs' strengths and APIs such as D3D10, 11, and 12, and now Vulkan.

But yeah, the settings probably push the console values pretty far, and that's already going to tax GPU and CPU performance by extending draw distances and shadow resolution, and that's without going over 1920x1080. Then there's how the game handles even that, such as upscaling, checkerboard, or just native support, and then higher resolutions via downsampling or other methods of pushing into 2560, 3200, or 3840 for the PS4 Pro and Xbox One X models. While it's probably optimized, the game's use of vegetation and alpha transparency, and perhaps some attempt at sorting, will also really affect performance as the display resolution increases. Horizon Zero Dawn on console uses some pretty heavy culling for anything out of the camera, and probably a good system for masking or minimizing pop-in and LOD transitions too.

(There are a lot of fun little ways to really break input handling and cause all sorts of performance issues, it seems. A mouse poll rate above default can affect a number of titles, for one example, and then there are gamepads/controllers and the APIs for handling those.)
Then there's overhead from the OS, other programs, and the drivers, plus bugs such as the Steam API and input issues this particular game had. The game has patched some of that by now, it seems, and Valve also has a beta client update where they apparently optimized the CPU calls their input code was making. For whatever reason it took this game hitting those problems before Valve optimized things on their end, but eh, it got improved at least.
Sometimes the console settings are below the lowest available PC settings. Scaling also isn't always linear, and a few games scale up other effects with resolution, like running shaders at 1/2 or 1/4 of the output resolution, so effects like ambient occlusion or depth of field have a significantly different performance profile above 1920x1080, for example.
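That fractional-resolution point is easy to see with numbers (a sketch of my own, assuming the scale factor applies per axis, which is the common setup):

```python
# Pixel cost of a screen-space effect rendered at a fraction of the
# output resolution. The scale applies per axis, so 1/2-res halves
# both width and height, i.e. 1/4 of the pixels.
def effect_pixels(width, height, scale):
    return int(width * scale) * int(height * scale)

full_1080 = effect_pixels(1920, 1080, 1.0)  # 2,073,600 pixels
half_2160 = effect_pixels(3840, 2160, 0.5)  # 2,073,600 pixels

# A half-res effect at 4K costs the same pixels as full-res at 1080p.
print(half_2160 == full_1080)  # True
```

So an ambient occlusion or depth of field pass that was cheap at 1080p quietly grows 4x when you move to 4K, even at the same quality setting, which is exactly why the performance profile shifts above 1920x1080.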