Hitman Update 1.1.0 - DirectX 12 Internal Benchmark Results

 

Hitman was recently patched and upgraded to 1.1.0. There’s some controversy around this particular update, but the entire title has been controversial since Square Enix announced it would release Hitman in an episodic fashion. AMD has also released its latest drivers [16.4.1]. We already know that AMD worked closely with IO Interactive to implement “Async Compute,” something AMD hardware has supported since the 7000 series. GCN first launched back in 2011/2012, so as usual AMD was thinking ahead of its time. Async compute basically allows AMD GPUs to perform more work in parallel rather than the serial submission model we’ve been seeing in DX9\10\11 with a single main thread.

Other than a few special developers who actually built their engines around multi-core support, most games use only a single thread. Combine that single-threaded usage with the DX11 draw call limitation and you’ll easily reach a bottleneck. Modern CPUs are much faster, but GPUs are still more powerful and quicker, so the GPU spends a lot of its time “waiting” on the CPU to send data. If that data is being sent in a serial manner, then powerful GPUs are wasting energy and performance. If the CPU sends data constantly in a parallel manner, then you can have concurrent operations executing on the GPU as well. This ensures that the GPU always has something to process. This is the future of computing in general. CPUs are getting close to a point where concurrent operations and parallel programming must take advantage of the multi-core support we have had for many years now. Believe it or not, quad-core CPUs aren’t being fully utilized in a ton of apps.
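To make the serial-vs-parallel idea concrete, here’s a minimal sketch — plain Python, not real D3D12 code, and all the names are illustrative — of the DX12-style model: several CPU threads each record their own command list and feed a single GPU-facing queue, instead of one main thread doing all the recording and starving the GPU.

```python
import queue
import threading

def record_command_list(scene_chunk):
    """Stand-in for CPU-side draw-call recording (illustrative only)."""
    return [f"draw({obj})" for obj in scene_chunk]

def submit_parallel(scene, num_threads=4):
    """Record command lists on several CPU threads, then submit them all.

    Mirrors the DX12 idea: each thread builds its own command list, so
    the GPU queue is fed without a single-thread recording bottleneck.
    """
    chunks = [scene[i::num_threads] for i in range(num_threads)]
    gpu_queue = queue.Queue()

    def worker(chunk):
        gpu_queue.put(record_command_list(chunk))

    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Drain the queue, as a stand-in for the GPU executing the lists.
    submitted = []
    while not gpu_queue.empty():
        submitted.extend(gpu_queue.get())
    return submitted

scene = list(range(1000))
cmds = submit_parallel(scene)
print(len(cmds))  # every draw call reaches the queue, recorded in parallel
```

The point of the sketch is only the shape of the workflow: recording is spread across cores, and the queue never depends on one thread finishing everything first.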

You are probably wondering what the point is. The point is that AMD hardware is capable of running games better when using an asynchronous workflow. The other point is that a lot of PC gamers need to understand what DX12 and Vulkan are providing for the PC gaming community and for innovation. This is something DX12 and Vulkan provide, thanks heavily to AMD’s Mantle technology.

Update Patch 1.1.0 - Broken

  The results I originally posted came from the internal benchmarks that the devs supplied. I have since run my own Real Time Benchmarks™ and it appears that the results aren't accurate. For example, I checked the data and although the cores are working, it appears that the settings aren't being applied properly. My vRAM usage @ 4K was only 3129MB. As we all know, at 4K the 4GB vRAM limit can throttle performance. As far as I can tell, after setting and saving the graphical options there appears to be no difference in the "actual" settings. SMAA appears to be the only working option. I'll update the results once the devs address this issue and release another patch.

  Even though I'll need to benchmark everything again, I can still use the current data. Since all of the graphical settings are stuck at the same values, I can use this to see how well the Fury X performs with the DX11 draw call limitation removed.

 

Patch 1.1.0 - DX11 - 4GHz 1920x1080 | Patch 1.1.0 - DX12 - 4GHz 1920x1080 | Patch 1.1.0 Performance % Increase
10278 frames      | 11293 frames      | 10% [9.87%]
89.40fps Average  | 99.11fps Average  | 11% [10.86%]
10.87fps Min      | 14.86fps Min      | 37% [36.70%]
284.33fps Max     | 463.00fps Max     | 63% [62.83%]
11.19ms Average   | 10.09ms Average   | 10% [9.83%]
3.52ms Min        | 2.16ms Min        | 39% [38.63%]
92.00ms Max       | 67.29ms Max       | 27% [26.85%]
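The percentage column above is just the relative change between the two runs: for frames and fps it's (new − old) / old, and for frame times, where lower is better, it's (old − new) / old. A quick check in Python:

```python
def pct_gain(old, new):
    """Relative increase for metrics where higher is better (frames, fps)."""
    return (new - old) / old * 100

def pct_drop(old, new):
    """Relative decrease for metrics where lower is better (frame times)."""
    return (old - new) / old * 100

print(pct_gain(10278, 11293))  # total frames: table lists 10% [9.87%]
print(pct_gain(89.40, 99.11))  # average fps:  table lists 11% [10.86%]
print(pct_drop(11.19, 10.09))  # average frame time: table lists 10% [9.83%]
```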

 

  Although you can't really rely on the max and min results in this benchmark, this patch does allow for "apples to apples" comparisons of the graphics. The FPS averages I recorded during my Real Time Benchmarks™ matched the internal benchmark from my previous test, so they are accurate. With the CPU performing more work you can clearly see the benefits: DX12 definitely improves performance. With my PC running 4GHz + DDR3-1400MHz we see a decent frame rate average increase of 11%. Now if the devs can push out a quick fix I can start benchmarking again. Then we can see if the patches and AMD driver updates will indeed increase performance.

  When I overclocked my CPU even further to 4.8GHz + DDR3-2095MHz I was able to pull 126fps @ 1080p + DX12. That puts my FPS average 27.15% over my 4GHz + DDR3-1400MHz overclock, so DX12 appears to be working fine in this game as far as removing the CPU limitation goes. Here's a chart with the DX12 4GHz vs DX12 4.8GHz performance differences.

 

Patch 1.1.0 - DX12 - 4GHz 1920x1080 | Patch 1.1.0 - DX12 - 4.8GHz 1920x1080 | 4.8GHz DX12 Performance % Increase
11293 frames      | 14381 frames      | 27% [27.34%]
99.11fps Average  | 126.02fps Average | 27% [27.15%]
14.86fps Min      | 17.04fps Min      | 15% [14.67%]
463.00fps Max     | 671.75fps Max     | 45% [45.08%]
10.09ms Average   | 7.94ms Average    | 21% [21.30%]
2.16ms Min        | 1.49ms Min        | 31% [31.01%]
67.29ms Max       | 58.69ms Max       | 12% [12.78%]
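The fps and frame-time rows in these tables are two views of the same data: the average frame time in milliseconds is just 1000 divided by the average fps. Verifying against the averages above:

```python
def avg_frame_time_ms(avg_fps):
    # 1000 ms per second divided by frames per second gives ms per frame.
    return 1000.0 / avg_fps

for fps in (89.40, 99.11, 126.02):
    # Matches the 11.19ms / 10.09ms / 7.94ms averages in the tables.
    print(f"{fps:6.2f} fps -> {avg_frame_time_ms(fps):.2f} ms")
```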

 

  I'm sure gamers using newer AMD and Intel CPUs/architectures will benefit even more, unless they hit the GPU limit. Now we just have to wait for IO Interactive to sort out the graphics settings issues.

 

Update Patch 1.1.0 - Fixed

  The developers have resolved the graphical problems as well as other issues in the game. I can now benchmark the game with 100% maxed settings properly. I've run my benchmarks, and the only FPS average difference I can see is the 4K average FPS. That isn't bad news at all given the amazing increase: the overall FPS average went up by 22%! I am running AMD's latest drivers as well [Crimson 16.4.2 Hotfix].

Apples to Apples

Day 1 - DX12 - 4.6GHz 3840x2160 [4K] | Patch 1.1.0 [Fix] - DX12 - 4.6GHz 3840x2160 [4K] | 4.6GHz DX12 Performance % Increase
4081 frames       | 5020 frames       | 23% [23.01%]
36.02fps Average  | 43.82fps Average  | 22% [21.65%]
5.24fps Min       | 10.02fps Min      | 92% [91.22%]
65.61fps Max      | 532.19fps Max     | 711% [711.14%]
27.76ms Average   | 22.82ms Average   | 18% [17.79%]
15.24ms Min       | 1.88ms Min        | 88% [87.66%]
190.95ms Max      | 99.82ms Max       | 48% [47.72%]

 

Day 1 - 4.6GHz vs Patch 1.1.0 [Fix] - 4.8GHz

Day 1 - DX12 - 4.6GHz 3840x2160 [4K] | Patch 1.1.0 [Fix] - DX12 - 4.8GHz 3840x2160 [4K] | 4.8GHz DX12 Performance % Increase
4081 frames       | 4975 frames       | 22% [21.90%]
36.02fps Average  | 43.43fps Average  | 20% [20.57%]
5.24fps Min       | 11.47fps Min      | 119% [118.89%]
65.61fps Max      | 545.76fps Max     | 732% [731.82%]
27.76ms Average   | 23.02ms Average   | 17% [17.07%]
15.24ms Min       | 1.83ms Min        | 88% [87.99%]
190.95ms Max      | 87.21ms Max       | 54% [54.32%]
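One sanity check across all of these runs, using only the numbers from the tables above: total frames divided by average fps should give the benchmark's wall-clock length, and every run works out to roughly 114 seconds. That consistency suggests the internal benchmark itself runs the same sequence regardless of patch, API, or clock speed.

```python
# (total frames, average fps) taken straight from the tables above.
runs = {
    "DX11 4GHz 1080p":       (10278, 89.40),
    "DX12 4GHz 1080p":       (11293, 99.11),
    "DX12 4.8GHz 1080p":     (14381, 126.02),
    "Day 1 DX12 4.6GHz 4K":  (4081, 36.02),
    "Fix DX12 4.6GHz 4K":    (5020, 43.82),
    "Fix DX12 4.8GHz 4K":    (4975, 43.43),
}
for name, (frames, avg_fps) in runs.items():
    # frames / (frames per second) = seconds of benchmark run time
    print(f"{name}: {frames / avg_fps:.1f} s")
```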

 

  The tighter RAM timings with 4.6GHz give me a slightly better FPS average, but the higher memory frequency shows better max and min FPS. The actual difference is so minor that it doesn't really matter all that much. The 4K performance increase is all that matters in this case. Great work IO Interactive and AMD!