emil2424

Member
  • Posts

    7
  • Joined

  • Last visited

Awards

This user doesn't have any awards


  1. So if I use only one drive it will run at full speed, and if I use both at the same time they will share a total of 32 Gbit/s.
  2. I have a Z390 AORUS ELITE, so two NVMe slots. I'm currently using a Samsung 970 PRO at x4 and would like to add a second x4 NVMe drive. Will the lanes be split into x2 + x2 after connecting the second disk, which would cut both drives' speeds in half? (See the first sketch after this list.)
  3. Insert the GPU into the x16 slot. Make sure the card is in PCIe 3.0 x16 mode. Do not use ultra high settings. LINK
  4. Did you put the GPU in the first slot from the top, the x16 one? Have you changed any individual settings in the game, e.g. supersampling or fog? If I remember correctly, the clouds were also terribly heavy on the GPU. What about other games, any problems?
  5. This is a game engine bottleneck, not a CPU or GPU bottleneck. Doom 2016 is probably the worst title you could have chosen to check your GPU usage. Its engine will not generate more than 200 FPS, so part of the GPU (52%) produced those 200 FPS and the remaining computing power (48%) sat waiting for further commands (see the rough arithmetic after this list). I had a 6700K and a 1080 Ti, which is almost the same setup as yours, and I can tell you that you will have problems in games where 8 threads are not enough, mainly Ubisoft titles like The Division 2, Assassin's Creed Origins/Odyssey and a few others. Overclock the CPU and try to shift the rendering load onto the GPU by increasing the level of graphic detail in games. If you are open to extreme solutions and are not afraid of damaging the motherboard and exposing yourself to potentially unproven software, you can use a modified BIOS and unlock the possibility of installing an 8700K on an unsupported platform.
  6. Is it possible to determine how much PCIe bandwidth the most powerful GPUs like the RTX 2080 Ti need? PCIe 3.0 x8 = 7.88 GB/s, PCIe 3.0 x16 = 15.75 GB/s, PCIe 4.0 x8 = 15.75 GB/s, PCIe 4.0 x16 = 31.51 GB/s (these figures are reproduced in the last sketch after this list). For example, based on this chart: can it be said that since the RTX 2080 Ti achieves the same result in x8 and x16 mode, the bandwidth of 7.88 GB/s is sufficient and therefore the lifetime of PCIe 3.0 x16 is not at risk? I understand that there is backward compatibility, but does your crystal ball tell you that a bottleneck may occur on an RTX 3080 Ti due to the lack of PCIe 4.0 on Intel's top consumer platforms? I'm asking because I'm wondering whether my 9900K with Z390 (PCIe 3.0 x16 = 15.75 GB/s) will support the fastest top consumer GPU with the Ampere architecture, like a 3080 Ti. I use the PC to play at 3440x1440 @ 100 Hz, no professional applications.
  7. Will the new HDMI 2.1 standard require new hardware, or can I expect that a TV from 2019 will get software updates and be able to use this standard with the right cable? Is it even possible to upgrade version 2.0 to 2.1 in software? I ask because I read on several Samsung-related pages that their 2019 QLEDs will get this functionality through software. I need to buy a new TV now and my choice fell on the Samsung Q60R 55".
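For the two NVMe posts (items 1 and 2), here is a minimal back-of-the-envelope sketch in Python. It assumes, beyond what the posts state, that on a Z390 board both M.2 slots are wired to the chipset with their own PCIe 3.0 x4 links, so neither drive is forced down to x2; instead they share the chipset's DMI 3.0 x4 uplink (roughly the 32 Gbit/s mentioned in item 1). The drive throughput figure is an approximate spec-sheet value, not a measurement.

```python
# Back-of-the-envelope sketch for items 1-2. Assumptions: both Z390 M.2 slots are
# chipset-attached PCIe 3.0 x4 and share one DMI 3.0 x4 uplink; the drive speed is
# an approximate 970 PRO sequential-read figure.

def pcie3_gbps(lanes: int) -> float:
    """Usable PCIe 3.0 bandwidth in GB/s: 8 GT/s per lane with 128b/130b encoding."""
    return 8.0 * (128 / 130) * lanes / 8

per_slot_link = pcie3_gbps(4)   # each M.2 slot still negotiates x4 (~3.94 GB/s)
dmi_uplink = pcie3_gbps(4)      # ~3.94 GB/s shared by everything behind the chipset
drive_read = 3.5                # GB/s, approximate 970 PRO sequential read

print(f"Per-slot link:      {per_slot_link:.2f} GB/s (no split to x2 + x2)")
print(f"Shared DMI uplink:  {dmi_uplink:.2f} GB/s")
print(f"One drive active:   limited by the drive itself (~{drive_read} GB/s)")
print(f"Both drives active: combined ceiling ~= {dmi_uplink:.2f} GB/s")
```

Under these assumptions neither slot drops to x2; the shared ceiling only shows up when both drives transfer at full speed at the same time.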
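The 52% GPU usage point in item 5 can be illustrated with one line of arithmetic. Treating GPU load as roughly proportional to frame rate is a simplification for illustration only.

```python
# Rough illustration of the claim in item 5: at the 200 FPS engine cap the GPU only
# needs about 52% of its capacity, so the remaining ~48% simply waits.
# Assumes GPU load scales roughly linearly with frame rate (a simplification).

fps_cap = 200        # Doom 2016 engine frame-rate limit
gpu_usage = 0.52     # observed GPU utilisation while at that cap

estimated_uncapped_fps = fps_cap / gpu_usage
print(f"Uncapped estimate: ~{estimated_uncapped_fps:.0f} FPS")   # ~385 FPS
```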
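The bandwidth figures quoted in item 6 can be reproduced from the published per-lane rates and the 128b/130b encoding overhead; the helper name below is just for illustration.

```python
# Reproduces the figures quoted in item 6 from the published PCIe specs:
# PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding.

GENERATIONS = {
    "PCIe 3.0": 8.0,    # GT/s per lane
    "PCIe 4.0": 16.0,   # GT/s per lane
}
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding

def link_bandwidth_gbps(raw_gt_s: float, lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a per-lane rate and lane count."""
    return raw_gt_s * ENCODING_EFFICIENCY * lanes / 8

for gen, rate in GENERATIONS.items():
    for lanes in (8, 16):
        print(f"{gen} x{lanes}: {link_bandwidth_gbps(rate, lanes):.2f} GB/s")
# PCIe 3.0 x8 = 7.88, x16 = 15.75; PCIe 4.0 x8 = 15.75, x16 = 31.51 GB/s
```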