Reputation Activity
-
agent_x007 got a reaction from ProRules in Does x16 SLI actually matter? on a Z270 cpu
Actually, the Z170 chipset has more PCI-e lanes (20) than any LGA1151 CPU (16), while Z270 has 24 PCI-e lanes.
Z97 has 8 lanes... of the PCI-e 2.0 variety
-
agent_x007 got a reaction from David89 in Scrapyard Wars Season 5 *Updated w/ Ep4 FINALE*
My comment on the IDE adapter adventure :
Doing old stuff can be challenging
Maybe the next Scrapyard Wars theme should be "retro" (instead of RGB) ?
For example :
1) Build the fastest PC with a functioning AGP slot on the MB (AGP + PCI-e is "OK", as long as AGP works).
2) Build the best PC using the LGA 1156/1366* or AM3(+) platform (*1366 is almost 10 years old + I wish Linus/Luke had the time to get that server stuff to work )
I would love to see something like this in the future...
Other :
Extend the time for any Scrapyard Wars to a whole week (7 days, that's 5 working days).
Filming all of that could be hard, and getting a second or even a third editor to put things together may be a must in this case.
BUT this would enable shipping options (i.e. no need to fly anywhere to get the best parts... if you are fast enough in the first few days).
-
agent_x007 got a reaction from TOMPPIX in can someone explain this?
RX GPU's transistors : <2GHz clock.
Broadwell CPU's transistors : 4,5GHz+ clock.
Because of that, you can't compare them directly.
-
agent_x007 got a reaction from GuruMeditationError in What's the fastest M.2 SDD that an ASUS Z97-C will support?
OP : The second PCI-e slot is 2.0 x4 (so you should see the numbers I had with PCI-e 2.0).
BUT it's shared with the PCI-e x1 ports.
So to get all x4 lanes on the second PCIe x16 slot, you have to disable both PCI-e x1 ports first (and you won't be able to use them alongside a PCI-e x4 M.2 adapter).
The setting for x4 mode is in the UEFI (under Onboard Devices Configuration, the "PCI-EX16_2 slot (black) Bandwidth [x2 Mode]" option).
PS. Samsung Pro uses MLC NAND, Samsung EVO uses TLC NAND.
-
agent_x007 got a reaction from GuruMeditationError in What's the fastest M.2 SDD that an ASUS Z97-C will support?
From what I read in the manual, there are no PCI to PCI-e sharing warnings.
So, you should be good with a PCI sound card
I recommend using the last PCI slot for it (as far as possible from the GPU and the NVMe adapter).
-
agent_x007 got a reaction from GuruMeditationError in What's the fastest M.2 SDD that an ASUS Z97-C will support?
Sound cards don't get old that quickly.
So, maybe in the future you will have enough PCI-e lanes to use the M.2 at x4 and a PCI-e x1 sound card
What image ?
-
agent_x007 got a reaction from GuruMeditationError in What's the fastest M.2 SDD that an ASUS Z97-C will support?
It's actually more about distance from any heat sources (since higher-end sound card parts can also generate heat).
Heating up the M.2 drive (or being heated by it), or blocking GPU airflow, isn't good for longevity.
-
agent_x007 got a reaction from Technomancer__ in Redundant CPU Cores
No, he didn't.
He simply set the maximum number of THREADS the OS can use.
(a "Processor" in msconfig = a Thread on the actual CPU).
Now, if he wanted to put a Phenom II hex-core in there, that CPU would always be seen as a quad-core (even in CPU-z).
To re-enable the other two cores, he would have to uncheck "Number of processors" in msconfig's advanced boot options and restart the PC.
THIS OPTION DOES NOT UNLOCK HIDDEN CPU CORES !!!
To put it another way :
That's why you should ALWAYS read what a thing does.
In this case it ALWAYS CAPS (as in LIMITS) the number of threads you can have, while Maximum memory limits the RAM capacity the OS can use.
Enabling either of them on a normal, day-to-day PC is stupid.
Those options are mostly for virtualisation (to keep VMs from hogging too many resources from the host system).
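To see what that "Processors" count actually corresponds to, here's a minimal sketch (assuming the third-party psutil package is installed; os.cpu_count() on its own only reports logical processors):
```python
# Minimal sketch: the count msconfig caps is the number of LOGICAL
# processors (threads) the OS sees, not the number of physical cores.
# Assumes the third-party psutil package is installed.
import os
import psutil

logical = os.cpu_count()                    # logical processors (threads) visible to the OS
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"Logical processors (threads): {logical}")
print(f"Physical cores:               {physical}")
# On a 4-core/8-thread CPU booted with msconfig capped to 4 "processors",
# the logical count reported here would drop to 4 - the cores aren't gone,
# the OS just isn't scheduling on them.
```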
-
agent_x007 got a reaction from Gaussen in Some help understanding M.2 and NVME
NVMe support is independent of the MB's controllers/functions.
What counts is an NVMe-aware UEFI (or EFI booting), with a GPT-boot-capable OS.
Nothing else is needed from the MB's point of view.
You may need drivers for your NVMe drive, but Windows 8 and newer have a generic one built in.
Windows 7 SP1 has an update with it, which can be baked into the installation - so no problems there.
As for M.2 standard and what can be used with it :
SATA or PCIe = Bus used
AHCI or NVMe = Interface used
You can have M.2 drives that support combinations of them :
1) SATA/AHCI
2) PCIe/AHCI
3) PCIe/NVMe
4) PCIe/AHCI + PCIe/NVMe
^This corresponds to the notches on the drive itself :
M.2 "Key B" = SATA/AHCI
M.2 "Key B+M" = SATA*/PCI-e + AHCI/NVMe* (depending on drive)
M.2 "Key M" = PCIe + AHCI/NVMe (depending on drive).
A "Key M" drive is usually PCI-e x4.
A "Key M + B" drives are usually PCI-e x2 (but they can use SATA port).
*There are no SATA + NVMe "combo" drives (that would be stupid/impossible).
Side note :
Intel Optane drives are the only ones with a "Key B+M" configuration that can't work on a SATA port or in PCI-e AHCI mode.
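If it helps, the same key/bus/interface combinations can be written as a small lookup table; this is just an illustrative Python sketch mirroring the list above, not an exhaustive spec or product list.
```python
# Illustrative lookup table of M.2 keying vs. bus/interface combinations,
# mirroring the list above (not an exhaustive spec or product list).
M2_KEYS = {
    "B":   {"bus": ("SATA",),        "interface": ("AHCI",),        "note": "usually SATA-only"},
    "B+M": {"bus": ("SATA", "PCIe"), "interface": ("AHCI", "NVMe"), "note": "usually PCIe x2, or SATA (depending on drive)"},
    "M":   {"bus": ("PCIe",),        "interface": ("AHCI", "NVMe"), "note": "usually PCIe x4 (depending on drive)"},
}

for key, info in M2_KEYS.items():
    buses = "/".join(info["bus"])
    ifaces = "/".join(info["interface"])
    print(f'Key {key}: bus {buses}, interface {ifaces} ({info["note"]})')
```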
-
agent_x007 got a reaction from nicklmg in 10 YEARS of NVIDIA Video Cards Tested
I like how the Avg. FPS jumps get smaller in HL2/Bioshock/Unigine Heaven after the GTX 780 Ti
Also ~700 FPS Avg. in Half-Life 2
-
agent_x007 got a reaction from Net3 in M.2 NVMe - M.2 slot & PCIe
OK.
If Win 10 is also the OS you want to use, you shouldn't have any problems installing the OS on an NVMe drive.
-
agent_x007 reacted to WereCat in Why 1080p needs more CPU than 4k
It is not that 1080p needs more CPU, it is that the more FPS you have, the more load you put on the CPU, and at 1080p you get way more FPS than at 4K. (Yea, this is not really an exact explanation but it is close enough).
-
agent_x007 got a reaction from WereCat in Why 1080p needs more CPU than 4k
The CPU needs to be fast enough for the FPS mark you want to achieve.
RAM also plays a role in this, but it can only be seen at high FPS.
@App4that You forgot about detail levels / game settings.
Low settings at 4K can be better or worse than Ultra at 1080p (depending on the game).
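For intuition, a tiny back-of-the-envelope sketch (the example FPS targets are hypothetical): the per-frame time budget shrinks as the FPS target rises, which is why chasing high FPS at 1080p leans on the CPU much harder than a GPU-bound 4K target.
```python
# Back-of-the-envelope: per-frame time budget vs. FPS target.
# The higher the target, the less time the CPU has to prepare each frame.
for target_fps in (60, 144, 240):  # hypothetical example targets
    frame_budget_ms = 1000.0 / target_fps
    print(f"{target_fps:>3} FPS target -> {frame_budget_ms:5.2f} ms per frame")
```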
-
agent_x007 got a reaction from Zando_ in RX 480 GTR Vs 580
Go RX 480, because GTX 580 is too old.
...
Because last time I checked, "580" was an Nvidia GPU.
-
agent_x007 got a reaction from Jumper118 in Post your Cinebench R20+15+R11.5+2003 Scores **Don't Read The OP PLZ**
First Core i7 Extreme : "965"
2003 :
R11.5 :
R15 :
-
agent_x007 got a reaction from goodtofufriday in Retro computer subforum
Pendrive/microSD card reader performance is quite far from an actual SSD.
PCI SATA cards should all recognise AHCI-based SSDs (why wouldn't they ?).
TRIM/garbage collection is done by the SSD's internal controller.
Of course, microSD readers or pendrives don't know how to do it.
I use NVMe on my LGA 775 system : LINK, because Overkill is Underrated
It can work on cheap boards as well : LINK.
The key thing is the right software
-
agent_x007 got a reaction from GigabitXe in Reviving old pc
8GB of RAM is a priority.
Then get a Core i5 750/760 or i7 860/870 (depending on $$).
After that, you can think about HDD/SSD.
-
agent_x007 reacted to DocSwag in What RAM CL/MHz....
Latency = timings/frequency. Higher MHz always means higher bandwidth, but latency also matters. If you plug it in:
For 2133 MHz RAM: Latency = 13 cycles / 2133 million cycles/sec = 6.095*10^-9 s
For 3200 MHz RAM: Latency = 16 cycles / 3200 million cycles/sec = 5.0*10^-9 s
So the 3200 MHz RAM actually has a lower latency.
Also, the clock speed of the infinity fabric in Ryzen is linked to the memory clock speed, so even more reason to get 3200 MHz RAM. It can make a substantial difference.
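A tiny script repeating the arithmetic above (same formula, same two kits as quoted in the post):
```python
# First-word (CAS) latency comparison using the post's formula:
# latency = CL cycles / rated transfer rate (treated as cycles/sec above).
kits = [
    {"name": "DDR4-2133 CL13", "rate_mhz": 2133, "cl": 13},
    {"name": "DDR4-3200 CL16", "rate_mhz": 3200, "cl": 16},
]

for kit in kits:
    latency_ns = kit["cl"] / (kit["rate_mhz"] * 1e6) * 1e9
    print(f'{kit["name"]}: {latency_ns:.2f} ns')
# Prints roughly 6.09 ns for the 2133 CL13 kit and 5.00 ns for 3200 CL16,
# so the 3200 MHz kit has the lower latency despite the higher CL.
```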
-
agent_x007 got a reaction from tbendubois in Will a Gtx 1080 bottleneck on 1080p resolution
Not at all - a GTX 1070 is a GTX 1070 (i.e. nothing special).
The GTX 1080 is just THAT extra bit special
Since the GTX 1080 Ti was released, maybe NV meant it for 1080p ?
Which would imply the GTX 1080 was for 1080i... and knowing what NV did in the past, it probably is the truth
"Proof" : Max DSR factor in the driver is x4.00, and 4x 1080 = 4K
-
agent_x007 got a reaction from babadoctor in Is it possible to send any type of data connection through any type of wire?
Short answer : No.
You cannot use fiber/optical cables to send an electric current.
Also, you can't establish an optical connection (laser light) through a copper wire/cable.
Other than that, you are basically asking whether the metal used in wires can conduct electricity.
So in practice it depends on the wire (length mainly) and the current/signal you want to send
-
agent_x007 got a reaction from Jumper118 in Post your Cinebench R20+15+R11.5+2003 Scores **Don't Read The OP PLZ**
When P5K64 WS isn't good enough... go Rampage
Updated results for Pentium XE 965 : R11.5 and R15 only.
R11.5
R15
-
agent_x007 got a reaction from zeros in Post your Cinebench R20+15+R11.5+2003 Scores **Don't Read The OP PLZ**
Air cooling "Hellrun" : QX9770 @ 4,5GHz (Vcore = 1,575V + LLC)
2003
R11.5
R15
-
agent_x007 got a reaction from JonaChan in AA card
Dedicated Anti Aliasing card ?
1) SSAA is not an option for it, since it multiplies the render resolution (which would mean separating Vertex/Hull/Domain Shaders from Pixel Shaders).
And ^that is a REALLY stupid idea at this point, since it would increase lag (sending data across PCBs will always take more time than sending it within one PCB). Not to mention it's Voodoo 2 technology (LOL)
2) MSAA is not an option, because it requires ROP units to be separated (or copied) from the GPU itself.
You can't simply add more ROPs and hope for the best in a multi-PCB configuration.
That's why SLI on different model cards is impossible (too much work to keep things in sync with each other).
Also, keeping the ROPs fed would require massive amounts of bandwidth to the added card, which is pointless since any gain on the GPU side would be lost transferring data.
3) FXAA (or any other shader-based AA)
To put it bluntly...
I HIGHLY doubt there is a way to do FXAA faster than a modern GPU itself can do it.