
EVGA responds to hot VRM area on GTX 10 series

zMeul
Just now, Yoinkerman said:

If you are a normal user just playing games and doing content creation don't get the pads.

 

If you're obsessed with running furmark or coin mining or f@h gpu all day every day, then get the pads.

Thank you very much for your advice.

 

I am just a normal gamer and I have done no overclocking so I hope there will be no issue.


15 hours ago, Rohime said:

Hey... they listened... they retested... they offered a solution for those that want it... they worked with the people that reported it.

What's not to love? Many, many companies don't do anywhere near as much.

I think it's amazing how much people let these companies get away with. In other industries they would have been forced to do free repairs on the devices themselves, paying for shipping and everything.

 

And since this is such a low volume product it wouldn't even be a big deal.


15 hours ago, Delicieuxz said:

 

 

So, FTW and Classified? I think the ACX3 and SC use the same PCB as the FE.

It's probably not an issue on the Classified, since it has a much more efficient phase design. The FTW basically uses reference components and just adds more of them; the Classified has completely different chokes, MOSFETs, capacitors, etc.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


2 hours ago, Arokhantos said:

Founders cards have had better overclocking results so far than custom ones; this doesn't surprise me at all.

Probably due to the sheer number of people buying Founders cards, not because the card is somehow better or the chips are binned. A larger sample size and more people discussing their overclocks is most likely the explanation, I'd think. Basically all of the cards do 2050 to 2150, with very few falling outside that range.



16 hours ago, Rohime said:

Hey... they listened... they retested... they offered a solution for those that want it... they worked with the people that reported it.

What's not to love? Many, many companies don't do anywhere near as much.

They've known about this since release and only bothered offering a solution when people complained. That's pretty scummy.


106 degrees C on the other side of the PCB is insane. The quoted 125-degree maximum temperature for the VRM refers to the actual chip temperature. The package, and especially the PCB, have considerable thermal resistance, so the chip itself can easily be 20 degrees hotter than what is measured on the back of the board.

 

And on top of that, the test was done under the best possible conditions for the card. With no black coating and an emissivity setting of 1, the camera is bound to report a temperature that is too low.
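To put a number on the emissivity point: a camera set to ε = 1 under-reads any surface whose real emissivity is lower. A minimal sketch of the correction, using the Stefan-Boltzmann relation and ignoring reflected radiation (the 0.90 emissivity is an illustrative guess, not a measured value for this PCB):

```python
# An IR camera infers temperature from radiated power. With the camera set to
# emissivity 1.0, a surface of real emissivity eps satisfies:
#   sigma * T_apparent^4 = eps * sigma * T_true^4   (reflections ignored)
def true_temp_c(apparent_c: float, emissivity: float) -> float:
    """Estimate the real surface temperature from an apparent IR reading."""
    t_apparent_k = apparent_c + 273.15
    t_true_k = t_apparent_k / emissivity ** 0.25
    return t_true_k - 273.15

# A 106 C reading taken at eps = 1.0 on a surface with real emissivity ~0.90
# corresponds to roughly 116 C:
print(round(true_temp_c(106.0, 0.90), 1))
```

So, if anything, the 106-degree figure is an underestimate.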

 

But I'm surprised the performance is so bad. The base plate is a pretty good heat spreader, but the components need proper thermal coupling to the plate; connecting the plate to the main heatsink is even better. The second test is like I expected it to be.

Mineral oil and 40 kg aluminium heat sinks are a perfect combination: 73 cores and a Titan X, Twenty Thousand Leagues Under the Oil


8 hours ago, Prysin said:

happened with Titan X (Maxwell), happened with Titan Z, happened with a couple of 780s or 780Tis (mostly low end ones)

As for Hawaii cards, uhm, the ones that were affected were from ASUS and Gigabyte from what I remember; both of those brands should NEVER be considered when purchasing a Radeon product. They just recycle GTX-series heatsinks for their Radeon cards, so the heatsinks aren't remotely able to handle the heat produced by a card that has nearly 2x the TDP of the GTX card the heatsink was designed for.

 

Titan Z was just a disaster that should never have existed....

Titan X Maxwell I don't remember having that issue, though now that you mention it, I do remember ASUS 780s having something shitty.

Fudge ASUS graphics cards, man.

Either way, the point remains: I thought we had sufficiently shamed GPU makers into forcefully heatsinking their VRMs... c'mon, EVGA...

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


3 minutes ago, Curufinwe_wins said:

 

Titan Z was just a disaster that should never have existed....

Titan X Maxwell I don't remember having that issue, though now that you mention it, I do remember ASUS 780s having something shitty.

Fudge ASUS graphics cards, man.

Either way, the point remains: I thought we had sufficiently shamed GPU makers into forcefully heatsinking their VRMs... c'mon, EVGA...

Titan X VRAM chips hit 108°C.... yeah, that's great, isn't it?


2 minutes ago, Prysin said:

Titan X VRAM chips hit 108°C.... yeah, that's great, isn't it?

lol not at all. I legitimately didn't remember seeing that. 

 

http://www.guru3d.com/articles-pages/nvidia-geforce-gtx-titan-x-review,10.html

 



20 hours ago, potoooooooo said:

Why do people like EVGA so much? There is shit like this every generation

not to mention the lifespan of EVGA fans .... ugh 

RyzenAir : AMD R5 3600 | AsRock AB350M Pro4 | 32gb Aegis DDR4 3000 | GTX 1070 FE | Fractal Design Node 804
RyzenITX : Ryzen 7 1700 | GA-AB350N-Gaming WIFI | 16gb DDR4 2666 | GTX 1060 | Cougar QBX 

 

PSU Tier list

 


32 minutes ago, Curufinwe_wins said:

lol not at all. I legitimately didn't remember seeing that. 

 

http://www.guru3d.com/articles-pages/nvidia-geforce-gtx-titan-x-review,10.html

 


 

there was another review showing it at 108°C.... Guru3D might need to recalibrate their camera. That, or they didn't wait long enough before taking an image.


22 hours ago, Rohime said:

Hey... they listened... they retested ... they offered solution for those that want it ... they work with the people that reported it.

 

Whats not to love?    Many many many companies don't do anywhere near as much.

You have to take the card apart to put the thermal pads on, right?

Not everyone (myself included) would want to do that. Thankfully this issue hopefully doesn't apply to me, since I have a Gigabyte G1 Gaming 1070, but I'm not really interested in disassembling a GPU.

 

 

I do agree that investigating the issue and offering a free solution is great. However, I'd rather they had designed the affected cards well enough not to need the extra pads in the first place, or at least shipped them with the pads already installed.


here's a new one: http://forums.evga.com/What-i-supposed-to-do-with-my-1080-FTW-m2568851.aspx

it appears that the thermal pads that were supposed to make contact between the VRAM chips and the plate are too thin xD

 


 

what the fuck EVGA?!?!?!

 

  • VRM issues with GTX1080 FTW
  • VRM thermal issues with all their custom GTX1070 and GTX1080 boards
  • coolers revving up with their SC models - reportedly they stopped accepting RMAs for this issue until they understand what's going on -_-
  • and now thermal pads are too thin

2 hours ago, zMeul said:

here's a new one: http://forums.evga.com/What-i-supposed-to-do-with-my-1080-FTW-m2568851.aspx

it appears that the thermal pads that were supposed to make contact between the VRAM chips and the plate are too thin xD

 

what the fuck EVGA?!?!?!

 

  • VRM issues with GTX1080 FTW
  • VRM thermal issues with all their GTX1070 and GTX1080s
  • coolers revving up with their SC models - reportedly they stopped accepting RMAs for this issue until they understand what's going on -_-
  • and now thermal pads are too thin

Typical EVGA WTF edition cards.


On 10/22/2016 at 9:25 AM, potoooooooo said:

Why do people like EVGA so much? There is shit like this every generation

Look at the way they responded, they're sending out free thermal pads to all their customers if they want one. Why do you hate EVGA so much?


6 minutes ago, Richard Burton said:

Look at the way they responded, they're sending out free thermal pads to all their customers if they want one. Why do you hate EVGA so much?

hate?!

let's get one thing straight: this shouldn't even have been an issue for a company that built its business on designing video cards


23 minutes ago, zMeul said:

hate?!

let's get one thing straight: this shouldn't even have been an issue for a company that built its business on designing video cards

Now if it were Gigabyte, definitely. Especially with the retarded VRAM configuration of their GTX 970 Windforce and G1 Gaming.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


4 hours ago, Richard Burton said:

Look at the way they responded, they're sending out free thermal pads to all their customers if they want one. Why do you hate EVGA so much?

Are they actually sending them, though, or just putting a request form on their webpage that you can fill out so they can ignore it...


The MOSFET drivers used in the EVGA FTW 1080 cards are ON Semiconductor NCP81282 integrated drivers (the MOSFETs and control circuit are in one package). They are rated for an absolute maximum temperature of 150°C. The VRMs are operating within their limit, but they are getting much closer to it than I would like to see.

 

EVGA disclosed their testing method and mentioned that they used thermal probes, which explains how this issue got past them. A thermal probe can only measure a single point of heat, and a very small point at that. Its advantage is that it can measure temperatures behind other heat-generating components such as a heatsink. A probe cannot show heat propagating away from the component to other areas of the board unless additional probes are used, and using that many probes is very impractical and still would not paint a complete picture of what is going on.

 

The thermal camera that THW used is specifically designed to show heat propagation, but it is not designed to give highly accurate pinpoint temperature readings, which is probably why EVGA didn't use one during testing (I'll come back to this point in a second). A thermal camera also cannot see past other heat-generating components such as the heat sink. Looking down at the board through the heat spreader, EVGA would have seen a slightly warmer area, but not just how hot the VRMs were getting. Viewed from the rear, the back plate would have blocked heat radiating away from the board; it too would have shown a warmer area, but not such a high temperature.

 

Back to my point about the thermal camera: while it would not have offered the most accurate readings, it should still have been used in tandem with the thermal probes to see how heat radiates away from the components, and testing should have been done with and without the heat sink, on both front and back (using a modified heat sink to pull heat away from the GPU core for that part of the testing).

 

As for cooling the VRMs with thermal pads against the back plate, this is a perfectly viable option. While its cooling performance is not as good as direct contact with the heat sink would be, it still pulls heat away from the VRM area of the PCB and lets it dissipate elsewhere. Even if the back plate shows high temps, that means it is doing its job of cooling the VRM. Combining that with the heat sink further increases the effectiveness. Cooler VRMs mean better power delivery.
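The back-plate argument can be sanity-checked with the one-dimensional conduction law Q = k·A·ΔT/t. The numbers below are illustrative assumptions, not EVGA specs:

```python
# Steady-state heat conducted through a thermal pad:
#   Q = k * A * dT / t
# k = conductivity (W/m-K), A = contact area (m^2),
# dT = temperature drop across the pad (C), t = pad thickness (m).
def pad_watts(k: float, area_m2: float, delta_t_c: float, thickness_m: float) -> float:
    """Watts moved through the pad into the back plate."""
    return k * area_m2 * delta_t_c / thickness_m

# A 3 W/mK pad, 10 mm x 10 mm, 1 mm thick, with a 20 C drop across it,
# moves about 6 W - a meaningful share of a VRM phase's dissipation:
print(round(pad_watts(3.0, 0.01 * 0.01, 20.0, 0.001), 1))
```

A warm back plate really is evidence the pads are moving heat, which is the point being made here.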

 

Not sure what is going on with the issue of the new thermal pads not fitting. Maybe the guy used the wrong thermal pad in the wrong place? We'll have to wait and see whether anyone else has a similar issue and what new solution is offered. I haven't seen the replacement thermal pads, but I'm willing to bet one is thicker than the other.

 

 

Intel Xeon 1650 V0 (4.4GHz @1.4V), ASRock X79 Extreme6, 32GB of HyperX 1866, Sapphire Nitro+ 5700XT, Silverstone Redline (black) RL05BB-W, Crucial MX500 500GB SSD, TeamGroup GX2 512GB SSD, WD AV-25 1TB 2.5" HDD with generic Chinese 120GB SSD as cache, x2 Seagate 2TB SSHD(RAID 0) with generic Chinese 240GB SSD as cache, SeaSonic Focus Plus Gold 850, x2 Acer H236HL, Acer V277U be quiet! Dark Rock Pro 4, Logitech K120, Tecknet "Gaming" mouse, Creative Inspire T2900, HyperX Cloud Flight Wireless headset, Windows 10 Pro 64 bit

2 hours ago, DragonTamer1 said:

The MOSFET drivers used in the EVGA FTW 1080 cards are ON Semiconductor NCP81282 integrated drivers (the MOSFETs and control circuit are in one package). They are rated for an absolute maximum temperature of 150°C. The VRMs are operating within their limit, but they are getting much closer to it than I would like to see.

it's not as simple as "if it doesn't reach the temp limit, it's OK" - higher temps mean a less efficient FET

it also depends on how much current goes through it

here, read this: http://www.gamersnexus.net/news-pc/2661-evga-mosfet-failure-possible-from-thermal-runaway-scenario
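The "less efficient FET" point is the heart of the thermal-runaway argument in the GamersNexus piece: Rds(on) rises with temperature, and conduction loss rises with the square of current. A rough sketch (the resistance and temperature coefficient are generic illustrative values, not NCP81282 figures):

```python
# Conduction loss in a FET: P = I^2 * Rds(on), where Rds(on) grows roughly
# linearly with junction temperature. More heat -> more resistance -> more
# heat: the feedback loop behind thermal runaway.
def conduction_loss_w(current_a: float, temp_c: float,
                      rds_on_25c: float = 0.005, tempco_per_c: float = 0.004) -> float:
    """I^2 * R loss with a linear temperature coefficient on Rds(on)."""
    rds_on = rds_on_25c * (1.0 + tempco_per_c * (temp_c - 25.0))
    return current_a ** 2 * rds_on

# The same 20 A through the same FET wastes ~40% more power at 125 C
# than at 25 C:
print(round(conduction_loss_w(20.0, 25.0), 2))
print(round(conduction_loss_w(20.0, 125.0), 2))
```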

 

Quote

Not sure what is going on with the issue of the new thermal pads not fitting. Maybe the guy used the wrong thermal pad in the wrong place? We'll have to wait and see whether anyone else has a similar issue and what new solution is offered. I haven't seen the replacement thermal pads, but I'm willing to bet one is thicker than the other.

he didn't apply anything; those are the factory-installed thermal pads that should've made contact between the VRAM chips and the "mid-plate"

 

---

 

to sum it up: EVGA fucked up, and cards are "blowing a gasket"

 



1 hour ago, zMeul said:

it's not as simple as "if it doesn't reach the temp limit, it's OK" - higher temps mean a less efficient FET

it also depends on how much current goes through it

here, read this: http://www.gamersnexus.net/news-pc/2661-evga-mosfet-failure-possible-from-thermal-runaway-scenario

If it doesn't reach the max operating temperature, then it is okay; the power phases will survive and continue working as intended. Nothing has changed there. I read the article, and I'm not sure where they got this 125°C max-temp figure, since that is the max ambient air temperature for the drivers to operate in. The maximum junction temperature (the temp the actual package can reach and continue to function) is 150°C, with a junction-to-ambient thermal resistance of 22°C/W. The datasheet recommends an operating temperature no higher than 100°C.

 

The article you just linked me also has them saying that THW was testing their FTW at 22°C ambient, but earlier in this thread it was mentioned that the test was done at 30°C ambient. I remember EVGA saying that was their ambient for testing, but I could have sworn I saw THW saying they matched it. I'll have to scan the thread again.

 

I know that the converter circuit becomes less efficient at higher temps. Those higher temps are a function of the supply voltage and the load current, as well as the power the package must dissipate itself to reach the lower output voltage. That dissipation is typically much lower in a buck converter, since the energy being down-converted is stored in the inductor's magnetic field, but there is still on-resistance while the MOSFET is switched on, and it becomes more of an issue at higher currents. The packages are still at 90.9% efficiency when the card is pulling its rated power. With the power target raised to 115% in MSI Afterburner, the draw bumps up from 215W to 247W, which breaks down to 19.78A per phase - still 90.5% efficiency. Drop in something like Furmark, which is specifically designed to pull the maximum amount of power, and we get god knows what, because we don't have the power numbers on hand.
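The per-phase arithmetic can be reproduced directly. The rail voltage and phase count here are my assumptions chosen to land near the quoted 19.78 A (a ~1.25 V GPU rail split across 10 core phases), not confirmed FTW figures:

```python
# Current per VRM phase: total board power / rail voltage / number of phases.
def amps_per_phase(board_watts: float, rail_volts: float, phases: int) -> float:
    return board_watts / rail_volts / phases

# 247 W at ~1.25 V across 10 phases gives just under 20 A per phase,
# in line with the 19.78 A quoted above:
print(round(amps_per_phase(247.0, 1.25, 10), 2))
```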

 

There is also heat being generated in the PCB itself because of the high currents, on top of the heat generated within the drivers.

 

This ultimately breaks down to: yes, these boards get hot. Under normal operating conditions they should not fail prematurely unless a user gets a borderline component or tries to pull more power than recommended. Raising the power limit but not the core voltage will produce more heat per phase. There is also the possibility that GPU Boost 3.0 is inadvertently damaging these boards by trying to supply lower voltage in these high-power-draw scenarios.

 

Edit: I forgot to leave you the link to the datasheet so you could verify all of this yourself.

 

1 hour ago, zMeul said:

he didn't apply anything; those are the factory-installed thermal pads that should've made contact between the VRAM chips and the "mid-plate"

I didn't bother reading the link, TBH; I only saw the one image and thought it was thermal pads that he applied himself.


40 minutes ago, DragonTamer1 said:

with a thermal resistance junction-to-ambient of 22°C/W

that's a fantasy ambient temp, and I've seen it used in a lot of testing, from CPU coolers to video cards

 

and yes, the problems Tom's discovered were found testing at 30°C ambient; EVGA replicated the tests at the same 30°C - that's a more realistic ambient temp, but it still doesn't cover the full range of climates around the globe where these products might end up

 

---

 

it's 26°C in my room right now and I still have a cap on, because I can feel a draft that's making my left ear hurt - and this is almost November

what about summer, eh?! 22°C is not realistic


1 hour ago, zMeul said:

that's a fantasy ambient temp, and I've seen it used in a lot of testing, from CPU coolers to video cards

 

and yes, the problems Tom's discovered were found testing at 30°C ambient; EVGA replicated the tests at the same 30°C - that's a more realistic ambient temp, but it still doesn't cover the full range of climates around the globe where these products might end up

 

---

 

it's 26°C in my room right now and I still have a cap on, because I can feel a draft that's making my left ear hurt - and this is almost November

what about summer, eh?! 22°C is not realistic

Junction-to-ambient is the thermal resistance between the silicon inside the package and the ambient air around it; it has nothing to do with the ambient testing temperature itself. The silicon has a max temp of 150°C, but the temp read on the outside of the package will be lower, and the difference grows with the power being dissipated. The drivers also have a limit that pushes them into thermal shutdown at 150°C. When one driver hits that limit, it should cascade, shutting down the rest of the drivers and stopping the GPU before damage can occur; it would most likely need a restart.
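The junction-temperature estimate implied here is just Tj = Ta + P·θJA, using the 22°C/W figure quoted earlier in the thread; the per-package dissipation below is my illustrative assumption, not a measured number:

```python
# Steady-state junction temperature from ambient temperature, package power,
# and the junction-to-ambient thermal resistance (C/W, still air, no heatsink):
def junction_temp_c(ambient_c: float, package_w: float,
                    theta_ja_c_per_w: float = 22.0) -> float:
    """Tj = Ta + P * theta_JA."""
    return ambient_c + package_w * theta_ja_c_per_w

# ~2.4 W dissipated in one driver package at 30 C ambient lands well below
# the 150 C junction limit, but the margin shrinks as ambient rises:
print(round(junction_temp_c(30.0, 2.4), 1))
```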

 

You are right that 25°C is not an accurate representation of the temperatures seen in some parts of the world; at the same time, what you are not considering is that most people have air conditioning. If they don't, they may want to consider investing in one rather than a $750 GTX 1080. 25°C is used as a standard among a wide array of benchmarkers and testers because it is an average, and the typical people who can afford the card will most likely be using it at those temperatures. You haven't said anything about their method of testing, though. If we follow the "full spectrum" of global temperatures where people may buy this card, then we would have to test everything from Ecuador to Nome, Alaska; after all, there are people living there as well.

 

What about summer? Just because you don't like EVGA or their products (which you have made obvious multiple times in this thread) doesn't mean you need to get nasty and condescending towards other members on this board. And before you try and say you're not, you are. It's not what you say, it's how you say it.

 

When I made my first post, I did my best to work out how this managed to happen at EVGA. It is a simple design oversight on their part and has already been addressed. I even noted that their drivers are being run too close to their temperature limits and that the design should have been rejected in favor of a better one - hardly praise for the company. You seem stuck on "this is terrible, they are a terrible company, nobody should buy from them." I have nothing more to say until you approach the topic from an unbiased standpoint and we can have an intelligent discussion like adults.


11 minutes ago, DragonTamer1 said:

at the same time what you are not considering is that most people have air conditioning

I have one too, but I don't use it because it literally gives me a headache if I have it on for too long

also, ACs aren't going to stay efficient past a point; above a certain outdoor temp the AC won't even function

 

Quote

It is a simple design oversight on their part

so simple, and yet not

as I said in an earlier post, this company based its whole existence on designing video cards

plus, this isn't their 1st time screwing up; and if you've followed the topic, this is the 3rd if not 4th issue with their 10-series cards

 

---

 

I'm condescending!? nasty?! because I dare to question EVGA? how come...

some EVGA dude made a comment on their forums about this not being expected because the previous generation showed no issues - is that how they evaluate and test their products?


I'll just use my 1070 SC with no overclocking and a custom fan curve maxing out at ~60% speed. GPU Boost 3.0 appears to be excellent.

 

If I eventually want to overclock to keep up with future demands, I'll just sell the card and get a new one instead of risking the equipment. That applies to any PC hardware.

 

Stuff changes so fast, and there's always someone with MUCH worse hardware than you. DO IT *Shia labeouf"

 

CPU R7 1700    Motherboard Asus Prime X370 Pro  RAM  24GB Corsair LPX 3000 (at 2933Mhz)    GPU EVGA GTX1070 SC  Case Phanteks Enthoo Pro M    

Storage 1 x 1TB m.2, 1x 500GB SSD, 1x 1TB HDD, 1x 8TB HDD  PSU Corsair RM1000  Cooling Thermalright Macho Rev B (tower)

Synology NAS 1 x 4TB 1 x 8TB

