Nystemy

Member
  • Posts

    534
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

About Nystemy

  • Birthday September 5

Profile Information

  • Location
    Sweden, just north of most of Europe.
  • Interests
    Tech but primarily electronics.
  • Biography
    Mostly a tech interested furry pretending to be a dragon on the internet. What else did you expect?
  • Occupation
    Electronics engineering and manufacturing

System

  • CPU
    Xeon E5-1650v3
  • Motherboard
    AsRock X99 WS-E
  • RAM
    Crucial 4x 16GB DDR4 2666 MHz (Might get 4 more if I run out of RAM again...)
  • GPU
    Gigabyte GTX 660 x 2 (not running SLI, I just have too many screens; thinking of upgrading, anyone got a pair of GTX 960s?)
  • Case
    Some old Cooler Master case that I have no clue how I even got...
  • Storage
    The usual mix. 512 GB SSD + a handful of TBs on HDDs
  • PSU
    Corsair CX750M
  • Display(s)
    Samsung SyncMaster 2494HS + SyncMaster 223BW + Envision (19" 4:3 aspect ratio) + Cintiq 13HD for the arts....
  • Cooling
    Noctua NH-U9S (Larger wouldn't fit in the case...)
  • Keyboard
    Logitech G110 Nordic version
  • Mouse
    Logitech M500

Recent Profile Visitors

2,982 profile views
  1. Removing the PSU from a screen can at times be somewhat "simple", and other times be downright maddening even for an experienced electronics tinkerer. I wouldn't recommend attempting such a project if one considers oneself a "big noob on DIY electronics." But if one is lucky, the PSU in the screen is a "dumb" 12 volt (or other voltage) supply where all the fancier power management is handled by DC-DC converters on the main board. Here one could get a USB-C to 12 volt (or other applicable voltage) power module, if it can handle the needed power (see the power budget sketch after this list). But this is all based on pure speculation and likely isn't applicable to most screens in practice. (In my own experience repairing the occasional screen, things are rarely this simple...)
  2. I feel that you missed my point, especially as far as "waste heat" is concerned. Linus commented that his in-floor heating system (seemingly running on gas) heats up the room, and it is likely a similar story for the boiler for the hot tap water. Here a heat pump for these applications would have made some sense, since these can provide "waste COLD" to the server room. My point was not about actively trying to cool the room with an AC, but rather to use a heat pump for preexisting heating tasks that could utilize the waste heat already available in the room. Not that the servers would provide enough heat to warm the whole house during winter, but it likely covers all warm tap water needs (a rough estimate of that is sketched after this list).
  3. Personally I wouldn't consider gravity a major concern in this setup. The pressure difference in that fairly short water column would be fairly marginal compared to the flow resistance variations from one water block to the next. A needle valve is likewise not much of a moving part. And the flow gauge can just be "this system seems warmer than the rest for the same workload", and therefore needs a slightly more open needle valve. It is a set-and-forget kind of thing. I.e. "standardize the flow resistance of each system", as you allude to in your last paragraph. (Or rather, standardize flow resistance for the peak "expected" system power dissipation.) But in the end, as we both say, it largely doesn't matter. There is plenty of cooling to go around. And I would rather focus on other creature comforts like quick disconnects and general ease of maintenance.
  4. Yes, a heat pump could have just used that warm water as its "cold" side. However, 33 C water is already warm as far as in-floor heating goes. To be fair, Linus could likely just take an outdoor air-to-water heat pump, place it on the wall in the server room, and call it a day as far as warm tap water is concerned.
  5. A needle valve and a flow gauge for each system could help ensure more "even" flow between the systems, though at the downside of a bit higher flow resistance for the pump. Also, the lines out to the pool could have been separated a lot further from each other and given some insulation. Currently they leak warmth from hot to cold on the way out to the pool, recycling the heat in the lines and effectively increasing thermal resistance. (The more heat that is allowed to leak between the lines, the less cooling the pool will provide. The same principle is used deliberately in ventilation systems for larger buildings, where one wants to exchange air with the outside world but not exchange thermal energy in the process: https://en.wikipedia.org/wiki/Heat_recovery_ventilation) Also, Alex's comment about "how to make it without 3D printing" is so apt. Yes, a piece of wood was the "fix all issues" of the olden days, together with duct tape.
  6. To be honest, USB-IF should amend their power standards a bit. It would be nice if any USB port could specify its power delivery capabilities, be it the 5 volts at 500 mA, 1 amp, 1.5 amps, or 2 amps that a USB type A port can handle fairly well. To be fair, implementing the whole USB Power Delivery spec for this negotiation wouldn't be weird. Sadly USB-IF in their infinite wisdom removed 12 volts as a negotiable voltage from the Power Delivery spec... But 9 volts is still a thing. And with a simple boost converter, motherboards could provide 24 or more volts as well. So reaching the USB-C maximum of 48 volts at 5 amps wouldn't be unrealistic; just think of all the devices one could power with over 200 watts. (Imagine having to run a 6 or 8 pin power cable to a USB controller card just to keep it happy. And then it might only have 1 USB-C port.) However, I am still awaiting the day one can casually connect two computers over USB and transfer files that way. USB 3 sure does beat most people's networking capabilities by far. Though I upgraded to 10 gig myself, so this has become somewhat needless for me at least... (A quick summary of the PD power levels is sketched after this list.)
  7. A 1% low is still effectively an average. At least they have stopped using the even more meaningless 10% low. To be fair, even 0.1% or 0.01% is somewhat meaningless as well; to a degree it is better to just report the longest frame times. If running at 100-200 FPS on average, a 120 ms spike every 20 seconds won't meaningfully impact even the 0.1% average, yet that spike in latency is a very noticeable stutter to everyone. And having the system do more stuff in the background is a rather surefire way to ensure that the kernel will schedule away parts of the game engine at inopportune moments, effectively ensuring a massive spike in frame delivery times. (A wonderfully common problem back when single-threaded CPUs were effectively the only thing around.) (See the frame-time metric sketch after this list.)
  8. Personally I am not really interested in the fairly uninformative "average FPS". What about the 0.1% lows? (Or how the worst frame times get affected.) That there is a performance impact isn't too surprising, nor is its size. However, here comes the good old "what if one has two GPUs?" Like one GPU for the primary monitor, and a second one for the ancillary ones. This should logically offload at least the GPU side of the issue from one's games. And as long as one has a suitably good CPU and enough system memory bandwidth, the CPU shouldn't be meaningfully burdened by the extra load either. However, streaming video while playing online games likely has a measurable impact on network latency. After all, the network won't care which packet contains what data, only that it should arrive at its destination at some point in time. Video and game data are simply equal in priority as far as latency goes; IP networking largely doesn't care that one technically has a much more relaxed latency budget than the other.
  9. I can't help but think that the USB/fiber dock should have used the internal USB 3 header. It would have been a cleaner solution, and I am not usually one to be finicky about cable management... Though, that ASRock motherboard had 3 OCuLink connectors; it would be fun to see wacky PCIe setups using those someday. It is a practical connection for smaller motherboards where one still wants tons of IO, and a feature that could be quite useful to see on more high end consumer motherboards in general, to be honest. Rather that than just wasting PCIe lanes into thin air on more compact boards that otherwise can't find a use for them.
  10. To be fair, this partly highlights why Nokia kind of stopped making phones. They simply went a bit too hard on being different, a bit too high on their own prior success. And yes, it is a bit odd that Sony hasn't gone for a second attempt at a controller-centric phone. At the time, phone gaming wasn't really as established as it is today, and looking at other handhelds on the market it does seem like they have some low hanging fruit at their fingertips. Especially now that people have changed their opinion about phone size; back then a 5 inch phone was big, now it is absolutely tiny...
  11. To be fair, using a heat exchanger would be a wise move. Not as a filter, but rather as a "if we get a leak among our multitude of connections in the rack, the whole loop won't risk siphoning into the room" kind of protection. Trust me, a fault like that is wonderful when it happens, and afterwards one will look at the price of a heat exchanger and an additional pump as fairly marginal costs compared to a water damaged basement. It's all about removing unnecessary risks. Now, I would have a filter on each side of the heat exchanger regardless, since filters are nice things to have.
  12. That is true. But I haven't seen a "PC" fan that matches it yet. It is frankly overkill to use a ducted fan for PC cooling, but blade servers are silly things...
  13. Reminds me of the fans HPE use in their c7000 and c3000 blade enclosures. Though those are only 12 volt ones drawing a measly 16.5 amps. But as a "PC" fan they are quite silly, especially since the c7000 enclosure has 10 of them. (So that is about 2 kW of fans, or about 15% of the enclosure's power consumption; the arithmetic is sketched after this list.)
  14. I myself use a 12x 3.5 inch bay 2U DIY rack case for my own home server. (It is far from small though...) Then I use an HBA with 2x SFF-8087 ports that provide 4 SAS/SATA connections each. However, the backplane in the rack case would need a third SFF-8087 port on my HBA for all 12 bays to be usable, but as a start 8 bays is more than sufficient for me at the moment, considering that I only use half of them... (I did for fun try an SFF-8087 to 4x SATA cable to see if I could just hook the last backplane up to the motherboard's many unused SATA ports, but something in the SAS/SATA standard made this not work...) However, my home server is more than just a NAS; it is also the home of my various VMs. (And to be fair, a larger case would have been nice so that I could add in a GPU or two for accelerating some 3D rendering projects.)
  15. As others have stated, for a home server I would recommend SATA drives. They are more than fast enough for the server to bottleneck on your local network connection instead; even 10 Gb/s networking is "slow" compared to a handful of SATA SSDs (a quick throughput comparison is sketched after this list). A better explanation of flash data retention can also be found here: Write endurance and data retention time aren't purely about manufacturing quality versus the amount of data written. Temperature plays a key role in the whole thing as well, both in regards to how quickly the drive forgets, and how much it wears with each write. In the end, flash is a complex beast. Unlike magnetic storage, which generally prefers a constant temperature of around 25-30 C at all times until it most often dies due to corrupted firmware or mechanical failure.
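Re post 1 (powering a screen from USB-C): a minimal back-of-envelope sketch of the kind of power budget check I mean. The PD profile, converter efficiency, and the monitor's draw are all made-up numbers for illustration, not measurements of any real screen.

```python
# Rough power budget check for feeding a 12 V monitor from a USB-C PD source.
# All numbers below are assumptions for illustration only.

pd_voltage = 20.0            # V, negotiated USB-C PD rail (assumed)
pd_current = 2.25            # A, current limit of an assumed 45 W source
converter_efficiency = 0.90  # assumed for a small 20 V -> 12 V buck module

monitor_voltage = 12.0       # V, what the hypothetical "dumb" PSU originally supplied
monitor_current = 3.0        # A, assumed worst-case draw from the monitor's label

available_w = pd_voltage * pd_current * converter_efficiency
needed_w = monitor_voltage * monitor_current

print(f"Available after conversion: {available_w:.1f} W")
print(f"Monitor worst-case draw:    {needed_w:.1f} W")
print("OK" if available_w >= needed_w else "Not enough headroom, pick a beefier PD source")
```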
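Re post 2 (heat pump on the server room's waste heat): a rough estimate of how continuous server waste heat stacks up against a household's daily hot tap water need. The server heat, water usage, and temperatures are assumptions I picked for illustration.

```python
# Back-of-envelope: does continuous server waste heat cover daily hot tap water?
# Every number here is an assumption for illustration.

server_heat_kw = 3.0        # assumed continuous waste heat from the racks
hours_per_day = 24
waste_heat_kwh = server_heat_kw * hours_per_day   # energy dumped into the room per day

litres_hot_water = 300      # assumed household hot-water use per day
delta_t = 50                # heating from ~10 C mains to ~60 C tank temperature
specific_heat = 4.186       # kJ/(kg*K) for water
hot_water_kwh = litres_hot_water * specific_heat * delta_t / 3600

print(f"Waste heat available per day: {waste_heat_kwh:.0f} kWh")
print(f"Hot tap water need per day:   {hot_water_kwh:.1f} kWh")
```

With those assumed numbers the room receives several times more heat per day than the hot water tank needs, which is why a heat pump scavenging that heat seems like an easy win.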
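Re post 6 (USB power negotiation): a small sketch tabulating the fixed USB PD voltage rails as I recall them (5/9/15/20 V in the standard power range, plus the 28/36/48 V extended-power-range rails from PD 3.1) and the wattage each gives at its maximum current. Double-check against the current USB-IF documents before relying on it.

```python
# Fixed USB Power Delivery voltage rails and their max power.
# SPR = standard power range (PD 2.0/3.0), EPR = extended power range (PD 3.1).
# Values as I recall the spec; verify against USB-IF before building anything.

rails = [
    ("SPR", 5,  3.0),
    ("SPR", 9,  3.0),
    ("SPR", 15, 3.0),
    ("SPR", 20, 5.0),   # 5 A requires an e-marked cable
    ("EPR", 28, 5.0),
    ("EPR", 36, 5.0),
    ("EPR", 48, 5.0),   # the 240 W ceiling of current USB-C
]

for rng, volts, amps in rails:
    print(f"{rng}: {volts:>2} V x {amps} A = {volts * amps:>5.0f} W")

# Note how 12 V is missing from the list, which is exactly the gripe in the post.
```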
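Re post 7 (percentage lows vs worst frame times): a minimal sketch of computing the different metrics from a frame-time trace, so one can see what a single 120 ms hitch does to each of them. The trace is synthetic, and "x% low" is computed here as a percentile of the frame times; tools disagree on the exact definition, so treat this as one possible definition rather than the definition.

```python
# Synthetic frame-time trace: ~144 FPS steady, with a 120 ms hitch every 20 seconds.
# Compares percentile-based "lows" against the single worst frame time.

import numpy as np

fps_target = 144.0
duration_s = 60.0
frame_ms = np.full(int(fps_target * duration_s), 1000.0 / fps_target)
frame_ms[::int(fps_target * 20)] = 120.0    # one 120 ms spike every ~20 s

avg_fps = 1000.0 / frame_ms.mean()
low_1 = 1000.0 / np.percentile(frame_ms, 99.0)    # "1% low" as 99th percentile frame time
low_01 = 1000.0 / np.percentile(frame_ms, 99.9)   # "0.1% low" as 99.9th percentile frame time
worst_ms = frame_ms.max()

print(f"Average FPS:  {avg_fps:6.1f}")
print(f"1% low FPS:   {low_1:6.1f}")
print(f"0.1% low FPS: {low_01:6.1f}")
print(f"Worst frame:  {worst_ms:6.1f} ms  <- this is the stutter you actually feel")
```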
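Re post 13 (the blade enclosure fans): the arithmetic behind the "about 2 kW of fans" figure, with the enclosure total implied by my own rough 15% estimate.

```python
# Fan power maths for the blade enclosure example.
# Per-fan figures are from the post; the 15% share is my own rough estimate.

fan_voltage = 12.0   # V
fan_current = 16.5   # A per fan
fan_count = 10       # fans in one c7000 enclosure

fan_power_w = fan_voltage * fan_current * fan_count
enclosure_power_w = fan_power_w / 0.15   # if fans really are ~15% of the total

print(f"Total fan power:         {fan_power_w:.0f} W")
print(f"Implied enclosure power: {enclosure_power_w / 1000:.1f} kW")
```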
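Re post 15 (SATA being "fast enough"): a quick throughput comparison between a 10 Gb/s link and a few SATA SSDs, using rough rule-of-thumb numbers rather than benchmarks.

```python
# How many SATA SSDs does it take to saturate a 10 Gb/s link?
# Throughput figures are rough rule-of-thumb assumptions, not measurements.

link_gbit = 10.0
link_usable_mb_s = link_gbit * 1000 / 8 * 0.94   # ~6% protocol overhead assumed
sata_ssd_mb_s = 550.0                            # typical sequential read for a SATA III SSD

ssds_needed = link_usable_mb_s / sata_ssd_mb_s

print(f"Usable 10 GbE throughput:  ~{link_usable_mb_s:.0f} MB/s")
print(f"One SATA SSD:              ~{sata_ssd_mb_s:.0f} MB/s")
print(f"SSDs to saturate the link: ~{ssds_needed:.1f}")
```

In other words, two or three ordinary SATA SSDs already fill a 10 Gb/s pipe, which is why faster drives mostly buy nothing for a home NAS.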