Everything posted by Nystemy

  1. Removing the PSU from a screen can at times be somewhat "simple", and other times be downright maddening even for an experienced electronics tinkerer. I wouldn't recommend attempting such a project if one considers oneself a "big noob on DIY electronics." But if one is lucky, the PSU in the screen is a "dumb" 12 volt (or other voltage) supply where all the fancier power management is handled by DC-DC converters on the main board. Here one could get a USB-C to 12 volt (or other applicable voltage) power module, if it can handle the needed power. But this is all based on pure speculation and likely isn't applicable to most screens in practice. (in my own experience repairing the occasional screen, things are rarely this simple...)
  2. I feel that you missed my point. Especially as far as "waste heat" is concerned. Linus commented that his in-floor heating system (seemingly running on gas) heats up the room. Likely a similar story for the boiler for the hot tap water. Here a heat pump for these applications would have made some sense, since these can provide "waste COLD" to the server room. My point was not about actively trying to cool the room with an AC, but rather to use a heat pump for preexisting heating tasks that could utilize the waste heat already available in the room. Not that the servers would provide enough heat to warm the whole house during winter, but it likely covers all warm tap water needs.
  3. Personally I wouldn't consider gravity a major concern in this setup. The pressure difference in that fairly short water column would be fairly marginal compared to the flow resistance variations from one water block to the next. A needle valve is likewise not much of a moving part. And the flow gauge can just be "this system seems warmer than the rest for the same workload", and therefore needs a bit more open needle valve. It is a set and forget kind of thing. Ie, "standardize the flow resistance of each system," as you allude to in your last paragraph. (or rather standardize flow resistance to peak "expected" system power dissipation.) But in the end, as we both say, it largely doesn't matter. There is plenty of cooling to go around. And I would rather focus on other creature comforts like quick disconnects and general ease of maintenance.
  4. Yes, a heat pump could have just used that warm water as its "cold" side. However, 33 C water is already warm as far as in-floor heating goes. To be fair, Linus could likely just take an outdoor air-to-water heat pump, place it on the wall in the server room and call it a day as far as warm tap water is concerned.
  5. A needle valve and a flow gauge for each system could help ensure more "even" flow between the systems. Though at the downside of a bit higher flow resistance for the pump. Also, the lines out to the pool could have been separated a lot further from each other and had some insulation; currently they leak warmth from hot to cold on the way out to the pool, recycling the heat in the lines and effectively increasing thermal resistance. (the more heat that is allowed to leak between the lines, the less cooling the pool will provide. This is actually somewhat common for ventilation systems in larger buildings where one wants to exchange air with the outside world but not exchange thermal energy in the process. https://en.wikipedia.org/wiki/Heat_recovery_ventilation) Also, Alex's comment about "how to make it without 3d printing" is so apt. Yes, a piece of wood was the "fix all issues" of the olden days, together with duct tape.
  6. To be honest, USB IF should amend their power standards a bit. Would be nice if any USB port could specify its power delivery capabilities. Be it the 5 volts at 500 mA, 1 amp, 1.5 amps, or 2 amps that a USB type A port can handle fairly well. To be fair, implementing the whole USB power delivery spec for this negotiation wouldn't be weird. Sadly USB IF in their infinite wisdom removed 12 volts as a negotiable voltage from the power delivery spec... But 9 volts is still a thing. And with a simple boost converter motherboards could provide 24 or more volts as well. So reaching the USB-C maximum of 48 volts 5 amps wouldn't be unrealistic, just think of all the devices one could power with over 200 watts. (imagine having to have a 6 or an 8 pin power cable going to a USB controller card just to keep it happy. And then it might only have 1 USB-C port.) However, I am still awaiting the day one can casually connect two computers over USB and transfer files that way. USB 3 sure does beat most people's networking capabilities by far. Though, I upgraded to 10 gig myself, so this has become somewhat needless for me at least...
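Just as a rough back-of-the-envelope on the wattages mentioned above (the voltage/current combinations listed are illustrative examples, not an actual USB PD source capability advertisement):

```python
# Quick arithmetic on a few USB voltage/current combinations.
# The list below is illustrative only, not a real PD capability message.
profiles = [
    (5, 0.5), (5, 1.0), (5, 1.5), (5, 2.0),  # classic type A port currents
    (9, 3.0), (20, 5.0), (48, 5.0),          # USB PD style fixed/EPR examples
]
for volts, amps in profiles:
    print(f"{volts:>2} V @ {amps:>3} A -> {volts * amps:5.1f} W")
```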
  7. 1% low is still fairly average. At least they have stopped using the even more meaningless 10% low. To be fair, even 0.1% or 0.01% is somewhat meaningless as well. To a degree it is better to just provide the longest frame times. Since if running at 100-200 FPS on average, a 120 ms spike every 20 seconds won't necessarily show up much even in the 0.1% figure (depending on how the tool computes it). However, the spike in latency is a very noticeable stutter to everyone. And having the system do more stuff in the background is a rather surefire way to ensure that the kernel will schedule away aspects of the game engine at inopportune moments. Effectively ensuring that one will get a massive spike in frame delivery times. (A wonderfully common problem back when single-core CPUs were effectively the only thing around.)
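As a rough illustration of why the longest frame time can be the more telling number, here is a minimal sketch with made-up numbers (a steady 150 FPS with one 120 ms stutter over 20 seconds). Note that benchmark tools differ in whether "1% low" means a percentile frame time or the average of the slowest frames, which changes how much a single spike shows up:

```python
# Minimal sketch with made-up numbers: one long frame barely moves
# percentile-style "low" metrics, while the worst frame time exposes it.
import numpy as np

fps = 150                                    # assumed steady frame rate
frame_times = np.full(20 * fps, 1000 / fps)  # ~20 s of ~6.7 ms frames
frame_times[1500] = 120.0                    # a single 120 ms stutter

def low_fps(times_ms, percentile):
    """FPS at the given frame-time percentile (e.g. 99 -> '1% low')."""
    return 1000 / np.percentile(times_ms, percentile)

print(f"average FPS:      {1000 / frame_times.mean():6.1f}")
print(f"1% low FPS:       {low_fps(frame_times, 99):6.1f}")
print(f"0.1% low FPS:     {low_fps(frame_times, 99.9):6.1f}")
print(f"worst frame (ms): {frame_times.max():6.1f}")
```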
  8. Personally not really interested in the fairly uninformative "average FPS". What about the 0.1% lows? (or how the worst frame times get affected.) That there is a performance impact isn't too surprising. Nor is its size. However, here comes the good old "what about if one has two GPUs?" Like one GPU for the primary monitor, and a second one for the ancillary ones. This should logically offload at least the GPU side of the issue from one's games. And as long as one has a suitably good CPU and enough system memory bandwidth, the CPU shouldn't be meaningfully burdened by the extra load either. However, streaming video while playing online games likely has a measurable impact on network latency. After all, the network won't care what packet contains what data, only that it should arrive at its destination at some point in time. Video and game data are simply equal in priority as far as latency goes; IP networking largely doesn't care that one technically has a much more relaxed latency budget than the other.
  9. I can't help but think that the USB/fiber dock should have used the internal USB 3 header. Would have been a cleaner solution, and I am not usually one to be finicky about cable management... Though, that ASRock motherboard had 3 OCuLink connectors, would be fun to see wacky PCIe setups using such someday. It is a practical connection for smaller motherboards where one still wants tons of IO. A feature that could be quite useful to see on more high end consumer motherboards in general to be honest. Rather that than just wasting PCIe lanes into thin air on more compact boards that otherwise can't find a use for the lanes.
  10. To be fair, this partly highlights why Nokia kind of stopped making phones. They simply were going a bit too hard on being different, a bit too high on their own prior success to be fair. And yes, it is a bit odd that Sony hasn't gone for a second attempt at a controller-centric phone. At the time, phone gaming wasn't really as established as it is today. And looking at other handhelds on the market it does seem like they have some low hanging fruit at their fingertips. Especially now that people have changed their opinion about phone size; back then a 5 inch phone was big, now it is absolutely tiny...
  11. To be fair, using a heat exchanger would be a wise move. Not as a filter, but rather as a "if we get a leak amongst our multitude of connections in the rack, the whole loop won't risk siphoning into the room" kind of protection. Trust me, a fault like that is wonderful when it happens, and then one will look at the price of a heat exchanger and an additional pump as fairly marginal costs compared to a water damaged basement. It's all about removing unnecessary risks. Now, I would have a filter on each side of the heat exchanger regardless, since filters are nice things to have.
  12. That is true. But I haven't seen a "PC" fan that matches it yet. It is frankly overkill to use a ducted fan for PC cooling, but blade servers are silly things...
  13. Reminds me of HPE's fans used in their C7000 and C3000 blade centers. Though those are only 12 volt ones using a measly 16.5 Amps. But as a "PC" fan they are quite silly. Especially since the C7000 enclosure has 10 of them. (so that is just 2 kW of fans, or about 15% of the blade center's power consumption.)
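For scale, a quick back-of-the-envelope check using the figures above (the ~15% share is taken straight from the post, so the implied enclosure power is only an estimate):

```python
# Back-of-the-envelope check of the blade-center fan figures above.
fans, volts, amps = 10, 12, 16.5
fan_power = fans * volts * amps                  # total fan power in watts
print(f"total fan power:         {fan_power:.0f} W")          # ~1980 W
print(f"implied enclosure power: {fan_power / 0.15:.0f} W")   # if fans are ~15%
```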
  14. I myself use a 12x 3.5 inch bay 2U DIY rack case for my own home server. (it is far from small though...) I then use an HBA providing me with 2x SFF8087 ports that provide 4 SAS/SATA connections each. However, the backplane in the rack case would need a third SFF8087 port on my HBA to be able to use all 12 bays, but as a start 8 bays is more than sufficient for me at the moment considering that I only use half of them... (did for fun try to use an SFF8087 to 4x SATA cable to see if I could just hook the last backplane to the motherboard's many unused SATA ports, but something in the SAS/SATA standard made this not work...) However, my home server is more than just a NAS, it is also the home of my various VMs. (and to be fair, a larger case would have been nice so that I could add in a GPU or two for accelerating some 3d rendering projects.)
  15. As others have stated, for a home server I would recommend SATA drives. They are more than fast enough for the server to bottleneck on your local network connections. Even 10 Gb/s networking is "slow" compared to a handful of SATA SSDs. A better explanation of flash data retention can also be found here: Write endurance and data retention time aren't purely about manufacturing quality vs the amount of data written. Temperature plays a key role in the whole thing as well, both in regard to how quickly the drive forgets and how much it wears with each write. In the end, flash is a complex beast. Unlike magnetic storage, which generally prefers a constant temperature of around 25-30 C at all times until it most often dies due to corrupted firmware or mechanical failure.
  16. The failure mode of SSDs is a bit complex in practice. Effectively speaking, flash is inherently forgetful. The data is just stored as a bit of charge, it slowly leaks over time and eventually the flash chip doesn't know what it originally was. Manufacturing tolerances also mean that different chips forget at different rates. Beyond that, with each write/erase cycle to a cell, it leaks faster. Eventually it forgets its contents fast enough to not be practical as data storage for a given application, ie data retention is quite limited in comparison to magnetic storage. (scratch disk applications can usually use a worn out SSD just fine for quite some time.) Some people will pull out Microchip's data retention specs for their industrial flash chips and microcontrollers, stating that the "data retention of flash is 20 years at 85 C after 20K writes." But that is like saying that any car can drive at 700+ km/h just because a land speed racer did. (Industrial grade flash isn't remotely comparable to enterprise grade flash. Industrial flash is built for applications requiring long term reliability and where it is allowed to cost more than a dollar per MB to achieve said reliability in harsh environments, while enterprise grade flash is built for the lowest possible cost per GB for use in mass storage in fairly well maintained ambient environments.) But in general, flash doesn't really just give up the ghost after X hours, unlike the typical hard drive that often is more predictable. However, there are still some single points of failure in these devices: a cracked MLCC shorting out the power rail will often make the drive into a brick, same for a few cracked solder joints, an excessively bent PCB developing tears/cracks in the internal copper tracks, or just good old ESD, tin whiskers, conductive gunk/corrosion, or a slew of other issues. (At least SSDs don't also have the mechanical wear of HDDs that makes the list of single points of failure far longer...) An SSD is fairly random in when it will die/stop working. But it can tend to have corrupted data in it long before it dies if one doesn't use it a lot. HDDs are a bit more predictable due to their mechanical nature, but even here a given drive can fail more or less whenever; some last a few weeks, others go 10x beyond the Mean Time Between Failures. But for thousands of drives, it will on average be quite close to the MTBF. In the end, if one cares about data retention in flash media, it is quite wise to check the data at a somewhat regular interval and correct any errors that are found. (I would say yearly, if not monthly if the drives are getting on the old side of things. Also, this is wise to do for HDDs too, they can also have bit errors induced over time even if this is more rare. This is also a fairly typical feature of most storage software/controllers, however one can do like LMG did one time and forget to turn that feature on, and I wouldn't be surprised if some SSD controllers do this in the background.)
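To make the "check the data at a regular interval" advice concrete, here is a minimal sketch (assuming a plain directory tree and an arbitrary manifest.json filename, both just illustrative): it records checksums on the first run and flags mismatches on later runs. It only detects rot; actually repairing a flagged file still needs a known-good copy or parity, which is what proper scrub features in the likes of ZFS, Btrfs or RAID controllers provide.

```python
# Minimal bit-rot check sketch (not any particular NAS feature):
# first run records a checksum manifest, later runs flag mismatches.
import hashlib, json, pathlib, sys

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    return {str(p): sha256(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."   # directory to watch
    manifest = pathlib.Path("manifest.json")           # arbitrary example name
    if manifest.exists():                              # verify pass
        for path, digest in json.loads(manifest.read_text()).items():
            if not pathlib.Path(path).exists() or sha256(path) != digest:
                print("MISMATCH or missing:", path)
    else:                                              # first pass: record state
        manifest.write_text(json.dumps(build_manifest(root), indent=2))
```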
  17. It logically wouldn't be a particularly interesting discount, more a token gesture for helping to reduce needless E-waste, for those who are willing to accept some superficial scratches. If one doesn't want the unit to have such superficial scratches in certain places, then perhaps one shouldn't buy the variant that has scratches on it. Likewise, we aren't talking about serious scratches, dinged-up edges and similar more distinct surface imperfections. QC in a factory would regard those as a problem with the manufacturing line, not just the individual product. (In those cases one has to ask, "Where did that even come from?" Since then some work surface is flawed, or an employee dropped the unit for some reason.)
  18. "doing something like a fail-book, where there is like little scratches and you get a discount." Here I would say that framework's reply that taking pictures, setting prices and such takes labor and effort. Yes it does, but that is the wrong approach. A "defected" product can be its own product. One doesn't have to be specific that "the scratch is over here on this exact unit." Just that it has a scratch somewhere "acceptable". One can make a few stock images (used for all such units) of what this can entail, but beyond that it is just product binning. A great example of other companies doing this exact thing is chip vendors like Intel, AMD, nVidia, etc. A lot of CPUs, GPUs and such are just higher end chips that didn't pass QC for what they were supposed to be, so the broken parts get disabled, usually along with a few working bits too such that it actually fits the lower tier product that it now will be. Non of these vendors handles these defected products individually, they don't get individual listings, prices and explanations of what is wrong with them and where. (other than logs from QC tests associated to the serial number of the unit of course... But this is true for all units, defective or not.) There is logically no reason for why framework couldn't do the same. Ie, one can either buy X product, or X scratched product for a nominal discount when available. (If the scratched product becomes popular enough, one can do like the chip vendors and intentionally break working products for the heck of it.) However, if one has few enough scratched up or otherwise defective units, they can instead just be kept internal to the company itself. Employees often needs a computer at work after all.
  19. Yes. It is an EMC2 DS-32B2 switch. Quite slow by modern standards, but I bought it a really long time ago, so it isn't really even worth the effort to try to convert it to Ethernet.
  20. I myself at one point managed to buy a second hand "fiber network switch", carried the thing home and only then realized that it wasn't for Ethernet/IP, but iSCSI. My disappointment was rather immeasurable and the day was ruined. But at least it was so dirt cheap that I honestly couldn't even complain, since it was still interesting in its own right. But since then I always act more carefully around anything fiber related. However, then there are the cheap vendor-locked NICs on the market that all are so alluring, yet a complete waste of money unless one has a system from said vendor... (I guess the stereotype of a dragon's hoard isn't too unrealistic in my case, what can I say, I like tech...) So I wouldn't be surprised if a whole cabinet of storage like this would be 500 bucks, seems in line with how these sorts of things depreciate in value.
  21. To be fair, keeping track of Tx and Rx pairs isn't super critical in practice for 100baseT, since a lot of switches do notice this and swap their behavior accordingly. (Ie, Auto MDI-X) Some Ethernet drivers have also started supporting polarity correction as well. There are 384 ways to wire an 8P8C ("RJ45") connector and still keep the pairs intact. But in practice, only 2 of these ways are allowed. Or 32 if both Ethernet controllers have polarity correction. However, some gigabit and above controllers can also handle swapped pairs as well. So in practice it won't be too many more years before one only has to ensure that each pair position has a pair wired to it. (It doesn't matter if pair 1 technically sits on pair position 2 or 3, the Ethernet controllers will just shrug and auto-negotiate through one's mess.) However, I would love it if Ethernet controllers started supporting splitting a port in half to get two ports at half speed. Ie, split a 1 Gb/s port into two 500 Mb/s ones. Gigabit already sends 250 Mb/s per pair, so two pairs could carry 500 Mb/s, but the controllers just don't want to do it unless they have 4 pairs of connectivity between them. Same thing for 2.5 Gb/s ports being split into two 1.25 Gb/s ones, or 5 Gb/s into two 2.5 Gb/s, or 10 into two 5s. From an electrical and signal integrity standpoint there are no issues. It is all just the standard being a bit strict in wanting 4 pairs for anything more than 100 Mb/s. I can see quite a lot of practical uses for situations where one needs more ports, but wants more than just 100 Mb/s. After all, half a gigabit port is a lot better than that. Another advantage of running on half a port is better resilience. If a cable gets damaged, one won't lose the connection but rather just drop in speed. (though, here one would preferably be able to lose any arbitrary pair and still work. Currently this is almost the case as long as pairs 2 and 3 are intact. A gigabit or above connection will stumble back to 100 Mb/s if pair 1 or 4 breaks, but if 2 or 3 breaks the connection is lost...)
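A quick sanity check on those wiring counts (this just redoes the combinatorics described above, where a pair-preserving wiring is an assignment of the 4 cable pairs to the 4 pair positions plus a polarity choice per pair):

```python
# Back-of-the-envelope check of the wiring counts above.
from math import factorial

pair_orderings   = factorial(4)   # which cable pair lands on which pair position
polarity_choices = 2 ** 4         # each pair's two wires can be swapped or not

print("pair-preserving wirings:", pair_orderings * polarity_choices)   # 384
# Only the two standard orderings (T568A/T568B) with fixed polarity are allowed:
print("allowed by the standard:", 2)
# With per-pair polarity correction at both ends, each of those two orderings
# tolerates any of the 16 polarity combinations:
print("allowed with polarity correction:", 2 * polarity_choices)       # 32
```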
  22. A 23-24 kHz operational range for the oscillating part is going to be really annoying to a lot of animals. Just because humans can't hear it doesn't mean that it isn't noise pollution. So is it suitably quiet? Also, young humans tend to be able to hear up at these frequencies, so perhaps kids will "love" the constant whine... However, there is potential that they have already "fixed" some of the noise issues. For example, having a multi-phase design, such that all nozzles don't oscillate in unison but rather out of phase, would inherently reduce noise since the resultant sound waves would partly cancel out. But it is still important to be considerate to other animals in our surroundings. Especially if one starts embedding these types of coolers into everything...
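A rough illustration of that cancellation idea, in the ideal equal-amplitude case (real nozzle geometry, spacing and reflections make the result far less clean):

```python
# Two identical 23 kHz tones driven 180 degrees out of phase largely cancel
# in this idealized, equal-amplitude sketch.
import numpy as np

fs, f = 192_000, 23_000                       # sample rate and tone frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)                # 10 ms of signal
nozzle_a = np.sin(2 * np.pi * f * t)
nozzle_b = np.sin(2 * np.pi * f * t + np.pi)  # second nozzle, opposite phase

combined = nozzle_a + nozzle_b
print("peak of one nozzle:   ", np.abs(nozzle_a).max())   # ~1.0
print("peak of combined pair:", np.abs(combined).max())   # ~0 in this ideal case
```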
  23. Running a business is indeed a rather multifaceted thing. Creative direction is important, but so are many other things needed to run a business. At some point, it does become hard to do all of this by oneself. Most companies hire an accountant quite early on; it is often the "easiest" role to hand over to someone else without feeling like one has lost any real control. Hiring someone for human resource management is typically second on the list. Eventually project management, legal, etc. Not to mention various experts on things applicable to the business. Anarchy works in small companies where there isn't too much stuff going on at any particular moment. But as a company grows one has to start organizing the efforts. Sadly a lot of this organization adds a lot of extra bureaucratic work to wade through to make even simple decisions. Seeing Linus take a step to the side to better focus on what he does best is honestly a good decision. I personally wouldn't call this a step down, since it isn't. Running a company is teamwork, even in upper management.
  24. Competition is indeed a good thing in this regard. 3dfx did get a bit lost along the way and made some rash business decisions, it seems. However, all companies have debt, and the company itself technically belongs to the shareholders. 3dfx simply didn't deliver enough shareholder value compared to the value of their assets. Assets that were likely to shrink given the writing on the wall at the time, so that the shareholders liquidated the company when it was heading to the grave isn't too surprising. Now, technically the same can happen to any company. Though, I suspect that our current GPU vendors are rather far away from painting themselves into as bad of a corner as 3dfx managed to do.
  25. Postnord offers a "bulk shipping" service for those who ship out a lot of packages. One puts all one's neatly packed boxes into a cage. Then the carrier simply handles the whole cage as one unit until it reaches the post sorting warehouse. This saves greatly on handling costs since the carrier doesn't have to handle each package individually during pickup. When it reaches the warehouse, they don't manually unload the cage one box at a time and gently place each on a conveyor belt, since that would be way too slow... They simply lift the whole cage up into the air and tip it over such that all the contents drop onto a conveyor belt after a meter or so of free fall, before going on down the line to be scanned and sorted, likely suffering further (meter or so high) drops as they get put into other cages/carts for the next leg of their journey. It is a beautifully simple and cost effective method of bulk handling packages. As long as said packages are small and mostly don't contain heavy or fragile goods. But larger boxes or more fragile products aren't in the slightest going to survive. (At least most packages won't see more than 2-3 of these drops, unless Postnord does their magic of having the box sent around the whole country 3 times over before finally getting delivered to the wrong post office at the opposite side of the country...) (So I can't help but wonder how many people's products I wrecked when I had a 10+ kg bar of steel shipped to me in a very small box. The worker at the post office almost dropped it on their own foot due to not expecting it to be that dense... And yes, the box did have shipping damage, but the bar didn't have a scratch.) In the end, I dread seeing that Postnord is handling my packages. Now, most smaller packages go through just fine. Sometimes they get stuck in a loop for a bit, sometimes they get mixed up with another package and never get delivered... Other times there is major shipping damage because someone else's far too dense package decided to crush one's delivery. But overall, 80% of the time, things arrive just fine.