Everything posted by brandishwar

  1. With dual CPUs, you have the NUMA node barrier, and the PCI-E slots on multi-CPU boards are split between the CPUs as well. So you might have quad-SLI, and it looks like quad-SLI to the NVIDIA driver, but it behaves more like dual-SLI because of how the PCI-E slots are actually wired up. That NUMA node barrier also means any data going to the graphics cards on the other CPU has to cross over to that CPU and reach those GPUs through its PCI-E lanes. That's painfully slow, and I'm surprised Windows was allowing it to happen, to be honest. A single CPU with triple or quad-SLI would've been better for avoiding the NUMA node limitation, but then you likely still would've been CPU limited simply because one CPU couldn't keep up with coordinating 4 GPUs in SLI. ETA: Okay, scrap the above, since it doesn't apply here: those old CPUs didn't handle the PCI-Express lanes directly. The chipset handled them, which is obviously slower. The Intel 5520 chipset supports only PCI-E 2.0, and the GTX 900 series is where 2.0 vs 3.0 really started to matter, along with lane counts. And the chipset provided a maximum of 32 lanes, with the mainboard using an NF200 bridge chip (similar to a PLX) to spread them out.
  2. THANK YOU, THANK YOU, THANK YOU for not just the content warning about the flickering colors and text with the Wireshark demo, but also the audio chime for when it was safe to look at the video again. Definitely keep doing that in the future! It's something I wish more content creators took into account.
  3. Oh dear God, this had to have been one of the only LTT videos where I was yelling at my screen... Why didn't you go to a hardware store and find plumbing fittings that would properly fit the inlets and outlets on that radiator? Using a tire inner tube and zip ties?!? Really?!? You had to order in the radiator, so why not take the additional time to find all the proper parts that would've let you do this somewhat properly? And then you had the output of the pump going to the GPU block rather than straight up to the radiator? Going straight up would've eliminated most of the flow resistance, since gravity acting on the mass of fluid in the loop would've provided flow pressure on its own. Or, better yet, NOT having the radiator ABOVE the entire system! This had so much potential that was just... wasted.
  4. Are hard drives still worth it? I don't even need to watch the video to say this: YES! Price per TB for HDDs is still unbeatable and will remain so for the foreseeable future. And combining them into a RAID setup (whether inside your case or via an external enclosure) will still net pretty decent performance as well as capacity, making them a great choice for a large game or media library, or for on-site backups.
  5. And I'm guessing the takedown notices and "cease and desist" letters are already on the way to everyone involved...
  6. That's been my thought watching this "adventure". I use optical fiber for 10GbE in my home (with the exception of one short DAC that goes from a MikroTik CSS610 to my CRS317). I used to have the fiber runs hanging on the walls, and just yesterday I ran two through my attic to get between a hallway closet (networking equipment) and the rack (NAS, virtualization server). And I certainly wasn't "gentle" in doing that. You don't need to baby fiber cables the way they were babying them while hooking up the ingest stations.
  7. This is one of those topics for which history is a necessary part of the discussion, since in doing this experiment you basically ignored the advances in PCB design that allow for better ESD protection, along with the published standards for minimizing ESD risk. This is about on par with the whole "cable management doesn't matter" video you made, in which you used a modern chassis with several 120mm fans for airflow, completely ignoring the history of the concept and recommendations. There are several things that protect against ESD, all of which are likely integrated into those DDR2 RAM sticks, and all of which you basically had to overwhelm to kill them. This includes a ground plane along with other components as part of an overall grounding strategy that aims to minimize the risk of ESD to sensitive components. There are published guidelines and standards for ESD protection and grounding. A simple Google search would've provided that information, which would've told you WHY you had to go to such extreme lengths to kill the RAM sticks, and why your initial attempts weren't working as well as you thought. This doesn't mean you should be careless in handling PC components, but it does mean you don't need to be paranoid. There was a time when paranoia about ESD was necessary, but it's been quite a long while since that was the case.
  8. Computers aren't the only things that are water cooled. I recall seeing a project years ago where small radiators like these were used in RC vehicles.
  9. I know "software as a service" gets a bad rap in the industries where it's become prominent, but it's also allowed for a lot of leeway when it comes to software pricing. For example, Microsoft Office had a hefty price tag before Office 365 came along at $100 per year for 5 machines (with different tiers as well), and that Office 365 license provides everything that was part of Office Professional. Microsoft has also largely moved toward the "software as a service" model for their other professional suites like Visual Studio - though with VS they also have the Community editions available, which have enough functionality for most developers who aren't working on enterprise-level software. And from Microsoft's perspective, it's likely much better for their bottom line to have customers paying a monthly or annual subscription than paying one price up front every however many years, especially if it brings more customers onboard. That subscription also includes technical support, whereas that support would expire after a short time with the "buy once" model. So there are trade-offs either way. Adobe's software suite is similar in that regard. I pay $10 for Lr CC with 1TB of online storage. I've considered including Ps in that as well, but I don't mind exporting to 16-bit TIFF to use GIMP for the few places where I do need it. It's what I used to do with Nikon's software before I bit the bullet and went with Lr, as it was much easier to learn Lr than to learn how to do the same in GIMP.
  10. You can also find used options through mpb.com and KEH. I've ordered through the former a couple of times and also sold to them, and I've heard a lot of good about the latter; both have solid reputations.
  11. Sounds like they also don't make that easy to do, so hopefully that's something Folding@Home will be changing so volunteers can set up servers. The catch is that workers are intended to be able to come and go, while servers have to be dedicated.
  12. Right, and now that I know why you're wanting to parse the example line in question and where it came from, I'm pointing you to an easier method of reading and writing the .INI file that abstracts away reading the file, reading the line, and parsing the line. Sometimes knowing the full scope of the problem you're trying to solve lets us point you to solutions that are a lot simpler than whatever you're considering. So if you're now planning to implement a .INI file parser yourself, reconsider.
  13. So you're trying to parse an INI file? If you're doing this on Windows, there's already an API that'll do all the parsing for you without you having to handle the actual file contents: https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getprivateprofilestring
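     A minimal sketch of using that API from C++ (the file path, section name, and key name here are made up for illustration):

         #include <windows.h>
         #include <iostream>

         int main()
         {
             char value[256] = {};

             // Reads the Resolution key from the [Display] section of settings.ini,
             // returning the default string if the section or key is missing.
             // Note: pass a full path, or Windows looks in the Windows directory.
             GetPrivateProfileStringA("Display", "Resolution", "1920x1080",
                                      value, sizeof(value), "C:\\example\\settings.ini");

             std::cout << "Resolution = " << value << "\n";
             return 0;
         }

     WritePrivateProfileStringA is the companion call for writing values back out.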
  14. As long as nothing conductive touches it and creates a short, you don't really need to worry about it.
  15. Short answer: no. Long answer: Whether you can use registered DIMMs is dependent upon the mainboard, not the CPU. Try to use RDIMMs on a mainboard that doesn't support it and the system won't POST. And since the memory controller is embedded in the CPU, it's likely that attempting to use ECC RAM with a CPU that doesn't support it will give you a system that won't POST.
  16. @wouterroos The problem you're going to run into is the projects and solutions you're trying to build. I mentioned the dependencies above. Visual Studio (and msbuild by extension) can build the projects in a solution in parallel, up to the processor count of the machine, but how far VS and msbuild can parallelize the build depends on the individual projects and the project dependency tree within the solution. So you may not see a substantial improvement over existing build times. Visual Studio parallelizes the build by default, but msbuild does not, so on a build server check that you have that enabled (for msbuild, it's a command-line option). And for C++ projects, you have to turn ON "multiprocessor compilation" per project, as that isn't enabled by default. With one solution we have where I work, I enabled both on the VC++ projects along with parallel builds in msbuild and observed an immediate 40% improvement to our solution build time on the build agent. And if you aren't using Azure DevOps to set up a build agent, make sure whatever build server you have (e.g. Maven, etc.) is configured so projects and solutions are always built in parallel. A parallel build sees most of its improvement at the front of the build, though, before the solution dependency tree really starts to kick in. Will these processors result in a substantial improvement to your workflow? That really depends on what you're referring to. If you're talking build times, I've already said the improvement probably won't be all that impressive. But if you have several teams trying to build projects at the same time on a build server, it'll allow more builds to run simultaneously without throttling everything. First look at whether there are ways to optimize your existing builds before upgrading the hardware. And for individual developer machines, as I mentioned above, I'm currently running a Haswell i7 in my laptop, and I would not expect going to a new-generation AMD or Intel chip to result in substantial improvements to my workflow.
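     For reference, here's roughly what those two settings look like (the solution name is a placeholder). msbuild only builds projects in parallel when you pass the -m/-maxcpucount switch, and the C++ compiler only compiles a project's source files in parallel when /MP ("Multi-processor Compilation") is enabled:

         rem msbuild does NOT parallelize across projects by default - pass -m:
         msbuild MySolution.sln -m -p:Configuration=Release

         rem Per C++ project, enable "Multi-processor Compilation" (/MP) under
         rem Configuration Properties > C/C++ > General, or in the .vcxproj:
         rem   <MultiProcessorCompilation>true</MultiProcessorCompilation>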
  17. What USB 2.0 devices are connected, and how are they connected to your system? If they aren't in the dedicated USB 2.0 ports, they could be causing the entire USB 3.1 bus to be throttled. It shouldn't be happening, but that's not to say it doesn't happen. You could also try plugging the SSD into one of the 3.1 Gen 2 ports on the rear (they're teal and adjacent to the DVI and HDMI ports). Those ports are on a separate controller from all the other USB ports on the mainboard - they're the only two on the ASMedia controller, while all the others go through the Z170 chipset. I also found this article on Medium that could provide some insight: https://medium.com/@crossphd/how-to-fix-slow-usb-3-0-transfer-speeds-213455173b91
  18. It looks like the power port on the converter is USB 2.0. Your system might be detecting that and thinking you have a USB 2.0 device connected to the front panel. If you can, try connecting the power portion of the drive to a USB charger or a separate USB port and look at your transfer speeds then. Beyond that, what are you trying to copy over? If it's a bunch of small files, those will never copy at high throughput, due to some details of how file systems work, unless you're using a utility that copies files in parallel - and Windows Explorer doesn't do that, and neither does the command-line "copy" command.
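     If you do want a parallel copy on Windows, one readily available option (my suggestion, not something from the original post) is robocopy's multithreaded mode; the paths here are placeholders:

         rem Copy the whole folder tree using up to 16 copy threads in parallel
         robocopy "D:\Source" "E:\Destination" /E /MT:16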
  19. I wish I had your system as my daily driver at work. My work laptop is a Haswell processor with 16GB RAM and a 1TB SSD, and I can have quite a few tabs open in FF, sometimes with a VM running in the background, plus Visual Studio Enterprise with a few plugins. I'm a professional software engineer with over 20 years of experience. I've worked with Visual Studio for nearly that entire time, and I work primarily with C++ and C# with a little bit of PowerShell thrown in for good measure. So let's get to the heart of your issue. First, core count and core speed both matter. Visual Studio will use multiple cores to build several files at once where dependencies allow. Memory is your friend here as well, but unless you're building massive projects - one of my solutions at work has over 70 projects in it - your builds are unlikely to run into any kind of memory ceiling. We can happily build the solution I mentioned on a dual-core virtual machine with... 4GB RAM, I think. And if you're expecting upgrading to a Ryzen 9 to cut your build times in half because it benchmarks at double the score of your processor, prepare to be sorely disappointed. Things don't work that way. Your CPU, memory, and storage will all play a role. The newer CPU will help, don't get me wrong, but it won't be a spectacular reduction in build times, and there are quite a few reasons for that. On storage, going with NVMe will help as well, but only so much. If you want an idea of what I'm talking about, copy a ton of small files (only a few kilobytes each) from one location on your SSD to another and watch the transfer speed. That is what a solution build is doing: opening and reading a ton of small files, and creating a bunch more small files. There's a small penalty incurred every time a file is opened and closed, and the smaller the files, the more often that penalty is paid for the same amount of data. It's why transferring a bunch of small files from one location to another takes longer than a few large files. (See the sketch below for a quick way to observe this.) Adding more RAM will help, especially since you're running Docker containers on Windows, I presume. (Why?) If those Docker containers aren't all that heavy, consider moving them into a Linux VM - if they're Linux containers, or if the software in them can run happily on Linux. They'll require fewer resources there. So, TL;DR: Yes, the newer CPU will help, but it won't cut your build times in half. It will help spread the load of what you're running, and given everything you're trying to do, I'd use that alone as the justification for the newer CPU. Yes, more memory will be to your advantage here, especially given what you're trying to do, but it's unlikely to significantly help your build times either. Yes, NVMe will help, but it's unlikely to be the significant boost to your build times that you're hoping for. It will help in a lot of other ways, though, so don't be too focused on your build times. You're likely to get more bang for your buck right now by upgrading your storage to NVMe but keeping the SATA SSD as secondary storage. Upgrade memory next - go to 32GB before deciding to go to 64GB.
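     Here's a quick-and-dirty C++ sketch (my own illustration, with made-up sizes) of the small-file penalty I'm describing. It writes the same 64 MiB once as a single file and once as thousands of 4 KiB files; the difference in the timings is mostly per-file open/create/close overhead:

         // Compile as C++17 or later (uses std::filesystem).
         #include <chrono>
         #include <filesystem>
         #include <fstream>
         #include <iostream>
         #include <string>
         #include <vector>

         namespace fs = std::filesystem;
         using clk = std::chrono::steady_clock;

         int main()
         {
             const std::size_t totalBytes = 64 * 1024 * 1024;   // 64 MiB total
             const std::size_t smallSize  = 4 * 1024;           // 4 KiB per small file
             const std::vector<char> chunk(smallSize, 'x');

             fs::create_directories("bench/small");

             // One large file, written in 4 KiB chunks.
             auto t0 = clk::now();
             {
                 std::ofstream big("bench/large.bin", std::ios::binary);
                 for (std::size_t written = 0; written < totalBytes; written += smallSize)
                     big.write(chunk.data(), chunk.size());
             }
             auto t1 = clk::now();

             // The same data spread across thousands of small files.
             for (std::size_t i = 0; i < totalBytes / smallSize; ++i) {
                 std::ofstream f("bench/small/" + std::to_string(i) + ".bin", std::ios::binary);
                 f.write(chunk.data(), chunk.size());
             }
             auto t2 = clk::now();

             using ms = std::chrono::milliseconds;
             std::cout << "one large file:   " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n";
             std::cout << "many small files: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
         }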
  20. If you can transfer at GbE speeds across your own network, but can't get anywhere near that beyond your network, it's either the connection between your router and modem, or the connection between the modem and your ISP. The router may have GbE ports, but the WAN port may only be Fast Ethernet (100Mbps). The RJ45 port on the modem may also only be Fast Ethernet. What is the make/model of your router and modem?
  21. Yes and no. This is going to vary by camera, since some are better at this than others. It's one reason a lot of pros have a larger, shaped eyepiece on their E/OVF. This is particularly a problem when shooting in landscape orientation, where the body of the camera can make it difficult to get your eye close enough to the OVF to keep light from entering at the side. If you're outdoors, depending on what you're shooting and your orientation to the light source (e.g. the sun, or any overhead lights at night), this can create flaring or other optical defects when sighting through the viewfinder. But at least those enhanced eyepieces aren't expensive or difficult to install. And the OVF is still better than live view on that front, unless you have a cover over the display to prevent glare. With DSLRs, trying to take photographs using live view is... far from optimal, since there's a longer delay between shots and you may not be able to shoot in rapid succession, along with the battery drain.
  22. The major difference is multiple users accessing the system simultaneously. That's why I said to ask about anticipated growth in access to this machine, as that will inform the spec decisions needed to keep the system in service as long as possible - generally 5 years is the anticipated lifespan of a server or desktop in SMB and enterprise environments. Beyond that, the specifications are also going to depend substantially on what that server is doing. Anyone can look up system requirements for software, but unless you have recommendations based on your use case, that doesn't really tell you much. For example, a Raspberry Pi can be a great server option for light duty and proof-of-concept work, but it'll choke if you try to use it to host a website being hit by a significant number of clients simultaneously. That's why servers tend to have much beefier specs than desktops, or are deployed as multiple nodes behind a load balancer. And the other major difference between a desktop and a server: downtime must be kept as close to zero as possible. Hence the recommendation against going DIY. Prebuilt systems are inspected prior to shipment, and servers are even more closely inspected to ensure the chance of failure after they've been put into service is as close to zero as possible. The manufacturers are also in a much better position to anticipate and account for the one-off dead piece of hardware that could cost you (and your employer) hours or days of replacement time, not to mention any downtime should that part fail after you put the system into service. And that line was supposed to read that you have no experience spec'ing a server, let alone building one.
  23. So you weren't willing to go prebuilt because you're afraid your boss would make a bad decision, yet you have no experience spec'ing a system, let alone building one with the requirements you have in mind... To amend @WereCatf's comments, when you talk to them about spec'ing a system, you may want to talk to whoever in your company can give you an approximation of the anticipated growth over the next 3 to 5 years for this system specifically. No need to worry about full company growth, just growth in the number of people expected to access this system. That way you avoid spec'ing a system that'll need to be replaced sooner than your company would like. Your boss may even make that part of the requirements for the purchase. Since WereCatf already mentioned the service agreement and warranty, there's no need to elaborate on that. The only thing I'll add is to speak to the vendors' sales teams about your use case. They'll likely be able to give you an idea of what similarly situated companies are using with Microsoft Dynamics GP (Great Plains) so you can make an informed purchase decision. I wouldn't necessarily try to go at that on your own. You could also try contacting Microsoft, letting them know that you intend to upgrade the server you're using, and asking for specification recommendations based on your use case and anticipated growth.
  24. That's only the case if you try to put it in series with everything else, which, no, you shouldn't do. Instead, you put it in parallel with the loop's flow, such as with the fitting @VegetableStu shows in his reply, the Alphacool MCX 5-way divider.
  25. Many have used that tiny waterblock to watercool Raspberry Pi boards as well.