
brandishwar

Member
  • Posts: 977
  • Joined

  • Last visited

Reputation Activity

  1. Agree
    brandishwar got a reaction from CityCultivator in I Declared Victory. I was SOOO Wrong…   
    THANK YOU, THANK YOU, THANK YOU for not just the content warning about the flickering colors and text with the Wireshark demo, but also the audio chime for when it was safe to look at the video again. Definitely keep doing that in the future! It's something I wish more content creators took into account.
  2. Agree
    brandishwar got a reaction from Dracarris, Radium_Angel, and Kilrah in Our Craziest Cooling Project Yet
    Oh dear God, this had to have been one of the only LTT videos where I was yelling at my screen... Why didn't you go to a hardware store and find plumbing fittings that would've properly fit the inlets and outlets on that radiator? A tire inner-tube and zip ties?!? Really?!? You had to order in the radiator anyway, so why not take the extra time to find the proper parts that would've let you do this right?
     
    And then you had the pump output going to the GPU block rather than straight up to the radiator? Going straight to the radiator would've eliminated most of the flow resistance, and the weight of the fluid coming back down would've provided pressure on its own. Or, better yet, don't put the radiator ABOVE the entire system!
     
    This had so much potential that was just.... wasted.
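     
    For a rough sense of the numbers behind that gravity argument, here's a quick back-of-the-envelope sketch in Python. The 2 m column height is an assumed figure for illustration, not something taken from the video:

```python
# Rough static head pressure of a water column (illustrative only).
# The height is an assumed value, not taken from the video.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
h = 2.0        # assumed height of the fluid column, in metres

pressure_pa = rho * g * h                       # hydrostatic pressure, in pascals
print(f"{pressure_pa / 1000:.1f} kPa of static head from a {h:.0f} m column")
```

    Swap in whatever height the actual setup had to get a feel for how much pressure that column represents.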
  3. Like
    brandishwar got a reaction from dogwitch in Are Hard Drives Still Worth It?   
    Are hard drives still worth it? I don't even need to watch the video to say this: YES! Price per TB for HDDs is still unbeatable and will remain such for the foreseeable future. And combining them into a RAID setup (whether in your case or via an external enclosure) will still net pretty decent performance as well as capacity, making them a great choice for a large game or media library or on-site backups.
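     
    If you want to sanity-check the price-per-TB point yourself, a trivial script like the one below does the math. The drive names and prices are placeholders for illustration, not current market data:

```python
# Quick price-per-TB comparison. Capacities and prices are made-up
# placeholders; plug in whatever the drives actually cost where you live.
drives = {
    "18TB HDD": (18, 280.00),      # (capacity in TB, assumed price in USD)
    "4TB SATA SSD": (4, 300.00),
    "2TB NVMe SSD": (2, 180.00),
}

for name, (capacity_tb, price_usd) in drives.items():
    print(f"{name}: ${price_usd / capacity_tb:.2f} per TB")
```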
  4. Agree
    brandishwar got a reaction from GDRRiley and Lurick in We just leveled up HARDCORE - Fibre Networking Adventure
    That's been my thought watching this "adventure". I use optical fiber for 10GbE in my home (with the exception of one short DAC that goes from a MikroTik CSS610 to my CRS317). I used to have the runs hanging on the walls, and just yesterday I ran two through my attic to get between a hallway closet (networking equipment) and the rack (NAS, virtualization server). And I certainly wasn't "gentle" in doing that. You don't need to baby fiber cables the way they were while hooking up the ingest stations.
  5. Like
    brandishwar got a reaction from Middcore and panzersharkcat in Can static KILL your PC? (ft. Electroboom)
    This is one of those topics where history is a necessary part of the discussion, because in doing this experiment you basically ignored the advances in PCB design that allow for better ESD protection, along with the published standards for minimizing ESD risk. This is about on par with the whole "cable management doesn't matter" video, where you used a modern chassis with several 120mm fans for airflow and completely ignored the history of the concept and the recommendations that grew out of it.
     
    There are several layers of protection against ESD, all of which are likely integrated into those DDR2 RAM sticks, and all of which you basically had to overwhelm to kill them. That includes a ground plane and other components that form an overall grounding strategy aimed at minimizing the risk of ESD to sensitive parts. There are published guidelines and standards for ESD protection and grounding, and a simple Google search would've turned them up. That would've told you WHY you had to go to such extreme lengths to kill the RAM sticks, and why your initial attempts weren't working as well as you thought.
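     
    For a sense of the scale of energy that protection circuitry has to soak up, here's a quick back-of-the-envelope calculation using the classic human-body-model capacitance of 100 pF; the 8 kV figure is just an assumed example of a strong carpet shock, not a measured value:

```python
# Ballpark energy of a static discharge using the classic human-body-model
# capacitance (100 pF). The voltage is an assumed example, not a measurement.
C = 100e-12   # body capacitance, farads
V = 8000.0    # assumed discharge voltage, volts

energy_joules = 0.5 * C * V**2       # E = 1/2 * C * V^2
print(f"{energy_joules * 1000:.1f} mJ per discharge")   # ~3.2 mJ at 8 kV
```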
     
    This doesn't mean you should be careless in handling PC components, but it does mean you don't need to be paranoid. There was a time when paranoia about ESD was warranted, but it's been quite a long while since that was the case.
  6. Agree
    brandishwar got a reaction from NineEyeRon in Is Your Gaming Rig Being Bottlenecked??   
    Do you like the performance you're getting with that combination? That is what matters more.
  7. Like
    brandishwar got a reaction from dogwitch in Building a 100TB Folding@Home Server!   
    Sounds like they don't make that easy to do, either. Hopefully that's something Folding@Home will change so they can let volunteers set up servers. Then again, the work clients are intended to be able to come and go; servers have to be dedicated.
  8. Agree
    brandishwar got a reaction from PeterT in 980ti Darwin Awards: Help   
    Bingo! Motherboard and component PCBs are multi-layer boards, with traces routed on the inner layers to connect everything together. As such, @Zanderlinde, trying to use solder will probably only make the situation worse, and you risk compromising your motherboard in the process since you could end up shorting something. Declare it a loss and move on. A $650+ lesson learned.
  9. Like
    brandishwar got a reaction from sarkarian and Aaron_T in How to speed up my Software development workflow ? I'm using Visual Studio 2019 to build, compile, run tests, for dotnet c# docker containers etc
    I wish I had your system as my daily driver at work. My work laptop is a Haswell processor with 16GB RAM and a 1TB SSD, and I can have quite a few tabs open in Firefox, sometimes with a VM running in the background, plus Visual Studio Enterprise with a few plugins. I'm a professional software engineer with over 20 years of experience. I've worked with Visual Studio for nearly all of those 20 years, and I work primarily with C++ and C# with a little bit of PowerShell thrown in for good measure.
     
    So let's get to the heart of your issue. First, core count and core speed both matter. Visual Studio will use multiple cores to build several files at once where dependencies allow. Memory is your friend here as well, but unless you're building massive projects - one of my solutions at work has over 70 projects in it - your builds are unlikely to run into any kind of memory ceiling. We can happily build the solution I mentioned on a dual-core virtual machine with... 4GB RAM I think.
     
    And if you're expecting an upgrade to a Ryzen 9 to cut your build times in half because it benchmarks at double the score of your current processor, prepare to be sorely disappointed. Things don't work that way. Your CPU, memory, and storage all play a role. The newer CPU will help, don't get me wrong, but it won't be a spectacular reduction in build times. And there are quite a few reasons for this.
     
    On your storage, going with NVMe will help as well, but only so much. If you want an idea of what I'm talking about, copy a ton of small files (only a few kilobytes each) from one location on your SSD to another and watch the transfer speed. That is what a solution build is doing: opening and reading a ton of small files, and creating a bunch more small files. There's a small fixed penalty incurred every time a file is opened and closed, so the more files involved, the more often that penalty is paid. It's why transferring a bunch of small files from one location to another takes longer than a few large files of the same total size.
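     
    If you want to see that per-file overhead directly, here's a rough sketch that writes the same total amount of data as thousands of small files and as one large file, then compares the times. The file counts and sizes are arbitrary choices, not anything specific to Visual Studio:

```python
# Rough illustration of per-file overhead: write the same total amount of
# data as many small files vs. one large file and compare the elapsed times.
# File counts and sizes are arbitrary; adjust to taste.
import os
import tempfile
import time

def timed_write(directory, file_count, file_size):
    payload = b"x" * file_size
    start = time.perf_counter()
    for i in range(file_count):
        with open(os.path.join(directory, f"file_{i}.bin"), "wb") as f:
            f.write(payload)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    small = timed_write(tmp, file_count=5000, file_size=4 * 1024)      # 5000 x 4 KiB
    large = timed_write(tmp, file_count=1, file_size=5000 * 4 * 1024)  # 1 x ~20 MiB
    print(f"5000 small files: {small:.2f}s, one large file: {large:.2f}s")
```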
     
    Adding more RAM will help, especially since you're running Docker containers on Windows, I presume. (Why?) If those containers aren't all that heavy, consider moving them into a Linux VM: directly if they're Linux containers, or by rebuilding them if the software inside runs happily on Linux. They'll require fewer resources there.
     
    So, TL;DR:
     
    • Yes, the newer CPU will help, but it won't cut your build times in half. It will help spread the load of everything you're running, and given all of what you're trying to do, I'd use that alone as the justification for the newer CPU.
    • Yes, more memory will be to your advantage here, especially given what you're trying to do, but it's unlikely to significantly help your build times either.
    • Yes, NVMe will help, but it's unlikely to be the significant boost to your build times that you're hoping for. It will help in a lot of other ways, though, so don't be too focused on your build times.
    • You're likely to get the most bang for your buck right now by upgrading your storage to NVMe, but keep the SATA SSD as secondary storage. Upgrade memory next, and go to 32GB before deciding to go to 64GB.
  10. Agree
    brandishwar got a reaction from Lenovich in Why does camera needs small Viewfinder when it has display?   
    Yes and no.
     
    This is going to vary by camera, since some are better at this than others. It's one reason a lot of pros put a larger, shaped eyepiece on their EVF/OVF. It's particularly a problem when shooting in landscape orientation, where the body of the camera can make it difficult to get your eye fully onto the viewfinder so that you have minimal light entry from the side. If you're outdoors, then depending on what you're shooting and its orientation to the light source (e.g. the sun, or any overhead lights at night), this could create flaring or other optical defects when you try to sight through the viewfinder.
     
    But at least those enhanced eyepieces aren't expensive or difficult to install. And the OVF is still better than live view on that front, unless you have a cover over the display to prevent glare. With DSLRs, trying to take photographs using live view is... far from optimal: there's a longer delay between shots, you may not be able to shoot in rapid succession, and it drains the battery faster.
  11. Like
    brandishwar got a reaction from TechyBen in We Water Cooled an SSD!!   
    Many have used that tiny waterblock to watercool Raspberry Pi boards as well.
  12. Agree
    brandishwar got a reaction from sazrocks, TechChild, and AbsoluteFool in Our work server needs upgrading... never built a server machine, need suggestions
    The major difference is multiple users accessing the system simultaneously. That's why I said to ask about anticipated growth in access to this machine, as that will inform the spec decisions needed to keep the system in service as long as possible - generally 5 years is the anticipated lifespan of a server or desktop for SMB and enterprise.
     
    Beyond that, the specifications are also going to depend substantially on what that server is doing. Anyone can look up system requirements for software, but unless you have recommendations based on your use case, that doesn't really tell you much. For example, a Raspberry Pi can be a great server option for light duty and proof-of-concept. But it'll choke if you try to use it to host a website being hit by a significant number of clients simultaneously. That's why servers tend to have much beefier specs compared to desktops, or are multiple nodes behind a load balancer.
     
    And the other major difference between a desktop and server: downtime must be kept as close to zero as possible. Hence the recommendation against going DIY. Prebuilt systems are inspected prior to shipment, and servers are even more closely inspected to ensure the chance of failure after they've been put into service is as close to zero as possible. The manufacturers are also in a much better position to anticipate and account for the one-off dead piece of hardware that could cost you (and your employer) hours or days of time replacing, not to mention any downtime should that part fail after you put the system into service.
     
    That was supposed to say that you have no experience spec'ing a server, let alone building one.
  13. Agree
    brandishwar got a reaction from sazrocks and AbsoluteFool in Our work server needs upgrading... never built a server machine, need suggestions
    So you weren't willing to go prebuilt because you were afraid your boss would make a bad decision, yet you have no experience spec'ing a system, let alone building one with the requirements you have in mind...
     
    To amend @WereCatf's comments: when you talk to them about spec'ing a system, you may want to talk to whoever in your company can give you an approximation of the anticipated growth over the next 3 to 5 years for this system specifically. No need to worry about overall company growth, just growth in the number of people expected to access this system. That way you avoid spec'ing a system that'll need to be replaced sooner than your company might like. Your boss may even make that part of the requirements for the purchase.
     
    Since WereCatf already mentioned the service agreement and warranty, there's no need to elaborate on that. The only thing I'll add is to speak to the vendors' sales teams about your use case. They'll likely be able to give you an idea of what similarly situated companies are using with Microsoft Dynamics GP (Great Plains) so you can make an informed purchase decision; I wouldn't try to work that out entirely on your own. You could also try contacting Microsoft, letting them know you intend to upgrade the server you're using and asking for specification recommendations based on your use case and anticipated growth.
  14. Agree
    brandishwar got a reaction from Compuration in Where Intel is in REAL Trouble...   
    Hopefully they're looking at a 40GbE card instead of 10GbE. I think one of their switches has a QSFP+ port.
  15. Like
    brandishwar got a reaction from Ben17 in Our Smallest Build EVER? - Velkase Velka 3   
    They actually went with one of the quietest ones available, as well. Almost every other one I've tried has been LOUD.
     
    They're made for 1U and 2U server chassis, not SFF desktop builds. They also typically come with only the bare minimum of cables, so there's no real need for them to be modular since you're likely going to use everything that's attached anyway.
  16. Like
    brandishwar got a reaction from Fetzie in Nikon D500 vertical grips   
    Yeah, I've got the same one, using it with the D7200. Works like a charm. Zero issues. Seems pretty well built as well. Can't complain.
  17. Agree
    brandishwar got a reaction from MandicReally in Workflow   
    One thing to add to your workflow: between steps 2 and 3, add a step to create a backup copy of all original files. You can delete those once the final video file is created.
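     
    If you want to automate that backup step, something along these lines would do it. The source and backup paths are hypothetical placeholders, not paths from your setup:

```python
# Copy all original footage into a dated backup folder before editing starts.
# The source and backup paths are placeholders; point them at your own drives.
import shutil
from datetime import date
from pathlib import Path

source = Path("D:/ingest/project_originals")   # hypothetical folder of original files
backup_root = Path("E:/backups")               # hypothetical backup drive

destination = backup_root / f"{date.today():%Y-%m-%d}_{source.name}"
shutil.copytree(source, destination)           # errors out if the destination already exists
print(f"Backed up originals to {destination}")
```

    Once the final video file is rendered and verified, the dated folder can be deleted.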