brandishwar

Member
  • Content Count: 973
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    brandishwar got a reaction from GDRRiley in We just leveled up HARDCORE - Fibre Networking Adventure   
    That's been my thought watching this "adventure". I use optical fiber for 10GbE in my home. (With the exception of one short DAC that goes from a MikroTik CSS610 to my CRS317.) I used to have them hanging on the walls, and just yesterday I ran two through my attic to get between a hallway closet (networking equipment) and the rack (NAS, virtualization server). And I certainly wasn't "gentle" in doing that. You don't need to baby fiber cables the way they did while hooking up the ingest stations.
  2. Like
    brandishwar got a reaction from Middcore in Can static KILL your PC? (ft. Electroboom)   
    This is one of those topics for which history is a necessary part of the discussion, since in doing this experiment you basically ignored the advances in PCB design that allow for better ESD protection, along with the published standards for minimizing ESD risk. This is about on par with the whole "cable management doesn't matter" video you made, in which you used a modern chassis with several 120mm fans for airflow, completely ignoring the history of the concept and the recommendations that came from it.
     
    There are several things that protect against ESD, all of which are likely integrated into those DDR2 RAM sticks, and all of which you basically had to overwhelm to kill them. This includes a ground plane along with other components as part of an overall grounding strategy that aims to minimize the risk of ESD to sensitive components. There are published guidelines and standards for ESD protection and grounding. A simple Google search would've provided that information, which would've informed you on WHY you had to go to some extreme lengths to kill the RAM sticks, and why your initial attempts weren't working as well as you thought.
     
    This doesn't mean you should be careless in handling PC components, but it does mean you don't need to be paranoid. There was a time when that level of ESD paranoia was warranted, but it's been quite a long while since that was the case.
  3. Agree
    brandishwar got a reaction from NineEyeRon in Is Your Gaming Rig Being Bottlenecked??   
    Do you like the performance you're getting with that combination? That is what matters more.
  4. Like
    brandishwar got a reaction from dogwitch in Building a 100TB Folding@Home Server!   
    Sounds like they also don't make that easy to do. So hopefully that's something Folding@Home will be changing so they can allow volunteers to set up servers. But then the workers are intended to be able to come and go. Servers have to be dedicated.
  5. Agree
    brandishwar got a reaction from PeterT in 980ti Darwin Awards: Help   
    Bingo! PCBs for motherboards and other components are multi-layer boards (motherboards are typically four layers or more), and there are traces running between the layers to connect things together. As such, @Zanderlinde, trying to use solder will probably only make the situation worse, and you risk compromising your motherboard in the process since you could end up shorting something. Declare it a loss and move on. $650+ lesson learned.
  6. Like
    brandishwar got a reaction from sarkarian in How to speed up my Software development workflow ? I'm using Visual Studio 2019 to build, compile, run tests, for dotnet c# docker containers etc   
    I wish I had your system as my daily driver at work. My work laptop has a Haswell processor, 16GB RAM, and a 1TB SSD, and I can have quite a few tabs open in FF, sometimes with a VM running in the background, plus Visual Studio Enterprise with a few plugins. I'm a professional software engineer with over 20 years of experience. I've worked with Visual Studio for nearly all of that time, primarily with C++ and C#, with a little bit of PowerShell thrown in for good measure.
     
    So let's get to the heart of your issue. First, core count and core speed both matter. Visual Studio will use multiple cores to build several files at once where dependencies allow. Memory is your friend here as well, but unless you're building massive projects - one of my solutions at work has over 70 projects in it - your builds are unlikely to run into any kind of memory ceiling. We can happily build the solution I mentioned on a dual-core virtual machine with... 4GB RAM I think.
     
    And if you're expecting an upgrade to a Ryzen 9 to cut your build times in half because it benchmarks at double the score of your current processor, prepare to be sorely disappointed. Things don't work that way. Your CPU, memory, and storage all play a role. The newer CPU will help, don't get me wrong, but it won't be a spectacular reduction in build times. And there are quite a few reasons for this.
     
    On your storage, going with NVMe will help as well, but only so much. If you want an idea of what I'm talking about, copy a ton of small files (only a few kilobytes each) from one location on your SSD to another and watch the transfer speed. That is what a solution build is doing: opening and reading a ton of small files, and creating a bunch more small files. There is a small penalty incurred every time a file is opened and closed, so the smaller the files, the more of the total time that per-file overhead eats up. It's why transferring a bunch of small files from one location to another takes longer than transferring a few large files of the same total size. (There's a small sketch at the end of this post if you want to measure it yourself.)
     
    Adding more RAM will help, especially since you're running Docker containers on Windows, I presume. (Why?) If those containers aren't all that heavy, consider moving them into a Linux VM - either because they're already Linux containers, or because the software they contain runs happily on Linux if the containers are for your own projects. They'll require fewer resources there.
     
    So, TL;DR:
     
    Yes, the newer CPU will help, but it won't cut your build times in half. It will help spread the load on what you're running. Given all of what you're trying to do, I'd use that alone as the justification for the newer CPU.
    Yes, more memory will be to your advantage here, especially given what you're trying to do, but it's unlikely to significantly help your build times.
    Yes, NVMe will help, but it's unlikely to be the significant performance boost to your build times that you're hoping for. It will help in a lot of other ways, though, so don't be too focused on your build times.
    You're likely to get more bang for buck right now by upgrading your storage to NVMe, but keep the SATA SSD as secondary storage. Upgrade memory next - go to 32GB before deciding to go to 64GB.
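    Since the small-file point above is easy to test, here's a minimal C# sketch (the scratch paths, file count, and sizes are all made up for illustration) that copies the same ~20MB once as thousands of tiny files and once as a single file, timing both. On most drives the small-file pass is noticeably slower even though the bytes moved are identical, which is the same overhead a solution build pays on every open/create/close.

```csharp
using System;
using System.Diagnostics;
using System.IO;

class SmallFileCopyDemo
{
    static void Main()
    {
        // Hypothetical scratch location; point this at whichever drive you want to test.
        string root = Path.Combine(Path.GetTempPath(), "copy-penalty-demo");
        string srcSmall = Path.Combine(root, "src-small");
        string dstSmall = Path.Combine(root, "dst-small");
        string srcLarge = Path.Combine(root, "src-large");
        string dstLarge = Path.Combine(root, "dst-large");
        foreach (var dir in new[] { srcSmall, dstSmall, srcLarge, dstLarge })
            Directory.CreateDirectory(dir);

        const int fileCount = 5000;       // 5,000 files x 4 KB each...
        const int smallSize = 4 * 1024;
        var payload = new byte[smallSize];
        new Random(42).NextBytes(payload);
        for (int i = 0; i < fileCount; i++)
            File.WriteAllBytes(Path.Combine(srcSmall, $"file{i}.bin"), payload);

        // ...versus one file of the same total size (~20 MB).
        var large = new byte[fileCount * smallSize];
        new Random(42).NextBytes(large);
        File.WriteAllBytes(Path.Combine(srcLarge, "big.bin"), large);

        // Time the many-small-files copy.
        var sw = Stopwatch.StartNew();
        foreach (var file in Directory.GetFiles(srcSmall))
            File.Copy(file, Path.Combine(dstSmall, Path.GetFileName(file)), overwrite: true);
        sw.Stop();
        Console.WriteLine($"{fileCount} small files: {sw.ElapsedMilliseconds} ms");

        // Time the single-large-file copy of the same data volume.
        sw.Restart();
        File.Copy(Path.Combine(srcLarge, "big.bin"), Path.Combine(dstLarge, "big.bin"), overwrite: true);
        sw.Stop();
        Console.WriteLine($"1 large file:      {sw.ElapsedMilliseconds} ms");

        Directory.Delete(root, recursive: true);
    }
}
```

    NVMe narrows that gap, but it doesn't eliminate the per-file overhead, which is a big part of why build times don't scale with sequential transfer speeds.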
  7. Agree
    brandishwar got a reaction from Lenovich in Why does camera needs small Viewfinder when it has display?   
    Yes and no.
     
    This is going to vary by camera, since some are better at this than others. It's one reason a lot of pros have a larger, shaped eyepiece on their EVF or OVF. This is particularly a problem when shooting in landscape orientation, where the body of the camera can make it difficult to get your eye fully onto the viewfinder so that you have minimal light entry from the side. If you're outdoors, depending on what you're shooting and your orientation to the light source (e.g. the sun, or any overhead lights at night), this can create flaring or other optical defects when trying to sight through the viewfinder.
     
    But at least those enhanced eyepieces aren't expensive or difficult to install. And the OVF is still better than live view in that regard, unless you have a cover over the display to prevent glare. With DSLRs, trying to take photographs using live view is... far from optimal: there's a longer delay between shots, you may not be able to shoot in rapid succession, and it drains the battery faster.
  8. Like
    brandishwar got a reaction from TechyBen in We Water Cooled an SSD!!   
    Many have used that tiny waterblock to watercool Raspberry Pi boards as well.
  9. Agree
    brandishwar got a reaction from sazrocks in Our work server needs upgrading... never built a server machine, need suggestions   
    The major difference is multiple users accessing the system simultaneously. That's why I said to ask about anticipated growth in access to this machine, as that will inform the spec decisions needed to keep the system in service as long as possible - generally 5 years is the anticipated lifespan of a server or desktop in SMB and enterprise settings.
     
    Beyond that, the specifications are also going to depend substantially on what that server is doing. Anyone can look up system requirements for software, but unless you have recommendations based on your use case, that doesn't really tell you much. For example, a Raspberry Pi can be a great server option for light duty and proof-of-concept. But it'll choke if you try to use it to host a website being hit by a significant number of clients simultaneously. That's why servers tend to have much beefier specs compared to desktops, or are multiple nodes behind a load balancer.
     
    And the other major difference between a desktop and server: downtime must be kept as close to zero as possible. Hence the recommendation against going DIY. Prebuilt systems are inspected prior to shipment, and servers are even more closely inspected to ensure the chance of failure after they've been put into service is as close to zero as possible. The manufacturers are also in a much better position to anticipate and account for the one-off dead piece of hardware that could cost you (and your employer) hours or days to replace, not to mention the downtime if that part fails after you put the system into service.
     
    That was supposed to say that you have no experience spec'ing a server, let alone building one.
  10. Agree
    brandishwar got a reaction from sazrocks in Our work server needs upgrading... never built a server machine, need suggestions   
    So you weren't willing to go prebuilt because you're afraid your boss would make a bad decision, yet you have no experience spec'ing a system, let alone building one with the requirements you have in mind...
     
    To amend @WereCatf's comments, when you talk to them about spec'ing a system, you may want to talk to whoever in your company can give you an approximation of anticipated growth over the next 3 to 5 years for this system specifically. No need to worry about full company growth, just growth in the number of people expected to access this system. That way you avoid a situation where you're spec'ing a system that'll need to be replaced sooner than your company might like. Your boss may even make that part of the requirements for the purchase.
     
    Since WereCatf already mentioned the service agreement and warranty, there's no need to elaborate on that. The only thing I'll add is to speak to their sales teams about your use case. They'll likely be able to give you an idea of what similarly situated companies are using with Microsoft Dynamics GP (Great Plains) so you can make an informed purchase decision. I wouldn't necessarily try to spec that on your own. You could also try contacting Microsoft, letting them know you intend to upgrade the server you're using, and asking for specification recommendations based on your use case and anticipated growth.
  11. Agree
    brandishwar got a reaction from Compuration in Where Intel is in REAL Trouble...   
    Hopefully they're looking at a 40GbE card instead of 10GbE. I think one of their switches has a QSFP+ port.
  12. Like
    brandishwar got a reaction from Ben17 in Our Smallest Build EVER? - Velkase Velka 3   
    They actually went with one of the quietest ones available, as well. Most every other one I've tried has been LOUD.
     
    They're made for 1U and 2U server chassis, not SFF desktop builds. They typically have only the bare minimum cable setup, so there's no need for them to be modular since you're likely going to use everything that's attached.
  13. Like
    brandishwar got a reaction from Fetzie in Nikon D500 vertical grips   
    Yeah, I've got the same one, using it with the D7200. Works like a charm. Zero issues. Seems pretty well built as well. Can't complain.
  14. Agree
    brandishwar got a reaction from MandicReally in Workflow   
    One thing to add to your workflow: between steps 2 and 3, add a step to create a backup copy of all original files. You can delete those once the final video file is created.
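    For what it's worth, that backup step can be automated with whatever tool you prefer. Here's a rough C# sketch of the idea, with placeholder paths you'd obviously swap for your own ingest and backup locations.

```csharp
using System;
using System.IO;

class BackupOriginals
{
    static void Main()
    {
        // Hypothetical paths; substitute your actual footage and backup locations.
        string ingestDir = @"D:\Footage\CurrentProject";
        string backupDir = Path.Combine(@"E:\Backups",
            $"CurrentProject-{DateTime.Now:yyyyMMdd-HHmmss}");

        Directory.CreateDirectory(backupDir);

        // Copy every original file, preserving the folder structure.
        foreach (string src in Directory.GetFiles(ingestDir, "*", SearchOption.AllDirectories))
        {
            string relative = Path.GetRelativePath(ingestDir, src);
            string dst = Path.Combine(backupDir, relative);
            Directory.CreateDirectory(Path.GetDirectoryName(dst)!);
            File.Copy(src, dst, overwrite: false); // never clobber an existing backup
        }

        Console.WriteLine($"Backed up originals to {backupDir}");
    }
}
```

    With overwrite disabled and a timestamped folder name, you can re-run it for each project without touching an earlier backup, then delete the whole folder once the final video file is rendered.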
  15. Informative
    brandishwar got a reaction from GNU/Linus in Why Can't They Fix This?   
    Out of curiosity, are you using any kind of VPN, such as Private Internet Access, and did you have that installed and running on the test phones as well? VPN software is notorious for draining batteries. I don't know of one that doesn't have that issue, with some worse than others. And it's always going to be a problem simply due to how VPN software works.
     
    I use OpenVPN since I self-host to have access to my home network on the go. And I have to be careful about how long I leave it connected. I typically only have to charge my battery at night on my S7. But if I have OpenVPN running in the background, I have to charge sometime in the afternoon as well. More often if my Internet usage is higher than normal.
     
    At the same time, that could also explain why your phone is HOT when pulling it out of your pocket. The heat is, obviously, coming from the phone's processor. And if you have it connected to a VPN with a lot of Internet activity in the background, it'll become noticeably warm.
  16. Informative
    brandishwar got a reaction from The_Vaccine in phone cameras   
    With the same sensor, there are obviously other variables at play. One is the lens, which is going to play a BIG role in picture quality. How the software interprets the signal coming from the sensor is also important. Now, the same OS with the same sensor should mean the lens is the only variable, unless there are issues with the electronics.
     
    And given how tiny the lenses are in smartphones, that doesn't leave a lot of room to get everything right. It's why, with interchangeable lens cameras, the best lenses are also the most expensive for a given focal range (whether zoom or prime). Zeiss lenses, for example, have some of the best glass you can find for interchangeable lens cameras, and so can produce the sharpest images, but they come with a massive price tag as well. Lesser glass means lesser image quality, but a lower price tag.
     
    That's the case with smartphone cameras in a nutshell. Better smartphones are (hopefully) going to have better glass for their camera lens.
  17. Informative
    brandishwar got a reaction from wasab in Front-end and Back-end Developers   
    Having used all three plus a few others, I'm actually partial to Azure DevOps for source control.
     
    Amazon Web Services is not "a restful web service". It's a collection of a lot of different offerings, each with their own level of abstraction away from the underlying hardware. My website, for example, runs on Amazon Lightsail, which is part of the overall AWS ecosystem.
    I've been saying that since I first started professionally nearly 15 years ago. I was a hobbyist before then, so I knew I'd enjoy doing this professionally. It's been a hard ride at times, but just like with medicine or nursing, if you're going into it for the money, you're going to burn yourself out fast.
     
    Or you'll be one of the ones trying to fast-track to a management position, in which case everyone is worse off.
    Glad to not be one of your interns, since you're not explaining it right.
     
    A better analogy for a web service is a phone call: you dial a number, have a conversation, and hang up. The web service is a Remote Procedure Call of some kind, whether it's requesting a web page or information from the National Weather Service. This is much different from a DLL that is "running somewhere else", since there are numerous considerations that your analogy doesn't encompass, the big one being lack of control. A phone call is something everyone can relate to: will the other end even pick up? Will they understand your request? Will you understand the response? What if they hang up too soon or you get disconnected? How do you account for all of that and handle it gracefully?
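    To make the phone-call analogy concrete, here's a minimal C# sketch of calling a web service with those failure modes handled. The National Weather Service endpoint is only an illustrative example; the timeout, the status check, and the catch blocks are the "nobody picked up", "they didn't understand", and "we got disconnected" cases.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class WeatherCall
{
    static async Task Main()
    {
        // Illustrative endpoint; swap in whatever service you're actually calling.
        var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

        try
        {
            // "Dial the number": send the request.
            HttpResponseMessage response =
                await client.GetAsync("https://api.weather.gov/points/39.7456,-97.0892");

            // "Did they understand us?" A 4xx/5xx status throws here.
            response.EnsureSuccessStatusCode();

            // "The conversation": read the response body.
            string body = await response.Content.ReadAsStringAsync();
            Console.WriteLine(body.Substring(0, Math.Min(200, body.Length)));
        }
        catch (TaskCanceledException)
        {
            // "They never picked up": the call timed out.
            Console.WriteLine("Request timed out.");
        }
        catch (HttpRequestException ex)
        {
            // "Disconnected mid-call" or a bad status code.
            Console.WriteLine($"Request failed: {ex.Message}");
        }
    }
}
```

    None of that bookkeeping exists when you call into a DLL sitting in the same process, which is exactly the difference the analogy needs to capture.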