tarfeef101

Member
  • Content Count

    1,720
  • Joined

  • Last visited

Awards


4 Followers

About tarfeef101

  • Title
    Perpetually Procrastinating

Contact Methods

  • Discord
    tarfeef101#6691
  • Steam
    tarfeef101
  • Origin
    tarfeef101
  • PlayStation Network
    tarfeef101
  • Xbox Live
    tarfeef101
  • Twitch.tv
    tarfeef101
  • Heatware
    tarfeef101

Profile Information

  • Gender
    Male
  • Location
    Lower Mainland, B.C., Canada
  • Interests
    Hardware, Gaming, Warframe, Mining, Development, Computer Science, DevOps, Cloud, Automation, Machine Learning
  • Occupation
    DevOps Engineer @ Thrive Health

System

  • CPU
    i7-7700K @ 5.0 GHz
  • Motherboard
    Asus Maximus VIII Impact
  • RAM
    32 GB Crucial Ballistix Sport @ 3000MHz
  • GPU
    GTX 1080 Ti
  • Case
    Corsair 280X, Modded Front Panel
  • Storage
    250GB Samsung 850 Evo, Crucial MX500 2TB, Gigabyte 500GB NVMe
  • PSU
    EVGA Supernova G1 1000W
  • Display(s)
    Dell S2417DG, AOPEN 24HC1QR
  • Cooling
    Custom Watercooling: Mostly AliExpress Stuff, EK GPU Block Though
  • Keyboard
    Corsair K70 and/or Logitech G610 w/ Cherry MX Browns
  • Mouse
    Logitech G402
  • Sound
    Audioquest Dragonfly USB DAC
  • Operating System
    Windows 10 + Manjaro Linux

Recent Profile Visitors

3,092 profile views
  1. Well, unless you're measuring that with an oscilloscope, kinda. Onboard sensors aren't famous for being the most accurate. Some chips/boards expose a die-sense voltage, but the board has to actually support that, and your monitoring software also has to be reading that sensor and not another one. Also, unless you manually disabled all forms of boost on the CPU, it very well may still boost over your settings in the BIOS.
  2. Yeah, that's normal if you're not under a heavy multi-core load. Idle and single-core clocks go pretty high relative to the all-core clock by default, which is fine. Don't be worried about it.
  3. Well, idk why you're not trying to run at 3200. That'll help way more than better timings at 2400. A rough guideline is that your timings are "good" if the primary CAS latency satisfies the following inequality: Speed (MHz) / CAS > 225. To be clear, if it's worse than that, that doesn't make it BAD, it's just not impressive or anything. Some of the best value kits out there are 3200 CL16 or 3600 CL18. Those both work out to 200 in that equation (there's a quick worked example of the ratio after this list). Both are totally fine, just not "good". If you're even lower than 200, I'd consider that fairly loose (bad) timings, since it doesn't cost much to get to at least 200, and even 225 isn't much of a price increase. In your case, though, since you're still under 3600 MHz (so your Infinity Fabric has room to get faster), and you are using an APU (which benefits a lot from faster RAM speeds), I'd recommend prioritizing speed over all else.
  4. That depends entirely on the RAM speed, capacity, number of DIMMs, motherboard/CPU, etc. We need more information.
  5. Just want to shout out Thermaltake:
    I moved and lost some case feet and a PSU cable (the latter of which wasn't from Thermaltake).

    I asked them to send replacements, more than 2 years after purchase, and for a reason that was totally my fault. Within 24 hours they had already sent them out to me. I am impressed, and happy.

  6. So I have 2 setups from which I think I can draw relevant experience:

    1. My personal setup in my tiny apartment: I have a pfSense box ("box" is a bit of a generous term, it's actually just a motherboard and PSU on top of a storage bin) as my router/firewall, and an R7000P in access point mode for my AP. It works great. pfSense, as one might expect, does everything you might want, and more, because it's just FreeBSD, has dark mode :), etc.

    2. My parents' setup, which I helped with last time I was there. I left them with a Ubiquiti EdgeRouter X (the cheaper end of their lineup) and a TP-Link EAP AC1750 for their access point.

    The former setup (the pfSense one) is great because I'm a giant nerd, I had extra PC parts, and I host servers and stuff; I killed consumer routers because they couldn't handle the load. For just dealing with regular internet traffic for a family, though, it's unnecessary. That said, if you have an old PC lying around with a couple gigs of RAM and a dual-core CPU that's not 10+ years old, that's totally good enough, and it won't cost you more money. I can't speak much about my own AP because, well, I have a 500 sq ft apartment and 1 user; it doing well in that use case isn't an achievement.

    The EdgeRouter X was simple enough, it does actually run on Debian if you want to do more with it (though the cheaper ones don't have a lot of horsepower, so I wouldn't go very far), and it's small and quiet. For a home with 3 people, even working from home and/or streaming all the time, it had no issues. It also has a built-in switch, which most motherboards you might use for pfSense do not.

    As for that EAP access point, it is great. You can set up proper seamless handoff with them and either a dedicated controller or a PC running the controller software (if you use pfSense, run it in a VM and have another VM for this). The setup is dead simple, even my parents could understand it. Most importantly, the performance has been great: 4+ streams and people working at the same time on wireless, no issues. The range was enough for a ~3000 sq ft house; even in the farthest rooms it would be over 50 Mbps (with just a single unit).

    So I'd say go with either pfSense, if you have the hardware or the intensive use case (as in lots of firewall rules that put the machine under meaningful load), or an EdgeRouter X if you don't. And I would go for the TP-Link EAP series of access points. IIRC they announced but haven't released Wi-Fi 6 units in most countries, so if you want that, maybe wait or look at alternatives. But I cannot recommend them enough, because my experience with them has been so very good.
  7. Idk, I've worked in a good few places across North America, and there's always been a pretty hard line between the DevOps team and the normal dev team (and QA, for that matter). Most devs I talk to have no desire to get into the weeds of CI/CD; they want to just be able to run a script someone made for them, or just push their code and let a pipeline do all the work to build, test, and deploy.
  8. I brought up the slim versions of those images almost as a fun fact, and to provide an example of how CentOS/Debian/Ubuntu aren't all the same. Even Debian slim is still an order of magnitude larger than Alpine. As for man-hours vs. cloud costs, I'm not sure why you're having issues there. Sure, there are always niche cases where a development process is different from the norm, and you could be in one of those. But generally speaking, most devs don't interact with the container itself. They just work on the app code, run their tests, and assume it works. The few DevOps people, and a couple of devs with wider knowledge, are typically the ones who interact with and modify the container environment. And I do expect a DevOps engineer to be able to handle using an Alpine container over a bigger and more common distro; that's part of your job, IMO.
  9. Eh, that's a pretty broad and poorly argued point.

    1) There are slimmer versions of those "big distro" images available; for example, I know Debian has a slim tag, which is what I use when I need a heavier distro.

    2) Managed services in no way make size irrelevant. Far from it. There are many use cases where it still matters:
    - Lower image size means lower storage costs.
    - Lower image size means CI builds and tests take much less time. In most companies, where a single dev may trigger dozens of these pipelines a day, that can mean huge time and resource savings, and much faster prototyping.

    If you have good DevOps people and learn how to comment, maintaining well-slimmed-down images isn't hard. Just comment what you're doing.

    There's something you didn't touch on that's relevant to this topic: effective use of layers, so that layer caching can reduce build times. You often trade off some size to keep the most frequently changed parts of an image in their own layers at the end of the Dockerfile, so that the components which rarely change don't get rebuilt (there's a rough Dockerfile sketch of this ordering after this list). But even there, there is a balance to be struck.

    By paying attention to these factors I have saved companies double-digit percentages on their cloud bills, massively reduced the time of pipeline jobs, etc. Image size and composition absolutely matter.
  10. Yeah, in my world Docker is everything, so that kinda frames how I approach things. Which does make stuff simple, at least.
  11. To add my example, I use this: https://www.amazon.ca/PW-SH0401B-Switch-Support-Compatible-Windows/dp/B082FVBTGD I haven't tested FreeSync, potential input latency (though anecdotally I've not noticed any), etc., but it works for my use case. I have also used HDMI switches back in the day (like 10+ years ago), when VRR wasn't a thing, and had great success even in competitive shooters, so latency wasn't a problem. The complexity of the HDMI protocol was far lower back then, though; that was when the Xbox 360 was new. I'm sure there are equivalent new products today.
  12. Unless you are trying to deploy it in any sort of professional setting
  13. You're missing the point. If your application is lightweight, Alpine will let you drastically reduce your image size, which is usually something people try to do. One typically reserves heavier distros like Debian as a last resort, for when there's an incompatibility like the musl libc issue I mentioned (see the base-image sketch after this list).
  14. Have you considered a KVM rather than trying to get a monitor with a billion inputs? They can work quite well and, depending on what you need them for, don't even have to be very expensive. As an example, I got a 4-port HDMI+USB KVM for about 100 CAD on Amazon for my non-gaming systems, which has allowed me to hook up 5 systems to 1 monitor (DisplayPort is used as a secondary display for my main rig, and the HDMI KVM is hooked up to 4 other PCs for when I'm working from my laptop, need to do some work on my server or LAN rig, etc.).
  15. I hope you have a good reason for such heavy containers. I occasionally do use debian:9.x-slim or 10.x-slim when I need a base image that can't use musl libc, or that needs something else Alpine can't easily provide, but that's not too common for me.
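
A quick footnote on the memory-timing guideline in item 3: plugging the kits mentioned there into the ratio, plus a hypothetical 3600 CL16 kit that lands exactly on the threshold, gives

    \[ \tfrac{3200}{16} = 200, \qquad \tfrac{3600}{18} = 200, \qquad \tfrac{3600}{16} = 225 \]

so both of the "best value" kits sit at 200 (fine, just not "good"), and you only reach the 225 mark once you pair 3600 MHz with CL16 or tighter.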
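
On the layer-ordering point in item 9, here's a rough Dockerfile sketch of what that looks like. The Node.js app, file names, and base image tag are placeholder assumptions for illustration, not anything from the discussion above:

    # Hypothetical Node.js service; only the layer ordering matters here.
    FROM node:18-alpine

    WORKDIR /app

    # Dependency manifests change rarely, so they get copied and installed in
    # their own layers. As long as these files are untouched, the install
    # layer stays cached across builds.
    COPY package.json package-lock.json ./
    RUN npm ci --omit=dev

    # Application source changes constantly, so it goes last. Edits here only
    # invalidate the layers from this point onward, not the dependency install.
    COPY src/ ./src/

    CMD ["node", "src/index.js"]

The trade-off from that post shows up here too: splitting the copy into two steps adds layers (and a little size), but day-to-day code changes no longer re-run the dependency install, which is usually the slow part.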
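
And for the Alpine vs. Debian-slim question in items 13 and 15, a bare-bones sketch of what that base-image choice looks like in practice. The myapp binary and the package being installed are made-up placeholders:

    # Default choice: Alpine keeps the base image down to a few MB, but it
    # ships musl libc and uses apk rather than apt.
    FROM alpine:3.19
    RUN apk add --no-cache ca-certificates
    COPY myapp /usr/local/bin/myapp
    ENTRYPOINT ["/usr/local/bin/myapp"]

    # Fallback when the app needs glibc (e.g. prebuilt binaries linked
    # against it) or something else Alpine can't easily provide, per item 15:
    #
    #   FROM debian:10-slim
    #   RUN apt-get update \
    #       && apt-get install -y --no-install-recommends ca-certificates \
    #       && rm -rf /var/lib/apt/lists/*

Either way, the FROM line carries most of the decision; the rest of the file barely changes.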