
comander

Member
  • Content Count

    1,293
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

About comander

  • Title
    Veteran
  • Birthday Jan 01, 1970

Profile Information

  • Location
    /dev/null
  • Interests
    Tech, data, fitness
  • Biography
    My views are my own and do not represent my employer.
  • Occupation
    Data Scientist

System

  • CPU
    3900X
  • Motherboard
    B450 mATX
  • RAM
    4x16GB
  • GPU
    RTX 2080
  • Storage
    Optane P4800X 1.5TB
  • Display(s)
    2x 27" IPS + 1x 35" Ultrawide IPS
  • Cooling
    H100x
  • Keyboard
    Topre Realforce

Recent Profile Visitors

2,194 profile views
  1. THIS IS TERRIBLE, THEY SHOULD FIRE THEIR CEO OVER THIS, HE IS CLEARLY HOMOPHOBIC!!!!!! ------ On a more serious note, different societies have different cultures. It is not the responsibility of the richest ~5% of Americans to force their ideals on the rest of the world.
  2. Intel is making their own Optane. Micron got out. Intel is focusing more on the enterprise at the moment. That's why I had to caveat with "the long run" - I expect manufacturing costs will fall but supply won't be as great as it could be. I expect that something akin to the H20 will be more of a thing as time goes on. I feel that 32GB isn't quite enough, but I do expect that a jump to 64GB with a bit better bandwidth and a better controller will do some good on something like a 2TB SSD. That isn't too far away. Here and now, Optane CAN make sense in a low-ish cost system…
  3. I have yet to fill the 118GB cache drive on my NAS. If you're in a single-user environment it's fine. FWIW, ZFS is set up to keep large contiguous files on the main storage and smaller blocks (small files, metadata, etc.) in RAM/cache. This means that latency/IO-sensitive operations mostly land in cache and bandwidth-intensive operations hit the storage array. If you have 4x HDD @ 200MBps and they're not IO bound, you'll often find yourself hitting ~500-800MBps read speed. 800MBps is faster than a SATA SSD. It definitely falls apart if you're writing… (a toy sketch of this cache/array split is at the end of these posts)
  4. Depends on the use case and how you weigh the tradeoffs. I suspect that most people could get away with a relatively small amount. As an aside - I expect cost/capacity will improve relative to NAND (just in the long run), and CXL is a thing and that will help. I can totally see a near-ish future where it's normal for systems to have an extra Optane drive for data caching and RAM spillover (page file). Software needs to get better as well.
  5. I'm aware. ARGHHH. Smart caching software usually ends up being "good enough" though - you get close to the bandwidth of two drives in the best case, along with the latency and IOPS of the best drive. For enthusiasts the 58GB 800p still kinda makes sense as a cache (a la PrimoCache, or ZFS if you're going the Linux route) that mainly targets the OS, programs and metadata in a system... or the 118GB variant if you want to set aside a bit of it for page file and "skimp" on RAM capacity. You can listen to Wendell rant and rave about how awesome the concept is here: https://lev
  6. Or be like me (in theory): C:\ = 3D XPoint, D:\ = TLC, Z:\ = RAID10 HDD array with 32GB RAM + 118GB Optane as cache. I'm probably selling my 1.5TB monster (only paid $800 for it) to reap the ~$2000 reward and will be "settling" for a 64GB 800p + 1TB XPG 8200 Pro as my C. The 2TB TLC drive was bought like ~3 years ago for like $220 on sale...
  7. It's been a while since I've done thermal paste research... What I settled on was using IC Diamond. It's "good enough," doesn't conduct electricity, and lasts a LONG time. If your goal is "reliable and lasts forever," it checks the box. You might lose 1-2C over something "better," but the value of idiot-proofing is pretty high.
  8. 1. You shouldn't worry about bottlenecks, you should worry about mismatched configurations. "Bottleneck" has somehow taken on a different meaning among 14-year-old amateur systems architects. 2. You SHOULD worry about whether or not your configuration hits your desired service level. If your configuration does NOT meet a given performance/experience target, then you need to adjust upward the part(s) which will have the biggest impact. You will NOT get higher frame rates by getting a higher-resolution monitor. You WILL put a heavier load on the GPU. This will shift where the bottleneck sits… (a toy FPS sketch of this follows after these posts)
  9. If your issue is that the PC is dumping heat into the room... you basically need to cut power draw (read: electricity use gets turned into heat, like a space heater). Your choices are - undervolt parts, remove parts, swap to more energy-efficient parts, move components out of the room, or vent the heat elsewhere (attic? outside?)
  10. I never said marketing. For all I know he could be helping with data infra or product dev. There are a lot of things you can "optimize," and the reason that Apple's current CEO got his role is because he optimized Apple's supply chains. Data scientist. Friend from grad school. A pretty decent chunk of people ended up at FAANGs (myself included). All he said is that he works on it. Basically no details. This wouldn't be much more informative than going on Apple's career page and looking through job descriptions... other than that I know the background of some of the people they'…
  11. Apple is coming up with more ideas than LTT. Haha. The people I know who work at Apple are all basically "optimizing" things - think getting 2% more profit out of the App Store - as opposed to working on new things.
  12. It's smaller but it'll also be lower latency in all likelihood. This isn't the end of the world.
  13. It'll depend on a number of things. As a rule of thumb, a bunch of USB 2.0 devices on a decent USB 3.0 switch/extender will be fine. If you're worried about latency... you'll probably be fine unless you're in the top 0.0001% of "pro gamers," which I'm not. If you want the absolute best performance, you'll probably want to look into Thunderbolt. This will depend on how the switch chip in the hub works. If it's time-slotted multiplexing, this is possible. This is probably the cheapest. If there's a switch chip that is in any way sophisticated, it'll share…
  14. The Monty Hall problem is a case where there's a set pattern and should be treated as a Bayes' theorem problem. This is because you have info about the PRIOR baseline state of the world and are calculating your figures based on the expected outcome (posterior probability) at the end of a state change. 1. There are 3 doors; 1 is a "winner," 2 are "losers." 2. After one door is chosen, one of the losers is eliminated. 2a. IF the first choice was the winner (1/3 chance), then 1 of the 2 losers is removed. 2b. IF the first choice was a loser (2/3 chance), then the single remaining… (a quick simulation is sketched below)
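
A minimal sketch of the read-routing idea from post 3: small, latency-sensitive reads land on a fast cache device while large sequential reads hit the HDD array. All sizes, thresholds, and device speeds here are illustrative assumptions, not ZFS's actual ARC/L2ARC policy.

```python
# Toy model of the tiered-read idea from post 3: small/random reads are served
# from a fast cache device, large sequential reads go to the HDD array.
# Every number below is an assumption for illustration only.

CACHE_LATENCY_US = 10            # assumed Optane-class cache latency
ARRAY_LATENCY_US = 8000          # assumed HDD seek + rotational latency
ARRAY_BANDWIDTH_MBPS = 4 * 200   # 4 HDDs at ~200 MB/s each, reads striped across them
SMALL_IO_CUTOFF_KB = 128         # assumed cutoff between "small" and "sequential" reads

def route_read(size_kb: int) -> str:
    """Decide which tier serves a read, purely by request size."""
    return "cache" if size_kb <= SMALL_IO_CUTOFF_KB else "array"

def service_time_ms(size_kb: int) -> float:
    """Rough service time for one read on whichever tier handles it."""
    if route_read(size_kb) == "cache":
        return CACHE_LATENCY_US / 1000
    transfer_ms = size_kb / 1024 / ARRAY_BANDWIDTH_MBPS * 1000
    return ARRAY_LATENCY_US / 1000 + transfer_ms

if __name__ == "__main__":
    for size in (4, 64, 1024, 262144):  # 4 KB, 64 KB, 1 MB, 256 MB reads
        print(f"{size:>7} KB -> {route_read(size):5s}  ~{service_time_ms(size):8.2f} ms")
```

The point it illustrates: the array's seek latency barely matters for a 256 MB read (transfer time dominates and you see the ~800 MB/s aggregate), while a 4 KB read that misses cache would pay almost nothing but seek time - which is exactly the work you want the cache to absorb.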
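To make the "service level, not bottleneck" point in post 8 concrete, here is a hedged sketch: delivered frame rate is roughly the minimum of what the CPU and GPU can each sustain, and raising the resolution only lowers the GPU-side number. The throughput figures and the 144 FPS target are made-up placeholders.

```python
# Toy illustration of post 8: the delivered frame rate is limited by whichever
# of CPU or GPU is slower, and raising resolution only lowers the GPU side.
# All numbers are placeholders, not benchmarks.

CPU_FPS_LIMIT = 160.0   # assumed frames/s the CPU can prepare, resolution-independent
GPU_FPS_LIMIT = {"1080p": 240.0, "1440p": 150.0, "4K": 70.0}  # assumed GPU limits
TARGET_FPS = 144.0      # the "service level" you actually care about

def estimated_fps(resolution: str) -> float:
    """Delivered FPS is capped by the slower of the two stages."""
    return min(CPU_FPS_LIMIT, GPU_FPS_LIMIT[resolution])

if __name__ == "__main__":
    for res in GPU_FPS_LIMIT:
        fps = estimated_fps(res)
        limiter = "CPU" if CPU_FPS_LIMIT < GPU_FPS_LIMIT[res] else "GPU"
        verdict = "meets" if fps >= TARGET_FPS else "misses"
        print(f"{res:>5}: ~{fps:5.0f} FPS ({limiter}-limited) -> {verdict} the {TARGET_FPS:.0f} FPS target")
```

Under these assumed numbers the same parts are "CPU-bottlenecked" at 1080p and "GPU-bottlenecked" at 4K - which is why the useful question is whether the configuration hits your target, not which part is nominally the bottleneck.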
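A quick simulation of the Monty Hall setup described in post 14 (a sketch added here, not part of the original post): the host always opens a losing door you didn't pick, so switching wins exactly when the first pick was a loser, i.e. about 2/3 of the time.

```python
# Monte Carlo check of the Monty Hall argument in post 14.
import random

def play_once(switch: bool) -> bool:
    doors = [0, 1, 2]
    winner = random.choice(doors)
    first_pick = random.choice(doors)
    # Host opens a door that is neither the winner nor your pick.
    opened = random.choice([d for d in doors if d != winner and d != first_pick])
    if switch:
        # Switch to the single remaining unopened door.
        final_pick = next(d for d in doors if d not in (first_pick, opened))
    else:
        final_pick = first_pick
    return final_pick == winner

if __name__ == "__main__":
    trials = 100_000
    for switch in (False, True):
        wins = sum(play_once(switch) for _ in range(trials))
        label = "switch" if switch else "stay"
        print(f"{label:>6}: {wins / trials:.3f} win rate")  # ~0.333 stay, ~0.667 switch
```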