About Erkel

  1. Too old; get at least an R710, which should be $150 or less. They're worth about $50 from the wholesalers around here.
  2. Linus gets owned for his BS again lol. 14:25 is the best timestamp to start at. Have some balls dude, you are losing what little credibility you have left. This is a follow-up video that covers their response on the WAN Show.
  3. Another LTT video to add to the wall of shame. Can we please get away from these recite-a-marketing-script videos that LTT does from time to time.
  4. I always use hardware RAID, but I am old, from the days when software RAID was not mature and would corrupt itself if it ran out of CPU. The main advantage is better write performance: the controller acknowledges a write once it hits NVRAM, rather than waiting for the data to be written to disk.
  5. I think you would be best to do testing in AWS or Alibaba first. Amazon offers machines with up to ~4 TB of RAM; start out small and scale up until you hit diminishing returns. I think you will find that you need less RAM than you think. What database are you running? Do not waste your time with a workstation; know what works first before you throw money at it.
  6. Run an LSI RAID card with an array of SSDs in a 0.5 TB CacheCade array. That will give you 0.5 TB of SSD-level caching. You can do that pretty cheaply, as you can get the controllers and CacheCade hardware keys cheap secondhand.
  7. I think you misunderstand how the DB world works. If you are hitting disk, it is broken. You should be reading out of RAM and writing to NVRAM on the disk subsystem. What is your read-to-write ratio? Basically, just size your RAM so the database (the active bits) fits into RAM.
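The sizing rule in post 7 can be sketched as simple arithmetic: RAM should cover the active working set, plus headroom for the OS and connection buffers. This is a hypothetical back-of-envelope sketch; the function name, headroom fraction, and example numbers are illustrative assumptions, not figures from the post.

```python
def required_ram_gb(active_working_set_gb: float,
                    headroom_fraction: float = 0.25) -> float:
    """RAM needed so the hot rows/indexes fit in the buffer cache,
    with extra headroom for the OS and per-connection buffers.
    The 25% headroom is an assumed rule of thumb, not a measured value."""
    return active_working_set_gb * (1 + headroom_fraction)

# Hypothetical example: ~200 GB of hot data -> ~250 GB of RAM.
print(required_ram_gb(200))  # 250.0
```

This is also why the cloud-first testing suggested in post 5 works well: you can measure the real working set under load before committing to hardware.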
  8. Worthless these days, no power; your phone probably has 2-3x more grunt.
  9. Would have to check, have moved on anyway.
  10. Running off of CPU_FAN headers, with everything in the BIOS optimised for fan speed. No Corsair Link, as these are Linux server boxes, not desktop PCs. I was after max cooling for maximum turbo boost.
  11. I decided to use AIOs (Corsair H100i v2) on some cheap Xeon database builds and got the following. Build 1: AIO lasted 11 months; the replacement lasted 14 months. Build 2: AIO lasted 19 months (I do not actually know if it in fact lasted that long, as I did not get an alarm for it and only found it was dead when doing some testing). So that is 3 out of 3 AIOs with dead pumps, running off Supermicro motherboards with high-end Corsair or EVGA power supplies. Are they really only good for ~12 months when run 24/7? I have since given up on AIOs and use large Noctua CPU coolers instead.
  12. The ongoing disaster that was the decision to go with Intel SSDs. I look after a few million dollars' worth of industrial electronics manufacturing equipment and am experiencing a ~20% failure rate. This week I am swapping out 5 Intel SSDs that are failing. If you have an SSD, I would recommend checking it with something like HD Sentinel. Intel 535 and 540 series.
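HD Sentinel is a Windows GUI tool; on the Linux boxes described elsewhere in these posts, the same SMART health check can be scripted with `smartctl` from smartmontools. A minimal sketch, assuming the drive appears as `/dev/sda` (adjust the device name for your system; both commands need root):

```shell
# Overall SMART health self-assessment (PASSED/FAILED)
smartctl -H /dev/sda

# Dump the attribute table and pick out wear/failure indicators;
# the grep pattern is an illustrative guess at relevant attribute names.
smartctl -A /dev/sda | grep -Ei 'wear|media_wearout|reallocat|pending'
```

A drive can report PASSED while individual attributes (reallocated sectors, wearout indicator) are already trending badly, so the attribute table is worth checking even when the health summary looks fine.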
  13. Motherboard: Asus. RAM: Kingston. Graphics card: Asus or Gigabyte. SSD: Samsung (avoid Intel, ~20% failure rate so far). Hard drive: WD. Fan: Noctua. Cases: Fractal. Power supply: EVGA (used to be Corsair, but marketing has taken over, and I was burned badly by their marketing department's POS cases).
  14. Sounds like an IT recruiting company in NZ lol. I have never had an experience where their staff were IT savvy. You are a multinational with the desktops running out of Sydney or something? Are you on UFB? Maybe simple networking issues. I am in Auckland and we have a China office; RDP is not that bad going all that way.