
unijab

Member
Posts: 1,050
Joined
Last visited
Everything posted by unijab

  1. If you have to move it upstairs... it's not done, right? Part 8: coming soon
  2. I would keep it and get the U.2-to-M.2 adapter. Whether your motherboard chipset supports booting from NVMe is another issue.
  3. You can cache HTTPS. It's just not as easy to set up.
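For reference, one common way to do this is Squid's SSL bump feature, where the proxy terminates TLS using its own CA certificate that clients must trust. A rough sketch of the relevant directives (paths and cache sizes here are placeholders, not a production config):

```
# Squid 4+/5: intercept and cache HTTPS via ssl-bump.
# Clients must trust the CA at /etc/squid/ca.pem (path is a placeholder).
http_port 3128 ssl-bump cert=/etc/squid/ca.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

The "not as easy" part is mostly the CA distribution: every client on the network has to install and trust that certificate, or they'll see TLS errors.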
  4. Have you read this? https://forum.level1techs.com/t/how-to-reformat-520-byte-drives-to-512-bytes-usually/133021
  5. When you plan to use it in a write-intensive way, the write endurance / DWPD rating should be a factor.
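To make that concrete, a DWPD (drive writes per day) rating translates into total write endurance over the warranty period. The 0.3 DWPD / 1 TB / 5-year figures below are hypothetical numbers for illustration, not from any particular drive:

```python
# Total write endurance implied by a DWPD rating:
# TBW = DWPD * capacity * warranty days
def tbw_terabytes(dwpd: float, capacity_tb: float, warranty_years: int) -> float:
    """Terabytes written over the warranty period."""
    return dwpd * capacity_tb * warranty_years * 365

# Hypothetical 1 TB drive rated at 0.3 DWPD over a 5-year warranty:
print(tbw_terabytes(0.3, 1.0, 5))  # -> 547.5 TB
```

If your workload writes more than that per day on average, you'll burn through the rated endurance before the warranty runs out.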
  6. About the 900p/905p: you could either get the model that comes with an M.2 cable... or use the standard U.2 cable with something like this: https://www.amazon.com/U-2-M-2-Adapter-Interface-Drive/dp/B073WGN61Y/ref=sr_1_4?keywords=m.2+u.2+adapter&qid=1563804098&s=gateway&sr=8-4
  7. For high write endurance... I like the 900p or 905p drives.
  8. Are you looking for a write cache or a read cache?
  9. You're assuming too much... even in RAID 0, it wouldn't be that simple to hit 8 GBps with 40 drives. And even if you got enough spinning drives to get close to 8 GBps, that would be peak sequential write-only or read-only speed. Any kind of mixed workload on those spinning drives would be terrible for your planned usage as swap; paging space is usually allocated in 4 KB pages. -- what would you be using? -- Your best bet would be getting a handful of the M.2 adapters that hold 4 drives: https://www.asus.com/us/Motherboard-Accessories/HYPER-M-2-X16-CARD/ To use that many of them, you'd want an X399 motherboard for as many PCIe lanes as possible. The Intel 660p 2TB drives are normally on sale (at least in the US).
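The arithmetic behind that: assuming roughly 200 MB/s sequential per spinning drive (a rough figure for a 7200 RPM disk, and wildly optimistic for random 4 KB swap I/O), a quick sketch of how many drives the target needs:

```python
import math

# Drives needed to reach a target aggregate bandwidth.
# ~200 MB/s sequential per drive is an assumed ballpark figure;
# random 4 KB I/O (what swap actually does) is orders of magnitude lower.
def drives_needed(target_gb_s: float, per_drive_mb_s: float) -> int:
    return math.ceil(target_gb_s * 1000 / per_drive_mb_s)

print(drives_needed(8, 200))  # sequential best case: 40 drives
print(drives_needed(8, 1))    # ~1 MB/s random 4 KB: 8000 drives
```

That's why 40 drives only works on paper for pure sequential transfers, and falls apart the moment the workload turns into small random pages.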
  10. If you want something to use as swap space... then you want high I/O. 8 GBps sequential isn't easy with spinning rust. Remember Linus's petabyte project? Over 100 hard drives, and sequential read/write was only about 4 GBps. You'll want SSDs. But it won't be cheap.
  11. Is this 8 GBps supposed to be over the network, or local on one machine?
  12. EposVox on YouTube talks about compressing or re-encoding videos to save NAS space.
  13. You should have set it there and had everyone do another round of testing at your preferred DPI setting.
  14. But if you want very good write endurance without buying enterprise SSDs, get an Intel 800p, 900p, or 905p.
  15. I just hope that Linus plans it well and has Wendell there (since he'll be in CA soon for LTX) to do some performance tweaks. That way nobody will default to blaming Linus for doing it wrong. @LinusTech
  16. First: LMAO. Second: the point of this build is IOPS, not peak throughput. Depending on how they/he configures the vdevs, it could easily affect peak performance (IOPS vs. throughput), but either way, 29 SATA SSDs could easily saturate 64 Gbps.
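A quick sanity check on that saturation claim, assuming roughly 550 MB/s sequential per drive (the practical ceiling of a SATA III link; real drives vary):

```python
# Aggregate bandwidth of n SATA SSDs, converted from MB/s to Gbps.
# ~550 MB/s per drive is an assumed SATA III ballpark, not a measured figure.
def aggregate_gbps(n_drives: int, per_drive_mb_s: float) -> float:
    return n_drives * per_drive_mb_s * 8 / 1000  # MB/s -> Gbps

print(aggregate_gbps(29, 550))  # ~127.6 Gbps, roughly double 64 Gbps
```

Even if real-world mixed workloads cut that in half, the pool can still fill a 64 Gbps pipe.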
  17. You don't expect them to hit what level of performance with ZFS?
  18. The Jellyfish server they are competing against uses one 10GbE connection per editor. I think the max number of editors they are shooting for on a non-NVMe server is 4.
  19. The point is IOPS. Multiple editors need the I/O responsiveness. The speed per editor will be limited to 10Gb anyway.
  20. The best value deal right now is the Intel 660p: 2TB for $182 USD.
  21. Excuse me, but those two statements are not consistent with each other.
  22. Yes. You could boot using either FC or iSCSI.
  23. So you have to write your own software??