
SMURG

Member
  • Posts

    1,312
  • Joined

  • Last visited

Awards

This user doesn't have any awards

6 Followers

Contact Methods

  • Steam
    http://steamcommunity.com/id/smurgwastaken

Profile Information

  • Gender
    Male
  • Location
    United Kingdom
  • Member title
    CEO at SMURG Enterprises
  1. As others have already said, the 1700X is overkill even for multi-streaming. I run my Plex server on a 10W 4-core Celeron J1900 with 8GB of RAM and it handles everything I throw at it. Remember, Plex only needs to transcode if there are bandwidth limitations (not an issue for local playback) or if the client doesn't support the codec. Also remember that when it is transcoding, the overhead scales with the difference in bitrate, so unless your media collection is made up entirely of 20Mbps 4K H.265 files the overhead isn't massive. As for the 10Gbps connections, how are you planning to utilise them? At the moment the problem for home users is that the high-capacity disks you actually want can't deliver the speeds to justify the cost of 10Gbps, and the SSDs that could justify it are cost-prohibitive in terms of capacity. I suppose if you have a big enough SSD cache on either side it might be worth it, but even then, what's the benefit? Are you routinely moving hundreds of GB at a time between devices?
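     Some rough napkin maths on the 10Gbps point, just to illustrate - the 20Mbps stream and ~180MB/s disk figures below are only example numbers, not a measurement of anything:
        # how many worst-case 20Mbps streams fit down a plain gigabit link, in theory
        echo $(( 1000 / 20 ))      # => 50
        # what a single ~180MB/s hard disk can actually feed a network, in Mbps
        echo $(( 180 * 8 ))        # => 1440, nowhere near enough to saturate 10Gbps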
  2. I short-stroked a 1TB 10,000rpm VelociRaptor for my brother's PC and noticed an improvement in boot/game load times, so take that as you will. It wasn't massive, but I remember an appreciable difference in boot times at least. I think it's worth doing for the OS anyway in some cases, since the partitioning means you can wipe and reinstall the OS without touching the rest of the data if you need to.
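     If anyone fancies trying the same under Linux, the idea is just a partition that only covers the fast outer part of the disk - a rough sketch below, where the device name and the 20% figure are purely placeholders:
        sudo parted /dev/sdX --script mklabel gpt
        sudo parted /dev/sdX --script mkpart primary 0% 20%   # only the outer ~20% of the platters gets used
        lsblk /dev/sdX                                        # sanity-check the new layout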
  3. Ah okay, well that's easy enough to sort out because I don't need much space for the OS itself. The swap could be pretty big in that case and I wouldn't need to change the mount point. Doing some further reading in the ZFS documentation, it also talks about using an L2ARC read cache, so I was wondering if that might be a better idea? Like you say, I've never really had too much issue with RAM before when using ZFS, but yeah the CPU really is limited to 8GB, and as a side note it's really fussy about what RAM it will take as well lol. I have managed to make systems lock up before by not having enough RAM, but you have to overdo it quite a lot. As you say, I'm pretty sure I'd be fine with 8GB for now, but it's just a thought for the future. Thanks for the info though; if I'm sticking with Ubuntu I'll probably just increase the swap size on the boot device since I have loads spare anyway. If I go to unRAID I'll probably just use the mSATA as L2ARC, since the documentation seems to suggest that helps.
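     For anyone curious, attaching the mSATA as L2ARC looks like it's just a one-liner - the pool name and device path below are only examples, and by-id paths are safer for anything permanent:
        sudo zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_MSATA_SSD
        zpool status tank    # the SSD should now show up under a 'cache' section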
  4. So a few years ago I built a file server/NAS based on a SoC board from Supermicro (http://www.supermicro.com/products/motherboard/celeron/x10/x10sba.cfm) using ZFS under Ubuntu, and now I want to add another 4 x 3TB of storage to it. The board is pretty much perfect for this sort of application: the CPU is passively cooled and has a TDP of 10W yet does everything I need it to (including Plex transcoding), and it has 6 native SATA ports plus the 2 I'm drawing from mini-PCIe, plus an mSATA on top, for 9 total, of which I'm currently only using 5 (OS and 4 x 1TB in Z1). The only snag with this board is that it only supports 8GB of RAM, which isn't really going to be enough for the storage I'll have after I add more drives (the rule of thumb of course being 1GB of RAM per TB of storage with ZFS). My question therefore is: will ZFS use swap space to alleviate RAM issues? I can easily install an SSD for this purpose and then just run the OS from the internal USB type A port (I've been thinking of moving to unRAID anyway), but obviously if it's not going to help ZFS then there's no point. Either way I'm probably going to take the 4 x 1TB array out of ZFS and just run those drives separately after I've moved their 3TB of contents to the new 9TB volume, but at some point I'll want to replace them with bigger drives and I won't be able to upgrade the RAM any further. 8GB for 12/9TB of storage isn't going to cause much of a problem, but I'm conscious that it might if I later dump another 12/9TB on top, and I don't want to replace the board if I don't have to when it's perfectly suitable aside from the RAM limitation. I've been using ZFS for years now on multiple machines so I'm pretty familiar with it at this point; I just don't know how or if it makes use of swap space, and whether I'd benefit from adding more in lieu of RAM. Hopefully somebody else on here has more experience with this than I do - any help is appreciated.
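     For context, adding the swap itself is the easy part under Ubuntu - a minimal sketch below, with the path and size purely as placeholders; the open question is only whether ZFS actually benefits from it:
        sudo fallocate -l 16G /mnt/ssd/swapfile
        sudo chmod 600 /mnt/ssd/swapfile
        sudo mkswap /mnt/ssd/swapfile
        sudo swapon /mnt/ssd/swapfile
        free -h    # confirm the extra swap is visible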
  5. No worries, I too suffer from the "oh the WAN show has started but Twitch needs refreshing" problem lol
  6. On the WAN show, he just explained how autohost works etc. and said "as of now, our autohost is now off". He's moved on now so idk whether he means like permanently or 'pending adjustment' in some way
  7. So Linus just said they're turning autohost off lol?
  8. Honestly I was still trying to work out what her channel is actually about when it suddenly stopped. Like she was trying to cover Linus' material or something from what I could tell but she was basically just being a moron for spectator sport? Is that a thing now?
  9. I haven't voted because I'm conflicted. On the one hand it was pretty hilarious tonight, but I'm unsure whether it would get really old really fast. I normally like having the tab black and silent until the WAN show starts, because it means I can leave it on another monitor whilst I play games or whatever, waiting for it to start.
  10. Yeah, mine was glorified data entry essentially. I have no respect whatsoever for formal IT education, even at university level - I judge people's abilities on a case-by-case basis, and that certainly seems to be how it is in the wider world too. I never see jobs advertised where a degree in computer science or whatever is required, and from what my friends who work in game design and software development tell me, a portfolio of work you've done is much more helpful in securing a job.
  11. Those are fine for local or network playback - in fact they're better, because they can import the metadata based on file names. However, for actual live TV they just can't compete with WMC's ease of setup and use. What I do is use WMC on my desktop to watch some TV and record shows to a watch folder, then Handbrake encodes them to a file that XBMC can interpret, and then I use XBMC as the frontend on the TV. Things I have to download because they aren't available here go through the same pipeline using flexget and an RSS feed, as do Blu-ray rips - all with very little input from me.
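     The Handbrake step in that pipeline is essentially just one HandBrakeCLI call per recording - the filenames and preset below are only placeholders rather than my exact settings:
        HandBrakeCLI -i "recording.wtv" -o "recording.mp4" --preset "High Profile"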
  12. Well firstly, Windows 7 Ultimate is essentially the same as Professional unless you specifically need all the language packs for some reason. Secondly, the dealbreaker for me with Windows 10 is that you lose WMC - which is the only decent (and free) software I'm aware of for watching TV with a TV tuner. If it weren't for that I'd upgrade, but seeing as Microsoft decided that the Xbox is everything you could possibly want for a media center I'm stuck on 7 for the foreseeable future. On a laptop that probably doesn't bother you too much, but it's something to bear in mind for future reference.
  13. I'm from the UK so we do too, but we still use miles per gallon when calculating efficiency. Liters and gallons are different, just as gigabytes and gibibytes are different. Your drive IS a 1000GB drive, it's just that Windows thinks in gibibytes.
  14. No... It's like buying your fuel in liters and then complaining that your car's miles-per-gallon figure didn't work out in liters. The drive is sold to you in gigabytes, but Windows reports it in gibibytes. The companies could sell it to you as 931GiB if you'd prefer, but the price per GiB or GB would be exactly the same, so you're not getting screwed out of capacity.
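     If you want to check the maths yourself, it's two lines of shell arithmetic:
        echo $(( 10**12 / 2**30 ))   # a 1TB (10^12 byte) drive counted in GiB => 931
        echo $(( 10**9 / 2**20 ))    # and 1GB counted in MiB => 953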
  15. Why? I reckon I could make it work if I wanted to. If you could get the Intel SATA controller to pick up the drives as a SATA device with some driver kerjiggery, you could get Windows on there. Or if you kerjiggered ZFS on Ubuntu you might manage it that way... Frankly though, you might as well just put the OS on a normal SSD (RAID 0, if you must) and use these insane drives for whatever it is you need to be that fast.