About TehWardy


  1. Maybe I'm falling into a "gap" here ... Hopefully if Linus sees this he can reach out to the manufacturers and ask for this. I cannot be the only person on the planet running 2x 4K screens on my desk and wanting to make the most of the raw space, even if it means pushing a second input into the screen.
  2. @Samfisher You're right, the reason these dual-input screens originally came about was lack of bandwidth; we see that in 8K displays or higher today for things like billboard-type setups, or Linus' "16K gaming" setup (which is actually multiple screens, for obvious reasons). Bang on though: "you want a large UW monitor, that can act as 2 monitors when playing games in full screen" ... EXACTLY that, but also one that carries at least the equivalent pixel count of 2 regular 4K displays at 3840x2160, and ideally can handle higher than 60 FPS gaming. My reasoning is my day job here ... I need a high pixel count to fit everything I need during the day, but a low pixel count and a high refresh rate for gaming, so my thinking was that a single "dual 4K 120/144Hz" UW screen would fulfil both of those requirements, and if I decided one day I wanted to go full screen in a game like Star Citizen for the immersive visuals then that option would be there too (assuming I had the GPU grunt to drive the entire screen). @Mihle Yeah, that's exactly what I have seen ... displays advertised as "4K ultrawide" tend to only be "4K" in one of the 2 dimensions. Usually you get 1440 rather than 2160, or you get "4K" across the entire line, which for me is a drop of half my current pixel count, meaning reading text would become worse or require me to use more pixels. @Denned Nope, that's not what I'm after.
  3. Hey guys ... I've noticed the "new kid on the block" in display tech seems to be these new ultrawide curved displays (like the Samsung G9 series), and that got me thinking about my own setup. Some existing models that run at high resolutions support dual inputs, so you can drive say a 1440p-tall display with dual HDMI / DisplayPort inputs. I'd like to do something like the latter, but with my dual 4K screens on my desktop at the moment. The idea is to get something like a 120Hz 49-inch ultrawide that's the same (2x3840)x2160 and use half of it for gaming (like I currently do), allowing me to have chat applications or a browser etc. on the other half of my screen. During the day however, in my job as a Cloud Solutions Architect, I want to be able to take advantage of every last pixel with applications like browser tabs (a lot of them), multiple instances of Visual Studio, SQL Server tools, Remote Desktop sessions etc. all open at once. This is the reason I have dual 4K screens in the first place. I'm all for having nice big ultrawide screens that nicely curve around my field of view, but if I have to drop half my resolution to get that, I figure "what's the point" ... Is there a screen on the market at the moment that actually does this?
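For what it's worth, the bandwidth side of this can be sanity-checked with some back-of-envelope arithmetic. A sketch (blanking overhead ignored and DSC compression left out, so these are lower bounds on the raw pixel rate; the DP 1.4 payload figure is the usual HBR3 x4-lane number):

```python
# Back-of-envelope bandwidth for a hypothetical (2x3840)x2160 panel.
# Real links add blanking overhead and may use DSC compression, so treat
# these numbers as lower bounds on the raw pixel data rate.

def raw_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Uncompressed pixel data rate in Gbit/s (8-bit RGB by default)."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

dual_4k_120 = raw_gbps(7680, 2160, 120)    # the wished-for panel as one surface
single_4k_120 = raw_gbps(3840, 2160, 120)  # one half, i.e. one input

print(f"7680x2160 @ 120 Hz: {dual_4k_120:.1f} Gbit/s")    # ~47.8
print(f"3840x2160 @ 120 Hz: {single_4k_120:.1f} Gbit/s")  # ~23.9

# DisplayPort 1.4 carries roughly 25.9 Gbit/s of payload (HBR3, 4 lanes),
# so one link per half fits, but the whole panel over a single DP 1.4
# link does not without DSC -- which is roughly why dual-input panels exist.
```

That gap between ~47.8 Gbit/s needed and ~25.9 Gbit/s per link is the same bandwidth problem @Samfisher describes, just with today's numbers.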
  4. Agreed, 10 drives should easily outstretch the performance of a single 1Gbit ethernet pipe when each drive is individually capable of a write rate around 150MB/s, even allowing for the computation time (which seems to be swallowing a mere 0.3% of my total available compute power). Which is why I don't think SSDs are really needed here. I doubt the pool was scrubbing since it's a new pool, although I did see a 20MB/s improvement the following morning in the ongoing migration copy, so maybe! I haven't tested the local performance ... I figured there was no gain in that, since it's how fast I can get data on and off it that I care about. I do have plenty of space for expansion; maybe a 10Gbit ethernet card is the way to go, plus an LACP-supporting switch. I hadn't clicked that aggregation wouldn't work as a single 2Gbit pipe, nice catch! The SMB 3.0 idea is interesting though ... One of my friends suggested that I finish the data migration, then install Windows on the metal and pass the physical disks through to FreeNAS running in a Hyper-V VM; this would give me smarter networking on the raw hardware, and I could configure a virtual switch for passing data into FreeNAS over a single pipe at the new higher speed. Looking at the documentation, I got the impression this wasn't "expanding" the existing pool but instead running a second pool alongside it. Is that not the case? Weird that the UI pushes you to add the same number of drives in the second vdev ... until you realise that what it's actually doing is striping across two RAID 6 (RAID-Z2) vdevs, so you end up with something like RAID 60 rather than a second pool.
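A rough sketch of that drives-vs-network arithmetic, using the ~150MB/s per-drive figure above (sequential best case; random workloads will come in lower):

```python
# Rough throughput ceiling for a 10-drive RAID-Z2 vs a gigabit link,
# using the post's ~150 MB/s per-drive streaming figure (best case).

GBE_MB_S = 1e9 / 8 / 1e6  # 1 Gbit/s line rate expressed in MB/s (125)

def raidz2_stream_mb_s(drives, per_drive_mb_s=150):
    """Sequential streaming estimate: data drives only (2 go to parity)."""
    return (drives - 2) * per_drive_mb_s

pool = raidz2_stream_mb_s(10)   # 8 data drives x 150 MB/s = 1200 MB/s
print(f"pool ~{pool:.0f} MB/s vs one GbE link ~{GBE_MB_S:.0f} MB/s")

# The disks outrun a single gigabit pipe by roughly 10x, so the network
# (not the vdev, and not SSDs) is the bottleneck -- and because LACP
# still pins any single flow to one link, 10GbE or SMB multichannel is
# the usual fix rather than aggregation.
```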
  5. I do have a separate server I've been using as a hypervisor; it uses Hyper-V, which is pretty awesome for what I need. TBH I only need to run a few bits like Plex and web services, which I can probably just download as plugins and deploy to a jail (from what I understand). There's definitely a lot to learn; I'm still a bit worried about how I'm going to set up the users / groups for it, as trying to set myself up took a while. Right now my biggest problem is getting the 20TB of existing data on there; it seems to be pushing about 1TB per hour at the moment, and not having an LACP-capable switch slows things down a lot. Windows Storage Spaces confused the hell out of me, and I couldn't see a way to configure a single RAID 6 volume, so I figured that for ease of configuration FreeNAS did (on face value at least) appear to offer a "cleaner" solution. Might be something worth virtualising at some point and testing out. Thanks for the feedback.
  6. To get mine to work I had to create a new group, put my new user in it, then grant that group access to the pool. Apparently the default "wheel" group, into which the install puts root, doesn't grant you anything.
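For anyone finding this later, a rough shell sketch of that same fix (group name and paths here are hypothetical, and on an actual FreeNAS box you'd normally do all of this through Accounts -> Groups and the dataset permissions screen rather than the shell). The demo path lets it run anywhere:

```shell
# Sketch of the permissions fix: the share needs a group your own user
# actually belongs to; root's default "wheel" membership is not enough.
DATASET="${DATASET:-/tmp/demo_dataset}"   # stand-in for /mnt/tank/share

mkdir -p "$DATASET"
# On the real box you'd first create the group and add your user, e.g.
# via the UI, then point the dataset at it (chgrp nasusers ...).
# Here we just use the current user's own primary group for the demo:
chgrp "$(id -gn)" "$DATASET"
chmod 775 "$DATASET"          # owner+group rwx, everyone else read-only
ls -ld "$DATASET"
```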
  7. Hi all, I'm new here so go easy on me please. I'd just like to say, I've been in cloud solutions as an architect for a few years now and am "familiar" with the software side of dealing with scale, but this has truly whittled me down to the point of no return. I have 2 existing 5-bay QNAP NAS boxes, and my plan was simple: build a new server and migrate all my data from those NAS boxes onto it, and hopefully, if I use the right hardware, run a few other things as VMs too. So I went out and bought from a local guy a pretty beefy dual-Xeon server with 24 bays' worth of storage slots, plus a set of 10x 4TB WD Red HDDs. I did some digging and settled on 4TB disks, since one of my existing NAS boxes already runs 5 of them; I figured once I was set up I could add those back in and continue to use them too. Then came the day of the build. Since I'm a Windows guy I figured I'd start with Windows Server 2019 Datacenter edition (really go all out). After the relatively complex learning curve of dealing with rack-mounted Supermicro boards over the ME ethernet port to remote into the OS installation (super cool), I started to realise the error of my ways. My plan was to have a single RAID 6 volume consisting of the first 10 drives, and once all the data was over, add a few more of the drives from my existing NAS as both extra data drives and one or two hot spares (I figured I'd play that by ear). First problem ... Windows apparently doesn't support the creation of RAID 6 volumes. Second problem ... the motherboard has a single SAS port on it, running to a backplane that serves 4 SAS / SATA drives, and then there's an add-in card with 2 more SAS ports doing the same thing. So I had a total of 12 SAS-based bays I could plug into, and I needed 10 to get going.
Having plugged my drives into the first 10 bays, only a few of them showed up in Windows. This perplexed me; hours of digging later I found out I had to flash the card into "IT mode" to allow it to act as a sort of pass-through provider for the 8 drives it could support; without doing so, the card expected RAID to be configured on it. That in itself wasn't a problem though: since I had figured out that Windows can't give me a RAID 6 volume, I had to use the hardware / compatible tooling to make what I wanted happen. Having installed the drivers for my SAS card and the relevant tools, I then hit problem 3 ... RAID arrays could only be defined on drives connected to a single controller. There had to be a solution for this ... well, it turns out there is: enter FreeNAS. I set about installing that, since a RAID-Z volume can be defined across the entire system; the OS just needed to see the drives, and I had already done the tricky bit of putting the SAS card into its required IT mode state. Being second-hand, the server then spent the next 5 hours of my time presenting me with dodgy cables, what I thought were failed power rails on the CPU (turned out to be splitters that weren't needed), and a bunch of small BIOS, SAS BIOS and OS-related settings to get everything to show up correctly. So I created my RAID-Z2 volume (which is the FreeNAS equivalent of a RAID 6 volume). I then had the pain of "Unix-based systems not being quite the same as Linux-based systems", my Linux friends telling me "oh, that's not my distro, can't really help you there", and other such issues. And what was my problem? ... Permissions. I found the most trivial guides (even on YouTube, e.g. this one https://www.youtube.com/watch?v=sMZ-s8wHkHw) and for some reason, whatever I did, it just didn't work. So ...
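As an aside, the usable-space arithmetic for that RAID-Z2 volume works out as follows (a sketch that ignores ZFS metadata and slop-space overhead, which shave off a few percent more, so it's slightly optimistic):

```python
# Usable-space arithmetic for a 10 x 4TB RAID-Z2 vdev: two drives'
# worth of capacity go to parity, and vendor terabytes (decimal) shrink
# when the OS reports them in binary tebibytes.

TB = 1e12      # drive vendors' terabyte (decimal)
TIB = 2**40    # what the OS reports (binary tebibyte)

def raidz2_usable_tb(drives, drive_tb=4):
    """Usable capacity in vendor TB: total minus two drives of parity."""
    return (drives - 2) * drive_tb

usable = raidz2_usable_tb(10)   # 8 data drives x 4 TB
print(f"{usable} TB raw = {usable * TB / TIB:.1f} TiB as reported")
# 32 TB raw, ~29.1 TiB -- comfortably above the ~20 TB being migrated,
# and the vdev survives any two drives failing at once.
```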
I FINALLY solved the permissions issues and got a share to show up on my local machine from the newly configured RAID 6 volume, on what is basically a supercomputer for what I'm about to ask of it, and then ... I began the file copy from my old NAS boxes. Within minutes, drives started dropping off the array and all hell started breaking loose ... yet more broken splitters were to blame. Not a bad test of volume recovery though, I suppose. Panic mode reduced a little, and then I realised the speed of my transfer ... 80MB/s. Oh dear. I have dual 1Gbit ethernet going from each NAS, and the same going from the server, all directly attached to a Netgear GS316 switch. Surely this can't be it? I've transferred at over 120MB/s from my local machine to one of the NAS boxes before. Then it hit me! ... I was doing a drag and drop from box to box via my local machine; I was the bottleneck! So I hit up the FreeNAS shell ... Hundreds of attempted combinations of rsync, scp, ftp, ssh, etc., etc. later, I gave up. I simply can't get a NAS and the new FreeNAS box to talk to each other properly; it either outright fails, or the transfer speed is <30MB/s. At this point I'm sitting here thinking: was I right to ditch Windows? Internally on my LAN I really only use Windows anyway ... but I simply couldn't get a volume set up the way I could in FreeNAS, so this was the next best thing, right? ... I scream at the screen, pet the dog, get some consolation token from the wife, and sit down to think about this logically ... Why am I doing this on my own?! So here I am. I have tons of questions, but really don't know where to start. I guess the obvious starting points are:
- Should I continue to use a Unix-based storage solution when I'm clearly out of my depth, or is there some sort of software solution I can add to Windows that would achieve the same thing, allowing me to manage this in a way I'm used to?
- What do people think of FreeNAS?
I have already figured out that adding the other 5 drives later is going to be a problem; I'm likely going to need to get hold of another 5 so that the same number of drives can be added, and my understanding is that ZFS then stripes across the two RAID 6 (RAID-Z2) vdevs, giving something like RAID 60.
- Am I looking at storage wrong here? What should I do when I'm looking at 40+ TB of disks and I want it to be extensible for future capacity needs?
- Are RAID volumes even something that should be extended?
- How do people manage, say, 100TB of storage?
- Can anyone help me with either a FreeNAS or Windows Server based setup?
I hope my story gives other noobs like me some insight into the experience of moving from a "pre-built" NAS like a QNAP / Synology solution, and hopefully the comments will help others avoid the pitfalls. I thought I was prepared; clearly I wasn't. I made some silly mistakes, but despite all that, I'm not convinced this is the end, as I still have performance issues to resolve.