TehWardy

TehWardy's Achievements

  1. OK, so the NVMe card that "requires" access to 16 lanes is able to run on 8 based on this ... then why does it require 16 lanes in the first place? Also ... if I need 8 lanes for each card, will I be able to install a decent "high speed NIC" in the machine in another free slot (even if it's only a 4x slot)? Is a lane that's in use still usable by other devices? What's complicated, or what am I making complicated about this? I'm trying to determine the limits of the connectivity options my motherboard has and under what conditions those limits apply.
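To show the back-of-envelope maths behind the "runs on 8" claim (a rough sketch only: the per-lane figures are the commonly published ones, and the 3.5 GB/s per drive is just an assumed value for a decent Gen3 NVMe drive, not a measurement of mine):

```python
# Does an x8 link have the bandwidth for a 4-drive NVMe card, and does an x4
# slot have the bandwidth for a 40gbit NIC? Approximate per-direction figures.
GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}  # GB/s per lane
DRIVE_SPEED = 3.5            # GB/s, assumed per-drive sequential rate
NIC_NEED = 40 / 8            # 40 Gbit/s = 5 GB/s

for gen, per_lane in GBPS_PER_LANE.items():
    x8, x4 = per_lane * 8, per_lane * 4
    print(f"{gen}: x8 = {x8:.1f} GB/s vs 4 drives = {4 * DRIVE_SPEED:.1f} GB/s; "
          f"x4 = {x4:.1f} GB/s vs 40gbit NIC = {NIC_NEED:.1f} GB/s")
```

On those rough numbers, x8 only becomes tight for 4 drives on PCIe 3.0, and an x4 slot is close enough to the NIC's needs that the PCIe generation of that slot starts to matter.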
  2. @porina I think the point was "whatever the perf, more is never gonna hurt". The requirement for code compiles is generally that a large set of files needs to be pulled from disk, some new files then get computed, saved, and then loaded by the tooling for debugging. This is a common task devs do all day every day, so if that goes from taking minutes to seconds for a large solution, that's a real, tangible performance gain for that developer.

Optane is basically just a fancy NVMe drive (a single one, but optimised to all hell) ... the card I have "requires" 16 PCIe lanes when 4 slots are used. By that I mean ... it physically does not work unless plugged into the primary slot with a supporting BIOS option to enable the use of 16 lanes for that card's needs. As @Hackentosher pointed out though, the bandwidth available to a 16x PCIe slot is some 32GB/s, which no set of NVMe drives I've ever seen can max out, so the card containing the 4 drives CLEARLY doesn't NEED all 16 lanes it requires to function. Then there's the issue of other add-in cards in the machine (should I choose to use them).

So here's my point. As was said above, "on AM4, you get 16 lanes from the CPU going to slots", and an NVMe RAID card containing 4 NVMe drives "requires" all 16 of those lanes. So does my graphics card function at all?

My issue I guess is understanding that allocation of lanes / bandwidth ... based on this information I'm actually using more lanes than I have, but that's clearly not true, so does anyone actually "know" how PCIe buses handle this stuff? I'm a qualified systems engineer, but I trained back when ISA and regular old PCI were a thing, so clearly I have something to learn here, and I'm curious whether this is actually something that can be "tested for".

One example situation I can think of would be something like a point cloud renderer that is able to query the raw data in real time from the NVMe devices and stream "render instructions" to the GPU. This sort of operation would be high in bandwidth, high in CPU compute if the points could be made to animate, and by extension high in both "to the GPU" bandwidth and GPU compute time, which is why we see stuff like the "unlimited detail engine" not showing a lot of typical modern game functionality whilst showing insanely high levels of raw pixel data.

I'd really like to see LTT do something on this ... "demystify" how bandwidth and lanes are allocated so that I can make more informed buying decisions.
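For what it's worth, my current working theory (an assumption on my part, based on how passive quad-M.2 risers are usually described: no PCIe switch on the card, so the CPU has to bifurcate the slot into x4/x4/x4/x4) is that the "requires 16 lanes" is about lane count per drive, not total bandwidth. Rough numbers:

```python
# Sketch of the bifurcation theory: each M.2 drive gets its own dedicated x4
# link, so a 4-drive passive card needs 16 lanes even though the drives can't
# fill them. All figures are approximate per-direction throughput; the 3.5 GB/s
# drive speed is an assumed ballpark, not a measurement.
GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}
DRIVE_SPEED = 3.5

for gen, per_lane in GBPS_PER_LANE.items():
    per_drive_link = per_lane * 4      # each drive sees its own x4 link
    whole_slot = per_lane * 16         # what the x16 slot could carry in total
    print(f"{gen}: x4 per drive = {per_drive_link:.1f} GB/s "
          f"(drive tops out around {DRIVE_SPEED} GB/s), "
          f"x16 total = {whole_slot:.1f} GB/s vs 4 drives = {4 * DRIVE_SPEED:.1f} GB/s")

# If the slot only bifurcates to x4/x4 (8 lanes), only two of the four M.2
# sockets would get a link at all, which would explain "doesn't work outside
# the primary slot" as a lane-count problem rather than a bandwidth one.
```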
  3. @Hackentosher See, that was the reasonable conclusion I came to when I was buying, but upon installing everything I found the following was true ...
- A decent NVMe drive has a max throughput well below that of a 4x PCIe setup
- A card supporting 4 of these would easily max out all 4 drives and still have spare bandwidth on 8x PCIe 4.0 lanes
- The card specifically requires 16 lanes to function, and the board requires the primary slot to be used for this.

My GPU is a 2080 Ti (typical that I would buy this about 2 months before the 3080 announce, but hey ho) ... when an NVMe drive says it "needs 4x PCIe lanes", what does that even mean, since PCIe 3.0 and PCIe 4.0 are drastically different in bandwidth delivery capability? Surely what matters here is bandwidth, not the lane count, yet the 4-lane requirement seems to apply regardless of PCIe version. This creates a vacuum of tied-up lanes that are simply not using the bandwidth they have available; those same lanes could deliver more elsewhere.

@porina Speed is never enough for a developer; we get what we can reasonably afford, but ultimately it boils down to "do I wait 20 seconds for a build or 10 seconds", and when you do 500 builds a day to test the lines of code you write as you go, all those 10 seconds add up. I'm not claiming to need ridiculous speeds in the general sense, but having the card in there lets me RAID the 4 drives and get high speed by splitting my drive access across them ... it's not a capacity issue, it's a performance one. Which brings me back to how these lanes are actually used (the underlying problem I can't figure out).

Further thoughts (adding to my confusion): I have 16x "allocated to" drives that definitely can't use all of that, and I have a GPU that under the right conditions probably could stream data at the max rate of a 16x slot, but under common game engine and benchmark workloads of today likely won't, so it's sort of a weird ask in the first place ... and yet, due to how NVMe works, the board treats those 16 lanes as this weird arbitrary thing that NVMe seems to "require".

So for the sake of this discussion ... let's say I had an "nVidia 4090" GPU that could reasonably saturate a 16x slot ... am I just by sheer luck in a weird gap right now that doesn't expose the problem that card would? AND if that fictitious 4090 could cause a bandwidth limit, then surely we could feasibly hit that today simply by, as per my first post, adding a 40gbit NIC that needs 4 or more lanes. My understanding is that I'm already using more potential lanes than I have, but it "somehow works" (how?), so this would further expand the problem. Also, how does the GPU know to run on all 16 lanes vs only 8 or 4, given that it doesn't seem to "require" 16 lanes?
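To put numbers on the "what does x4 even mean across generations" point, this is the quick table I keep coming back to (per-direction figures after encoding overhead; the drive speeds at the bottom are assumed ballpark values, not specs of my drives):

```python
# Approximate usable bandwidth per direction for common link widths.
GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

print(f"{'link':<12}{'x4':>8}{'x8':>8}{'x16':>8}   (GB/s)")
for gen, per_lane in GBPS_PER_LANE.items():
    print(f"{gen:<12}{per_lane * 4:>8.1f}{per_lane * 8:>8.1f}{per_lane * 16:>8.1f}")

# Assumed ballpark drive speeds for comparison:
#   decent Gen3 NVMe ~3.5 GB/s -> roughly fills a 3.0 x4 link
#   fast Gen4 NVMe   ~7.0 GB/s -> roughly fills a 4.0 x4 link
# So "needs x4" effectively means "x4 of the generation the drive was built
# for": the lane count stays fixed and the bandwidth per lane is what changes.
```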
  4. I was thinking about the machine under my desk. I've noticed a weird thing lately in how computers are built / used (a common pattern if you like). In my case I'm a software developer and gamer, so I use a lot of local space, and when compiling code there's an advantage to being able to read a large number of small files quickly. To that end I bought a 4x NVMe card from Asus that "requires" a 16x slot to run all the drives, and even has options for it in the BIOS, to the extent that the add-in card can only be put in the primary slot.

This got me thinking: optimally speaking, a decent GPU also runs off a 16x slot, and with 4 PCIe lanes reserved for the chipset (listed in many mobo specs including mine), my understanding is I technically "need / want" 36 PCIe lanes. But being a standard Ryzen setup on a B550 chipset board, the specs show that I should expect 20 + 4 (reserved for the chipset) PCIe lanes. So I'm 12 lanes short of the optimal scenario here.

This leads me to my question: how the heck do modern Ryzen 2 or better computers manage PCIe lanes, and how many do I actually need for optimal performance? All the benchmarks and reviews of such systems talk about EITHER running a GPU on a mobo OR a storage solution on a mobo, but I can't seem to find anything about combining these and what the impact is on the lane / bandwidth needs of those setups.

Or to put it another way ... should I consider a Threadripper setup for my scenario, or am I missing something here about PCIe lane "needs"? What would happen if I plugged, say, a decent NIC (a 40gbit card or something) into my existing Ryzen setup, do I have enough lanes?
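Just to show my working on the "36 wanted vs 24 available" claim (the x8/x8 split at the end is an assumption about how boards commonly resolve this, not something my manual spells out):

```python
# Lane budget as I understand it on a B550 board with a 20+4 lane Ryzen CPU.
wanted = {
    "GPU in a x16 slot": 16,
    "4-drive NVMe card in a x16 slot": 16,
    "chipset uplink": 4,
}
available = {
    "CPU lanes to slots": 16,
    "CPU lanes to first M.2": 4,
    "CPU lanes to chipset": 4,
}

print(f"wanted {sum(wanted.values())} lanes, have {sum(available.values())}")
# -> wanted 36, have 24, i.e. 12 short if everything truly got a full x16.

# Assumed resolution: the 16 slot lanes get split x8/x8 between the GPU and
# the NVMe card (both links simply train at x8), and anything else (NIC,
# SATA, USB) shares the chipset's x4 uplink.
```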
  5. Maybe I'm falling into a "gap" here ... hopefully if Linus sees this he can reach out to the manufacturers and ask for this. I cannot be the only person on the planet running 2x 4k screens on my desk and wanting to make the most of the raw space, even if it means pushing a second input into the screen.
  6. @Samfisher You're right, the reason these dual-input screens originally came about was lack of bandwidth; we see that in 8k (or higher) displays today for things like billboard-type setups or Linus' "16k gaming" setup (which is actually multiple screens, for obvious reasons). Bang on though: "you want a large UW monitor, that can act as 2 monitors when playing games in full screen" ... EXACTLY that, but also one that carries at least the equivalent pixel count of 2 regular 4k displays at 3840x2160, and ideally can handle higher than 60FPS gaming. My reasoning is my day job here ... I need a high pixel count to fit everything I need during the day, but then a lower pixel count and a high refresh rate for gaming, so my thinking was a single "dual 4k 120/144hz" UW screen would fulfil both of those requirements, and if I decided one day I wanted to go full screen in a game like Star Citizen for the immersive visuals, then that option would be there too (assuming I had the GPU grunt to drive the entire screen).

@Mihle Yeah, that's exactly what I have seen ... "4k ultra-wide" advertised displays tend to only be "4k" in one of the 2 dimensions. Usually you get 1440 rather than 2160, or you get "4k" across the entire width, which for me is a drop of half my current pixel count, meaning reading text would become worse or require me to use more pixels.

@Denned Nope, that's not what I'm after.
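On the bandwidth point, this is the rough maths that makes me suspect a single-cable (2x3840)x2160 @ 120hz panel is asking a lot (uncompressed rates, ignoring blanking overhead, which only makes it worse; the link capacities are the commonly quoted payload figures):

```python
# Uncompressed video data rate for the screen I'm describing vs common links.
width, height, refresh = 2 * 3840, 2160, 120

def gbit_per_s(bits_per_pixel):
    return width * height * refresh * bits_per_pixel / 1e9

for name, bpp in [("8 bits per channel (24bpp)", 24), ("10 bits per channel (30bpp)", 30)]:
    print(f"{name}: ~{gbit_per_s(bpp):.0f} Gbit/s")

# Commonly quoted payload capacities for comparison:
#   DisplayPort 1.4 (HBR3) ~26 Gbit/s
#   HDMI 2.1 (FRL)         ~42 Gbit/s
# So without DSC compression, one cable doesn't carry it; hence dual inputs.
```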
  7. Hey guys ... I've noticed the "new kid on the block" in display tech seems to be these new ultrawide, curved displays (like the Samsung G9 series), and that got me thinking about my own setup. Some existing models that run at high resolutions support dual inputs, so you can drive, say, a 1440p-tall display with dual HDMI / DisplayPort inputs. I'd like to do something like the latter there, but with the dual 4k screens on my desktop at the moment. The idea is to get something like a 120hz 49-inch ultrawide that's the same (2x3840)x2160 and use half for gaming (like I currently do), allowing me to have chat applications or a browser etc. on the other half of my screen.

During the day however ... in my job as a Cloud Solutions Architect I want to be able to take advantage of every last pixel, with browser tabs (a lot of them), multiple instances of Visual Studio, SQL Server tools, remote desktop sessions etc. all open at once. This is the reason I have dual 4k screens in the first place. I'm all for having nice big ultrawide screens that curve nicely around my field of view, but if I have to drop half my resolution to get that, I figure "what's the point" ... Is there a screen on the market at the moment that actually does this?
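For a sense of scale, here's the pixel count comparison behind the "what's the point" comment (the ultrawide resolutions are the common ones I keep seeing advertised, picked purely as examples):

```python
# Total pixels: what I have now vs the ultrawide resolutions commonly on sale.
setups = {
    "dual 4k (2 x 3840x2160)": 2 * 3840 * 2160,
    "what I'm asking for (7680x2160)": 7680 * 2160,
    "49in super-ultrawide (5120x1440)": 5120 * 1440,
    "32:9 'dual full HD' (3840x1080)": 3840 * 1080,
}

baseline = setups["dual 4k (2 x 3840x2160)"]
for name, px in setups.items():
    print(f"{name}: {px / 1e6:.1f} MP ({px / baseline:.0%} of my current desktop)")
```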
  8. Agreed, 10 drives should comfortably outstretch the performance of a single 1gbit ethernet pipe when each drive is individually capable of a write rate around 150MB/s, even allowing for the computation time (which seems to be swallowing a mere 0.3% of my total available compute power). Which is why I don't think SSDs are really needed here.

I doubt the pool was scrubbing since it's a new pool, although I did see a 20MB/s improvement the following morning in the ongoing migration copy, so maybe! I haven't tested the local perf ... I figured there was no gain to that, since it's how fast I can get data on and off it that I care about.

I do have plenty of space for expansion, so maybe a 10gig ethernet card is the way to go, along with a LACP-supporting switch. I hadn't clicked that aggregation wouldn't work as a single 2 gig pipe, nice catch! The SMB 3.0 idea is interesting though ... One of my friends suggested that I finish the data migration, then install Windows on the metal and pass the physical disks through to FreeNAS running in a Hyper-V VM ... this would give me smarter networking onto the raw hardware, and I could configure a virtual switch for passing the data into FreeNAS on a single pipe at the new higher speed.

Looking at the documentation I got the impression this wasn't "expanding" the existing pool but instead running a second pool alongside it. Is that not the case? Weird that the UI pushes you to add the same number of drives in the second vdev ... until you realise that what it's doing is basically turning 2x RAID6 sets into a single striped, RAID 60-style pool when you do that.
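Putting rough numbers on that (the 150MB/s per drive is the figure from above; the usable-link figures are approximate real-world SMB rates, and RAID-Z2 sequential throughput scales with the data drives rather than all 10, so treat this as a sketch, not a measurement):

```python
# Rough comparison of what the pool could stream vs what the network can carry.
drives, parity, per_drive = 10, 2, 150            # MB/s per WD Red, ballpark
pool_sequential = (drives - parity) * per_drive   # RAID-Z2: data drives do the work

links = {
    "1 GbE (single SMB stream)": 118,         # ~real-world MB/s
    "2x 1 GbE LACP (still one stream)": 118,  # LACP doesn't merge a single flow
    "10 GbE": 1100,                           # ~real-world MB/s
}

print(f"pool sequential ceiling ~{pool_sequential} MB/s")
for name, mbps in links.items():
    bound = "network-bound" if mbps < pool_sequential else "disk-bound"
    print(f"{name}: ~{mbps} MB/s -> {bound}")
```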
  9. I do have a separate server I've been using as a hypervisor; that runs Hyper-V, which is pretty awesome for what I need. TBH I only need to run a few bits like Plex and web services, which I can probably handle by just downloading plugins and deploying them to a jail (from what I understand). There's definitely a lot to learn; I'm still a bit worried about how I'm going to set up the users / groups for it, as trying to set myself up took a while.

Right now my biggest problem is getting the 20TB of existing data on there; it seems to be pushing about 1TB per hour at the moment, and not having a LACP-capable switch slows things down a lot.

Windows Storage Spaces confused the hell out of me, and I couldn't see a way to configure a single RAID 6 volume, so for ease of configuration FreeNAS did (on face value at least) appear to offer a "cleaner" solution. Might be something worth virtualising at some point and testing out. Thanks for the feedback.
  10. To get mine to work I had to create a new group and put my new user in it, then grant that group access to the pool. Apparently the default "wheel" group, into which the install puts root, doesn't grant you anything.
  11. Hi all, I'm new here so go easy on me please. I'd just like to say, I've been in cloud solutions as an architect for a few years now and am "familiar" with the software side of dealing with scale, but this has truly whittled me down to the point of no return.

I have 2 existing 5-bay QNAP NAS boxes, and my plan was simple: build a new server and migrate all my data from those NAS boxes onto it, and hopefully, if I use the right hardware, run a few other things as VMs too. So I went out and bought, from some local guy, a pretty beefy dual Xeon server with 24 bays worth of storage slots and a set of 10x 4TB WD Red HDDs. I did some digging and settled on disk sizes of 4TB, since one of my existing NAS boxes already runs 5 of them; I figured once I was set up I could add those back in and continue to use them too.

Then came the day of the build. Since I'm a Windows guy I figured I'd start with Windows Server 2019 Datacenter edition (really go all out). After the relatively complex learning curve of dealing with rack-mounted Supermicro boards over the ME ethernet port to "remote in to the OS installation" (super cool), I started to realise the error of my ways. My plan was to have a single RAID 6 volume consisting of the first 10 drives, and once all the data was over, add a few more of the drives from my existing NAS as both more data drives and one or two hot spares (I figured I'd play that by ear).

First problem ... Windows apparently doesn't support the creation of RAID 6 volumes.

Second problem ... The motherboard has a single SAS port on it, running to a backplane that serves 4 SAS / SATA drives, and then there's an add-in card with 2 more SAS ports doing the same thing. So I had a total of 12 SAS-based bays I could plug into, and I needed 10 to get going. Having plugged my drives into the first 10 bays, only a few of them showed up in Windows, which perplexed me. Hours of digging later I found out I had to flash the card into "IT mode" to allow it to act as a sort of pass-through provider for the 8 drives it could support; without doing so, the card expected to have RAID configured on it. That in itself wasn't a problem though; since I had figured out that Windows can't give me a RAID 6 volume, I had to use the hardware / compatible tooling to make what I wanted happen.

Having installed the drivers for my SAS card and the relevant tools, I then hit problem 3 ... RAID arrays could only be defined on drives connected to a single controller. There had to be a solution for this ... well, it turns out there is ... enter FreeNAS. I set about installing that, since a RAID-Z volume can be defined across the entire system; the OS just needed to see the drives, and I had already done the tricky bit of putting the SAS card into its required IT mode state.

Being second hand, the server then spent the next 5 hours of my time presenting me with dodgy cables, what I thought were failed power rails on the CPU (turned out to be splitters that weren't needed), and a bunch of small BIOS, SAS BIOS and OS-related settings to get everything to show up correctly. So I create my RAID-Z2 volume (which is the FreeNAS equivalent of a RAID 6 volume) ... I then have the pain of "unix-based systems not being quite the same as Linux-based systems", my Linux friends telling me "oh, that's not my distro, can't really help you there", and other such issues. And what was my problem ... permissions. I found the most trivial guides (even on YouTube, e.g.
this one: https://www.youtube.com/watch?v=sMZ-s8wHkHw) and for some reason, whatever I did, it just didn't work.

So ... I FINALLY solve the permissions issues and get a share to show up on my local machine from the newly configured RAID 6 volume, on what is basically a super-computer for what I'm about to ask it to do, and then ... I begin the file copy from my old NAS boxes. Within minutes, drives start dropping off the array and all hell starts breaking loose ... yet more broken splitters are to blame. Not a bad test of volume recovery though, I suppose. Panic mode reduces a little, and then I realise the speed of my transfer ... 80MB/s. Oh dear.

I have dual 1gbit ethernet going from each NAS, and the same going from the server, all directly attached to a Netgear GS316 switch. Surely this can't be it? I've transferred at over 120MB/s from my local machine to one of the NAS boxes before. Then it hit me! ... I'm doing a drag and drop from box to box via my local machine; I'm the bottleneck! So I hit up the FreeNAS shell ... hundreds of attempted combinations of rsync, scp, ftp, ssh, etc. etc. later, I give up. I simply can't get a NAS and the new FreeNAS box to talk to each other directly and properly. It either outright fails, or the transfer speed is <30MB/s.

At this point ... I'm sitting here thinking, was I right to ditch Windows? Internally on my LAN I really only use Windows anyway ... but I simply couldn't get a volume set up the way I could in FreeNAS, so this was the next best thing, right? ... I scream at the screen, pet the dog, get some consolation token from the wife and sit down to think about this logically ... why am I doing this on my own! So here I am ...

I have tons of questions, but I really don't know where to start, so I guess the obvious starting point is ...
- Should I continue to use a Unix-based storage solution when I'm clearly out of my depth, or is there some sort of software solution I can add to Windows that would achieve the same thing, allowing me to manage this in a way I'm used to?
- What do people think of FreeNAS? I have already figured out that adding the other 5 drives later is going to be a problem; I'm likely going to need to get hold of another 5 so that the same number of drives can be added. My understanding is that ZFS builds a sort of RAID 10 config on top of 2 RAID 6 (RAID-Z2) volumes in that situation.
- Am I looking at storage wrong here? What should I do when I'm looking at 40+ TB of disks and I want it to be extensible for future capacity needs?
- Are RAID volumes even something that should be extended?
- How do people manage, say, 100TB of storage?
- Can anyone help me with either a FreeNAS or Windows Server based setup?

I hope my story gives other noobs like me some insight into the experience of moving from a pre-built NAS like a QNAP / Synology solution, and hopefully the comments will help improve this for others and help them avoid the pitfalls. I thought I was prepared; clearly I wasn't. I made some silly mistakes, but despite all that, I'm not convinced this is the end, as I still have performance issues to resolve.
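For anyone following along, this is the arithmetic that's keeping me up at night (the 20TB is the total data to move; the link speeds are the usual real-world ballpark figures, not measurements from my switch):

```python
# How long moving ~20TB takes at various sustained transfer rates.
data_tb = 20
data_mb = data_tb * 1_000_000   # decimal TB -> MB for a rough estimate

rates = {
    "what I'm seeing now": 80,                      # MB/s
    "a saturated 1 GbE link": 115,                  # MB/s, typical SMB over gigabit
    "a 10 GbE link (if the pool keeps up)": 1000,   # MB/s, rough
}

for name, mb_per_s in rates.items():
    hours = data_mb / mb_per_s / 3600
    print(f"{name}: ~{hours:.0f} hours")
```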