
leadeater

Moderator
  • Content Count

    16,687
  • Joined

  • Last visited

Awards

About leadeater

  • Title
    Fanatic
  • Birthday Sep 23, 1987

Profile Information

  • Location
    New Zealand
  • Gender
    Male
  • Occupation
    Systems Engineer | IT

System

  • CPU
    Intel i7 4930K
  • Motherboard
    Asus Rampage IV Black Edition
  • RAM
    16GB G.Skill TridentX F3-2400C10-4GTX
  • GPU
    Dual Asus R9-290X
  • Case
    LD PC-V8
  • Storage
    4 512GB Samsung 850 Pro & 2 512GB Samsung 840 Pro & 1 256GB Samsung 840 Pro
  • PSU
    EVGA Supernova NEX 1500 Classified
  • Display(s)
    Dell U3014 30"
  • Cooling
    Custom EKWB, 3x 480 RAD everything cooled inc ram (why not?)
  • Keyboard
    Razer BlackWidow Ultimate BF4
  • Mouse
    Mad Catz R.A.T. 5
  • Sound
    Custom-built speakers, home theater sound
  • Operating System
    Windows 10

Recent Profile Visitors

23,135 profile views
  1. Yes, but not every game is or will be that size, and like I said, how many games do you actually play consistently? You have the option of simply uninstalling games you don't actively play and, if you feel like it, downloading them again at a later date. The part where the common person owns 100 50GB games and plays all of them all the time, that's the part I'm disputing, because it's not a real scenario. And if the working data set you commonly use is greater than your SSD cache, you will be cache thrashing: that SSD will die quickly, and performance will suffer as you are accessing…
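The cache-thrashing point above can be sketched with a toy LRU simulation (all names here are hypothetical, for illustration only): once the working set is even slightly larger than the cache, a sequential scan evicts every block just before it is needed again and the hit rate collapses to zero.

```python
from collections import OrderedDict

def hit_rate(cache_blocks, working_set_blocks, passes=3):
    """Simulate an LRU block cache scanned sequentially over a working set.

    If the working set fits in the cache, every pass after the first hits.
    If it is even slightly larger, LRU evicts each block just before it is
    needed again, so every access misses: classic cache thrashing.
    """
    cache = OrderedDict()
    hits = accesses = 0
    for _ in range(passes):
        for block in range(working_set_blocks):
            accesses += 1
            if block in cache:
                hits += 1
                cache.move_to_end(block)     # mark as most recently used
            else:
                if len(cache) >= cache_blocks:
                    cache.popitem(last=False)  # evict least recently used
                cache[block] = True
    return hits / accesses

print(hit_rate(cache_blocks=100, working_set_blocks=90))   # fits: ~0.67 (only pass 1 misses)
print(hit_rate(cache_blocks=100, working_set_blocks=110))  # slightly too big: 0.0
```

The drive-wear point follows from the same numbers: every miss is a new write into the SSD cache, so a thrashing cache rewrites itself continuously.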
  2. Ah yes, because everyone was and still is using tiny SSDs and constantly moving files around. Or they aren't, and have an appropriately sized SSD and almost never move files. https://www.amazon.com/Best-Sellers-Computers-Accessories-Internal-Solid-State-Drives/zgbs/pc/1292116011
  3. It already is; it's a feature/capability within Storage Spaces, and I use it.
  4. Or just use Storage Spaces and set an SSD to Journal mode, another "free" (included with the OS) option that can also be migrated across OS installs and PC upgrades. But like your post says, it's a solution for the extreme minority, at the expense of complexity and performance consistency.
  5. Let's also not forget that neither the new generation nor the current/old generation of consoles has more than 1TB (usable is even less than that), and games are still the most common large storage users for most people, far more common than any other usage. Yes, games are getting bigger, but not all of them are huge, and there are pending technology changes that might actually reduce the size of game installations, so who knows what the data growth is going to be like. I also understand the new generation of consoles have/will have expandable storage; computers already have this, and if…
  6. You likely want it set to thin, not thick. If you leave it as a thick volume you won't be able to create any more volumes if you need to. That's also why you are getting a free space warning: a thick volume actually uses up the space even if there is no actual data in it. A thin volume will only use as much space as there is actual data and will automatically grow in size as needed.
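The thin-vs-thick distinction above can be sketched with a toy pool model (hypothetical names, not a real Storage Spaces API): a thick volume reserves its full size up front whether or not any data is written, while a thin volume only consumes pool space as real data lands.

```python
class Pool:
    """Toy storage pool illustrating thick vs thin volume provisioning."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.committed = 0  # pool space consumed by volumes

    def create_thick(self, size_gb):
        # Thick: the full size is reserved up front, used or not.
        if self.committed + size_gb > self.capacity:
            raise RuntimeError("pool exhausted")
        self.committed += size_gb

    def create_thin(self, data_gb):
        # Thin: only actual data consumes pool space; the volume grows on demand.
        if self.committed + data_gb > self.capacity:
            raise RuntimeError("pool exhausted")
        self.committed += data_gb

pool = Pool(capacity_gb=1000)
pool.create_thick(900)          # reserves 900 GB even with zero data written
try:
    pool.create_thick(200)      # fails: no room left for another volume
except RuntimeError as e:
    print(e)                    # pool exhausted

pool2 = Pool(capacity_gb=1000)
pool2.create_thin(50)           # 50 GB of real data -> only 50 GB consumed
pool2.create_thin(50)           # plenty of room left for more thin volumes
print(pool2.committed)          # 100
```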
  7. All the software in question supports AVX. Also, you'll find the Xeon W-3275 is not that much more than the 3990X, not even $1,000 more: $4,450 vs $3,600 list price (MSRP), and you don't actually pay list price for Xeons unless you are going through retail and buying a single boxed CPU.
  8. While that is true for most of the apps that do run on Mac OS, a lot of them have GPU support, and outside of the tile-based renderers core scaling isn't really that great; you can get better performance with fewer, faster cores. Then you have first-party Apple applications that support the Afterburner card, which 64 TR cores will never match when it can be utilized. The 3990X is still very expensive; it really only makes sense if you are purely CPU focused. There's more to a well-rounded workstation than just slapping a very expensive high-core-count CPU in it and then assuming it's faster…
  9. That doesn't really matter that much, and it's Apple's choice to use the Intel platform. Intel still has things AMD does not, like AVX-512, which gives vast performance improvements. Threadripper is still on the Zen 2 architecture, which has performance implications for applications that are multi-threaded and also have inter-core communication, which covers most professional applications other than ones like Blender that do tile-based rendering. At the time Apple was going through the design phase, Intel was the more optimal choice, and Mac OS was already optimized for Intel architectures…
  10. I would have to assume yes, his track record as of late is not great. I'd advise simply ignoring him.
  11. It wasn't before either, the heads sit above the platter and I believe this height is unchanged.
  12. It can use remote differential compression, yes, but it still monitors at the file level and thus has the issue of files being write-locked. I know about that issue because I've been hit by it and done a bunch of tests to figure out that that is the case. So in my view I classify it as file-level replication, because that is the mechanism used to actually monitor changes, rather than the block level as is the case on something like a NetApp FAS.
  13. No, you build a cluster of servers for this purpose, exactly the same as Backblaze does, except they use their own software-defined storage solution. What issues with DFS? Or do you specifically mean DFS-R? DFS-R is replication at the file level, and the only real issue with that is it can't handle file locks: if a file is write-locked, changes don't replicate until the file handle is closed. DFS-R isn't widely used anyway. DFS Namespaces are, however, and I can't say there are any issues with that; using it is a huge benefit as you never have to change share mount paths even when you…
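The file-lock limitation above can be sketched as follows (a hypothetical toy, not the actual DFS-R implementation): a changed file only queues for replication once no write handle remains open on it, so locked files sit deferred until they are closed.

```python
def replicate_changes(changed_files, open_write_handles):
    """Toy file-level replicator mirroring the behaviour described above:
    a changed file is only queued for replication when no process holds a
    write lock on it; otherwise the change waits for the handle to close.
    """
    queued, deferred = [], []
    for path in changed_files:
        if path in open_write_handles:
            deferred.append(path)   # write-locked: replicate after close
        else:
            queued.append(path)     # safe to replicate now
    return queued, deferred

queued, deferred = replicate_changes(
    changed_files=["report.docx", "db.mdb", "notes.txt"],
    open_write_handles={"db.mdb"},  # still open for writing
)
print(queued)    # ['report.docx', 'notes.txt']
print(deferred)  # ['db.mdb'] -- replicates only once the handle closes
```

A block-level replicator has no such problem, because it ships dirty blocks without caring which file handles are open, which is the contrast drawn with something like a NetApp FAS.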
  14. No, it is not BS. Tell me which blocks are for what and relate to what without the filesystem table. You can't, can you? Then you're doomed; you can't do caching to any degree of efficiency without that. Doing it on the drive is not the same as PrimoCache: there is no driver intercepting anything if you do it on the disk controller rather than in OS software. Just because you're convinced it's possible with zero real technical issues doesn't actually mean that is the case. You can either believe the issues I have pointed out or sit and rage at something not happening because…
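The block-mapping argument above can be sketched as follows (a toy illustration with hypothetical names and extents): without the filesystem's allocation table, a raw block number cannot be related back to any file, which is exactly what a file-aware cache needs.

```python
# Hypothetical extent map, standing in for the filesystem table:
# file name -> the range of block numbers allocated to it.
fs_table = {
    "game.pak": range(0, 4096),
    "save.dat": range(4096, 4100),
}

def blocks_to_file(block, table):
    """Resolve a raw block number to the file that owns it.

    With the table, the mapping is trivial; with no table (as a disk
    controller below the filesystem would see it), every block is just
    an anonymous number.
    """
    for name, extent in table.items():
        if block in extent:
            return name
    return None  # unallocated, or no table to consult

print(blocks_to_file(4097, fs_table))  # save.dat
print(blocks_to_file(4097, {}))        # None: no table, no idea what it is
```

This is the gap between OS-level software like PrimoCache, which sits behind the filesystem and sees files, and an on-controller cache, which only ever sees anonymous blocks.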
  15. Why not? If you like Mac OS and you use an Autodesk product that has a Mac OS version, along with your other software, there isn't any reason not to stick with the platform you want. But it's not like Autodesk is the only one; like the other software I mentioned, those are just some of many. When it comes to design software Mac OS is not short on options, and when it comes to the ones that are widely used, that list is actually small anyway, with almost all having a Mac OS version. But that wasn't really the point I was getting at anyway. In these software there is very little ga…