slickplaid

Member
  • Posts

    11
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

Profile Information

  • Member title
    Junior Member

slickplaid's Achievements

  1. Thanks! Yeah, they're used with a controller in RAID 6. I'll check out those cases. I appreciate the input.
  2. I am a little too old for pretty lights. I think you're right that SSI CEB lines up with E-ATX for case compatibility; SSI CEB is 305x267mm versus 305x330mm for E-ATX, so they share the 305mm width, and it looks like even the mounting holes are the same (rough fit check in the sketch after this post list). I would love to get a MAGNUM TX10-D (hue hue), but the ~$900+ price tag for a *case* is a bit off-putting. The FT05 is pretty limited in that it only supports two 3.5" drives and two SSDs; it might have worked otherwise, since it says it supports E-ATX/SSI CEB. I might just go ahead and grab the Antec Titan650 if I can find one without a power supply and figure out whether the power supply in my build will fit it. That, or simply downsize to the ATX Deluxe variant of the motherboard. Still on the fence.
  3. With your suggestions, I've modified the build so we can use the Define R5, daisy-chain the displays through a Quadro 6k, and add the 980 Ti when it comes out for any graphics testing that needs to be done. We have a few 780s we could pile in there if needed for the number of monitors we'll be using. http://pcpartpicker.com/p/zhW4GX Surprisingly, it doesn't save a ton of money; it looks like we're only saving a few hundred dollars when the overall cost is in the thousands. But hey, saving money is always good as long as it doesn't affect reliability.
  4. My business is footing the bill. You'd be surprised what this workstation does, and reliability is paramount, hence the overpriced enterprise-ish stuff. I'll look into those options. We're still on the fence between that processor and just going all-out Xeon for 128GB+ of ECC RAM. If that's the path we choose, the case becomes a moot point, since we'd be moving to a server-grade chassis at that point. It's really hard to bridge the gap between enthusiast-grade hardware and workstation/server-grade gear when you're right on the cusp functionality-wise. Once you make that transition, the price doubles or triples in a lot of cases, which gets rough when rationalizing the purchase.
  5. I am building a workstation computer for work and *really* want to use the new Define R5 for it. My other want is an ASUS X99-E WS motherboard. Are these mutually exclusive, given that the X99-E WS is an SSI CEB form factor instead of an ATX variant? I'm trying to build it out on pcpartpicker, and when I go to add a case there are only two rather hideous options (the Cooler Master Cosmos II and the Rosewill RISE). Does anyone know of better options for this build, or whether that motherboard would actually fit in the Define R5 (rough dimension check in the sketch after this post list)? Here is my planned setup: http://pcpartpicker.com/p/thQ6mG
  6. Optimized, 123k: http://i.imgur.com/odMR1NY.gif Optimized, 84k (the colors start to get muddled because there aren't enough colors left for the gradients): http://i.imgur.com/pp6LBXa.gif I can go lower if you don't mind some of the animations becoming faster, with less motion in each one.
  7. And one more for kicks: gfycat. It makes him look kind of like a juggalo in the datamoshed part, though.
  8. I made a couple of GIFs messing around with datamoshing. Yes, I was bored. Here's the album, and on gfycat: 01, 02, 03, 04
  9. I guess the real question is: for a workstation-type use case, is this board the better fit, or would you rather go with ASUS's Z97-WS?
  10. Mostly Linux flavors, but Windows on the occasions I need it.
  11. It sounds like a dual processor has a lot of benefit for me in a professional sense and not much downside in a casual-gamer sense. Anyone have more details backing or refuting this?
      As for running two different graphics cards on one motherboard: I've had a lot of issues with graphics cards. About 4-6 years ago it was difficult to get a lot of games and/or graphics cards to properly use all of the monitors; they would disconnect, drop back into SLI/xfire randomly, or the second card would completely disappear from the hardware list and not let me toggle xfire modes. Just a lot of problems. I've moved to NVIDIA's graphics cards after many years of being an AMD guy, and I'm not sure I'll go back any time soon. Their drivers are much more stable, and multi-monitor setups that use multiple graphics cards without SLI don't suffer from a lot of the problems I've experienced with AMD. That said, keeping my system homogeneous with a single brand, type, and model of graphics card is extremely important to me, even if mixing is "supposed to" work. I'd rather be safe with dual Titans or 780 Tis than tear my hair out dealing with different card types, drivers, etc.
      I have never used any of the Quadro or FirePro workstation cards, but I would imagine they're *much* better at handling large numbers of displays. Their gaming chops aren't up to snuff compared with consumer-grade gaming cards, though, and since I'd only use them to drive a lot of monitors and not for 3D/2D rendering, I feel like I wouldn't be getting my money's worth.
      Instead of ECC memory, since daily offsite backups matter more to me than catching immediate errors in data, I would probably lean towards expanding my NAS or adding a PCIe SSD for the working/running copies of the virtual machine images (I/O speed). My thinking on ECC is this, and no, it's not that ECC is a bad thing... just an expensive one. I was just curious whether there's some amount of memory at which the chance of memory errors (more RAM, more chances) starts translating into a noticeably higher crash/error rate for the system, since I routinely have calculation and compilation jobs running overnight (rough numbers in the error-rate sketch after this post list).
  12. I am working on building a new PC for work first and play second, and I need a few bits of advice. I'm not looking for a full build list of all parts, just advice on specific aspects. I work a lot with VMs: on a day-to-day basis I sometimes have up to 8 different virtual machines running at the same time on a single workstation, with compiles and other processor-intensive tasks running in them.
      Current system requirements: >=64GB of memory. It's probably overkill at this point, but I go through so much memory when working that I don't ever want to worry about it. A graphics setup that can support a minimum of 5 monitors, possibly 4K mixed with 1440p panels; for gaming it will only ever drive a single 4K or 1440p monitor for resolution/performance reasons. The monitors are stacked, not arranged for surround. I don't do enough video editing or rendering to need FirePro/Quadro-grade 3D/2D rendering; there's some video encoding, but not enough to warrant a huge expense on a professional graphics solution unless it's what pushes the large number of displays. If someone has an argument for that kind of setup, I would gladly entertain the idea.
      Existing hardware: the PC will be built from the ground up, and the only parts reused are peripherals (keyboard, mouse, etc.) and monitors. I have a rack of lower-end servers (10 Linux servers) in my office; they all get used for longer-running jobs to free up VM capacity on my workstation, and for production purposes.
      My first thought for this build was to split out work and play: build a dual-processor box with a large amount of RAM for my VMs, and a decent separate rig for gaming and media consumption. The part that makes me hesitate is that, with the number of pixels I'm pushing across multiple monitors, I'll need at least two higher-end graphics cards not in xfire/SLI just to have enough outputs to plug in and drive the monitors. I currently run 2x GTX 670s to push four 1440p displays and can game without much of a problem on one of them; the cards are not in xfire/SLI, or I lose one of the displays (xfire/SLI limits me to one card, which only has three outputs).
      Questions: If I go dual processor, does it hurt gaming and/or game compatibility? With this many displays, is it saner to go with something like an AMD FirePro W600 (~$500 USD, up to 6 displays, max 4K on each) or a W9000 (~$3400 USD, probably way overkill) versus a 2x 780 Ti (~$1400 USD) or 2x Titan (~$2k USD) configuration? If I go up over 64GB of memory on some dual-processor platforms, am I getting into territory where I'll need ECC memory and the costs that come with it? (Rough pixel/output/RAM arithmetic in the sketch after this post list.)
      I realize one of your first thoughts will be to expand my rack of servers so I can cut down on how many virtual machines run locally. That's a possible solution, but I've found the maintenance, and the time spent running back and forth when reformatting them, costs me more than simply building a bigger PC to run them as VMs; the cost of adding RAM and cores is less than adding servers and their upkeep. Another thought would be to build one larger server with these specs and farm the VMs out to it, lowering the specs I need on my own machine. I've entertained that too, but honestly I enjoy having all that RAM and all those cores at my fingertips rather than on the other side of a 1GbE or 10GbE LAN connection. Not a great explanation, but splitting things out would eventually leave me with two really expensive systems (a server [RAM/CPU] and a workstation [RAM/graphics]) and then possibly another gaming setup in the other room. So, can I accomplish this in one awesome rig? Should I split it into a workstation plus a gaming box? Anything I've completely missed that you think should be addressed?
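
For the SSI CEB vs. Define R5 question in posts 2 and 5 above, here is a minimal Python sketch of the fit arithmetic. The form factor dimensions are the published spec values (ATX 305x244mm, SSI CEB 305x267mm, E-ATX/SSI EEB 305x330mm); the usable tray depth is a placeholder to fill in from the case manufacturer's spec sheet or a tape measure, not a figure taken from Fractal Design.

    # Rough board-vs-case fit check. All three form factors share the 305 mm
    # width along the rear I/O edge, so the practical question is whether the
    # extra depth clears the case's cable grommets and standoff layout.
    BOARDS_MM = {                 # (width, depth) per the published specs
        "ATX":         (305, 244),
        "SSI CEB":     (305, 267),   # e.g. ASUS X99-E WS
        "E-ATX / EEB": (305, 330),
    }

    def fits(form_factor: str, tray_depth_mm: float) -> bool:
        """True if the board's depth fits within the case's usable tray depth."""
        _width, depth = BOARDS_MM[form_factor]
        return depth <= tray_depth_mm

    tray_depth_mm = 270   # placeholder: substitute the real usable depth for the case
    for ff in BOARDS_MM:
        status = "fits" if fits(ff, tray_depth_mm) else "overhangs the tray"
        print(f"{ff:12} -> {status} at {tray_depth_mm} mm of usable depth")

Nothing here is specific to the Define R5; the point is only that CEB adds ~23mm of depth over ATX while keeping the same width, so the answer hangs entirely on that one clearance number.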
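
On the ECC question in post 11 above, a back-of-envelope sketch of how error exposure scales with installed memory. The FIT rate (failures per 10^9 device-hours per Mbit) is the assumption here; the 25,000-75,000 FIT/Mbit range comes from widely cited DRAM field studies of correctable errors and is illustrative only, since in practice errors tend to be heavily concentrated on a small minority of DIMMs.

    # Expected correctable-error exposure per year as a function of capacity.
    # The point of the arithmetic: exposure grows roughly linearly with the
    # amount of DRAM -- there is no capacity threshold where risk suddenly
    # jumps, only more bits spending more hours holding your data.
    HOURS_PER_YEAR = 24 * 365

    def expected_errors_per_year(gigabytes: float, fit_per_mbit: float = 50_000) -> float:
        """fit_per_mbit is an assumed, illustrative rate, not a DIMM spec."""
        mbits = gigabytes * 8 * 1024              # GB -> Mbit
        errors_per_hour = mbits * fit_per_mbit / 1e9
        return errors_per_hour * HOURS_PER_YEAR

    for gb in (16, 32, 64, 128):
        print(f"{gb:>4} GB -> ~{expected_errors_per_year(gb):,.0f} correctable-error events/year")

Under that model, going from 64GB to 128GB simply doubles the exposure; whether that matters comes down to how long the overnight jobs run and how costly a crash or a silent flip would be, which is the trade-off ECC is buying down.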
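
For the display-output and memory questions in post 12 above, a quick sketch of the pixel, output, and per-VM RAM arithmetic. The monitor mix, the usable simultaneous outputs per card, and the host overhead are assumptions chosen to match the numbers mentioned in the post, not recommendations.

    # Desktop pixel load for a stacked multi-monitor layout (assumed 5-panel mix).
    MONITORS = {"4k": (3840, 2160), "1440p": (2560, 1440)}
    layout = ["4k", "1440p", "1440p", "1440p", "1440p"]
    total_px = sum(w * h for w, h in (MONITORS[m] for m in layout))
    print(f"desktop: {total_px / 1e6:.1f} MP (one 4K panel is {3840 * 2160 / 1e6:.1f} MP)")

    # How many cards to reach 5 simultaneous outputs, using assumed per-card limits.
    cards = {"FirePro W600": 6, "GTX 780 Ti": 3}   # usable outputs, assumed
    needed = len(layout)
    for name, outputs in cards.items():
        n = -(-needed // outputs)                  # ceiling division
        print(f"{name}: {n} card(s) to drive {needed} displays")

    # Per-VM memory budget: 8 VMs on a 64 GB workstation with assumed host headroom.
    total_gb, vm_count, host_overhead_gb = 64, 8, 8
    print(f"~{(total_gb - host_overhead_gb) / vm_count:.1f} GB per VM")

On those assumptions the W600 covers all five panels by itself, while the GeForce route needs a second card mostly for connectors; the gaming card still wins for the single-monitor gaming use, which is the split the post is weighing.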