
Teary_Oberon

Member
  • Posts: 8
  • Joined

  • Last visited



Teary_Oberon's Achievements

  1. =edited to fix misunderstanding= He pretty clearly said "300 and change GB of RAM" when I asked him. It worries me that he is serious, because 300 GB or 300 TB of disk capacity wouldn't make any sense for our use either. Our current workstation server alone has 1 TB of disk capacity and we are taking up 600 GB of it, so 300 GB of storage is too little, and we couldn't fill 300 TB in 50 years.
  2. Currently our business is at 15 users doing basic CAD design work. We don't do any rendering, we don't do any virtualization, and we don't have any plans to in the near future. Our current server is a shared file server for design projects, and that is it. Big Boss comes in today and says that he is getting us a new dedicated server, and that it is going to have 300 GB+ of RAM because the server sales people told him he needed that much for "future expansion." To me that seemed a little excessive and shocking at first hearing, but I wasn't really sure and I didn't want to speak up, because I haven't messed around with servers very much. That is why I wanted to check in with you guys here at the Linus forums, to make sure that my boss isn't getting scammed. Is 300 GB+ of RAM for a 15-user file server too much, too little, or what do you think a recommended number might be? (Rough back-of-envelope numbers are in the sizing sketch after this list.)
  3. Actually, I was digging through the Phanteks screw box and noticed two slightly extended (+2-3 mm) standoffs not listed in the manual's parts list. Since there were only two, I installed them under the backplate on the right-side corners and just left the middle-right screw hole empty (8 total screws now instead of 9, but that's what factors of safety in engineering are for, right?). It seems to have fixed the problem, and I believe there is only very minor flex in the board now. It should be noted for future users, though, that the Rampage V Edition 10 motherboard will not fit in a Phanteks case without extended standoffs. The 5 mm-head standoffs are simply not long enough to clear the motherboard backplate. Neither the Phanteks manual nor the Asus manual appears to mention this incompatibility anywhere.
  4. MOBO: Asus Rampage V Edition 10
     Screws: 9x M5 5 mm screws, 9x standoff screws w/ 5 mm heads
     So I am trying to fasten this Asus motherboard to my chassis, but I am having some pretty extreme difficulties. For the life of me I cannot get the motherboard to sit flush on the standoffs, so half of the screws can't actually reach the standoff heads. After looking closely, I think the problem may very well be the partial backplate that Asus installs by default (see picture). The backplate sits 6 mm off the PCB, while the standoff heads are only 5 mm tall. Add the backplate thickness and the PCB thickness, and the M5 screws over the backplate can't even reach the standoff heads to thread, while the screws on the other side of the MOBO reach fine. The whole board ends up sitting at a slope (quick clearance math in the sketch after this list). Question: has anybody else had this kind of issue getting their MOBO flush? Is it safe or proper to remove the backplate entirely, or is there something else I can do to get around the issue?
  5. DESIRED SETUP
     Asus X99-PRO/USB 3.1 MOBO
     i7 6850k (40 lane)
     2 graphics cards in SLI
     1 Samsung 960 Pro M.2
     1 ThunderboltEX 3 add-in card
     REFERENCES
     ThunderboltEX 3 specs page: link
     Asus X99-PRO/USB 3.1 specs page: link
     Asus X99-PRO/USB 3.1 manual: link
     1* The PCIe x16_4 slot shares bandwidth with M.2 x4. When the M.2 socket is occupied, the PCIe x16_4 slot will be disabled.
     COMMENTS AND QUESTIONS
     At first I was thinking of using an Asus Deluxe II motherboard, but the asinine PCIe lane allotment got me looking at other options. Enter the X99-PRO/USB 3.1. The PRO specs say that PCIe x16_1 and PCIe x16_3 can be used for SLI without interfering with the M.2 slot, which is awesome. PCIe x16_4 apparently gets disabled with the M.2 inserted, leaving only PCIe x16_2 open.
     Question 1: could anybody here kindly tell me whether I will still be able to fit a ThunderboltEX 3 card in PCIe x16_2? Is the graphics card in the slot directly underneath going to interfere with or block anything, especially if I want to add water cooling?
     Question 2: even if the ThunderboltEX card fits, does anybody know what it would do to the PCIe lane allotment? The ThunderboltEX 3 card uses PCIe 3.0 x4 (I think from the PCH, but I'm not sure), so what exactly, if anything, is it going to be stealing lanes away from? Is it going to end up throttling something? (A rough lane-budget sketch is after this list.)
     Apologies if any of my questions sound dumb. This is my first PC build and I really want to make sure not to screw it up.
  6. I know I don't need it... but I don't need SLI either, or the 960 Pro SSD for that matter. I want to get them anyway, just for the fun of it and because I can afford it. I like to challenge myself to take every task to the highest level possible, because that is what makes it fun. I just want to know if x16/x16 + M.2 is possible, because it seems like it would be an interesting personal achievement (along with completing my first overclock, setting up my first water cooling loop, etc.). Also, if I use a PCIe-to-M.2 adapter card, can I still use the SSD as a boot drive, or is the native M.2 slot required for that?
  7. MOBO: X99-Deluxe II (link)
     CPU: i7 6850k (link)
     User Guide w/ PCIe Diagram: link
     Preface: This is my first computer build ever, and I've never touched the inside of a computer before this, so please forgive me if I am asking dumb questions. I really just want to get it right the first time.
     Goal: I want to eke as much performance out of my hardware as possible (naturally), and to me that means using a new 960 Pro M.2 SSD for the boot drive while retaining as much bandwidth as possible for the two graphics cards (ideally x16/x16).
     Problem: The guide says I can do x16/x16 and keep a U.2 port, but I don't want U.2 -- I want x16/x16 with the M.2 slot. PCIe x16_2 looks like an older 2.0 slot, so it is out of the question, and it doesn't seem like PCIe x16_3 can be used, since it shares bandwidth with the M.2 slot I need.
     Question: So if I have the M.2 slot filled, the U.2 ports empty, and a graphics card in PCIe x16_1, can I then put a second graphics card in either PCIe x16_4 or PCIe x16_5 and keep x16/x16 (see the lane-budget sketch after this list)? And if so, would that lead to any kind of SLI bridge issues or waterblock issues, since the distance between the cards is getting so big?
  8. So this is my current, future-build parts list: PC Parts Picker List. I am really interested in trying to set up an M.2 RAID array (just because, and to see if it is possible). But from what I have read, there seem to be a lot of issues surrounding such arrays, the biggest being:
     • X99 MOBOs (which I want to use for the 6850k) only have a single native M.2 slot, but they support 40-lane CPUs.
     • Z170 MOBOs have dual onboard M.2 slots, but they only support 16-lane CPUs (6700k), which means no dual-GPU setups without lane throttling. Correct me if I'm wrong on this.
     • An Asus Hyper M.2 adapter in a PCIe slot is possible, but apparently you can't boot from it, which limits you to a Windows software RAID.
     • There could be CPU or MOBO performance ceilings / bottlenecks for the new, super-fast Samsung 960 Pros (each of which hits 3.5 GB/s individually). I'm not too sure about the specifics of this point or the upper limits of the 6850k or X99, but I do worry about it rendering the entire array useless.
     But then I got to looking around the interwebs and stumbled across this very interesting 2x M.2 NGFF SSD RAID Controller Card, which claims to run two M.2 drives in a single PCIe x4 slot. A possible work-around for some of the above issues?
     Question: is this the magic bullet that will finally allow an easy M.2 hardware boot RAID on an X99, or is there some kind of catch, drawback, or limitation to it (rough bandwidth math in the sketch below)? It seems almost to the level of "too good to be true."
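
A rough way to sanity-check the server numbers in posts 1 and 2. Only the 600 GB used / 1 TB capacity figures come from the posts; the per-user RAM, base-OS overhead, cache size, and data growth rate below are assumptions for illustration, not measurements.

```python
# Back-of-envelope sizing for a 15-user CAD file server.
# Only the 600 GB used / 1 TB capacity figures come from the original posts;
# everything else is an assumed, deliberately generous illustration value.

users = 15
ram_per_user_gb = 1.0          # assumed SMB session + metadata overhead per user
os_and_services_gb = 8.0       # assumed base OS + file-server services
file_cache_gb = 32.0           # assumed generous RAM reserved for caching hot project files

ram_estimate_gb = os_and_services_gb + users * ram_per_user_gb + file_cache_gb
print(f"Rough RAM estimate: {ram_estimate_gb:.0f} GB")   # ~55 GB, nowhere near 300 GB

used_tb = 0.6                  # 600 GB currently used (from the post)
growth_tb_per_year = 0.1       # assumed ~100 GB of new project data per year
target_tb = 300.0              # the "300 TB" reading of the quote
years_to_fill = (target_tb - used_tb) / growth_tb_per_year
print(f"Years to fill 300 TB at that rate: {years_to_fill:.0f}")  # ~3000 years
```

Even with generous assumptions, a 15-user file share lands an order of magnitude below 300 GB of RAM, which is why the quote reads more like an oversell than "future expansion."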
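
The clearance problem in posts 3 and 4 comes down to one comparison. The 6 mm backplate offset and 5 mm standoff height are the figures given in the posts; the 7.5 mm value for the extended standoffs is an assumed midpoint of the "+2-3 mm" description.

```python
# Clearance check: can a screw seated over the backplate reach the standoff?
# Backplate offset and stock standoff height are from the post; the extended
# standoff height is an assumed midpoint of the "+2-3 mm" description.

stock_standoff_mm = 5.0        # head height of the stock standoffs (from post)
backplate_offset_mm = 6.0      # how far the Asus backplate sits off the PCB (from post)
extended_standoff_mm = 7.5     # assumed height of the two extended standoffs found in the box

def board_sits_flush(standoff_mm: float, offset_mm: float) -> bool:
    """The board can only sit flat if the standoff is at least as tall as the
    backplate offset; otherwise the backplate bottoms out before the screw threads."""
    return standoff_mm >= offset_mm

print(board_sits_flush(stock_standoff_mm, backplate_offset_mm))    # False -> board sits at a slope
print(board_sits_flush(extended_standoff_mm, backplate_offset_mm)) # True  -> extended standoffs clear it
```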
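
For the lane questions in posts 5-7, here is a minimal lane-budget sketch for a 40-lane i7-6850K with two x16 GPUs plus an x4 NVMe drive. The slot assignments follow what the posts describe, and the assumption that the ThunderboltEX 3 card hangs off the PCH (sharing the chipset's DMI uplink instead of consuming CPU lanes) follows the poster's own guess; the board manual's lane-allocation table is the authoritative source.

```python
# PCIe lane-budget sketch for an i7-6850K (40 CPU lanes).
# Slot-to-device mapping is an assumption for illustration, based on the posts;
# verify against the motherboard manual's lane-allocation table.

CPU_LANES = 40

cpu_devices = {
    "GPU 1 (PCIe x16_1)": 16,
    "GPU 2 (PCIe x16_3)": 16,
    "Samsung 960 Pro (M.2, PCIe 3.0 x4)": 4,
}

pch_devices = {
    # Assumed to be wired to the PCH, sharing the chipset's DMI uplink with
    # SATA, USB, and networking rather than taking lanes from the CPU pool.
    "ThunderboltEX 3 (PCIe 3.0 x4)": 4,
}

used = sum(cpu_devices.values())
print(f"CPU lanes used: {used}/{CPU_LANES}, spare: {CPU_LANES - used}")
# -> 36/40 used, 4 spare: x16/x16 SLI plus an x4 M.2 drive fits on a 40-lane CPU.
```

Under those assumptions the Thunderbolt card doesn't "steal" lanes from the GPUs or the M.2 drive; the cost it could impose is contention on the chipset uplink it shares with everything else hanging off the PCH.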
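
On the dual-M.2 carrier card in post 8, the likely catch is simple arithmetic: two 960 Pros can each read around 3.5 GB/s (the figure from the post), but a single PCIe 3.0 x4 slot tops out near 3.9 GB/s, so the slot itself becomes the ceiling. The per-lane rate below is the commonly quoted ~0.985 GB/s payload rate for PCIe 3.0.

```python
# Why a 2x M.2 card in one PCIe 3.0 x4 slot can't deliver full RAID-0 throughput.
# SSD figure is from the post; the per-lane rate is the commonly quoted
# ~0.985 GB/s for PCIe 3.0 (8 GT/s with 128b/130b encoding).

pcie3_lane_gbps = 0.985          # GB/s of payload per PCIe 3.0 lane
slot_lanes = 4                   # the carrier card sits in a single x4 slot
ssd_read_gbps = 3.5              # sequential read of one Samsung 960 Pro (from post)
ssd_count = 2

slot_ceiling = pcie3_lane_gbps * slot_lanes          # ~3.9 GB/s
raid_potential = ssd_read_gbps * ssd_count           # ~7.0 GB/s

print(f"Slot ceiling:      {slot_ceiling:.1f} GB/s")
print(f"RAID-0 potential:  {raid_potential:.1f} GB/s")
print(f"Lost to the x4 link: {raid_potential - slot_ceiling:.1f} GB/s")
```

So even if such a card boots fine, sequential throughput would be capped at roughly one drive's worth of speed, which is the "too good to be true" part.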