CoolaxGaming

Member
  • Posts

    4,739
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About CoolaxGaming

  • Birthday Mar 08, 2002

Contact Methods

  • Steam
    CoolaxGaming
  • Origin
    CoolaxGaming
  • Twitch.tv
    CoolaxGaming
  • Twitter
    CoolaxGaming

Profile Information

  • Gender
    Male
  • Location
    A place
  • Interests
    Gaming,
    Hardware and
    Software
  • Biography
    Owner of the Coolax-Gaming Network
  • Occupation
    Student
  • Member title
    Wat

Recent Profile Visitors

14,632 profile views
  1. Hey guys!

     Memory

     I am shopping for server memory and would like some advice. The server we are building needs a large amount of DDR4 memory; the mobo we have, a Supermicro MBD-X12SPI-TF-O, has 8 slots. Here are my constraints:

     - Size: 256GB (our applications run memory heavy with lighter CPU usage)
     - Price point: $2k ($3k is probably our max; not sure at what point the performance gained per extra dollar starts to approach 0)
     - Configuration: 8x32 or 4x64 ECC DDR4

     Our application benefits from lower latencies more often than higher frequencies. However, speeds lower than 2666 aren't optimal either. That said, we gathered this info on 12x Nanya DIMMs at 16GB per stick and 2Rx8, so I'm not sure how well it carries over. We didn't pick that manufacturer, by the way; we were at the mercy of our server provider. This new server will be custom built by us and probably sent to an Equinix datacenter, so we have full flexibility on what to buy. So far I have been searching exclusively for Samsung B-die DIMMs, as I have had a good experience with them in the past and have been told that B-die is the best. However, I am open to other manufacturers if they provide better performance and reliability. This server is the core of our infrastructure, and any potential for errors needs to be minimized.

     ========================================

     Other specs so far:
     - Xeon 5318N
     - X12SPI-TF-O
     - Mellanox ConnectX-6 Dx
     - 3x 980 PRO 1TB (not bought for this build, from inventory)
     - 2x Kingston KC300 2TB
     - Highpoint 4x M.2 NVMe RAID controller (not bought for this build, can replace)

     Everything above is already purchased. Below is what I might get in addition to this.

     ========================================

     Storage

     Now for storage! Reliability is obviously important, but since most of our applications are pretty light, we will probably not need more than 3TB of space. The server will be writing logs for a ton of applications very quickly. We currently have two OEM 980 Pros (PM981 1TB) running in software RAID 0 (the server hasn't rebooted in 700 days; it's a disaster waiting to happen). We would need something equivalent to that or better. I was thinking 4x 980 PROs with a RAID card, but I see SK hynix and others have new NVMe SSDs that have surpassed the 980 PRO. And since we were restricted to this choice by our host, I don't know how well the 980 PRO performs for sustained writes.

     SSD constraints (very rough):
     - Min 3TB total
     - RAID redundancy required (I was thinking RAID 10?)
     - Reads faster than 2x 980 PROs in RAID 0
     - Writes faster than 2x 980 PROs in RAID 0
     - Low latency
     - $2000 budget including the RAID card (does not include anything already purchased)

     I have 3x 980 PRO 1TB unopened from a build I was going to do a couple of months ago. I was going to use them, but like I said, the market seems to have better options now. I was previously set on an Intel P5800X, since that's what's in my PC at home and it is a level above everything else, but I realized I would no longer have redundancy unless I got two of them, which is a bank breaker.

     I am also open to having two sets of drives:

     OS + regular app data drives (data won't be written too often besides updating/deploying new apps):
     - Must still be faster than the 2x 980 PROs in RAID 0
     - High read speeds required
     - Low latency required
     - High random R+W speeds
     - Redundancy needed

     Log drives:
     - High sustained write speeds (RAM cache will fill up)
     - High read speeds in bursts (reading huge log files)

     ========================================

     RAID card

     Lastly, I need a RAID card recommendation for this system as well! I am not sure how the Highpoint RAID card performs; I got it for my personal PC and never used it. If I decide to go for RAID 10 with 4x 1TB 980 PROs and RAID 0 with 2x KC300, what hardware RAID controller would work best? I will only have one x16 slot open, as the Mellanox NIC and Nvidia GPU will take up the other two. Would I need two RAID cards for the aforementioned setup (or would one work, just worse)? I definitely can't use the Highpoint card if I go for two sets of drives, as it only has 4 slots.

     Misc: chassis and cooling recommendations are welcome! Thanks in advance guys!
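     For a quick sanity check on capacity and rough ceilings, here's some back-of-the-envelope math (a sketch only: the per-drive speeds are spec-sheet-class assumptions for gen4 drives, not measurements, and sustained writes will land well below them):

         # Usable capacity and rough peak throughput per candidate layout.
         # Per-drive GB/s figures are assumptions, not measured numbers.
         SEQ_READ_GBPS, SEQ_WRITE_GBPS = 7.0, 5.0

         def raid_usable_tb(level, drives, size_tb):
             if level == "raid0":
                 return drives * size_tb
             if level == "raid10":
                 return drives * size_tb / 2  # half the drives mirror the others
             raise ValueError(level)

         layouts = {
             "2x1TB RAID 0 (current)": ("raid0", 2, 1.0),
             "4x1TB RAID 10": ("raid10", 4, 1.0),
             "4x2TB RAID 10": ("raid10", 4, 2.0),
         }

         for name, (level, n, size) in layouts.items():
             usable = raid_usable_tb(level, n, size)
             reads = n * SEQ_READ_GBPS  # RAID 10 reads can hit every drive
             writes = (n // 2 if level == "raid10" else n) * SEQ_WRITE_GBPS
             print(f"{name}: {usable:.1f} TB usable, ~{reads:.0f}/{writes:.0f} GB/s peak R/W")

     One thing this already shows: 4x 1TB in RAID 10 gives only 2TB usable, under the 3TB minimum, so that layout would need 2TB drives.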
  2. Hey guys, this is my first time buying a server motherboard. I have a Xeon Gold 5318N processor and wanted some advice on which motherboard to pick for it. Normally I would buy whatever the best Asus mobo is, as I never have issues with Asus parts, but they don't have a single-socket motherboard that supports this processor. They have a bunch of dual-socket mobos, but since we only have one CPU, we will be going the single-socket route. It seems like Supermicro, ASRock, and Gigabyte are the only ones making boards. Can you guys let me know the pros and cons of them? And lmk if you can suggest a good motherboard based on the notes below!

     - Socket: LGA4189, Ice Lake version
     - We are open to buying a barebones server, as I know some motherboards aren't sold without the rest of the server (the Asus dual-socket LGA4189 boards, for example).
     - Will need to fit a SAS RAID card, NIC, and FPGA: 3-4 x16 slots.
     - Since a NIC will be used, a good onboard Ethernet port is not needed.
     - Must be single socket, as mentioned.
     - 8 memory slots.

     The CPU, memory, and mobo may be upgraded in January if Sapphire Rapids has some strengths we can utilize, so I am open to buying used hardware for now. I would wait, but unfortunately we need a server running by Dec 1, which is why I am continuing this build rather than waiting for 4th-gen Scalable. Lastly, I know AMD is doing well, but for compatibility reasons we can't really consider them, plus we already had the CPU lying around.
  3. Hey guys, I have recently finished a new build and I am getting very weird GPU temperatures.

     Some context on the cooling setup:
     - 8x NF-A12x25 on 2x 480mm radiators (front)
     - 8x NF-A12x25 on 2x EK thick 480mm radiators (top)
     - 2x NF-A12x25 as rear exhaust
     - Dual D5 EK pump

     This cools two blocks: the Maximus Z690 Glacial running a 12900KS, and a 3080 Ti (regular waterblock and an active backplate from EK). The thing is, the 3080 Ti temperatures are totally whack. The card runs so much hotter than it is supposed to. I had an 11900K (I believe) with a 2080 Super in my old build, which ran off just 1x 480mm rad and 1x 120mm rad, and I never saw temps cross 70.

     Anyways, ambient is usually 23-26 degrees C and I never let it go over or under that. However, the GPU readings are very weird:
     - GPU load: 86%
     - Memory controller load: 75%
     - Memory temperature: 46C
     - Hotspot: 103C
     - GPU temperature: 78C

     This is over a 10-15 minute load. The thing is, the thermal camera I have never shows anything go over 30-35 degrees, but I think that's just reading the temperature of the water, which is confirmed by the motherboard water temp sensor never going over ~37C.

     So far I am thinking I should just drain this thing, undo the GPU block, get new pads, and go over the whole thing again to see if I missed a pad or didn't spread out the paste. Though I am confused why the hotspot is so much higher than everything else. Another thing is that the temperatures go up within a second or two under load, and they drop all the way down a second or two after load. This is a bit unusual for me, as I remember temps going down slowly in previous builds, never this fast. Anyone got any other ideas besides just redoing the water block?
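     In case the timing data helps, this is roughly how I plan to log the core temp once a second (a minimal sketch, assuming nvidia-smi is on PATH; the hotspot sensor isn't exposed there, so that reading still comes from HWiNFO/GPU-Z):

         import subprocess
         import time

         def gpu_temp_c() -> int:
             out = subprocess.check_output([
                 "nvidia-smi",
                 "--query-gpu=temperature.gpu",
                 "--format=csv,noheader,nounits",
             ])
             return int(out.decode().strip().splitlines()[0])

         # A gradual ramp suggests heat soaking into the loop; a near-instant
         # jump points at the die-to-block interface (mount/paste) rather
         # than radiator capacity.
         for _ in range(60):
             print(time.strftime("%H:%M:%S"), gpu_temp_c(), "C")
             time.sleep(1)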
  4. Interesting, thank you @RONOTHAN## I did not know the part about PCIe scaling. Regarding the DIMMs, tbh I might just stick with my 64GB DDR4 and figure out how to do XMP. I don't think I'll have to wait that long, hopefully. Might just offload some test runs to the cloud temporarily; no point getting a headache over this stuff, you are right. Sad I have most of the parts already though LOL. Oh well, the rest of the build will still work as I have that extra block. Ima re-evaluate the cpu+mobo+ram upgrade in a couple of months and just keep the set as a funny reminder of when you shouldn't press the buy button hastily haha. Argh, I wish U.2 could be used on the M.2 or DIMM slots somehow so I wouldn't have to worry about the damn PCIe slots.
  5. Adding on to that, I just checked and it seems that the CPU connection is normally better. I *think* having x8/x8 shouldn't cause a bottleneck, as the GPU is x16 4.0 (which would be x8 5.0, or does it not convert like that? Sorry, the last CPU I knew well was probably the 5960X. Currently I actually run at x8 4.0 because my Z590 has the same issue).
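     From what I've since read (someone correct me if I'm wrong): a gen5 x8 link does equal a gen4 x16 link in raw bandwidth, but a gen4 card doesn't "convert" upward; the link trains at the lower generation of the two ends, so a gen4 x16 GPU in an x8 slot runs at gen4 x8. Rough math:

         # Approximate usable GB/s per lane; each PCIe generation doubles it.
         GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

         def negotiated(dev_gen, dev_lanes, slot_gen, slot_lanes):
             # A link trains at the lower generation and lane count of the two ends.
             gen, lanes = min(dev_gen, slot_gen), min(dev_lanes, slot_lanes)
             return gen, lanes, GBPS_PER_LANE[gen] * lanes

         print(negotiated(4, 16, 5, 16))  # gen4 x16 card, full slot: ~31.5 GB/s
         print(negotiated(4, 16, 5, 8))   # same card in an x8 split: ~15.8 GB/s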
  6. Hey man, thanks a bunch for the extra info! Memory speed has never really been a big issue; it's more about capacity. I don't think I have ever gotten XMP to work properly tbh, as I just checked and my system is running at 2800MHz LOL. Java/Chromium can be a huge memory hog, and today I was unfortunately maxing out my 64GB. I think I might consider a workstation board instead, honestly.

     Regarding the TPM, I do use BitLocker, as I sometimes have to travel back and forth between 3 locations. Before, I used to just transfer the data I needed between machines the night I arrived, but it got to be more and more of a hassle, so now I just take my SSD back and forth (right now I have an M.2 drive). It's a bit of a headache to get everything set up as the hardware varies, but it's a lot easier than usual. Besides BitLocker, I do want to look into using the TPM with PKCS#11 (as a key store) for other cryptography stuff; this will probably be a more important use case.

     However, because of the amount of stuff I have been unaware of regarding the first-gen problems, I will probably put a halt on this build till I know more. I am 80ish% sure I'll get the Formula once I check up on the PCIe stuff more. Isn't it faster to have stuff connected to the CPU? I might look into Gigabyte mobos tbh.
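     For the PKCS#11 part, this is roughly the shape of what I have in mind (an untested sketch: it assumes the python-pkcs11 package with the tpm2-pkcs11 module, and the module path, token label, and PIN are all placeholders):

         import pkcs11

         # Placeholders: point these at the actual tpm2-pkcs11 module and token.
         lib = pkcs11.lib("/usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so")
         token = lib.get_token(token_label="tpmtoken")

         with token.open(user_pin="1234") as session:
             # Generate an AES key that stays behind the TPM-backed token.
             key = session.generate_key(pkcs11.KeyType.AES, 256)
             iv = session.generate_random(128)  # 128-bit IV
             ciphertext = key.encrypt(b"some secret blob", mechanism_param=iv)
             print(len(ciphertext))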
  7. Ha! Love this example, I was just about to mention this. On a serious note, @richb there are literally many other ways that your stuff can die. It reminds me of someone who would always hold stuff for pickup at the post office because they said trucks could damage it. At the end of the day, there are a lot of precautions you can take: how's your room's humidity? How clean is the power, does it affect your stuff? Usually, if you were in a situation where it mattered, you would already be aware of it and would be taking that precaution already. The reason Linus doesn't mention it in every video (I used to watch a couple of years ago, and from what I saw he would use some mat or something maybe 50% of the time, and he's made a video on the topic itself) is that static electricity isn't specific to building your PC; chances are you'd already know if you were in an environment that is extremely charged and has no grounding.
  8. Hey man, thanks for the reply! I believe I have read something similar to what you just said regarding the PCIe lanes, but I couldn't remember which motherboard it was for. Could you clarify this for me so I can understand it a bit better? The PCIEX16_1 would be connected to the CPU, I know that at least. And here it says the PCIEX16_2 shares bandwidth with M.2_1 (the topmost M.2 slot), which is connected to the CPU as well. Plus, the PCIe slots are under the CPU section of the Glacial manual. Since I won't be using any M.2 slots (the SSD is PCIe only), in this case would it not run at x8/x8 connected to the CPU? Based on the research I've done since, the Intel 12900K has 20 lanes that can operate as 1x16 + 1x4 or 2x8 + 1x4. Since I won't be using any M.2 slots like I mentioned, would the Formula and Glacial not come out to be the same if M.2 is disabled? This was the confusion I was having before, actually. And for the other part, "The Extreme is one of the boards that steals PCIe lanes away from the GPU": is this something in addition to the configuration differences, or is there something else that's different? Writing this out, I think yours makes perfect sense if I had an M.2 drive + 1 GPU, as it would be 16 lanes + 4 lanes, versus 8+8 with a PCIe SSD instead of an M.2 one (which unfortunately is the case).

     Regarding the 4x32GB DDR5, I had no clue about that, thank you for letting me know. I haven't opened the RAM packages yet, so I'll make sure to keep the packaging just in case the 128GB solution does not work out. I'll probably end up benchmarking it vs 2x32 DDR5 and 4x16 DDR4 (what I use right now). Would the POST times be longer than with 4x16GB DDR4? Also, I have a 12900K that I haven't opened yet, but I might just bite the bullet and wait a bit, as I see the 12900KS might be out soon! And I don't think the Apex has a TPM 2.0 either, which is my primary concern tbh. I might still order it to see if the DIMMs would work better, ty for the suggestion.
  9. Hey friends! It's been a while since I have been on this forum, so please bear with me if I am unknowingly breaking some rules. I was able to get my hands on a Maximus Z690 Extreme Glacial after waiting for so long! I couldn't find one being sold straight from a large retailer (no 3rd party) till today! I was able to grab the last one Amazon had available for 2k (it's OOS now, and before that third-party listings were 2.6)! I honestly thought I'd have to wait months more, and I am so excited right now.

     Before I get criticized for getting this overpriced mobo, I just want to defend myself by saying that I really, really wanted the Glacial motherboard because of just how good it looks. I honestly have not seen one designed like that, and back in September I really wanted to get the Z590 (Maximus XIII Extreme) Glacial but had to settle for the non-Glacial one, and have felt like I missed out on a better board since. But when Z690 came out, I felt like I had the perfect excuse to finally upgrade to the Glacial! Plus, I think I'll keep this build for a while, so I wanted to make sure it was decked out in every area (the startup did really well this year, so I felt like I deserved it!). Speaking of which, I had no clue you could get 32GB sticks now. That is so awesome, since working with high-throughput apps in Java + my habit of having a billion Chrome tabs open can be a stressor on the memory sometimes. And since there is DDR5 now, I realized I could finally go from 4x16GB to 4x32GB! It's crazy to think about how excited I was when I finally got a 128GB SSD for my OS; I could never have imagined 128GB of RAM fitting in 4 sticks.

     Anyways, sorry for the flexing/sharing. I have genuinely been geeking out all day, and when I needed a question answered about the board, I felt it would be awesome to have a nostalgic login to the community that really got me invested in computer hardware. Honestly, I would not have the opportunities I have today if not for what I learned from this community. I used to post here when I was 12 y/o, 8 years ago, trying to convince people that the AMD FX-8350 was the shit (unfortunately, now I realize it was "shit" and not "the shit" LOL), and learning about random stuff by watching Linus's videos religiously every day.

     ==== QUESTIONS START HERE IF YOU WOULD LIKE TO SKIP MY NOSTALGIA ====

     Okay, onto the problem! I have been hearing that Windows 11 requires TPM 2.0, and having read up on the tech, I would love to go for a discrete module; I had one picked out and everything and was about to order it. However, I quickly checked the new mobo's manual, and I do not think it has a header for a discrete TPM module, which I confirmed by looking here: https://www.asus.com/us/site/motherboards/Intel-Alder-Lake-Z690-H670-B660/websites/download/ASUS_Z690_Full_Specs.pdf. I wanted to know if this means (hopefully) the Glacial has one built in? If that's not the case and it's like a firmware one, should I just say stfu to my need for better looks and go with the Maximus Z690 Formula? I don't really overclock, and from what I have checked, it seems like the Formula might actually be better for me if the Glacial lacks TPM, as I am not into OC anymore.

     Another question: I will have the P5800X 1.6TB, which is PCIe, and I don't think I will ever use the M.2 slots (I don't play many games anymore; I'll only play, like, the campaign of the newest CoD or BF and then uninstall it after, so my storage hovers around 600GB and I have enough headroom already).

     Now, the Asus PCIe lane widths are confusing the shit out of me, so if anyone can help me, I would appreciate it. I have a 3080 Ti and I will have the P5800X, and that's it. I'm not going to use any other M.2, DIMM.2, SATA, or PCIe slots at all. The P5800X is quite fast, so I don't want it to be limited. I think with the Glacial I might have to run the 3080 Ti at x8 PCIe 5.0 and the drive at x8 PCIe 5.0 (which is more than enough for the drive), BUT with the Formula I think I can run the 3080 Ti at PCIe 5.0 x16 and the drive at PCIe 4.0 x16 (not sure about this, I might be totally wrong; I was super confused by this chipset).

     Now, I know the Formula is 1.3k cheaper, and since I won't OC there's zero point in getting the Glacial (plus I have an unopened EKWB block for the CPU, which would go to waste with the Glacial), but try to ignore the price difference. Just assume these two boards cost 800 each. For my specs, would you prefer the Formula or the Glacial? Quick note: I hate PC noise, and sometimes some motherboard part gets hot and it makes me pissed, which is why these two boards are the only ones I am considering. In addition, I have only used Asus boards in my builds, and ever since I got the Rampage V Extreme I have never gotten anything non-ROG for a motherboard, and I would like to stick with that as well! What do you guys think? Thank you so much in advance, I love you all.

     Here's the full build btw; I'll italicize the stuff I am keeping from my current build (I'll post pics when the mobo comes, as that + the Intel P5800X is what I'm waiting on):
     - Intel i9 12900K
     - ROG Maximus Z690 Glacial
     - 2 packs of: DOMINATOR® PLATINUM RGB 64GB (2x32GB) DDR5 DRAM 5200MHz C40
     - MSI 3080 Ti Vector Trio
     - P5800X 1.6TB SSD
     - Phanteks Enthoo Primo, all Noctua fans
     - be quiet! Dark Power Pro 12 1200W, BN646, 80 Plus Titanium
     - 360 rad + 360 rad (FK, I REALIZED I GOT A 360 ACCIDENTALLY INSTEAD OF A 480)
     - Block/backplate for GPU
     - https://www.ekwb.com/shop/ek-quantum-kinetic-tbe-300-d5-pwm-d-rgb-acetal
     - https://www.ekwb.com/shop/ek-quantum-magnitude-d-rgb-115x-nickel-acetal (idk if I'll use this though, cuz the Glacial does not need one)

     EDIT: I have 3x 2560x1440 165Hz monitors, and the PC is used mostly for Java/C(++) TLS-related development.
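     Side note: I figure I can at least check for a firmware TPM (Intel PTT) from Windows before ordering any discrete module; my understanding (worth double-checking) is that a firmware TPM 2.0 satisfies Windows 11 just as well. A minimal check, assuming a Windows host (Get-Tpm ships with Windows 10/11):

         import subprocess

         # Ask Windows whether a TPM 2.0 is already exposed (on Z690 that
         # would typically be Intel PTT, the firmware TPM).
         result = subprocess.run(
             ["powershell", "-NoProfile", "-Command", "Get-Tpm"],
             capture_output=True, text=True,
         )
         print(result.stdout)  # look for TpmPresent / TpmReady : True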
  10. That 500GB is for my OS. I have 4 separate drives remaining, on the other hand.
  11. Hey guys, so I have run into a small issue. I just got a new NVMe M.2 Samsung 850 Pro to replace my current 2x AMD R7 SSD 240GB RAID 0 configuration. Now my issue is, for games, I have a 2x 500GB Samsung 850 RAID 0 config. Is it worth putting the 2x AMD and the 2x Samsung in RAID 0? Bear in mind, data reliability/points of failure is not an issue. I am running an X99 motherboard. To add to this, I have not even gotten close to filling up the 1TB yet, so the lost space is not an issue either. Thanks!
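     For the space math, assuming classic Intel RST-style striping (the array is limited by its smallest and slowest member):

         # Classic RAID 0 gives n * (smallest member), so mixing the 240GB
         # and 500GB drives trades capacity for stripe width.
         def raid0_usable_gb(sizes_gb):
             return len(sizes_gb) * min(sizes_gb)

         print(raid0_usable_gb([240, 240]))            # 480  (current AMD array)
         print(raid0_usable_gb([500, 500]))            # 1000 (current Samsung array)
         print(raid0_usable_gb([240, 240, 500, 500]))  # 960  (combined four-drive stripe)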
  12. The Surface Book was my original "look", but it does not seem to be upgradeable, and is a bit too expensive, I think about 1350.