Search the Community
Showing results for tags 'epyc'.
-
Hi guys, I know the title sounds paradoxical on its face, but I would appreciate recommendations for CPUs that have low idle power draw (~20-50W max) while offering superior, or at least comparable, multithreaded performance to a 14900K, and preferably higher maximum RAM support as well. I don't mind the power consumption under load (it can hit 600W for all I care); what I'm looking for is a CPU with low idle power draw. In fact, the 13900K/14900K themselves are good examples of CPUs that fit this criterion: my 13900K could idle as low as 20W, yet its multicore performance is great (the draw under load is pretty high, but that's okay). Alas, I need a CPU that is more powerful and/or supports more RAM than the 13900K. Budget is not an issue. Honestly, I was thinking of getting the Threadripper 7980X, if not for the fact that I've read its idle power draw is ~150W. Server CPUs are fine too. Alternatively, is there a way to play with the Threadripper's BIOS settings to lower its idle power usage to under 50W while maintaining its peak performance? Thanks so much!
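To put numbers on why idle draw matters for an always-on machine, here is a quick back-of-the-envelope sketch; the $0.30/kWh electricity price is an assumed example rate, not a figure from the post:

```python
# Rough annual electricity cost of idle power draw for an always-on machine.
# The $0.30/kWh rate is an assumed example price; substitute your own tariff.

def annual_idle_cost(idle_watts, price_per_kwh=0.30, hours_per_year=24 * 365):
    """Yearly cost of sustaining a given idle draw, in currency units."""
    return idle_watts / 1000 * hours_per_year * price_per_kwh

low = annual_idle_cost(20)    # e.g. a 13900K-class idle
high = annual_idle_cost(150)  # e.g. the reported Threadripper 7980X idle
print(f"20 W idle:  ${low:.2f}/yr")
print(f"150 W idle: ${high:.2f}/yr")
print(f"difference: ${high - low:.2f}/yr")
```

At these assumptions the ~130W gap works out to a few hundred dollars a year, which is why the idle figure can matter more than peak draw for a 24/7 box.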
-
Hi, so I have no idea why this is happening. It started after I updated Plex on one of my VMs in Proxmox; before that, it worked fine. To be honest, I have not yet tried running the system without my drives, but that will be my next step. I am a little confused, so if you have any advice or things to test, please let me know; it was a fairly expensive system for me.

Specs:
CPU: AMD Epyc 7551P
MB: Supermicro H11SSL-i
RAM: 64 GB SK hynix 2666 ECC
Boot drive: M.2 Samsung 980 1TB
Power supply: be quiet! System Power 9 700W
No GPU
-
AMD’s EPYC forms the basis for Threadripper, and with Zen 3, it’s not even a fair fight with Intel anymore. How can Intel turn this around, and can AMD keep it going?
-
Hi guys, I'm planning to build a new EPYC 2U server, but since I have never built in a rackmount case I'm really worried that some parts might not fit. Also, I have no idea what PSU might be good for my use case (Proxmox with TrueNAS, Plex, firewall, game servers, mail server, Nextcloud...):

Case: Fantec SRC-2612X07 2U 680mm, 301.55 € x1
Mainboard: Supermicro H12SSL-I, 508.74 € x1
CPU: AMD Epyc 7352 24c/48t, 1,208.96 € x1
Cooler: Inter-Tech CPU-Kühler A-26, 2U active, 37.45 € x1
RAM: 64GB Samsung M393A8G40AB2-CWE, 317.00 € x4
PSU: ??? x1
M.2: Samsung 970 Evo Plus 500GB, 65.90 € x2
SSDs: Crucial MX500 1TB, 88.00 € x8
Adapter: SAS 36 SFF-8087 to 4x SATA, 10.99 € x2
NIC: 10Gtek 10GbE PCIe, 160.99 € x1

Any help is greatly appreciated!
-
Today, November 8th 2021, AMD announced in a livestream what their next-generation data center products will look like.

Starting with their newest server CPU, code-named Milan-X, which is essentially Milan with a truckload of cache added on top of it: 512MB worth (8x 64MB). As they showed a few months ago, it looks exactly like the old Milan processors, keeping the same 64-core maximum and the same socket, and current servers can support it with just a BIOS update.

On the graphics side they announced the MI250X and MI250: a dual-die design with octa-channel HBM2e, 3.2TB/s of memory bandwidth, 128GB of VRAM, and 220 compute units, built on TSMC's 6nm node, with a monster TDP to match at 560W.

At the end they also gave us a sneak peek at what's next, confirming MLID's leak to a T: Zen 4 will come in two versions, one code-named Genoa with up to 96 cores, and another code-named Bergamo with up to 128 cores, the latter optimized for density and less so for general compute. Both will be made on TSMC's 5nm and launch around late 2022; interestingly, Genoa is already sampling to customers (Bergamo to come later, in 2023). Alongside the expected specs like DDR5 and PCIe 5.0 is a very important one: CXL.

My thoughts: I love to see packaging tech continue to be pushed by AMD. A small detail of the CDNA 2 announcement is that it's actually using embedded silicon interposers to connect the GPUs to the HBM stacks, and to connect the two GPU dies. This means HBM will become much more flexible, as one no longer needs a silicon interposer the size of the entire substrate; add to that the fact that HBM3 is near prime time, and things are looking good for the standard. Can't wait for all this tech to trickle down to us mortals. I just noticed they also announced CXL support with Zen 4 Epyc, so we might be close to getting more efficient use of the PCIe lanes our CPUs have, as well as finally an interconnect where Optane can thrive.

Sources
-
Country: United States
Games, programs or workloads that it will be used for: Linux compute node?
Other details: I don't know what the customer wants these for, but I'm building them!

As ever, my job has steered me into building something amazing. Something not quite Epic, but truly Ripping... That's so bad, I'm sorry. We have a customer that has in the past asked for some really beefy rigs, as detailed on the forums last June. Mid-March of this year they came back asking for more. Not more of the same, but simply More. By that I mean they want systems with more cores/threads, RAM, and storage, and all of it in a 2/3U rackmount. Now, I told them point blank that I can't go past 128GB of RAM on anything short of HEDT, and they said yes. After a few quotes we came down to the following specs:

CPU: R9-3970X (32c/64t)
Cooler: Dynatron A26 (small and fits)
RAM: 8x 32GB Crucial Ballistix 3,200MHz CL16 (they wanted 512GB but didn't want to pay for Threadripper Pro)
Mobo: ASRock TRX40 Creator (does what they want + 10Gb NIC for inter-node connection)
Storage:
- SSD: 1TB Samsung 970 EVO (because they don't need a 980 PRO)
- HDDs: 3x 8TB Seagate IronWolf in RAID 5 (more storage, more better)
GPU: GT 710 (no GPU compute in these rigs)
PSU: be quiet! Pure Power 11 500W (enough power, 80+ Gold and modular)
Case: iStar D-Storm 300-PFS (3U, full ATX, rackmount) + extra Noctua fan for airflow

The only thing I'm worried about is the small CPU cooler, but it's rated for the job given enough airflow. Here is the parts picture, less the case; I'll upload the build when it's all done.
-
Hello all, and happy new year! Quick question for you all: I'm currently building a home server and picked up a used 1st-gen Epyc from a Dell server. I put it into a Supermicro H11SSL-NC motherboard, but I can't seem to get it to POST. I borrowed my buddy's Epyc plus the RAM from his PowerEdge server, but it still did not POST. I have sent my motherboard back to Supermicro twice already (it POSTed both times for them) and have tried a different power supply. I've made some calls and done some googling, but no one seems to know if a Dell Epyc is actually compatible with a Supermicro motherboard. I'm assuming Dell might put some microcode onto the CPU?

CPU: AMD Epyc 7601 (from a PowerEdge server)
MB: Supermicro H11SSL-NC
RAM: SK hynix 32GB 2666 ECC memory (came with the CPU), but I also tried some supported Samsung memory and still no POST

TL;DR: Should a Dell Epyc CPU work in a Supermicro motherboard? If not, are there any workarounds?
- 6 replies
-
- supermicro
- dell
-
(and 1 more)
Tagged with:
-
Hi there, I am currently planning my first Epyc build. I want to get the Epyc 7443P. Here are the components I have chosen:

Supermicro H12SSL-CT
Micron RDIMM 32GB, DDR4-3200, CL22-22-22, reg ECC (MTA18ASF4G72PDZ-3G2E1R), 2x (EDIT: I went with 4x now)

For now I want to go with 2x 32GB modules and get more later. Will this RAM work fine with the motherboard? I am not so familiar with server builds yet. Anyway, I want more PCIe lanes, ECC, flexibility, and the remote management. It will be a nice addition to my home network, since I have upgraded to 10GbE. I plan to run unRAID or something else, maybe Windows Server 2022 Datacenter; I have not decided yet. The higher boost frequency in particular is important for me, as I want to do some gaming too. My usual workload will be: running VMs, programming, compiling, rendering (Blender), Photoshop, etc.

My current hardware:
Ryzen Threadripper 1900X
AMD Radeon Pro W5700 8GB (will go in the new build)
MSI X399 Gaming Pro Carbon AC
8x 16GB (running at 2666 MHz)
A couple of M.2 NVMe and SATA SSDs (will go in the new build)
Seasonic FOCUS GX-850 80+ Gold (will go in the new build)
Noctua NH-U14S TR4-SP3 (will go in the new build)
Mellanox ConnectX-3 (will go in the new build)

Thank you in advance.
- 8 replies
-
- epyc
- workstation
-
(and 1 more)
Tagged with:
-
Budget (including currency): $20,000
Country: Uzbekistan
Games, programs or workloads that it will be used for: Deep learning training and inference
Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): We currently have a GeForce RTX 3090 graphics card, a 10th-gen Intel Core i9-10850K processor, 64 GB RAM, and a 1 TB SSD. It is not enough for our current workload and we want even more computational power, so we were thinking of upgrading to 4x RTX 3090 on a server-grade, high-end AMD Epyc platform.
-
I've been using TrueNAS Core with 4 different Windows 11 VMs (I know, not the most recommended), accessible via 6 zero clients and 4 laptops (the VMs have multiple users and run RDP Wrapper to achieve this). It ran fine for the past 5-6 months, but recently TrueNAS has been shutting down the VMs on its own whenever there is high usage on the network shares. So I'm thinking of shifting to a new platform for the host OS. I've shortlisted three out of many which have support plans; I'd be okay paying for support, but not at 25% of the server cost per year:

Proxmox (have used it multiple times, but every time some error or misconfiguration led me to move away; it's also on the cheaper side for support plans)
XCP-ng (have heard a lot of good things but never tried it; plans are a bit expensive, but I'm okay with that)
VMware ESXi (have used it previously and liked the way it handled VMs, but there are two problems: one, hardware RAID for achieving redundant storage, and two, expensive support plans, and even the community isn't that friendly)

Now for the hardware:
AMD EPYC 7542
Supermicro H12SSL-I motherboard
Micron 128GB 3200MT/s ECC memory (4x 32GB)
Samsung 860 Evo 250GB x2 (for the host OS)
Gigabyte Aorus 500GB x4, in mirror plus stripe (for VM disks)
Intel X710-DA2 10G SFP+ NIC
Seagate Exos 10TB x2 (for network shares and local backup)

With this kind of hardware in a single node, what host OS would be the best option for virtualisation? (PS: for the single node there are 2 backups available: one server, in a different location from the above, at my home while this one will be installed at my office; and a second in the cloud with Backblaze B2, encrypted of course.)
- 1 reply
-
- operating system
- proxmox
- (and 4 more)
-
Budget (including currency): $3000
Country: US
Games, programs or workloads that it will be used for: The base will be Linux (version TBD), hosting two passthrough Windows VMs for Windows 10 gaming. All other VMs on this box will be either Linux or Windows Server 2019.
Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): Using existing drives/case/etc.

Parts:
2x RTX 2070 for Windows 10 virtualization
Epyc server board: https://www.newegg.com/asrock-rack-romed8-2t/p/N82E16813140044 - see below; I think option 2 is better. EDIT: Decided on the same Gigabyte MZ32-AR0 that Linus used. It has IOMMU options in the BIOS whereas the ASRock does not, has more memory slots, and has most of the features I need (not as good as the ASRock). The ASRock board is more expensive than the Gigabyte board Linus used, but it has all seven PCIe x16 Gen 4 slots, 2 OCuLink ports, 1 Thunderbolt header, etc.
24-core Epyc CPU: https://www.newegg.com/amd-epyc-7402p-socket-sp3/p/N82E16819113592?Description=epyc rome cpu 24 core&cm_re=epyc_rome_cpu_24_core-_-19-113-592-_-Product
Adding an Adaptec 71605 - not sure if I will pass it through to Windows Server 2019, as all attached drives are on the current Windows Server 2019 implementation.
Using a 45-drive case to replace the 36-bay that had the motherboard; I will also need to buy a new case for the motherboard.

Questions:
1) I cannot really afford to populate all 8 RAM sockets - can I start with, say, 2 or 4 memory sockets populated?
2) Did Linus ever get around the SMT issue? If not, that could be a deal breaker for me. I think Threadripper 3 has a workaround for VFIO and SMT, but there are no Threadripper boards with seven PCIe 4.0 x16 slots. I would be fine with 7 slots even if only 3 were PCIe 4.0 and the rest PCIe 3.0, so I can put in expansion cards.
3) Thoughts on a case?
4) Should I watercool? Will that lower the overall temperature in the room? Or which is the ideal air cooler?
5) Any good tutorials on VFIO?
6) For the motherboard above, how would I find out the IOMMU groupings and whether it will do what I want?
7) Any ideas for a case? I do NOT need any drive bays except for the drives connected to the motherboard, as I will hook up to an external 45-bay.

All help is sincerely appreciated. While I have been running Linux, I want to cut down on physical Windows machines (ideally getting off Windows completely eventually). New to VFIO and GPU passthrough.

Timothy Atwood
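On the IOMMU-groupings question: once a board is booted with the IOMMU enabled (e.g. amd_iommu=on on the kernel command line), Linux exposes the groupings under /sys/kernel/iommu_groups, so a short script can list them before committing to a passthrough layout. A minimal sketch; the sysfs path is the standard Linux one, and nothing here is specific to any particular board:

```python
import os

def list_iommu_groups(base="/sys/kernel/iommu_groups"):
    """Return {group_number: [pci_addresses]}; empty dict if no IOMMU is active."""
    groups = {}
    if not os.path.isdir(base):
        return groups
    for entry in sorted(os.listdir(base), key=int):
        # Each group directory holds a 'devices' subdir with one entry per PCI device.
        devices_dir = os.path.join(base, entry, "devices")
        groups[int(entry)] = sorted(os.listdir(devices_dir))
    return groups

if __name__ == "__main__":
    mapping = list_iommu_groups()
    if not mapping:
        print("No IOMMU groups found (is the IOMMU enabled in BIOS and on the kernel cmdline?)")
    for num, devs in mapping.items():
        print(f"IOMMU group {num}: {' '.join(devs)}")
```

For clean passthrough, each GPU you hand to a VM should sit in its own group (or share one only with its own audio function).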
-
Hello guys, I recently bought a Supermicro H11SSL-I-O motherboard and put my EPYC processor in. However, the board can't boot up at all. The manual says a POWER_OK LED on the board should light up, but I'm not seeing that LED come on. Is there anything special I need to do besides installing the CPU and memory?

Also, if I plug the ATX power supply into a power tester, it works. If I plug it into the motherboard, pull it out, and then plug it into the power tester, it doesn't work (as if the PSU detected a short circuit and disabled itself). If I cut power to the PSU and plug it back into the power tester, it works again. This happens with or without CPU/memory installed. However, the IPMI/BMC LED on the board is still working (blinking green) and doesn't seem to have a problem.

Configuration:
CPU: EPYC 7501 / EPYC 7351P
Memory: Micron MTA18ASF1G72PZ-2G1A2 1Rx4 PC4-2133P ECC REG / SK hynix HMA42GR7MFR4N-TF PC4-2133P ECC REG
PSU: Dual Supermicro 920W 1U PSUs with power distributor

Details: According to the motherboard manual, the POWER_OK LED should light up when the correct voltage is detected. It also says to populate only memory slot C1 and install the CPU for debugging, but after doing these things I still can't get that LED to light up.
-
Sources:
http://www.anandtech.com/show/11476/computex-2017-amd-press-event-live-blog-starts-10pm-et
http://www.anandtech.com/show/11482/amd-cpu-updates-threadripper-64-pcie-lanes-epyc-june-20th

More details confirmed for Threadripper:
Confirmed to be half an EPYC processor
16-core/32-thread top end, made up of 2 dies
All products feature 64 PCIe lanes: 60 from the CPU, 4 from the chipset
All products feature quad-channel DDR4 memory
All products feature 16MB of L3 cache
X399 chipset (sick burn right here)
Release: Summer 2017
Still no confirmation of other SKUs

EPYC news:
32-core/64-thread, 128-PCIe-lane CPU coming June 20th

Die shot: #covfefe
- 68 replies
-
- amd
- threadripper
- (and 4 more)
-
Hi guys, I asked two days ago about which is better for a workstation, the R7 1800X vs the R9. I heard that AMD EPYC is going to be available soon, so I'm thinking of building a dual-socket workstation with it. But is it only for servers, or is it like Xeon, for both servers and workstations?
- 15 replies
-
Original source: http://wccftech.com/amd-unveils-epyc-cpus-32-cores-64-threads-datacenter/

AMD unveiled the new EPYC datacenter CPU based on AMD Naples silicon, with up to 32 cores and 64 threads:

A highly scalable, 32-core System on Chip (SoC) design, with support for two high-performance threads per core
Industry-leading memory bandwidth, with 8 channels of memory per "Naples" device; in a 2-socket server, support for up to 32 DIMMs of DDR4 on 16 memory channels, delivering up to 4 terabytes of total memory capacity
A complete SoC with fully integrated, high-speed I/O supporting 128 lanes of PCIe 3.0, negating the need for a separate chipset
A highly optimized cache structure for high-performance, energy-efficient compute
AMD Infinity Fabric coherent interconnect for two "Naples" CPUs in a 2-socket system
Dedicated security hardware

Personally, I'm excited to see how the Infinity Fabric coherent interconnect will help system performance.
- 19 replies
-
So, I guess the rumor mill is going to start spinning regarding second-gen everything AMD. According to Canard PC, we should expect a second-gen Epyc with 64 cores, 256MB of L3 cache, 8x DDR4-3200, and 128 PCIe 4.0 lanes. If that's not just a rumor and it turns out to be true, I wonder what Intel's response will be in the near future. I can see Linus having a field day with that CPU if it turns out to be real! Source
-
Surprised this hasn't been posted yet.. So, for some time rumors have flown that Threadripper is made of "failed" Epyc chips, as there are only two of the four dies in use, but AMD has finally spoken up to clarify things. As it turns out, two of the dies are inert and do not even contain transistors. Basically, they are just spacers, there for structural integrity and nothing else. Source: https://www.overclock3d.net/news/cpu_mainboard/amd_clarifies_why_threadripper_uses_4_silicon_dies/1
- 101 replies
-
- amd
- threadripper
- (and 4 more)
-
Threadripper 1950X vs Epyc 7401P? Hello, I am planning to build a new PC and looking at CPUs. I do a lot of 3D work (ZBrush, Mari, Maya, Houdini, Substance Painter, Substance Designer, Topogun 2, 3D Coat, Knald, AE and Photoshop), but I also like to game from time to time at max settings. I was looking at the Epyc CPUs and also at the Threadripper 1950X, and I am confused:

Would I be able to game at max settings without any slowdowns or hiccups on an Epyc?
What is the difference between base, all-core and max clock speeds?
Can I clock the EPYC up to 3.0 GHz or more? Is there a good cooler available for it, so I won't fry my CPU?
Can I use regular graphics cards in a server motherboard? (I currently have an NVIDIA GeForce GTX 780 3072 MB.)
Where can I see what the core speed is? Or is the all-core total 2.0, 2.8 or 3.0?
-
I spent a few hours working on this project. It shows the total GHz / retail price of each CPU. I may add Intel with IPC adjustments later.

Ryzen 3
1200 - 0.1247706422018349
1300X - 0.1147286821705426

Ryzen 5
1400 - 0.0804733727810651
1500X - 0.0783068783068783
1600 - 0.0986301369863014
1600X - 0.0963855421686747

Ryzen 7
1700 - 0.0899696048632219
1700X - 0.0761904761904762
1800X - 0.0641282565130261

Ryzen Threadripper
1900X - 0.058287795992714
1920X - 0.0600750938673342
1950X - 0.0640640640640641

Epyc
7251 - 0.0488421052631579
7281 - 0.0664615384615385
7301 - N/A
7351 - N/A
7351P - 0.0618666666666667
7401 - 0.0389189189189189
7401P - 0.0669767441860465
7451 - N/A
7501 - 0.0282352941176471
7551 - N/A
7551P - 0.0457142857142857
7601 - 0.0243809523809524

RYZEN.txt
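For anyone wanting to reproduce or extend the numbers above: the ratios appear to match cores times boost clock divided by launch price (e.g. Ryzen 3 1200: 4 x 3.4 GHz / $109 ≈ 0.1248). A small sketch under that assumption; the clocks and prices in the table are my own launch-era inputs, not figures from the post:

```python
# Value metric from the post, reconstructed: total GHz (cores x boost clock)
# per dollar of retail price. Clock and price inputs below are assumptions.

def ghz_per_dollar(cores, boost_ghz, price_usd):
    """Cores x boost clock (GHz) divided by retail price (USD)."""
    return cores * boost_ghz / price_usd

cpus = {
    "Ryzen 3 1200":  (4, 3.4, 109),
    "Ryzen 7 1800X": (8, 4.0, 499),
}
for name, spec in cpus.items():
    print(f"{name}: {ghz_per_dollar(*spec):.4f}")
```

Both results line up with the corresponding entries in the list, which suggests this is the formula being used (an IPC adjustment, as the post mentions, would just multiply in one more factor).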
-
Hi guys! I would like some feedback from you concerning this, mainly because I can think of a lot of professional workflows where it works like this, but of course can't think of all of them. So I would appreciate you sharing some hopefully different views of the landscape and teaching me something new, hehe. Also, the reason I'm putting the discussion here is that LTT is one of the only channels where a build like this could actually happen, IMO.

INTRO
Here is my train of thought. Threadripper is all fine and dandy; for that price, Intel is just the worse option. Also, most probably, when Intel goes over 10 cores (up to 18) while pushing base clock speeds down (nobody cares about boost clocks if you are doing operations that use all the cores), it will see even worse returns for the money compared to their 10-core parts. But honestly, I don't see the need for Threadripper in the mid-term either. I mean, it's not meant for gaming; you will gain basically nothing gaming on it compared to a far cheaper OCed 1700 (or 1800X if not overclocking). I'm not going to count every dollar (like extra cost for cooling), because that's cheap relative to the cost of the platform and its possible uses. There are also space constraints (that Threadripper fits in a regular PC case is a huge plus), but again, we are talking professional workflows here.

THE MAIN QUESTION
Here is what I'm thinking:
1x Threadripper 1950X (16C/32T) costs >1,000 USD. Overclockable to 3.9GHz on all cores (some go to 4.0, but let's leave 3.9 as a realistic expectation). 64 PCIe lanes.
2x Epyc 7281 (16C/32T each) cost >2x 600 USD = 1,200 USD for 32C/64T. At base clock, all cores go to 2.7GHz. 128 PCIe lanes.

USEFULNESS
GPU workloads: As I look at it, some people may buy into Threadripper for its 64 PCIe lanes, not caring about or needing the CPU performance (maybe their workflow is GPU-specific and needs as many GPUs as possible running in a single machine). But for them it would also be better to just buy a single 8C/16T Epyc with 128 PCIe lanes for >400. Even though, honestly, I don't think price is an object compared to the cost of actually populating all 64 PCIe lanes with GPUs (not to mention 128). Space doesn't play a role here, because with this many GPUs you are moving away from normal PC cases anyway.

CPU workloads: My logic goes like this: if you are using the CPUs for rendering, then you are using all the cores completely. In that case, single-core IPC is really not that important. Even a slow 2.7GHz 32C/64T setup should provide better performance in highly parallel environments than a 4.0GHz OCed 16C/32T part. The only workflows that could benefit from the faster TR chip are memory-speed-conscious ones (3200MHz+ for TR vs 2666MHz for Epyc).

Mixed CPU+GPU workloads: Given the scenarios above, I don't see how a mixed workflow would be any worse, since it would benefit from both of the facts mentioned.

VIRTUALIZATION
Nothing special to mention here.

PLATFORM COST
Here the advantage is of course with the consumer-market Threadripper part. But my personal opinion is that for these workflows the prices aren't that big a deal. The CPU prices themselves pale in comparison to the possible storage, GPU and memory costs...

FUTURE-PROOFING
I have intentionally mentioned 2x Epyc 7281 16C/32T parts, because there is a clear upgrade path toward 2x Epyc 7601 32C/64T parts, for a mind-boggling 64C/128T system, which at present would require paying almost 7x the price premium for about 2x the performance.

What is your opinion, guys (girls included in guys, for PC warriors)?
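The all-core argument in the CPU-workloads section above can be sanity-checked with the crude cores times clock proxy; this deliberately ignores IPC, memory speed and dual-socket scaling losses, so treat it as a rough upper-bound sketch rather than a benchmark:

```python
# Crude aggregate-throughput proxy: cores x all-core clock (GHz).
# Deliberately ignores IPC, memory speed and NUMA scaling losses.

tr_1950x = 16 * 3.9        # Threadripper 1950X, OCed all-core
dual_epyc_7281 = 32 * 2.7  # 2x Epyc 7281 at base all-core clock

print(f"1950X:        {tr_1950x:.1f} core-GHz")
print(f"2x Epyc 7281: {dual_epyc_7281:.1f} core-GHz")
print(f"Epyc advantage: {dual_epyc_7281 / tr_1950x - 1:.0%}")
```

So by this proxy the dual-Epyc setup carries roughly a 38% aggregate advantage (86.4 vs 62.4 core-GHz), which is the intuition behind "slow 32C beats fast 16C in highly parallel work"; real-world scaling across two sockets would eat into that figure.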
- 8 replies
-
- amd
- threadripper
- (and 4 more)
-
Motherboard maker Gigabyte has just announced their server motherboard for AMD Epyc processors. Called the Gigabyte MZ31-AR0, the board uses a single-socket configuration with 16 DDR4 RAM slots. It has 5 physical x16 slots and two physical x8 slots, all running at full speed except for one of the x16 slots, which operates in x8 mode only, and takes a 24-pin plus dual 8-pin for power. On the bottom right there is an M.2 slot and 4x SlimSAS for 16 SATA ports, where a single SlimSAS connector can run up to 4 SATA drives. On the I/O, it has dual USB 3.0 and 2.0, an RJ-45 gigabit management LAN port, serial, VGA, an ID button, and dual SFP+ 10Gb/s ports using a Broadcom BCM57810S controller. No price or release date info yet.

http://b2b.gigabyte.com/Server-Motherboard/MZ31-AR0-rev-10#ov
-
AMD Epyc 7000 series CPUs leaked! These pics will do the talking. https://videocardz.com/70266/amd-epyc-7000-series-specs-and-performance-leaked
-
I was curious about people's thoughts on the new platform, as I'm most likely going to be buying into it (most likely the 7820X). I currently have an i7 6700K, and although it's great at gaming, I'm really looking for a more powerful CPU workstation-wise. As of now I don't want to pre-order, as it's not a safe bet; how long should I wait before buying? Any other thoughts are interesting to talk about as well.