
Camofelix

Member
  • Posts

    114
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Occupation
    I do the thing with the 1 and the 0
  • Member title
    Junior Member

System

  • CPU
Personal Machine: Dual Skylake Xeons (Amazon "e-waste")
  • RAM
    never enough
  • Operating System
GNU/Linux with a custom kernel, a metric [redacted] of VMs, etc.
  • Laptop
    MacBook Pro

Recent Profile Visitors

1,304 profile views
1. I believe I had E-cores disabled, the cores set to a dynamic clock of 4.8 GHz, and the ring bus at 4500 MHz. Power limits were set to "water cooled" mode. I've used XMP in the past but did not have it enabled.
2. @Guest 5150 Just in case, I tried both sticks separately in each of the four slots; results were as above. Praying it isn't a dead board out of nowhere.
3. Hi ladies and gents, looking for some advice after my system died on me yesterday.
     Issue: When booting, the EZ Debug LED for the CPU flashes red for a moment; the case fans and PSU fan spin up for a split second before shutting down, briefly start a second time, then shut down completely. I've ruled out the PSU, CMOS/BIOS revision/configuration, pulling all devices, and more (see steps attempted below). The system had been fine since November 2021.
     Relevant specs:
     • i7-12700K
     • MSI Z690 Pro A DDR4
     • 2x16GB G.Skill RipJaws 3200 16-18-18-38
     Beyond that I have multiple disk drives, an RX 5600 XT, etc., but those have all been removed to troubleshoot this issue.
     Steps attempted:
     • Tried a different, known-good PSU
     • Cleared CMOS, first with the jumper method, then with the CMOS battery method
     • Used the motherboard's MSI BIOS Flash from USB function with BIOS revisions 10, 11, 12 and 13
     • Disconnected all external PCIe cards and peripherals, leaving only CPU_power1, the case fans, and the 24-pin connected, and removed things like RAM
     Any suggestions?
4. After further review, and going down the rabbit hole, I ended up going with the Fractal Meshify 2, and it's been quite good so far. Thanks again for taking the time, gentlemen.
5. Yeah, time permitting I'm hoping to look into it after the kernel 5.17 merge window. It's a *somewhat* niche case, but malloc mixed with bit-shifting for exponential trees isn't completely uncommon in HPC, so this could be sitting there sucking up cycles in supercomputers as I type this. Thankfully those environments tend to use the Cray, Intel, or custom compilers, which are immune to this.
6. Not quite sure what you mean. I went further down the rabbit hole, and it's a bug in how glibc (the GNU C library) and GCC do malloc. Replacing the memory-allocation subroutines with TCMalloc, jemalloc, or Hoard all yielded *massive* uplifts in performance, leading GCC 12 to surpass ICC and Clang 11 (when those two are using glibc malloc). I haven't had the time to integrate TCMalloc etc. with oneAPI yet, but hope to do so soon. A minimal sketch of the allocation pattern involved follows.
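     As a rough illustration (my own sketch, not the actual benchmark from the thread; loop counts and the allocator paths are placeholders), the pattern being timed looks something like this, built once against glibc malloc and once against an alternative allocator:

     ```c
     /*
      * bench.c — malloc mixed with bit-shifted (exponentially growing)
      * sizes, the pattern described above.
      *
      * Build/run ideas (allocator paths are illustrative):
      *   gcc -O2 bench.c -o bench                      # glibc malloc
      *   gcc -O2 bench.c -o bench -ltcmalloc           # TCMalloc (gperftools)
      *   LD_PRELOAD=/usr/lib/libjemalloc.so ./bench    # jemalloc, no relink
      */
     #include <stdio.h>
     #include <stdlib.h>
     #include <time.h>

     int main(void) {
         struct timespec t0, t1;
         volatile char sink = 0;  /* keep the compiler from eliding the work */

         clock_gettime(CLOCK_MONOTONIC, &t0);
         for (int rep = 0; rep < 100000; rep++) {
             /* "Exponential tree" levels: each level doubles the node size. */
             for (int lvl = 4; lvl < 20; lvl++) {
                 char *p = malloc((size_t)1 << lvl);
                 if (!p) return 1;
                 p[0] = (char)lvl;
                 sink += p[0];
                 free(p);
             }
         }
         clock_gettime(CLOCK_MONOTONIC, &t1);

         double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
         printf("alloc/free loop: %.3f s (sink=%d)\n", s, (int)sink);
         return 0;
     }
     ```

     Same binary logic, different allocator underneath: any gap in wall time between the builds is purely the allocator's doing.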
7. Yes, but not in a while; think original DDR/DDR2 era and prior. I also had cards that would take memory on board as a high-speed cache. You'd load your files into it on boot like a RAM disk, but they'd use the PCI (not PCIe) bus instead of actual system RAM. It was a way of doing SSDs before SSDs really existed in the consumer space. Conceptually similar to a very dumb, very very very expensive Optane/SSD cache for an HDD.
8. Addressing the size: there's no reason to minimize space on this sort of prototype product. From the PCB, it seems to be revision R1.00T. More relevant is that the larger size makes it easier to attach oscilloscope probes to the pads exposed on the PCB near the ROG logo, making it much easier to debug any issues.
     As for overbuilding motherboards: as you extend traces and then jump from one discrete material to another (from the main board to the DIMM's pins, for example), you get signal bounce-back, creating noise on the lines, among other issues. LTT's cable-testing video illustrates a simplified version of this problem. If the initial signal from an overbuilt board is cleaner than that from a lower-end board, the odds of success with the higher-end board, while not guaranteed, are higher. It's the same idea as when an OC motherboard has only one memory slot per channel, to avoid bounce-back*.
     *Technically, reflection issues with two DIMMs per channel depend on T topology vs. daisy-chain topology, but that's beyond the scope of this post.
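     For a rough sense of the bounce-back above (my own addition, just the textbook transmission-line result, with no values specific to these boards), the fraction of a signal reflected at an impedance discontinuity is:

     ```latex
     % Reflection coefficient at an impedance discontinuity:
     % Z_0 is the trace's characteristic impedance, Z_L the impedance
     % seen at the transition (connector, DIMM pin, stub, ...).
     \Gamma = \frac{Z_L - Z_0}{Z_L + Z_0}
     ```

     A perfectly matched transition (Z_L = Z_0) gives Γ = 0, i.e. no reflection; every mismatch along the path bounces part of the edge back onto the line, which is exactly the noise an overbuilt, better-matched board keeps smaller.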
9. Summary
     Floating around online in a few small circles of beta/early testers are what look like prototype samples of ASUS ROG-branded DDR4-to-DDR5 adapter risers with integrated power and logic circuitry. These custom adapters with integrated components are needed because of the difference in architecture on the DIMMs themselves. Thankfully, Alder Lake sports a memory controller capable of both DDR4 and DDR5, and if validated on the ROG boards, this could lead to a transitional adapter for early adopters who are waiting for DDR5 to mature but already own top-of-the-line DDR4.
     Quotes
     My thoughts
     It's an interesting case we find ourselves in. This sort of device would only be possible if ASUS had either foreseen this issue, overbuilt their trace signalling beyond even normal spec, or both. What will be interesting is how, with this adapter, the highest-end ASUS boards, in combination with the highest-end Alder Lake processors, will be able to OC the current top-of-the-line DDR4 DIMMs for potential OC world records.
     EDIT: For the sake of clarification, I'd like to highlight that the board in the video is a prototype that is intentionally oversized for easier debugging with tools such as an oscilloscope. If this product were to come to market, I would be very surprised to see it be even 1/3 as tall as the prototype shown in the video.
     Sources
10. Turns out it isn't Alder Lake at all. It's pervasive as far back as Nehalem on all GCC versions. There didn't seem to be a lot of interest on the LTT forums, so I stopped updating this thread, but the main L1T thread has much more info: I've been tracking this on the Level1Techs forums (https://forum.level1techs.com/t/wip-testing-update-its-not-just-alder-lake-it-goes-back-to-nehalem-gcc-50-performance-regressions-vs-clang-and-intel-compilers-in-specific-workloads-across-all-opt-settings/179712/10). I've dug through a lot of the assembly, but haven't gone *all the way down* the rabbit hole, as it were (if you count 100+ different runs as not going all the way down, I guess). It seems GCC is trying to pre-cache instructions a lot, almost N64 instruction-cache style, using way more registers at times and wasting cycles. The sketch below shows the sort of side-by-side comparison I mean.
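      To see the kind of codegen difference described above for yourself (my own minimal illustration, not the thread's actual test case), emit the assembly from both compilers for the same allocation-heavy function and diff it:

      ```c
      /*
       * shift.c — a tiny stand-in for the malloc + bit-shift pattern.
       * Compare the two compilers' output (flags are just one example):
       *
       *   gcc   -O3 -S -masm=intel shift.c -o shift.gcc.s
       *   clang -O3 -S -masm=intel shift.c -o shift.clang.s
       *   diff shift.gcc.s shift.clang.s
       */
      #include <stdlib.h>
      #include <string.h>

      /* Allocate an exponentially sized node and zero it. */
      void *grow_level(unsigned level) {
          size_t n = (size_t)1 << level;
          void *p = malloc(n);
          if (p)
              memset(p, 0, n);
          return p;
      }
      ```

      Counting the distinct registers touched in each .s file is a crude but quick way to spot the "uses way more registers" behaviour, and `perf stat` on a loop over grow_level() shows whether it costs real cycles.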
11. Gotcha! My current thinking: go with the 5kD Silent, for the point mentioned above about dampening spindle noise; mount the AIO to the roof of the case; grab a pair of the brackets that you linked above and mount those in the open space on the floor of the case near the PSU cover; then route cables from there.