Adrien Angeldust

Member
  • Content Count

    3
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About Adrien Angeldust

  • Title
    Newbie

Profile Information

  • Gender
    Male
  • Occupation
    Java Programmer

System

  • CPU
    Intel i7 8700K
  • RAM
    32GB DDR4 3200MHz G.Skill Trident-Z CL14
  • GPU
    Asus Strix 1080Ti OC
  • Case
    Fractal Design Meshify S2 Tempered Glass
  • Storage
    2x SATA3 Intel SSD 520 - 60GB (RAID0), 2x SATA3 WD Black 1TB (RAID0), 2x Kingston Fury X 240GB SATA3, 1x M.2 Kingston A2000 - 1TB
  • PSU
    Seasonic G-550 550W Gold
  • Display(s)
    ASUS PG27VQ 27" TN 2560x1440, 165Hz
  • Cooling
    NZXT Kraken X62
  • Keyboard
    Razer Blackwidow Elite + Razer Tartarus V2
  • Mouse
    Razer Naga
  • Operating System
    Windows 10 Pro, Linux Mint
  • Laptop
    MacBook Pro 15" Mid 2015
  • Phone
    Google Pixel 3a XL

  1. However, when I think about it more: if corrections are required to reach higher frequencies, then it can't be simple detection alone but correction as well, otherwise we would see no performance difference between higher and lower frequencies, and without correction we would see tons of crashes. But surely systems do crash if you push an OC high enough. I'm no expert either, just wondering how it works.
  2. I see, so basically in the case of non-ECC RAM it's not error correction that's happening there, but more like error detection. However, can't the operation just be re-run on the original data when an error is detected, similar to what happens with graphics cards dropping frames? Can't the higher bandwidth outweigh the cost of those re-runs, if dropping ECC effectively lets you nearly double the bandwidth?
  3. Hi everyone,

     Recently an interesting thing came to mind after watching one of Jayztwocents' RTX 3080 OC videos. In it he mentioned that modern memory often has auto-correction features, so we no longer see on-screen artifacts as often as we used to; instead, raising the frequency increases the number of corrections needed, which lowers overall fps and performance. I know the RTX 3080 uses somewhat different memory than DDR4, but some basic auto-correction is already in place in DDR4 to reach its higher frequencies.

     So I was wondering whether DDR4, or the upcoming DDR5, already has some level of auto-correcting features in order to achieve higher frequencies. If so, doesn't that make special ECC memory for server use kind of obsolete and pointless to pay extra for? ECC memory usually runs at lower frequencies than non-ECC, but if there is already some error correction in non-ECC modules, why have ECC at all? I understand that in the past it was helpful because those corrections were necessary, but what about today, with modern chips, when memory has to have them to reach high frequencies anyway? Has anyone ever actually needed DDR4 ECC in servers? Or is it nowadays kept around out of habit, because we needed it with older memory standards?
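The detect-and-retry idea raised above can be sketched with a simple even-parity check: the receiver can tell that a single bit flipped, but not which one, so the only recovery is to re-run the transfer. This is a toy model with made-up function names, not how real DRAM or GDDR controllers work (real links use CRCs and hardware replay), but it shows why detection alone still costs performance: every detected error burns a retry.

```python
import random

def with_parity(bits):
    """Append one even-parity bit so any single-bit flip is detectable."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Even parity: a valid word has an even number of 1s overall."""
    return sum(word) % 2 == 0

def flaky_read(word, error_rate):
    """Simulate a transfer that occasionally flips one bit."""
    out = word[:]
    if random.random() < error_rate:
        out[random.randrange(len(out))] ^= 1
    return out

def read_with_retry(word, error_rate, max_retries=10):
    """Detect-only scheme: on a failed parity check, just re-run the read."""
    for attempt in range(max_retries):
        got = flaky_read(word, error_rate)
        if parity_ok(got):
            return got, attempt
    raise RuntimeError("too many retries")

random.seed(0)
word = with_parity([1, 0, 1, 1, 0, 0, 1, 0])
got, retries = read_with_retry(word, error_rate=0.3)
```

With single-bit flips only, any corrupted read fails the parity check, so a read that passes is guaranteed intact; the price is that `retries` grows with the error rate, which is the fps drop described in the OC video.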
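For contrast, a correcting code can fix the flipped bit in place, with no re-run at all. Real ECC DIMMs use a SECDED code over 64-bit words (8 extra check bits), but the classic Hamming(7,4) code shows the mechanism on a toy scale; the function names here are illustrative, not from any real library.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits. The decoder computes
# a "syndrome" that is either 0 (no error) or the 1-based position of the
# single flipped bit, which it can then simply flip back.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (layout: p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2  # covers codeword positions 1, 3, 5, 7
    p2 = (d1 + d3 + d4) % 2  # covers positions 2, 3, 6, 7
    p3 = (d2 + d3 + d4) % 2  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(cw):
    """Return (corrected codeword, syndrome); syndrome 0 means no error."""
    c = list(cw)
    s1 = (c[0] + c[2] + c[4] + c[6]) % 2
    s2 = (c[1] + c[2] + c[5] + c[6]) % 2
    s3 = (c[3] + c[4] + c[5] + c[6]) % 2
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back -- no retry needed
    return c, syndrome

cw = hamming74_encode([1, 0, 1, 1])  # -> [0, 1, 1, 0, 0, 1, 1]
bad = list(cw)
bad[2] ^= 1                          # corrupt one bit "in transit"
fixed, syndrome = hamming74_correct(bad)
```

Here `fixed` equals the original codeword and `syndrome` points at the corrupted position, so the data is repaired without retransmission. The cost is the extra check bits and logic (server ECC DIMMs carry a wider 72-bit data path for this), which is part of why ECC modules tend to run at more conservative frequencies.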