PGRacer

Member
  • Posts

    24
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About PGRacer

  • Birthday November 29

System

  • CPU
    5950x
  • Motherboard
    ASROCK X570 Taichi
  • RAM
    32GB Corsair Vengeance 4000MHz CL16 @ 3800MHz CL14
  • GPU
    AMD HD6970 (RIP EVGA RTX3090 FTW3, waiting for 3090Ti to replace)
  • Case
    Phanteks Enthoo Primo White
  • Storage
    2x m.2 512GB
    1x SATA SSD 512GB
    1x 3TB
  • PSU
    Seasonic Prime GX-1300
  • Display(s)
    BenQ 32" 4K @ 60Hz, 2560×1600 @ 120Hz
  • Cooling
    EKWB 12/16 custom loop. 5x120mm Wide Radiators.
    Heatkiller IV Pure Copper CPU Waterblock
    5x Corsair 120mm RGB Fans (visible radiator airflow)
    2x Corsair 140mm RGB Fans (visible rear airflow)
    2x Antec 120mm White LED Fans (visible front airflow panel)
    5x Antec 120mm Black Fans (hidden behind radiators)
  • Keyboard
    Corsair K55 RGB
  • Mouse
    Corsair K55 RGB
  • Sound
    Sennheiser BT450 Headphones
  • Operating System
    Windows 10
  • Phone
    Samsung S9+ 128GB

PGRacer's Achievements

  1. I've been watching eBay UK quite closely this weekend following the crypto crash. We have reached the crossover point! On the Scan website, last I saw, 3090 FE cards were selling for ~£1500 when they had them in stock. On eBay, cards are currently selling for ~£1550. By the time you pay over 10% in eBay fees, the scalpers will be making a loss. This means the miners aren't looking to buy and the scalpers aren't looking to buy; only gamers are left. This might also encourage some miners to start selling off cards while they still have some value, which increases availability. Prices could spiral south from here as more and more people look to sell before their cards devalue completely. If you are looking to buy a card, very soon might be a good time. Anyone else seeing this in other regions?
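The break-even claim above is easy to check with a few lines of arithmetic. This is a sketch using the figures from the post (~£1500 retail, ~£1550 on eBay); the 10% fee rate also comes from the post — real eBay fee schedules vary by category, so treat all the numbers as illustrative.

```python
# Scalper break-even sketch. Fee rate and prices are the post's own
# illustrative figures, not an exact eBay fee schedule.
def net_proceeds(sale_price, fee_rate=0.10):
    """What a seller actually pockets after the marketplace takes its cut."""
    return sale_price * (1.0 - fee_rate)

retail_price = 1500.0   # ~GBP, Scan 3090 FE when in stock (from the post)
ebay_price = 1550.0     # ~GBP, current eBay sold prices (from the post)

# Negative margin means scalping no longer pays at these prices.
scalper_margin = net_proceeds(ebay_price) - retail_price
```

At these numbers the scalper nets ~£1395 on a card that cost ~£1500 at retail, i.e. roughly a £105 loss per card, which is the crossover the post describes.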
  2. Thermal Grizzly Kryonaut Extreme claims 14.2 W/(m·K). Almost all the others seem to be around 12.5 W/(m·K). Is there a noticeable difference in real-world temps? I'm looking for the absolute best, if it exists. It's a custom rig that gets taken apart for cleaning twice a year, so long life is not an issue.
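A rough way to judge whether 12.5 vs 14.2 W/(m·K) matters is 1-D Fourier conduction across the paste layer: ΔT = Q·t / (k·A). The sketch below uses assumed illustrative values (200 W load, 50 µm bond line, 30×30 mm contact patch — none of these come from the post), so only the relative comparison is meaningful.

```python
def tim_delta_t(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature drop across the TIM layer via 1-D Fourier conduction."""
    return power_w * thickness_m / (conductivity_w_mk * area_m2)

# Assumed illustrative values: 200 W load, 50 micron bond line, 30x30 mm patch.
power, thickness, area = 200.0, 50e-6, 30e-3 * 30e-3

dt_standard = tim_delta_t(power, thickness, 12.5, area)   # ~0.89 K
dt_extreme = tim_delta_t(power, thickness, 14.2, area)    # ~0.78 K
```

Under these assumptions the difference is about 0.1 K, which suggests the conductivity gap between the top pastes is below the noise floor of real-world testing; mounting pressure and bond-line thickness dominate.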
  3. I want to keep things valid for RMA so that's not really an option.
  4. For GPU & CPU, which is the best performing? I'm too clumsy for liquid metal, so non-conductive only please.
  5. Hi everyone, I'm a complete noob with mining so please be gentle. I have two rigs: a media rig with a GTX 1060 6GB, and my main rig with 2x GTX 1080 Ti. The media rig is a standard air-cooled setup, maybe a tiny overclock on the GPU, but nothing to write home about. The main rig is a custom-loop watercooled setup; both cards are happy at 2GHz all day long without ever breaking 60°C, and 55°C is a rarity. So I watched Linus' video and downloaded NiceHash Miner (not QuickMiner). And this is my performance (see pics)... my 1060 has a higher MH/s than the two 1080 Tis! Can anyone here point me to what to look for? Obviously something is configured wrong. According to https://miningchamp.com/gpus/272/EVGA-GeForce-GTX-1080-Ti-hashrate I should be expecting roughly double what I am getting from each card. SLI is turned off; I assume that's how they need to be? Other than that, the mining is configured pretty much how it came out of the box, though I did "update" all the algorithms.
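A quick way to frame this kind of troubleshooting is to compare each card's measured rate against a published reference and flag anything far below it. The reference numbers and the `underperforming` helper below are purely hypothetical — look up your exact card and algorithm on a site like miningchamp.com before trusting any figure.

```python
# Hypothetical reference hashrates in MH/s -- illustrative numbers only,
# not taken from the post or from any real benchmark table.
REFERENCE_MHS = {"GTX 1060 6GB": 22.0, "GTX 1080 Ti": 45.0}

def underperforming(measured_mhs, reference_mhs, tolerance=0.8):
    """Return the cards reporting less than `tolerance` of their reference rate."""
    return [card for card, rate in measured_mhs.items()
            if rate < tolerance * reference_mhs[card]]

# Example matching the symptom in the post: the 1060 looks normal while
# each 1080 Ti reports roughly half its expected rate.
measured = {"GTX 1060 6GB": 21.0, "GTX 1080 Ti": 22.0}
```

Here `underperforming(measured, REFERENCE_MHS)` singles out the 1080 Ti, which narrows the search to per-card settings (power limit, algorithm selection, drivers) rather than the rig as a whole.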
  6. I'd like to see a video explaining why, with Intel and nVidia on 12/14nm processes and AMD on 7nm, AMD isn't destroying the competition on both sides. Supposedly 7nm should use less power and create less heat, yet AMD still cannot compete at the top end with nVidia. What's the full story?
  7. It did catch on in a way, nVidia bought the tech and integrated it into their gfx cards.
  8. Oh, I am used to it; I wrote a ray tracer as a homebrew project. I worked on implementing a photon mapper, worked on adding a k-d tree for acceleration, then decided I was going to use the GPU for even better acceleration. So I went out and treated myself to a new PC to develop it on: two shiny watercooled 1080 Tis with 7000+ compute cores. Marvellous, I thought. Two weeks later nVidia announced the RTX stuff. *bangs head on table*
  9. What games do you play? What resolution do you play at? Is graphics quality or FPS more important? What budget do you have?
  10. I forgot to add, you need to match the caddy to your hard drive type. SATA is the most common, IDE is ancient and unlikely, M.2 will need a different adapter, possibly too new if your PC is 5 years old.
  11. You can use one of these https://www.amazon.co.uk/TeckNet-Docking-Including-External-Tool-Free/dp/B00IS7Y96I/ref=sr_1_3?keywords=usb+hdd+caddy&qid=1557795558&s=gateway&sr=8-3 HDD or SSD goes in the caddy, plugs in via USB. I just googled for a caddy as a reference btw, I'm not recommending that particular one. Just make sure you get one that is USB2 compatible unless you have a USB3 port on your laptop. If you do have USB3 port then try and get a USB3 caddy as it will copy the files much quicker.
  12. I have an original PhysX card, still in the cellophane in the box, never been opened. If anyone is really interested I can photograph the box :D.
  13. If that is the case then why not at least develop it to the point where it isn't a detriment to the games?
  14. Ray tracing is the end point for graphics though. It is a physical simulation of real-world physics, bouncing photons around all the geometry to calculate the lighting for whatever is visible on screen. Ray tracing isn't a new concept; it was just impossible for real-time computer graphics because the processing requirements were too high, hence the need for rasterization. Admittedly, there were some early games that did use ray tracing, but it was only for the primary colour; they didn't do any shading as such. A very simplified basic history of computer graphics for gaming [which might be the wrong way round in a few cases, but it doesn't matter to my point]...
    The first non-2D games had basic colours applied to triangles.
    Then gfx cards got a bit more powerful... so textures were applied to the triangles.
    Then gfx cards got a bit more powerful... so they started shading the triangles using diffuse lighting.
    Then gfx cards got a bit more powerful... so they added hard shadows to the diffuse lighting.
    Then gfx cards got a bit more powerful... so they added specular highlights as well.
    Then gfx cards got a bit more powerful... so the hard shadows became soft shadows.
    Then gfx cards got a bit more powerful... so reflections were added on top.
    Then gfx cards got a bit more powerful... so they added ambient occlusion.
    The reason we have all those different rasterization techniques is that each one was the closest you could get to reality with the GPU/CPU power available at the time. Once ray tracing is a possibility, there is no more accurate method to program for.
There is room for improvement, such as moving to 64-bit floating-point values to help with issues like Z-fighting, and of course the ability to process more and more polygons per second. Lighting is also only one part of the story; accurate movement physics are now done by the gfx card too, and ray tracing, or rather hardware ray-triangle intersection, also helps out with things like collision detection. Hybrid rasterization and ray tracing is the way it will happen, as there are some shortcuts you can take by combining the two which make zero visual difference, and it means gfx cards remain backward compatible with rasterized games. So yes, there definitely is a stagnation point for computer graphics: you can't get more realistic than a physics simulation for lighting.
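The ray-triangle intersection mentioned above is the primitive that RT hardware accelerates. A minimal sketch of the classic Möller–Trumbore algorithm (a standard software formulation, not what any particular GPU implements in silicon) shows what is being computed per ray, per triangle:

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test.
    Returns the distance t along the ray to the hit point, or None on a miss."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det  # distance along the ray
    return t if t > eps else None
```

Doing this billions of times per frame, against a BVH of triangles, is exactly the workload that was too expensive for real-time use before dedicated hardware.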