Remember this? Well, it's time for a revision...

Imagine an nVIDIA card with the following qualities:

  • uses either Volta or Pascal refresh die(s)
  • consists of 2 modules, with the following properties:
    • 4096 CUDA cores
    • clock speeds - core: 2.33GHz (boost 2.99GHz), memory: 2.0GHz (boost 2.6GHz)
    • 256 TMUs
    • 2048 Tensor cores
    • 12GB dedicated VRAM - either GDDR6 or HBM2
    • 4096-bit bus width
    • 10nm process
    • 240W TDP
    • dedicated ASICs for AVX and other instructions that can be 'vectorized' and offloaded from the CPU
    • options for inclusion of DLSS hardware (not required)
    • NUMA-aware, and can toggle it, depending on the workload
    • same API (call) support as the Titan RTX and Titan V
  • with a shared pool of 8GB HBM3 ECC VRAM (for tasks that benefit from being handled by both modules simultaneously)
  • linked via an individually-addressable backplane, such that:
    • each module can be addressed and accessed as individual (PCI-e) devices (passthrough)
    • or both cards can be treated as one PCI-e device, controlled by an active backplane
    • the connection is NVLink over PCI-e gen. 4
  • dedicated power conditioner/filtration for the backplane (which provides power for the modules)
  • dedicated sensors for HBM, GDDR6, and GPU die (power draw, temperature, etc.)
  • dedicated power and core control for each module (user can undervolt and OC the modules as they please)
  • 6-fan design, to cool the GPU die, power delivery circuitry, and shared VRAM
  • copper heatpipes/vapour-chambers, for enhanced cooling and thermal dissipation
  • 2x HDMI, 2x DVI-D, 2x DisplayPort, 1x Thunderbolt 3
  • Support for 2-way SLI, NVLink, and Quadro Sync
  • NVENC gen. 6 for content creators
  • dual-slot or triple-slot PCI-e AIB design, depending upon the number of modules.

I call this hypothetical device the nVIDIA GeForce GTX Titan XV. Anyone want one?
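For a quick sanity check, here's the arithmetic on the combined two-module totals (a rough sketch in Python; the per-module figures come straight from the spec list above, and the shared HBM3 pool is counted once since it sits on the backplane, not on either module):

```python
# Per-module figures from the spec list above.
MODULES = 2
cuda_cores_per_module = 4096
tensor_cores_per_module = 2048
tmus_per_module = 256
dedicated_vram_gb_per_module = 12   # GDDR6 or HBM2, per module
shared_hbm3_gb = 8                  # shared ECC pool, counted once

# Combined totals across both modules.
total_cuda_cores = MODULES * cuda_cores_per_module
total_tensor_cores = MODULES * tensor_cores_per_module
total_tmus = MODULES * tmus_per_module
total_vram_gb = MODULES * dedicated_vram_gb_per_module + shared_hbm3_gb

print(f"{total_cuda_cores} CUDA cores, {total_tensor_cores} Tensor cores, "
      f"{total_tmus} TMUs, {total_vram_gb}GB VRAM total")
```

So as a whole package that's 8192 CUDA cores, 4096 Tensor cores, 512 TMUs, and 32GB of VRAM across the two pools.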

  1. Crunchy Dragon

    I'd be down for one of those.

    My rig can handle the TDP, no problems at all.

    What do you mean by "shared VRAM", though?

  2. LukeSavenije

    I'd want one, but it would be waaaaaaaaaaaaaaay outside of my price range

  3. TopHatProductions115

    Each module is a fully-addressable GPU with its own dedicated VRAM onboard. However, there is also a shared pool of VRAM for tasks that benefit from being handled by both modules together rather than split between them. If the two modules can work out of a single memory pool for those workloads, that could improve performance.

  4. TopHatProductions115

    I wonder what would happen if I pitted the hypothetical GPUs against each other... 

  5. A Lini

    Why not Turing, and why DVI? Those clock speeds are also insane.

  6. LukeSavenije

    because imagination

  7. A Lini
  8. TopHatProductions115

    @A Lini The cards are already expensive as is, due to the specs. Turing would turn it into a pure joke with no real context. DVI-I and DVI-D were also included on some Pascal cards, so it's not that old. The clock speeds are set that high because of the implied die shrink (7nm and 10nm), which would likely improve efficiency and reduce power draw. As such, pulling off higher clocks shouldn't be as difficult (at least from what little I know). But let's subtract 25% from the clocks, for a reality check. Would that still be too much? :D
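    Spelled out, that 25% cut works out like this (plain arithmetic on the clocks from the spec list, rounded to two decimal places):

    ```python
    # Clocks from the spec list (GHz), cut by 25% as a reality check.
    clocks_ghz = {"core": 2.33, "core boost": 2.99, "memory": 2.0, "memory boost": 2.6}

    reduced = {name: round(ghz * 0.75, 2) for name, ghz in clocks_ghz.items()}
    for name, ghz in reduced.items():
        print(f"{name}: {ghz}GHz")
    # core: 1.75GHz, core boost: 2.24GHz, memory: 1.5GHz, memory boost: 1.95GHz
    ```

    That would put the core around 1.75GHz with a 2.24GHz boost, which is much closer to what shipping Pascal/Volta silicon actually does.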

  9. A Lini

    Minus 25%, the clock speeds would be perfectly reasonable with a die shrink.

  10. Skanky Sylveon

    Not 4069 CUDA cores? I am disappoint. 

  11. Skanky Sylveon

    Also, it should be called the Titan XD.

  12. TopHatProductions115

    Time for a scarier idea - imagine this with Ampere execution units...

    :P 
