
JinnGonQui

Member
  • Posts

    20
  • Joined

  • Last visited

Awards

This user doesn't have any awards


JinnGonQui's Achievements

  1. Nvidia Control Panel is not allowing me to enable all 3 monitors at the same time ... only two ... anyone else experience this, or have thoughts on fixing it? Google hasn't been any help.
  2. I have a new problem with this. I downloaded a highly detailed 3D model of the Neo G7 and laid three of them out in a "perfect" curve, in order to get preliminary measurements for the combined stand footprint (width and depth) and the off-desk spatial consumption. I took it with a grain of salt after comparing the 3D model's extents against the spec-page dimensions. Turns out they consume more width than I had anticipated, because the "surround" was too surrounding ... I've found a sweet spot ... however, even the stand footprints make the rest of the desk nearly worthless. Contemplating a triple-arm contraption to get the surface under the monitors clear ... the depth delta of the side monitors is significantly complicating my search for an armature (rough geometry sketched below, after this list). Recommendations?
  3. Calibrating / Community Profiles will be new for me; excited to learn more about it ... in the past I just tested stuff on as many peer and fr'amily monitors and TVs as I could ... really inefficient, and not particularly effective ... always too narrow a sample, no matter how demographically diverse -- mostly because it was so geographically narrow.
  4. Yes, gaming and in-engine dev/testing on the middle .... bbbbuuuttt .... so many functions frequently live on the side monitors, intentionally out of the main view but directly impacting it. For example: in Maya, Max, UE, etc. there are material node graphs, shader tweaking, texture tweaking, map refining, etc. ... if those are way off, that's pretty bad ... I had to replace a pair of LCDs 6 years ago because the color went silly on just one (bought at the same time, from the same shelf, right next to each other -- not that that really guarantees parity -- the concept of abstract expectation).
  5. Budget (including currency): already have the system up and running -- willing to put another $2k USD into it if needed. Country: USA. Games, programs or workloads that it will be used for: 3D: architectural visualization renderings and animations, photo-real free-roam real-time truly interactive VR, CADD, game art, game design; games of all types (for research). Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc.): EXISTING: Windows 11 Pro 64-bit, i9-12900K, 3090 Ti; via DisplayPort 1.4, targeting 4K, HDR, 10-bit, 120 Hz on INCOMING triple Odyssey Neo G7 32" monitors. Question: settings, etc., to maximize "quality"? (I prefer beyond-dazzling image/video quality, which includes silky-smooth motion -- the trade-off for me leans to "quality", but not 1995 frame rates.) (Rough DisplayPort bandwidth math sketched below, after this list.)
  6. That clearly does not mean that SLI is dead. The article from last week makes it clear that DX12 and Vulkan offer the same advantages without the need for driver profiles. A tech does not need to be mainstream to be influential; in fact, what influences the future rarely is mainstream.
  7. Thanks for sharing! The article is very clear that SLI is NOT dead -- just NVIDIA's stranglehold on maintaining the driver profiles. They are very clear that multi-GPU continues to be valuable to game studios and non-game uses ... and that it is integral to the DX12 and Vulkan foundations; developers just have to leverage it, and can do so without NVIDIA ... which means there's enough evolved interest among studios and content-software developers that it was implemented agnostically at a deep level ... how that relates to the NVLink connector, I remain very interested.
  8. I have an Asus X99-E WS with 4x CMD32GX4M4A2666C15. I game some, and this is more than enough for that; but I primarily do architectural visualizations in UE4, and some with V-Ray and Corona ... I desperately need to increase my RAM (32GB was great 5 years ago; now it is constantly a problem). Do I need to tightly match the specs of my CMD32GX4M4A2666C15 sticks, or can I find Dominator Platinums that are, say, C16? I suspect they still need to be 2666 and XMP 2.0 compatible ... not sure.
  9. 100% clear from the horse's mouth on the hardware capability (https://www.nvidia.com/en-us/deep-learning-ai/products/titan-rtx/) ... looking for confirmation on the engine side of things.
  10. Quad Gigabyte AIO 2080 Ti (GV-N208TAORUSX-W-11GC), dual NVLink == quad-ish performance, double-ish VRAM??? ... for arch viz in Unreal Engine 4 (Unreal Studio / Datasmith). Following the thought process: if a piece of software supports NVLink AND it supports memory pooling (which happens only across the NVLink bridge), the benefit to arch viz would be maximizing VRAM for GPU Lightmass baking, which is massively VRAM hungry, while also "hopefully" taking advantage of all available GPUs in the box (rough VRAM arithmetic sketched below, after this list). So the question is: does Unreal Engine 4 (Unreal Studio) support memory pooling via NVLink? References: https://www.pugetsystems.com/labs/articles/NVLink-on-NVIDIA-GeForce-RTX-2080-2080-Ti-in-Windows-10-1253/#DoGeForceRTXCardsSupportMemoryPoolinginWindows? https://forums.unrealengine.com/development-discussion/rendering/1460002-luoshuang-s-gpulightmass?p=1600770#post1600770 https://www.nvidia.com/en-us/titan/titan-rtx/#textcomponentenhance593c4762_06f5_4894_a6a1_3f7a26e70a60 https://www.newegg.com/gigabyte-geforce-rtx-2080-ti-gv-n208taorusx-w-11gc/p/N82E16814932075?item=N82E16814932075&source=region&nm_mc=knc-googleadwords-pc&cm_mmc=knc-googleadwords-pc-_-pla-_-video+card+-+nvidia-_-N82E16814932075&gclid=CjwKCAjw7uPqBRBlEiwAYDsr14lZlurSCWoQyoG7OuorWkFVjZss_6vKwkbIfLwyoedfmqKDZZu5HxoCQLUQAvD_BwE&gclsrc=aw.ds
  11. Nerd moment: is the reason we haven't seen workstation power supplies beyond 1600W that it's the maximum wattage a standard 15A circuit on a 120VAC system can supply without tripping, if the power factor / efficiency of the draw is 90%+? (Most high-end PSUs rated that high perform at "80+ Titanium": 90%+.) A = W / (PF × V), so W_max = A_max × PF × V (rough numbers worked below, after this list). https://www.rapidtables.com/calc/electric/Amp_to_Watt_Calculator.html https://en.wikipedia.org/wiki/80_Plus https://www.newegg.com/p/pl?N=100007657 600014119 600014130 600014131 600372164 600372167 600372171 (none of these are over 1600W for 120VAC)
  12. What would that final custom-loop solution cost ... internals, cooling, labor, etc.?
  13. I have a 1080 connected to a pair of P2715Q displays. The displays are advertised and spec'd as 10-bit ... but the 1080 is not capable of driving them at 10-bit. I have a similar question regarding the Titan RTX: can it push full 4K over DP to two displays at 10-bit?
  14. Minor suggestion: updating the URLs in the profile section of FloatPlane.com does not give warning feedback if an invalid URL is entered. Otherwise, I really like the clean interface and the simplicity of getting around.
  15. Bouncing off of Linus and Alex's recent stellar review of the GX701, how does that compare in across-the-board performance to the G703GX-XS98K [90NR01B1-M01530] ... and how do they compare in terms of physical user interface? Size and weight are obvious distinctions; one appears to be rather portable, the other looks particularly tasty as a portable desktop (not a replacement). {My consideration perspective being: some gaming, lots of architectural visualization modeling and texturing, on-the-go last-minute-tweak arch viz rendering, and frequent architectural VR.}
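A minimal geometry sketch for the footprint question in item 2, assuming a 32" 16:9 panel that is roughly 28" wide including bezel and a hypothetical 30° toe-in for the side monitors; the real chassis width, curvature, and stand dimensions should come from Samsung's spec page, not from these placeholders.

```python
import math

# Rough desk-footprint estimate for three angled monitors (item 2).
# Placeholder numbers -- not Samsung's published Neo G7 dimensions.
# The curved panel is treated as flat, which is good enough for a first pass.
panel_width_in = 28.0   # approx. width of a 32" 16:9 panel, bezel included
toe_in_deg = 30.0       # hypothetical angle of each side monitor vs. the center

theta = math.radians(toe_in_deg)

# Total width: the center panel plus the horizontal projection of each side panel.
total_width = panel_width_in + 2 * panel_width_in * math.cos(theta)

# Depth delta: how far the outer edge of a side panel swings toward the user,
# which is what makes the search for a triple arm awkward.
depth_delta = panel_width_in * math.sin(theta)

print(f"approx. total width     : {total_width:.1f} in")
print(f"approx. extra depth/side: {depth_delta:.1f} in")
```

With these placeholder numbers the spread is roughly 76 inches wide, with each side panel reaching about 14 inches further forward than the center one; plugging in the spec-page dimensions and the actual toe-in gives the numbers that matter when shopping for an arm.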
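For item 5, a back-of-the-envelope check of the DisplayPort 1.4 link budget at 4K / 120 Hz / 10-bit. The 10% blanking overhead is an assumption (exact timings come from the monitor's EDID); if the required rate lands above the DP 1.4 payload, the mode is relying on Display Stream Compression.

```python
# DisplayPort 1.4 sanity check for 4K 120 Hz 10-bit RGB (item 5).
# Assumes 3 components per pixel and a rough 10% blanking allowance.
h, v, refresh_hz = 3840, 2160, 120
bits_per_component, components = 10, 3
blanking_overhead = 1.10                 # assumption, not a measured timing

pixel_rate = h * v * refresh_hz * blanking_overhead
required_gbps = pixel_rate * components * bits_per_component / 1e9

dp14_payload_gbps = 32.4 * 0.8           # HBR3 raw rate minus 8b/10b encoding

print(f"required   : {required_gbps:.1f} Gbps")
print(f"DP 1.4 max : {dp14_payload_gbps:.1f} Gbps")
print("needs DSC" if required_gbps > dp14_payload_gbps else "fits uncompressed")
```

That comes out to roughly 33 Gbps needed against about 26 Gbps available, which is why 4K high-refresh 10-bit over DP 1.4 is generally a DSC mode rather than an uncompressed one.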
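For item 10, a tiny VRAM arithmetic sketch under one assumption worth stating loudly: a GeForce-class NVLink bridge joins exactly two cards, so four 2080 Tis with two bridges form two independent pairs, and pooling, if the renderer supports it at all, only happens inside a pair.

```python
# VRAM arithmetic for quad 2080 Ti with two NVLink bridges (item 10).
# Assumption: each bridge links exactly two cards -> two independent pairs.
vram_per_card_gb = 11
cards = 4

no_pooling_gb = vram_per_card_gb            # largest bake any single GPU can hold
pooled_pair_gb = 2 * vram_per_card_gb       # only if the application pools across NVLink
raw_total_gb = cards * vram_per_card_gb     # never one single addressable pool here

print(f"without pooling : {no_pooling_gb} GB per GPU")
print(f"pooled per pair : {pooled_pair_gb} GB (application-dependent)")
print(f"raw total       : {raw_total_gb} GB (not one pool with this topology)")
```

So even in the best case this setup behaves like two 22 GB pairs rather than a single 44 GB pool, and whether GPU Lightmass actually pools across the bridge is the question the post is asking.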
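For item 11, the same relation the post writes out, worked with round numbers; the 90% figure is carried over from the post's 80+ Titanium assumption.

```python
# Wall-outlet ceiling for a PSU on a 120 VAC / 15 A circuit (item 11).
# Relation from the post: A = W / (PF * V)  <=>  W = A * PF * V,
# with PF standing in for the ~90% figure cited for 80+ Titanium-class units.
volts = 120.0
breaker_amps = 15.0
pf_or_efficiency = 0.90          # assumption carried over from the post

max_input_w = breaker_amps * volts              # 1800 W available at the wall
max_output_w = max_input_w * pf_or_efficiency   # ~1620 W deliverable by the PSU

print(f"wall limit   : {max_input_w:.0f} W")
print(f"PSU-side cap : {max_output_w:.0f} W  (right around the 1600 W ceiling)")
```

Which lines up with the observation at the end of the post that units aimed at 120 VAC circuits top out at 1600 W.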