
Which GPU you have for video rendering doesn't matter as long as you have one

Master Disaster
1 hour ago, DocSwag said:

There's a few things with this that you have to take into account 

-This was done in Premiere. If you used something else (FCP, Sony Vegas, etc.) you could potentially see different results

-This was ONLY an encoding test. Jay didn't test ANYTHING else. So we have no idea what it could look like for other things.

Vegas hasn't updated GPU acceleration support for GTX cards for a LONG time now. Anything 600 series and up isn't fully supported from what I understand.


4 minutes ago, crystal6tak said:

Vegas hasn't updated GPU acceleration support for GTX cards for a LONG time now. Anything 600 series and up isn't fully supported from what I understand.

There's DaVinci Resolve or whatever it's called tho :P 

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: HyperX Fury DDR4 1x8GB 2133 MHz, Storage: PNY CS1311 120GB SSD + two Seagate 4TB HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


Misleading. It depends on the software being used, and only testing encoding is also misleading. That's one task out of many video rendering tasks, which even include timeline scrubbing.

CPU: AMD 7800X3D | GPU: AMD 7900 XTX


6 hours ago, huilun02 said:

And when you're not rendering in Adobe?

This is an important point: from what I've seen there's a wide range of inconsistencies among different editing software. This basically means replacing "Video Editing Workstation" with "Premiere Workstation", "Vegas Workstation", "Final Cut Workstation", "Hitfilm Workstation", etc.

 

Basically you can optimize the build for the specific software you'll be using. Sounds like a chore to me, but when you start looking into professional software in general this is usually the way it goes: there are similar decisions to be made when building servers, for example, that are task-specific, software-specific, even software-version-specific.

-------

Current Rig

-------


5 hours ago, raphidy said:

IIRC, only some effects in Adobe software are accelerated by GPU computing. If you use Sony Vegas before version 13, it can be CUDA accelerated with Kepler cards; newer versions can only use OpenCL (for processing only, not rendering), so AMD would have an edge in Vegas nowadays.

Vega dominates in Vegas ba dun....

 

I'll see myself out...

 

It's funny to me that he was like "wth" and never tested other software. Yeah, Hollywood uses these, but they also use programs like Avid for editing.

The Vinyl Decal guy.

Celestial-Uprising  A Work In-Progress


I think we should specify this applies to Premiere - this doesn't mean all editing programs work in the same way. On top of that, Premiere may behave differently depending on what effects you use and other things. From the looks of it, it seems all the GPUs he tested finish their work way before the CPU does, so you get the same result regardless. Perhaps with more (or faster) cores the CPU would finish its workload first and you'd actually end up seeing a difference between the GPUs.
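To make that concrete, here's a minimal back-of-the-envelope sketch (the per-frame times below are assumed numbers for illustration, not anything measured in the video): whichever stage is slower per frame sets the export time, so as long as every GPU stays under the CPU's per-frame time, they all finish in the same wall-clock time.

def export_time_s(frames, cpu_ms_per_frame, gpu_ms_per_frame):
    # The slower stage of the pipeline sets the pace for every frame.
    bottleneck_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return frames * bottleneck_ms / 1000

frames = 9000  # hypothetical: 5 minutes of 30 fps footage

# Assume the CPU needs 40 ms per frame; try three very different GPUs.
for gpu_ms in (5, 10, 20):
    t = export_time_s(frames, cpu_ms_per_frame=40, gpu_ms_per_frame=gpu_ms)
    print(f"GPU at {gpu_ms:2d} ms/frame -> export takes {t:.0f} s")

# All three print 360 s: any GPU that stays under 40 ms/frame gives the
# same result, because the CPU is the one setting the pace.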

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


40 minutes ago, crystal6tak said:

Vegas hasn't updated GPU acceleration support for GTX cards for a LONG time now. Anything 600 series and up isn't fully supported from what I understand.

Vegas will still leverage the GPU for certain effects, but for the most part it is all on the CPU.

 

If I remember correctly, Premiere uses the GPU in a weird way where you don't see a difference in render times but rather in scrubbing, preview windows, and special effects.

Intel Xeon 1650 V0 (4.4GHz @1.4V), ASRock X79 Extreme6, 32GB of HyperX 1866, Sapphire Nitro+ 5700XT, Silverstone Redline (black) RL05BB-W, Crucial MX500 500GB SSD, TeamGroup GX2 512GB SSD, WD AV-25 1TB 2.5" HDD with generic Chinese 120GB SSD as cache, x2 Seagate 2TB SSHD(RAID 0) with generic Chinese 240GB SSD as cache, SeaSonic Focus Plus Gold 850, x2 Acer H236HL, Acer V277U be quiet! Dark Rock Pro 4, Logitech K120, Tecknet "Gaming" mouse, Creative Inspire T2900, HyperX Cloud Flight Wireless headset, Windows 10 Pro 64 bit

While I thought I'd see more of a difference, like with RX Vega, it's still surprising. Also, definitely more tests are needed, 'cause the bottleneck is there somewhere. Adobe software, eh.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


No point reading the topic when the title contains the spoiler/conclusion...

Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 


Yea... Adobe and Vegas supposedly stopped supporting GPUs after Kepler. The CUDA cores were vastly different. I used to want the 780 just because of that, but then... I also play games. After Effects used to use CUDA acceleration too, but people were complaining their brand new GPUs weren't doing anything and it was a big mess. This was right when people wanted to use the 980 or 980 Ti or even 970, but the CUDA cores weren't showing up as usable devices. Oh well... I know it barely does anything in Vegas 13, and now Sony has sold it to another company. Sony Vegas as far as we know it is dead. Now... it's just Vegas... What a shame.


You have to start by asking yourself why a GPU helps with this task in the first place. That will help you think about whether there is an upper bound to how much a faster GPU can help, and whether this bound could depend on the software used, or the particular job.
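One way to reason about that upper bound: if only a fraction of the export work is GPU-accelerated at all, Amdahl's law caps the overall gain no matter how fast the card is. A minimal sketch, with the GPU-accelerated fraction picked purely for illustration:

def overall_speedup(gpu_fraction, gpu_speedup):
    # Amdahl's law: only the GPU-accelerated fraction of the job gets faster.
    return 1 / ((1 - gpu_fraction) + gpu_fraction / gpu_speedup)

# Assume (purely for illustration) that 30% of the export is GPU work.
for s in (2, 4, 100):
    print(f"GPU {s}x faster -> whole export {overall_speedup(0.3, s):.2f}x faster")

# Prints roughly 1.18x, 1.29x, 1.42x: with only 30% of the job on the GPU,
# even an absurdly fast card can't push the export past 1 / 0.7 ≈ 1.43x.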


48 minutes ago, NumLock21 said:

No point reading topic, when title contains the spoiler/conclusion...

That's generally how headlines work; no point reading the paper when the front page says everything, and no point watching the news when all the stories are outlined within the first 2 minutes of the show, right?

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


23 hours ago, Jurrunio said:

So you suspect PCIe bandwidth to be a limitation on cards using PCIe 3.0 x16, and you suggest using PCIe 3.0 x4 to make this possible bottleneck even more serious? Seriously, how do you do research with logic like that?

If they all bottleneck evenly it's PCIe bandwidth; sounds pretty simple to me... You can't get any faster than PCIe 3.0 x16, but you can get slower.
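For rough context, here's some back-of-the-envelope bandwidth arithmetic (the frame format and frame rate are assumptions for illustration, not numbers from the test): PCIe 3.0 delivers roughly 0.985 GB/s per lane, so x16 is about 15.8 GB/s and x4 about 3.9 GB/s, which you can weigh against how much frame data actually has to cross the bus.

PCIE3_GBPS_PER_LANE = 0.985   # 8 GT/s with 128b/130b encoding

def pcie3_bandwidth_gbps(lanes):
    return lanes * PCIE3_GBPS_PER_LANE

# Assumed workload: uncompressed 4K RGBA 8-bit frames at 30 fps,
# copied to the GPU and read back (hence the factor of 2).
frame_bytes = 3840 * 2160 * 4
traffic_gbps = frame_bytes * 30 * 2 / 1e9

print(f"x16: {pcie3_bandwidth_gbps(16):.1f} GB/s, x4: {pcie3_bandwidth_gbps(4):.1f} GB/s")
print(f"Assumed frame traffic: {traffic_gbps:.2f} GB/s")
# ~15.8 GB/s and ~3.9 GB/s vs. ~2.0 GB/s of assumed traffic: in this simple
# model even x4 has headroom, so raw bandwidth alone wouldn't explain it.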


3 hours ago, LePawel said:

If they all bottleneck evenly it's PCIe bandwidth; sounds pretty simple to me... You can't get any faster than PCIe 3.0 x16, but you can get slower.

I don't get your logic. How can you prove something to be the limiting factor by increasing its effect?

CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: HP EX920 1TB PCIe x4 M.2 SSD + Crucial MX500 1TB 2.5" SATA SSD, 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally, 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172) left outside Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s


3 hours ago, LePawel said:

If they all bottleneck evenly it's PCIe bandwidth; sounds pretty simple to me... You can't get any faster than PCIe 3.0 x16, but you can get slower.

What you are saying is like "they are bottlenecked by the CPU, let's test with a slower CPU!" That won't tell you anything.


Woah, that is quite interesting indeed.

 

Well, it's good to know if I ever find myself seeking to develop an adventurous movie project :l.


8 hours ago, SpaceGhostC2C said:

What you are saying is like "they are bottlenecked by the CPU, let's test with a slower CPU!" That won't tell you anything.

What I'm saying is "we don't know what the bottleneck is, let's carry out a test for X to confirm or eliminate a factor." This really isn't hard to understand; it's basic research.

 

 


4 hours ago, Jurrunio said:

I don't get your logic. How can you prove something to be the limiting factor by increasing its effect?

Because there's more than 1 variable? Research 101?


4 hours ago, LePawel said:

Because there's more than 1 variable? Research 101?

I only see PCIe bandwidth as the independent variable. What's the other one?

 

4 hours ago, LePawel said:

What I'm saying is "we don't know what the bottleneck is, let's carry out a test for X to confirm or eliminate a factor." This really isn't hard to understand; it's basic research.

 

 

Your "research" can only show whether PCIe 3.0 x4 is enough, not whether PCIe 3.0 x16 is enough




Let's just say the test was lacking any actual depth and investigation. I appreciate the effort, but the conclusion "it's all the same, any GPU will do" was very anticlimactic. I wish Gamers Nexus would do a follow-up with some actual data and answers.

GPU drivers giving you a hard time? Try this! (DDU)


45 minutes ago, Jurrunio said:

I only see PCIe bandwidth as the independent variable. What's the other one?

Your "research" can only show whether PCIe 3.0 x4 is enough, not whether PCIe 3.0 x16 is enough

If we do lower the PCIe link to x4 and the numbers show that performance has declined, it could be an indication that PCIe is the limiting factor. 

 

I still think it's the memory or CPU. 
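As a sketch of what that x4 experiment can and can't tell you (the throughput numbers below are placeholders, not real measurements):

def interpret(fps_x16, fps_x4, tolerance=0.03):
    # A drop at x4 only proves the workload needs more than x4 of bandwidth;
    # no drop at x4 rules out PCIe bandwidth (above x4) as the limiter.
    if fps_x4 < fps_x16 * (1 - tolerance):
        return "slower at x4: bandwidth matters somewhere between x4 and x16"
    return "same at x4: PCIe bandwidth isn't what's holding the GPUs back"

print(interpret(fps_x16=60.0, fps_x4=59.5))  # hypothetical: no real change
print(interpret(fps_x16=60.0, fps_x4=45.0))  # hypothetical: a clear drop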

Cor Caeruleus Reborn v6

Spoiler

CPU: Intel - Core i7-8700K

CPU Cooler: be quiet! - PURE ROCK 
Thermal Compound: Arctic Silver - 5 High-Density Polysynthetic Silver 3.5g Thermal Paste 
Motherboard: ASRock Z370 Extreme4
Memory: G.Skill TridentZ RGB 2x8GB 3200/14
Storage: Samsung - 850 EVO-Series 500GB 2.5" Solid State Drive 
Storage: Samsung - 960 EVO 500GB M.2-2280 Solid State Drive
Storage: Western Digital - Blue 2TB 3.5" 5400RPM Internal Hard Drive
Storage: Western Digital - BLACK SERIES 3TB 3.5" 7200RPM Internal Hard Drive
Video Card: EVGA - 970 SSC ACX (1080 is in RMA)
Case: Fractal Design - Define R5 w/Window (Black) ATX Mid Tower Case
Power Supply: EVGA - SuperNOVA P2 750W with CableMod blue/black Pro Series
Optical Drive: LG - WH16NS40 Blu-Ray/DVD/CD Writer 
Operating System: Microsoft - Windows 10 Pro OEM 64-bit and Linux Mint Serena
Keyboard: Logitech - G910 Orion Spectrum RGB Wired Gaming Keyboard
Mouse: Logitech - G502 Wired Optical Mouse
Headphones: Logitech - G430 7.1 Channel  Headset
Speakers: Logitech - Z506 155W 5.1ch Speakers

 


Jay's test is absolutely incorrect: Premiere does not GPU-accelerate the export as much if you have a simple clip in the timeline with no effects or adjustments on it. Heavy projects with multicam clips, color correction, blending modes, modified transform parameters, etc., will bring up the GPU load, and there will be differences. The test is flawed and the conclusion is wrong.

