Master Disaster

Which GPU you have for video rendering doesn't matter as long as you have one

Recommended Posts

What about in DaVinci Resolve?


“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking

1 hour ago, DocSwag said:

There are a few things with this that you have to take into account:

- This was done in Premiere. If you used something else (FCP, Sony Vegas, etc.) you could potentially see different results.

- This was ONLY an encoding test. Jay didn't test ANYTHING else, so we have no idea what it could look like for other workloads.

Vegas hasn't updated its GPU acceleration support for GTX cards in a LONG time now. Anything 600-series and up isn't fully supported, from what I understand.

4 minutes ago, crystal6tak said:

Vegas hasn't updated its GPU acceleration support for GTX cards in a LONG time now. Anything 600-series and up isn't fully supported, from what I understand.

There's DaVinci Resolve, or whatever it's called, tho :P 


Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Segate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


Misleading. It depends on the software being used. And testing only encoding is also misleading - that's one task out of many in video rendering, alongside things like timeline scrubbing. 


Main PC:  Motherboard: Asus Crosshair V Formula Z | RAM: Amd R9 Gamer 32gb 2400mhz | Case: Cooler Master HAF X Case | Storage: Amd R7 480gb, 2x Crucial M500 240gb, Toshiba 5TB | PSU: Antec True Power Quattro 1200 | CPU: Amd FX-9590 | GPU: Asus Amd Fury X | Keyboard: Logitech G710+ | Mouse: Logitech G502 | Sound: Razer Leviathan | OS: Windows 10 Pro | Display: Dell u3415w | Cooling: Apogee XL, Heatkiller Fury X w/ Back Plate, 720mm Rad

6 hours ago, huilun02 said:

And when you're not rendering in Adobe?

This is an important point: from what I've seen there's a wide range of inconsistencies among different editing software. This basically means replacing "Video Editing Workstation" with "Premiere Workstation", "Vegas Workstation", "Final Cut Workstation", "Hitfilm Workstation", etc.

 

Basically, you can optimize the build for the specific software you'll be using. Sounds like a chore to me, but when you start looking into professional software in general this is usually the way it goes: there are similar decisions to be made when building servers, for example, which can be task-specific, software-specific, even software-version-specific.


-------

Current Rig

-------

5 hours ago, raphidy said:

IIRC, only some effects in Adobe software are GPU-accelerated. Sony Vegas before version 13 could be CUDA-accelerated with Kepler cards; newer versions can only use OpenCL (for processing only, not rendering), so AMD would have an edge in Vegas nowadays.

Vega dominates in Vegas ba dun....

 

I'll see myself out...

 

It's funny to me that he was like "wth" and never tested other software. Yeah, Hollywood uses these, but they also use programs like Avid for editing.


The Vinyl Decal guy.

Celestial-Uprising  A Work In-Progress


I think we should specify that this applies to Premiere - it doesn't mean all editing programs work the same way. On top of that, Premiere may behave differently depending on what effects you use, among other things. From the looks of it, all the GPUs he tested finish their work well before the CPU does, so you get the same result regardless. Perhaps with more (or faster) cores the CPU would finish its workload first, and you'd actually see a difference between the GPUs.
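A toy model of why every GPU can look identical in a test like this: if the CPU and GPU stages run concurrently, export time is set by the slower stage, so any GPU that already finishes its share before the CPU changes nothing. All timings below are made-up illustrative numbers, not measurements from Jay's test.

```python
# Toy pipeline model of an export: CPU and GPU process frames
# concurrently, so total time is dictated by the slower stage.
# All per-frame timings are made-up illustrative numbers.

def export_time_ms(cpu_ms_per_frame, gpu_ms_per_frame, frames):
    """Total export time when the slower stage sets the throughput."""
    return frames * max(cpu_ms_per_frame, gpu_ms_per_frame)

frames = 1000
print(export_time_ms(20, 10, frames))  # 20000 - slower GPU
print(export_time_ms(20, 4, frames))   # 20000 - a 2.5x faster GPU, same total
print(export_time_ms(20, 25, frames))  # 25000 - only now does the GPU matter
```

As long as the hypothetical GPU stage stays under the CPU's 20 ms per frame, swapping GPUs moves nothing; the difference would only appear once the CPU side got fast enough to stop being the ceiling.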


...is there a question here? 🤔

sudo chmod -R 000 /*

What is scaling and how does it work? Asus PB287Q unboxing! Console alternatives :D Watch Netflix with Kodi on Arch Linux Sharing folders over the internet using SSH Beginner's Guide To LTT (by iamdarkyoshi)

Sauron's™ Product Scores:

Spoiler

Just a list of my personal scores for some products, in no particular order, with brief comments. I just got the idea to do them, so there aren't many for now :)

Don't take these as complete reviews or final truths - they are just my personal impressions on products I may or may not have used, summed up in a couple of sentences and a rough score. All scores take into account the unit's price and time of release, heavily so, therefore don't expect absolute performance to be reflected here.

 

-Lenovo Thinkpad X220 - [8/10]

Spoiler

A durable and reliable machine that is relatively lightweight, has all the hardware it needs to never feel sluggish and has a great IPS matte screen. Downsides are mostly due to its age, most notably the screen resolution of 1366x768 and usb 2.0 ports.

 

-Apple Macbook (2015) - [Garbage -/10]

Spoiler

From my perspective, this product has no redeeming factors given its price and the competition. It is underpowered, overpriced, impractical due to its single port and is made redundant even by Apple's own iPad pro line.

 

-OnePlus X - [7/10]

Spoiler

A good phone for the price. It does everything I (and most people) need without being sluggish and has no particularly bad flaws. The lack of recent software updates and relatively barebones feature kit (most notably the lack of 5GHz wifi, biometric sensors and backlight for the capacitive buttons) prevent it from being exceptional.

 

-Microsoft Surface Book 2 - [Garbage - -/10]

Spoiler

Overpriced and rushed, offers nothing notable compared to the competition, doesn't come with an adequate charger despite the premium price. Worse than the Macbook for not even offering the small plus sides of having macOS. Buy a Razer Blade if you want high performance in a (relatively) light package.

 

-Intel Core i7 2600/k - [9/10]

Spoiler

Quite possibly Intel's best product launch ever. It had all the bleeding edge features of the time, it came with a very significant performance improvement over its predecessor and it had a soldered heatspreader, allowing for efficient cooling and great overclocking. Even the "locked" version could be overclocked through the multiplier within (quite reasonable) limits.

 

-Apple iPad Pro - [5/10]

Spoiler

A pretty good product, sunk by its price (plus the extra cost of the physical keyboard and the pencil). Buy it if you don't mind the Apple tax and are looking for a very light office machine with an excellent digitizer. Particularly good for rich students. Bad for cheap tinkerers like myself.

 

 

40 minutes ago, crystal6tak said:

Vegas hasn't updated its GPU acceleration support for GTX cards in a LONG time now. Anything 600-series and up isn't fully supported, from what I understand.

Vegas will still leverage the GPU for certain effects, but for the most part it's all on the CPU.

 

If I remember correctly, Premiere uses the GPU in a weird way where you don't see a difference in render times, but rather in scrubbing, preview windows and special effects.


Intel Xeon 1650 V0 (4.4GHz @1.4V), ASRock X79 Extreme6, 32GB of HyperX 1866, x2 XFX GTR RX 480 (@ 1370 MHz), Silverstone Redline (black) RL05BB-W, Crucial MX500 500GB SSD, Seagate Barracuda 500GB 7200RPM, WD AV-25 1TB 2.5" HDD, Seagate 2TB SSHD, SeaSonic Focus Plus Gold 850, x3 Acer H236HL, be quiet! Dark Rock Pro 4, Logitech K120, Tecknet "Gaming" mouse, Creative Inspire T2900, HyperX Cloud Flight Wireless headset, Windows 10 Pro 64 bit

While I expected to see more of a difference, like with RX Vega, it's still surprising. Also, we definitely need more tests, because the bottleneck is there somewhere. Typical Adobe software, eh. 


Ryzen 7 3800X | X570 Aorus Elite | G.Skill 16GB 3200MHz C16 | Radeon RX 5700 XT | Samsung 850 PRO 256GB | Mouse: Zowie S1 | OS: Windows 10


No point reading the topic when the title contains the spoiler/conclusion...


Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 


Yeah... Adobe and Vegas supposedly stopped updating GPU support after Kepler; the CUDA cores were vastly different after that. I used to want the 780 just because of that, but then... I also play games. After Effects used to use CUDA acceleration too, but people were complaining that their brand-new GPUs weren't doing anything, and it was a big mess. This was right when people wanted to use the 980, 980 Ti or even 970, but the CUDA devices weren't showing up as usable. Oh well... I know it barely does anything in Vegas 13, and now Sony has sold Vegas to another company. Sony Vegas as we knew it is dead. Now... it's just Vegas... What a shame.


You have to start by asking yourself why a GPU helps with this task in the first place. That will help you think about whether there's an upper bound on how much a faster GPU can help, and whether that bound depends on the software used, or on the particular job.
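To make that upper bound concrete, Amdahl's law is the standard way to reason about it: only the GPU-accelerated fraction of the job benefits from a faster GPU. A minimal sketch (the 20% figure is an arbitrary example, not a measured number for any editor):

```python
def overall_speedup(gpu_fraction, gpu_speedup):
    """Amdahl's law: only the GPU-accelerated fraction of the job
    benefits from a faster GPU; the rest runs at the old speed."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / gpu_speedup)

# If only 20% of an export is GPU work, doubling GPU speed gives
# ~1.11x overall, and even an infinitely fast GPU can't beat
# 1 / 0.8 = 1.25x:
print(overall_speedup(0.2, 2.0))   # ~1.11
print(overall_speedup(0.2, 1e9))   # ~1.25
```

The GPU-accelerated fraction is exactly the thing that varies between Premiere, Vegas, Resolve, and between projects, which is why one encoding test can't settle the question.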

Posted · Original Poster
48 minutes ago, NumLock21 said:

No point reading the topic when the title contains the spoiler/conclusion...

That's generally how headlines work. No point reading the paper when the front page says everything, and no point watching the news when all the stories are outlined within the first 2 minutes of the show, right?


Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Samsung 970 Evo 500GB NVMe | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Windows 10 Pro X64 |

 

Server:-

Raspberry Pi 4 Model B running OMV Arrakis and an 8TB Seagate USB 3.0 external HDD

23 hours ago, Jurrunio said:

So you suspect PCIe bandwidth to be a limitation on cards using PCIe 3.0 x16, and you suggest using PCIe 3.0 x4 to make this possible bottleneck even more serious? Seriously, how do you do research with logic like that?

If they all bottleneck evenly, it's PCIe bandwidth; sounds pretty simple to me. You can't get any faster than PCIe 3.0 x16, but you can get slower. 
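For scale, here's a back-of-the-envelope sketch of what those links can actually move. The per-lane rate is the commonly cited ~985 MB/s for PCIe 3.0, and the uncompressed 4K RGBA frame is just an illustrative payload, not what any particular editor actually sends over the bus:

```python
# Rough PCIe 3.0 bandwidth math: ~985 MB/s per lane after
# 128b/130b encoding overhead (8 GT/s raw signaling rate).
PCIE3_LANE_GBS = 0.985

def link_bandwidth_gbs(lanes):
    """Aggregate one-direction bandwidth of a PCIe 3.0 link in GB/s."""
    return lanes * PCIE3_LANE_GBS

frame_gb = 3840 * 2160 * 4 / 1e9   # one uncompressed 4K RGBA frame

for lanes in (16, 4):
    bw = link_bandwidth_gbs(lanes)
    print(f"x{lanes}: {bw:.2f} GB/s ~= {bw / frame_gb:.0f} uncompressed 4K frames/s")
```

Even an x4 link moves on the order of a hundred uncompressed 4K frames per second, so whether dropping to x4 exposes anything depends entirely on how much data the software actually streams across the bus per frame.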

3 hours ago, LePawel said:

If they all bottleneck evenly, it's PCIe bandwidth; sounds pretty simple to me. You can't get any faster than PCIe 3.0 x16, but you can get slower. 

I don't get your logic. How can you prove something is the limiting factor by increasing its effect?


CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: 1TB HP EX920 PCIe x4 M.2 SSD + 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172), 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s

3 hours ago, LePawel said:

If they all bottleneck evenly, it's PCIe bandwidth; sounds pretty simple to me. You can't get any faster than PCIe 3.0 x16, but you can get slower. 

What you're saying is like "they're bottlenecked by the CPU - let's test with a slower CPU!". That won't tell you anything. 


Woah, that is quite interesting indeed.

 

Well, it's good to know if I ever find myself seeking to develop an adventurous movie project :l

8 hours ago, SpaceGhostC2C said:

What you're saying is like "they're bottlenecked by the CPU - let's test with a slower CPU!". That won't tell you anything. 

What I'm saying is "we don't know what the bottleneck is, so let's carry out a test to confirm or eliminate a factor". This really isn't hard to understand; it's basic research. 

 

 

Edited by wkdpaul
Cleaned up
4 hours ago, Jurrunio said:

I don't get your logic. How can you prove something is the limiting factor by increasing its effect?

Because there's more than one variable? Research 101?

4 hours ago, LePawel said:

Because there's more than 1 variable? Research 101?

I only see PCIe bandwidth as the independent variable. What's the other one?

 

4 hours ago, LePawel said:

What I'm saying is "we don't know what the bottleneck is, so let's carry out a test to confirm or eliminate a factor". This really isn't hard to understand; it's basic research. 

 

 

Your "research" can only show whether PCIe 3.0 x4 is enough, not whether PCIe 3.0 x16 is enough.

Edited by wkdpaul
Cleaned up



Let's just say the test lacked any actual depth and investigation. I appreciate the effort, but the conclusion "it's all the same, any GPU will do" was very anticlimactic. I wish Gamers Nexus would do a follow-up with some actual data and answers.


GPU drivers giving you a hard time? Try this! (DDU)

45 minutes ago, Jurrunio said:

I only see PCIe bandwidth as the independent variable. What's the other one?

Your "research" can only show whether PCIe 3.0 x4 is enough, not whether PCIe 3.0 x16 is enough.

If we lower the PCIe link to x4 and the numbers show that performance has declined, it could be an indication that PCIe is the limiting factor. 

 

I still think it's the memory or CPU. 


Cor Caeruleus Reborn v6

Spoiler

CPU: Intel - Core i7-8700K

CPU Cooler: be quiet! - PURE ROCK 
Thermal Compound: Arctic Silver - 5 High-Density Polysynthetic Silver 3.5g Thermal Paste 
Motherboard: ASRock Z370 Extreme4
Memory: G.Skill TridentZ RGB 2x8GB 3200/14
Storage: Samsung - 850 EVO-Series 500GB 2.5" Solid State Drive 
Storage: Samsung - 960 EVO 500GB M.2-2280 Solid State Drive
Storage: Western Digital - Blue 2TB 3.5" 5400RPM Internal Hard Drive
Storage: Western Digital - BLACK SERIES 3TB 3.5" 7200RPM Internal Hard Drive
Video Card: EVGA - 970 SSC ACX (1080 is in RMA)
Case: Fractal Design - Define R5 w/Window (Black) ATX Mid Tower Case
Power Supply: EVGA - SuperNOVA P2 750W with CableMod blue/black Pro Series
Optical Drive: LG - WH16NS40 Blu-Ray/DVD/CD Writer 
Operating System: Microsoft - Windows 10 Pro OEM 64-bit and Linux Mint Serena
Keyboard: Logitech - G910 Orion Spectrum RGB Wired Gaming Keyboard
Mouse: Logitech - G502 Wired Optical Mouse
Headphones: Logitech - G430 7.1 Channel  Headset
Speakers: Logitech - Z506 155W 5.1ch Speakers

 


Jay's test is absolutely incorrect: Premiere doesn't GPU-accelerate the export much if you have a simple clip in the timeline with no effects or adjustments applied. Heavy projects with multicam clips, color correction, blending modes, modified transform parameters, etc. will bring up the GPU load, and there you will see differences. The test is flawed and the conclusion is wrong. 

