MmRaZ

Member
  • Posts: 6
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    MmRaZ got a reaction from redteam4ever in All this work... for what??   
    I would love to see them try this with the new AMD 3900X! Maybe they can't get a water block for it yet?
  2. Like
    MmRaZ reacted to dgschrei in All this work... for what??   
    OK I just had to come over to the forum for this:
     
    Mistake #1 using AME.
     
    Whatever I do, it is consistently the slowest encoder I can get my hands on. If you have any chance of not using it, take it.
     
    Now I'm guessing you guys are stuck with it because you want to export Premiere project files, so assuming that is not an option:
     
    Mistake #2: What actually seems to be killing your performance is rendering the videos over the network.
     
    When Linus was testing the 14-core on the installed machine, the project was accessed over a network drive. When he was doing the initial tests with the 16-core, the video was rendering from C:\.... (I freeze-framed the video to verify.)
     
    Why is this a problem?
    If the encoder is not optimized for that, it might read data in tiny chunks and also write the result out in tiny chunks. Every time it does that, the operation suffers the full latency of the network. While this might be ~1 ms on your LAN, it still adds up VERY quickly and utterly kills your performance. Hence the much lower utilization on the render server with the 14-core CPU installed.
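    To put rough numbers on that, here is a back-of-the-envelope sketch. The ~1 ms latency, 10 Gbit link, ~50 GB project size, and chunk sizes are all assumed for illustration (not measured from the video), and it assumes each chunk is fetched serially with no read-ahead:

        # Back-of-the-envelope: per-operation latency vs. raw bandwidth when a
        # file is moved in small chunks. Every number here is an assumption.
        LATENCY_S = 0.001          # ~1 ms round trip on the LAN
        BANDWIDTH_BPS = 10e9 / 8   # 10 Gbit/s link, in bytes per second
        FILE_BYTES = 50e9          # ~50 GB of 4K source footage

        def transfer_time(chunk_bytes: float) -> float:
            """Total time to move FILE_BYTES in serial chunks of chunk_bytes."""
            ops = FILE_BYTES / chunk_bytes
            return ops * LATENCY_S + FILE_BYTES / BANDWIDTH_BPS

        for chunk in (64e3, 1e6, 64e6):  # 64 KB, 1 MB, 64 MB chunks
            print(f"{chunk / 1e6:6.2f} MB chunks -> {transfer_time(chunk) / 60:5.1f} min")

    With 64 KB chunks the latency term alone is over ten minutes; with 64 MB chunks the transfer is essentially bandwidth-bound.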
     
    We had a similar problem in our company fetching all the build results (think tens of thousands of DLLs and config files) from a remote server. It actually turned out to be faster (about 2x-3x) to download a zip file of the build result and then unpack it locally than it was to just copy the folder, because the latency of all those small accesses was killing the speed and we couldn't saturate the network that way.
     
    So the solution might be to first copy the project to be rendered locally onto the render machine and then encode it there. That would get rid of all the latency and might give you a decent speedup. And even if it isn't faster for any single job, it will definitely increase your overall throughput.
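    A minimal sketch of that copy-locally-then-encode flow, assuming a Python wrapper around whatever encoder is actually in use (the paths, the output.mp4 name, and the encode callable are all hypothetical):

        import shutil
        from pathlib import Path
        from typing import Callable

        def render_locally(network_project: Path, scratch: Path,
                           encode: Callable[[Path, Path], None]) -> Path:
            """Copy a project off the network share, encode it on local disk,
            then push only the finished file back to the share."""
            local_copy = scratch / network_project.name
            shutil.copytree(network_project, local_copy)   # one big sequential copy over the LAN
            output = local_copy / "output.mp4"
            encode(local_copy, output)                     # whatever encoder the team actually uses
            shutil.copy2(output, network_project / output.name)
            return output

    The point of the design is that the network only sees two large sequential transfers instead of thousands of tiny reads and writes.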
     
     
    P.S.: I would still advise using some of the coding know-how in your company to have someone whip up an encoding script that just uses ffmpeg for the encoding, if at all possible. In my experience that is the most stable option and produces the best results in both performance and quality terms. ffmpeg's H.264 encoder, x264, should also scale far better across many cores (especially since you guys render 4K) than the sorry excuse for an encoder Adobe ships with AME.
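    For what it's worth, a sketch of such a script, simply shelling out to the ffmpeg CLI with libx264; the paths, preset, CRF, and audio bitrate are placeholder choices, not anything the post prescribes:

        import subprocess
        import sys
        from pathlib import Path

        def encode_x264(src: Path, dst: Path, crf: int = 18, preset: str = "slow") -> None:
            """Encode src to H.264 with ffmpeg's x264, which spreads the work
            across all available cores on its own."""
            cmd = [
                "ffmpeg", "-y",
                "-i", str(src),
                "-c:v", "libx264",
                "-preset", preset,      # slower preset = better compression, more CPU time
                "-crf", str(crf),       # constant quality; lower = higher quality, bigger file
                "-c:a", "aac", "-b:a", "320k",
                str(dst),
            ]
            subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            # Usage: python encode.py input.mov output.mp4
            encode_x264(Path(sys.argv[1]), Path(sys.argv[2]))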
  3. Like
    MmRaZ reacted to Cyanara in All this work... for what??   
    Did that last motherboard actually get its BIOS checked and updated? When I built a 9900K system on a motherboard with an older BIOS, it still worked, but the clock speed was stuck much closer to 4 GHz than 5 GHz until the BIOS was updated.
  4. Like
    MmRaZ reacted to Nystemy in All this work... for what??   
    Having watched your networking debacle the other day out of pure entertainment, seeing this seems like it couldn't be a better continuation.
    But those SSDs have sure been through a lot over the years; I am somewhat surprised that they are still kicking.
     
    Though, just a roughly 10% improvement in render speed? Despite a much better CPU?
    And secondly, can't the program export two or more projects in parallel?
    Otherwise, why not build a set of render servers, so the editors can pick whichever one is free and thereby push out videos faster?
     
    Maybe make a fast one for Techlinked (since that is same-day delivery, the server would then be free until Techlinked needs it again the next day).
    Then a set of slower ones for all other render jobs. Maybe make some centralized dispatch software that shows server status, the queue, and such.
    But knowing that this is far from trivial, and that Floatplane worked on their solution for quite some time... maybe it would be a bit on the "overkill" side of things. Interesting nonetheless.
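    A minimal sketch of what such dispatch logic could look like, assuming a fixed list of render servers and an in-memory queue; the server names, busy flag, and queue semantics are all made up for illustration:

        from collections import deque
        from dataclasses import dataclass, field

        @dataclass
        class RenderServer:
            name: str
            busy: bool = False

        @dataclass
        class Dispatcher:
            servers: list
            queue: deque = field(default_factory=deque)

            def submit(self, job: str) -> None:
                """Queue a job and hand it out right away if a server is free."""
                self.queue.append(job)
                self.dispatch()

            def dispatch(self) -> None:
                for server in self.servers:
                    if not server.busy and self.queue:
                        server.busy = True
                        print(f"{self.queue.popleft()} -> {server.name}")  # placeholder for starting the render

            def finished(self, server: RenderServer) -> None:
                """Called when a server reports its job done; pulls the next queued job."""
                server.busy = False
                self.dispatch()

        # One fast box for same-day jobs plus two slower ones for everything else.
        d = Dispatcher([RenderServer("fast-01"), RenderServer("slow-01"), RenderServer("slow-02")])
        d.submit("techlinked_tuesday")
        d.submit("main_channel_review")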