
Virtual EQ vs Addon (e.g. Schiit Loki)

Guest
Solved by VoidX

What's the point of an analog equalizer like the Schiit Loki or JDS Labs Subjective 3? They cost quite a lot of money ($100 and $150), so they've got to have some advantage over a software EQ tweaking the signal before it gets converted, right?


8 minutes ago, rebmem rehtona yb esu ni said:

What's the point of an analog equalizer like the Schiit Loki or JDS Labs Subjective 3? They cost quite a lot of money ($100 and $150), so they've got to have some advantage over a software EQ tweaking the signal before it gets converted, right?

Ehhh, not really.

 

Software has the advantage of not messing with the phase, which can really screw things up, and it also offers lots more features.

 

But hardware has lots of advantages as well:

- Quick and easy to adjust.
- Easy to add to a fully analogue system (turntable, CD player, etc.).
- Easy to patch into a system (more on the live sound side of things).
- More fun, tactile, etc.
- Can look nice.

LTT's Resident Porsche fanboy and nutjob Audiophile.

 

Main speaker setup is now:

Mini DSP SHD Studio -> 2x Mola Mola Tambaqui DACs (fed by AES/EBU; one feeds the left sub and main, the other feeds the right side) -> 2x Neumann KH420 + 2x Neumann KH870

(Having a totally separate DAC for each channel is game-changing for sound quality.)


2 hours ago, rebmem rehtona yb esu ni said:

What's the point of an analog equalizer like the Schiit Loki or JDS Labs Subjective 3? They cost quite a lot of money ($100 and $150), so they've got to have some advantage over a software EQ tweaking the signal before it gets converted, right?

There are barely any pros on the side of analog EQs, but first I'll tell you why you'd use simple software (like Equalizer APO). A digital EQ can be either a simulation of an analog one or a mathematically perfect EQ, which can only be achieved digitally. There are multiple "simulations" of an analog EQ; the most common is a biquad filter (equivalent to two RC circuits), which is actually the worst in terms of phase linearity (if we don't count the algorithms in the Wikipedia articles, lol). These phase nonlinearities basically mean frequencies get selectively and increasingly delayed. This can be corrected to some extent (it could be the topic of an engineering contest, it's a beautiful but complicated field), but most producers actually seem to like EQs that are not phase-perfect, so most digital EQs for production (even the hardware ones by Allen & Heath or Behringer) clone that behavior. I've highlighted "for production", because for playback you don't want to mess with the phase curve; most audio chains will murder it anyway.
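To make the biquad point concrete, here's a minimal sketch (assuming the standard RBJ "Audio EQ Cookbook" peaking filter and arbitrary example settings: a 6 dB boost at 100 Hz, 48 kHz sample rate) that prints how the phase bends around the boosted region while frequencies far away are left mostly alone:

```python
# Minimal sketch: phase response of a single biquad peaking EQ.
# Coefficients follow the RBJ "Audio EQ Cookbook"; all parameter
# values below are arbitrary examples, not anyone's actual settings.
import numpy as np
from scipy.signal import freqz

fs = 48000.0      # sample rate in Hz
f0 = 100.0        # center frequency of the boost
gain_db = 6.0     # boost amount
Q = 1.0           # bandwidth control

A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

a0 = 1 + alpha / A
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]) / a0
a = np.array([a0, -2 * np.cos(w0), 1 - alpha / A]) / a0

w, h = freqz(b, a, worN=4096, fs=fs)
phase_deg = np.degrees(np.unwrap(np.angle(h)))

# The phase bends around the boosted band and returns toward zero far
# away from it, i.e. different frequencies are delayed by different amounts.
for f_query in (50, 100, 200, 1000, 10000):
    idx = np.argmin(np.abs(w - f_query))
    print(f"{w[idx]:8.1f} Hz: phase {phase_deg[idx]:+7.2f} deg")
```

The non-flat phase around the boost is exactly the "selective delay" described above; a phase-linear EQ would keep that curve flat.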

 

You want the lowest delay and the lowest possible distortion (which is only zero digitally), and you want it for free, because you can. But here's one downside: if you integrate an EQ into your system's audio chain, the problem of delay emerges. The first linear-phase EQs used the FFT, which has a linear frequency distribution, but EQ is handled logarithmically (just like human hearing), so you need far more samples to resolve low frequencies. For things like room correction, even 4096 samples (nearly 1/10th of a second) simply won't cut it. A simple block-by-block FFT will massacre the block boundaries; even the most advanced codecs (like Opus), which use the FFT for a different purpose, struggle to match up parts of the waveform, which can cause clicking, although it's mostly inaudible (you can spot it easily as full-band spikes in the spectrum of band-limited compressed audio files). To fix this you need another block to crossfade to, and now we're looking at 0.2-second delays, which is unacceptable anywhere. In production, the EQ knows the entire audio it's working on, so it can be pre-processed, but for playback you need this to work on an unknown input in real time.
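A quick back-of-envelope sketch of that mismatch (assuming a 48 kHz sample rate and octave band edges as example values): a plain FFT spaces its bins linearly, so a block spends only a handful of bins on the entire bass region while burning hundreds on the top octave, and every doubling of the block size to fix that also doubles the buffering delay:

```python
# Back-of-envelope sketch: linear FFT bin spacing vs. logarithmic EQ bands,
# and the latency implied by block-based processing. 48 kHz is an assumed
# example sample rate.
import numpy as np

fs = 48000

for n_fft in (4096, 16384, 65536):
    bin_width = fs / n_fft                  # Hz per FFT bin (linear spacing)
    bins_below_100hz = int(100 / bin_width)
    block_latency_ms = 1000.0 * n_fft / fs  # at least one full block buffered
    print(f"N={n_fft:6d}: {bin_width:6.2f} Hz/bin, "
          f"{bins_below_100hz:3d} bins under 100 Hz, "
          f"block latency ~{block_latency_ms:6.1f} ms")

# Octave bands show the mismatch: 20-40 Hz and 10-20 kHz are both "one
# octave" to the ear, but the FFT spends wildly different bin counts on them.
n_fft = 4096
edges = [20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480]
for lo, hi in zip(edges[:-1], edges[1:]):
    bins = int(hi / (fs / n_fft)) - int(lo / (fs / n_fft))
    print(f"{lo:5d}-{hi:5d} Hz: {bins:4d} bins")
```

At N = 4096 the whole 20-40 Hz octave gets about two bins while the top octave gets hundreds, and the block alone already costs roughly 85 ms, which is where the "nearly 1/10th of a second" above comes from.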

 

So engineers arrived at hybrid convolution, a way to do FFT-based EQing at ultra-low block sizes, even a single sample, which is basically real time. The FFT is used to create a long (up to 65536 samples or more) delay "plan" with a multiplication (gain/polarity change) at each future sample, and all those contributions add up to a no-delay, phase-perfect, ultra-detailed spectrum change. If you want to know exactly how this works, look into the work of Angelo Farina; he is basically the guy who invented modern room correction. There is only one problem: it requires brutal processing power. My room EQ is pushing the limits of my Ryzen 3600X, but it runs multiple 4-way crossovers, ~300 variable bands of EQ on everything, and a full spatial upmix. Simple stereo EQing will use maybe 2% of your processor at most, but if you need that performance back for some reason, analog EQ is your friend, with the tradeoffs I just described.
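The usual building block behind this is partitioned convolution: the long correction filter is split into short chunks that are each applied in the frequency domain, so the latency is one small block instead of the whole filter length. Below is a minimal uniformly partitioned overlap-save sketch (all sizes are example values, not anyone's actual engine); true low-latency "hybrid" schemes like the ones described above additionally convolve the first chunk directly in the time domain to push the delay toward zero, which is omitted here for brevity:

```python
# Minimal sketch of uniformly partitioned convolution (overlap-save with a
# frequency-domain delay line). A long FIR "correction" filter is applied
# with a latency of only one small block instead of the full filter length.
import numpy as np

def partition_filter(h, block):
    """Zero-pad h to a multiple of `block`, split it, and pre-FFT each part."""
    n_parts = -(-len(h) // block)                      # ceil division
    h = np.pad(h, (0, n_parts * block - len(h)))
    parts = h.reshape(n_parts, block)
    return np.fft.rfft(np.pad(parts, ((0, 0), (0, block))), axis=1)

def partitioned_convolve(x, h, block=64):
    """Convolve signal x with a long FIR h, processing x in `block`-sample chunks."""
    H = partition_filter(h, block)                     # (P, block+1) spectra
    fdl = np.zeros_like(H)                             # frequency-domain delay line
    x = np.pad(x, (0, (-len(x)) % block))
    out = np.empty(len(x))
    prev = np.zeros(block)
    for i in range(0, len(x), block):
        cur = x[i:i + block]
        fdl = np.roll(fdl, 1, axis=0)                  # age the stored input spectra
        fdl[0] = np.fft.rfft(np.concatenate([prev, cur]))
        y = np.fft.irfft((fdl * H).sum(axis=0))        # sum of all partition products
        out[i:i + block] = y[block:]                   # overlap-save: keep last half
        prev = cur
    return out

# Sanity check against direct convolution:
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
fir = rng.standard_normal(1000)                        # stand-in for a long EQ filter
ref = np.convolve(sig, fir)[:len(sig)]
assert np.allclose(partitioned_convolve(sig, fir), ref, atol=1e-9)
print("partitioned convolution matches direct convolution")
```

Even this naive version shows the cost structure: every block needs one FFT plus a complex multiply-accumulate over all partitions, which is why long filters on many channels add up to serious CPU load.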

 

TL;DR: Digital EQ will do exactly what you want, at a performance cost that might be significant. Analog EQ causes side effects, which you might like, but technically they're errors.

