
Can we really not tell the difference between hi-res and standard CD or MP3 audio?

e22big

I was a bit bored, so I decided to check out some hi-res music (96 kHz) and subscribed to Tidal to get some more samples, then compared the same section of a song to what streams on Spotify Premium.

 

It sounds audibly different. Not sure if that's objectively better, but it's very clearly different, especially in quieter songs (background instruments, for example, sound a lot clearer in the Tidal track and the offline file compared to the Spotify stream).

 

Is there any explanation for this? I have fairly good ears, but I don't have superpowers. Was the song on Spotify mastered differently, or was it just the compression algorithm?


It's the compression. The less compressed a file is, the better it will sound.

I can't explain this in much depth, but it's using more bits of data for each second of audio, which basically makes it sound clearer.

There's a bit more to it than that, but that's the idea.


320khz compression will sound markedly better than 96khz--even to the untrained ear.  The diminishing returns--for me--are between the quality of 320khz mp3 and uncompressed studio audio.  I'm not a discriminating audiophile though, ymmv.

 

And yes, there's a difference between a song that was recorded--true to the studio--and one that has been forced into equalized loudness.  In the latter case, the "texture" of the music is gone, and it doesn't matter what compression you have on either one.  It's the reason many people will tell you music sounded better BITD; back before they started forcing every instrument to be equally loud.


Lossy audio compression exploits the difference between what you can hear and what you may not notice, so at limited bitrates it puts more bits towards the parts that matter most. It's more noticeable at lower bitrates: 128k MP3 will be noticeably worse than CD, while 256k gets close to CD. I've not tried listening to anything above CD quality so can't comment on that. Years ago, when I was ripping my CD collection to MP3, I settled on 320k as good enough, and it was the highest setting I could use at the time anyway.
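As a back-of-the-envelope illustration of what those bitrate numbers mean in storage terms (a sketch, assuming constant bitrate; 1411 kbps is the raw rate of 16-bit/44.1 kHz stereo CD audio):

```python
def mb_per_minute(kbps: int) -> float:
    """Megabytes needed for one minute of audio at a constant bitrate (kbps = kilobits/second)."""
    bits = kbps * 1000 * 60          # total bits in 60 seconds
    return bits / 8 / 1_000_000      # bits -> bytes -> megabytes

# 1411 kbps is roughly uncompressed CD: 44100 samples/s * 16 bits * 2 channels
for rate in (128, 256, 320, 1411):
    print(f"{rate:>5} kbps = {mb_per_minute(rate):.2f} MB/min")
```

So 320k MP3 costs about 2.4 MB per minute versus roughly 10.6 MB per minute uncompressed, which is why it was such a popular ripping target.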

 

I find the difference in quality manifests as a kind of higher-frequency noise, which is hard to put into words. I visualise it like the blocking you can see in a heavily compressed JPEG, for example. At least that applies to MP3; I haven't repeated the comparison with the other encoding methods around now.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 minute ago, IPD said:

320khz compression will sound markedly better than 96khz--even to the untrained ear.  The diminishing returns--for me, is between the quality of 320khz mp3 and uncompressed studio audio.  I'm no a discriminating audiophile though, ymmv.

Think you're confusing sampling rate (kHz) with bitrate (kbps). 



1 minute ago, porina said:

Think you're confusing sampling rate (kHz) with bitrate (kbps). 

You're right.  I'm thinking sampling rate--from what I ripped.


26 minutes ago, IPD said:

320khz compression will sound markedly better than 96khz--even to the untrained ear.  The diminishing returns--for me, is between the quality of 320khz mp3 and uncompressed studio audio.  I'm no a discriminating audiophile though, ymmv.

 

And yes, there's a difference between a song that was recorded--true to the studio--and one that has been forced into equalizing loudness.  In the latter case, the "texture" of the music is gone.  And that doesn't matter what compression you have on either one.  It's the reason why many people will tell you music sounded better BITD; back before they started forcing every instrument to be equally loud.

It's 320 kbit/s vs 96 kHz, but I get your point.

 

Actually, I kind of hate it that way; the instruments sometimes get louder than even the vocal part, and it sounds like someone is trying to sing in a club.


1 hour ago, porina said:

Lossy audio compression makes use of what you can hear vs what you may not notice, so in limited bitrates it puts more towards the more important bits. It's more noticeable at lower bitrates. For example, 128k mp3 will be noticeably worse than CD. 256k is getting close to CD. I've not tried listening to anything above CD quality so can't comment on that. Years ago when I was ripping my CD collection into mp3, I settled on 320k as good enough, and it was the highest setting I could use at the time anyway.

 

I find the difference in quality to manifest itself as a kinda higher frequency noise, which is hard to put into words. I visualise it as like the blocking you can see in a higher compressed jpeg for example. At least that applies to mp3, I haven't repeated the same with other encoding methods that are around now.

I don't know, but the specific section of the track I compared between Tidal Max and Spotify is different in the sense that on Spotify I can hear that the string section comes from the general direction of the right, but on Tidal it was super clearly on the right, like someone playing violin right next to my ear. The gap is huge and not subtle at all.


34 minutes ago, e22big said:

I don't know, but the specific section of the track I compared between Tidal Max and Spotify is different in the sense that on Spotify I can hear that the string section comes from the general direction of the right, but on Tidal it was super clearly on the right, like someone playing violin right next to my ear. The gap is huge and not subtle at all.

You have to keep in mind that some songs get remastered for Tidal, so they take the song files and make them more 3D, take more advantage of higher-res audio with different effects, etc.

If you take the same exact mastering of a song, and compare MP3 320kbps, and FLAC, I basically guarantee you won't notice a difference, even on good headphones.

I only see your reply if you @ me.

This reply/comment was generated by AI.


It also depends on which headphones/earphones you use. Some crappy headphones hide the 'micro details', to be precise.

My DT880, for example, reveals every detail without even trying, whereas a crappier headphone like my Marshall Mid doesn't.

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


On 6/2/2022 at 6:39 AM, e22big said:

I was a bit bored, so I decided to check out some hi-res music (96 kHz) and subscribed to Tidal to get some more samples, then compared the same section of a song to what streams on Spotify Premium.

 

It sounds audibly different. Not sure if that's objectively better, but it's very clearly different, especially in quieter songs (background instruments, for example, sound a lot clearer in the Tidal track and the offline file compared to the Spotify stream).

 

Is there any explanation for this? I have fairly good ears, but I don't have superpowers. Was the song on Spotify mastered differently, or was it just the compression algorithm?

My theory is that, depending on the music, it could be a different master. Back when I used to test this out with older music (usually rock), comparing Deezer vs Spotify, most music sounded nearly the same, with just the normal differences I could hear between Spotify Premium high quality and FLAC from Deezer. But sometimes the songs would be drastically different. My ears aren't the greatest, but I can still pick out differences between FLAC and high-quality MP3s most of the time.


I suspect that Tidal is using a different master of the music than Spotify, and that's the main source of the difference, not audio quality.

 

In my experience, below 128kbps MP3 is easy to tell. I can still reliably differentiate higher bitrates in ABX blind comparisons, but the differences are subtle and I don't have a strong preference with most musical content. In normal listening (without being able to compare the two versions), I wouldn't be able to say whether an arbitrary song was 320kbps MP3 or lossless, with the exception of a handful of specific test tracks.

 

I haven't blind tested much opus, but I suspect it's indistinguishable for most musical content (at least for me) above 128kbps.

 

Certain signals will have clearly audible differences even with good formats at very high bitrates. Such signals aren't typically found in music, though they could reasonably be incorporated into music if someone wanted to.

 

On 6/2/2022 at 4:11 AM, porina said:

I find the difference in quality to manifest itself as a kinda higher frequency noise, which is hard to put into words. I visualise it as like the blocking you can see in a higher compressed jpeg for example. At least that applies to mp3, I haven't repeated the same with other encoding methods that are around now.

I like to think of it as "crispy" transients and spatial information: too much definition where there shouldn't be any. A lower-frequency difference (easier to hear for older people) I also like to use with certain tracks is noise shaping; almost every compression algorithm will try to preserve more information in the most sensitive frequencies, which can change the "color" of complex broadband signals.


So, it's possible to tell the difference between something like 320Kbps audio and uncompressed audio. It's very hard to appreciate the difference though. The people who can usually have to LOOK for it and constantly switch back and forth between two tracks.

Let's look at the OTHER ways audio quality can be "increased"

Frequency - going from 44.1KHz to something like 96KHz on the encoding... this doesn't matter. Human hearing is usually cited as 20Hz to 20KHz (and by the time you're a teenager, you probably aren't hearing much above 18KHz; you lose high-frequency hearing with age). You can perfectly encode a 20KHz tone by sampling at just over twice that frequency - two samples per cycle, per Nyquist. Also, when sound is mastered, there's often a high-frequency rolloff at around 20KHz, so that sound information is lost anyway. This is a case where the extra frequency information only really benefits mice.

Bit Depth - this is essentially the number of steps between a soft sound and a loud sound. 16 bits generally lets someone hear differences within a range of 96dB; with dithering (so, slight added noise) this can stretch to around 120dB. Say your room has a noise floor of 20dB: to take full advantage of dithered 16-bit sound, the peaks of your music would need to hit 140dB. 140dB is the point where sound starts to physically hurt, and 70dB for prolonged periods is associated with permanent hearing loss. Basically, to really appreciate high-bit-depth music you'd need to blow your ears out. Also, by around the 123dB range you're nearing the point where electrical noise in even very high-end gear starts to dominate.
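The 96dB figure above falls straight out of the ~6dB-per-bit rule of thumb (a quick sketch, nothing more):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an N-bit quantiser: 20*log10(2^N), i.e. ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~96 dB, the CD figure
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~144 dB, studio formats
```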

 

Now, if you're PRODUCING MUSIC, yeah, keep things in as high quality of a format as possible.

 

On 6/2/2022 at 4:11 AM, porina said:

Lossy audio compression makes use of what you can hear vs what you may not notice, so in limited bitrates it puts more towards the more important bits. It's more noticeable at lower bitrates. For example, 128k mp3 will be noticeably worse than CD. 256k is getting close to CD. I've not tried listening to anything above CD quality so can't comment on that. Years ago when I was ripping my CD collection into mp3, I settled on 320k as good enough, and it was the highest setting I could use at the time anyway.

Well-encoded 320Kbps audio will be VERY close to uncompressed audio. And yes, modern compression generally takes psychoacoustics into account.
 

On 6/2/2022 at 4:11 AM, porina said:

I find the difference in quality to manifest itself as a kinda higher frequency noise, which is hard to put into words. I visualise it as like the blocking you can see in a higher compressed jpeg for example. At least that applies to mp3, I haven't repeated the same with other encoding methods that are around now.

This is pretty accurate. JPEG and MP3 both compress in the frequency domain (a DCT for JPEG, a modified DCT for MP3), which is why their artifacts have a similar flavour.
On top of that, the areas with the most distortion tend to be higher-frequency content.

320Kbps MP3 mostly drops quality at or above 19.5KHz... basically a frequency range most people can't hear well anyway.

https://thesession.org/discussions/19642
 

The way I internalize 320Kbps audio is that it's like a .jpg saved with the compression slider pushed to max quality: I still keep the originals for future editing, but I can't see a difference between the original and the compressed image, despite the compressed file being much smaller.
 

On 6/3/2022 at 11:09 AM, Nimrodor said:

I (suspect) that Tidal is using a different master of the music than Spotify and that's the main source of the difference, not audio quality.

Tidal intentionally adds slight amounts of distortion to make it sound different.





If you want an example of how diminishing returns kick in -

By the time I'm at 196Kbps I don't really feel a big difference anymore. Bitrates below that, though... awful. Some of that is likely the lower bitrates not being "well encoded", but still.
 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 


16 minutes ago, cmndr said:

If you want an example of how diminishing returns kick in -

By the time I'm at 196Kbps I don't really feel a big difference anymore. Bit rates below that though... awful. Some is likely that the lower bitrates aren't "well encoded" but still.

To be fair though, the highest-quality audio stream Youtube offers with the video is 160kbps Opus, so 192kbps vs 320kbps MP3 isn't quite a fair comparison.


28 minutes ago, Nimrodor said:

To be fair though the highest quality audio stream Youtube offers with the video is 160kbps opus, so the 192kbps vs 320kbps mp3 isn't quite a fair comparison.

4K 60Hz purportedly increases this to 320Kbps (not sure WHY, since audio should be in its own container).
I am admittedly skeptical, but it is very easy to hear that something is wrong at lower bitrates in that sample.

 

I'm at the point where, practically speaking, I've concluded it doesn't matter THAT much in most instances. After I reinstalled my OS I forgot to change Spotify back to 320Kbps, and it was on whatever its default was for ages. I even find myself listening to things (mostly JRPG video game music) on Youtube because I sometimes prefer those arrangements to what I find on Spotify.



11 minutes ago, cmndr said:

4K 60Hz purportedly increases this to 320Kbps.

It does not, and never did. It's a misconception which stems from Youtube download sites offering 320kbps mp3 downloads. Youtube used to do up to 256kbps AAC with higher video resolutions, but this is basically unavailable nowadays outside of Youtube music. Surround and 3D videos can do higher bitrates, but since the information is spread out over more channels, it's not necessarily an increase in audio quality.

 


 

Loading the video in 4k uses opus 251, which is variable bitrate, 160kbps max. This is the highest audio quality available on most Youtube videos nowadays.


I used 'purportedly' very deliberately.
I suspect the people at Youtube AB-tested the heck out of things and did all the cost-benefit analyses in the world to come up with ~160Kbps as their cutoff figure.

 

A better test might be this - https://www.npr.org/sections/therecord/2015/06/02/411473508/how-well-can-you-hear-audio-quality

I still consider the lowest of the settings "good enough". When I took it, I could tell the difference between the lowest and higher bitrate options 5/6 times (that last time I got lazy and went fast), but I couldn't reliably tell between high bitrate and lossless. Maybe I could if I used my HD800s instead of speakers (my setup is compromised: desk-bound, off-axis tweeters, because it's a hybrid work space) and if I hadn't rolled off the highs to avoid listening fatigue.



50 minutes ago, cmndr said:

Frequency - going from 44KHz to something like 96KHz on the encoding... this doesn't matter. Human hearing is usually cited as 20Hz to 20KHz (and by the time you're a teenager, you probably aren't hearing much above 18KHz, you loose high frequency hearing with age). You can perfectly encode a 20KHz frequency by taking 2 samples per Hz. Also when sound is mastered, there's often a high frequency rolloff at around 20KHz so the sound information is lost. This is a case where the extra frequency information only really benefits mice. 

The reduction of frequency range is a gradual effect, not all or nothing. You lose sensitivity, but that doesn't mean you have no sensitivity at all. As a parallel, I've experimented with optical filters that block visible light but pass near infra-red (beyond 720nm). You can see it! And yes, I did check it wasn't residual visible light getting through; the observed intensities match near-IR levels, not visible red.

 

According to sampling theory, the maximum frequency you can represent is half the sampling rate, e.g. the 44.1 kHz sampling rate of CD quality means the highest frequency you can represent is about 22 kHz. If you really did want to represent 22 kHz signals at quality, you'd ideally want to go somewhat above that. Note there are special cases where this doesn't hold, but they don't apply to this type of sound reproduction.

 

Filters are needed to prevent aliasing, where signal frequencies above half the sampling rate get mapped back below it. It's hard to build a perfect filter, and practical implementations need to roll off earlier to have a better response. Sampling at much higher rates makes the filtering a lot easier, as it can be much less aggressive.
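A quick way to see where an unfiltered out-of-band tone would land after sampling (a sketch of the standard fold-around-fs/2 behaviour, not a full resampler):

```python
def alias_frequency(f: float, fs: float) -> float:
    """Frequency (Hz) at which a pure tone at f appears after sampling at fs
    with no anti-aliasing filter: aliases repeat every fs and fold around fs/2."""
    f = f % fs                 # aliases repeat every multiple of the sample rate
    return min(f, fs - f)      # fold the upper half of the band back down

# A 25 kHz tone sampled at 44.1 kHz with no filter folds down to 19.1 kHz,
# i.e. right into the audible band - which is why the filter matters.
print(alias_frequency(25_000, 44_100))
```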

 

50 minutes ago, cmndr said:

16 bits generally allows someone to hear the difference within a range of 96dB. With dithering (so slight distortion) this can go up to around 120dB.

Higher bit depth can and does help even if you don't use the theoretical dynamic range. Each bit represents one step in level. More bits means finer steps. The error between step size and ideal value is quantisation noise, and it gets relatively worse as signals get quieter. Dithering is adding noise to help control that noise, as you can influence how that noise sounds - noise shaping.
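A rough stdlib-only sketch of the above (the 997 Hz full-scale tone, 48 kHz rate, and simple TPDF dither are my own arbitrary choices): each bit of depth buys roughly 6 dB of signal-to-noise ratio, and dither costs a few dB of SNR in exchange for turning correlated quantisation error into benign noise.

```python
import math
import random

def quantise(x: float, bits: int) -> float:
    """Round x (nominally in -1..1) to the nearest level at the given bit depth."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

def snr_db(bits: int, n: int = 48_000, dither: bool = False) -> float:
    """SNR of a full-scale 997 Hz sine at 48 kHz, quantised to `bits` bits.
    Optionally applies +/-1 LSB triangular (TPDF) dither before quantising."""
    lsb = 1.0 / 2 ** (bits - 1)
    sig_pow = err_pow = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / 48_000)
        d = (random.random() - random.random()) * lsb if dither else 0.0
        err = quantise(x + d, bits) - x
        sig_pow += x * x
        err_pow += err * err
    return 10 * math.log10(sig_pow / err_pow)

# Undithered 16-bit lands near the 6.02*16 + 1.76 = ~98 dB rule of thumb;
# with dither the measured SNR drops a few dB, but the error is now noise-like.
```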



@porina - all valid points. As a broad theme I'd want to emphasize the question should usually be "is it appreciably better?" as opposed to "is it perfect?"

Hearing loss at a given frequency isn't a perfect binary. My (potentially flawed) understanding is that the hair cells in the ear sustain damage and then don't activate as readily; as you age, you need a much stronger signal to hear a given frequency above the noise floor. When doing A/B tests for frequency, I have to use my headphones rather than my speakers to detect the higher ranges (even though I can measure those frequencies from the speakers with a microphone), and I generally have to turn the volume up.

Yeah, higher bit depths do increase the number of steps, though practically speaking the improvement is biggest at the lowest amplitudes, essentially the area where hearing is least sensitive.

The general point I was making wrt dithering was that it's "good enough", and yeah, dithering does manifest as "noise" (artifacts might be a more accurate term?), but it's generally "smooth" enough to give an even transition.

https://en.wikipedia.org/wiki/Ordered_dithering

 

At some extremes, dithering (the visual version is sometimes called stippling) is actually used as an intentional artistic effect.

Beyond the pure signal itself, by the time you've passed a signal through an amp and speakers/headphones to your ears and then to your brain a lot of "smoothing" ends up occurring. The visual analog I'd put out is squinting at pixel art and seeing it smoothed out.

Going back to that top point, the digital encoding (assuming it's not bad) matters so little that other things dominate in the enjoyment factor (what you're actually listening to, any EQ, the speakers/headphones, etc.)
 



Some artists/producers will also provide a better master to higher end services like Tidal so that the "stream quality" becomes irrelevant. Same thing that's been done to Vinyl vs CDs for decades.

 

Edit: just seen that someone already mentioned this/ GG no re


7 hours ago, ShearMe said:

Some artists/producers will also provide a better master to higher end services like Tidal so that the "stream quality" becomes irrelevant. Same thing that's been done to Vinyl vs CDs for decades.

 

Edit: just seen that someone already mentioned this/ GG no re

Less of a feature than you'd think
https://www.audiosciencereview.com/forum/index.php?threads/mqa-deep-dive-i-published-music-on-tidal-to-test-mqa.22549/

 

 

 

Now, there are some arguments that "Tidal gets better remasters" than some other places, and I'm open to hearing them, but the differences between formats are pretty minimal. I will absolutely say that how well something is mastered matters, though.



On 6/2/2022 at 11:39 AM, e22big said:

I was a bit bored, so I decided to check out some hi-res music (96 kHz) and subscribed to Tidal to get some more samples, then compared the same section of a song to what streams on Spotify Premium.

 

It sounds audibly different. Not sure if that's objectively better, but it's very clearly different, especially in quieter songs (background instruments, for example, sound a lot clearer in the Tidal track and the offline file compared to the Spotify stream).

 

Is there any explanation for this? I have fairly good ears, but I don't have superpowers. Was the song on Spotify mastered differently, or was it just the compression algorithm?

Spotify uses Ogg Vorbis (unless they've swapped since I moved to Apple Music), which isn't a very good codec; that's probably why.


On 6/2/2022 at 2:21 PM, Origami Cactus said:

You have to keep in mind that some songs get remastered for Tidal, so they take the song files and make it more 3D, take more advantage of higher res audio with different effects etc.

If you take the same exact mastering of a song, and compare MP3 320kbps, and FLAC, I basically guarantee you won't notice a difference, even on good headphones.

This is the biggest difference between these "hi-res" songs and the standard recording.
Many songs are poorly mastered.
Listen to FLAC, DSD256 or DSD512, etc. They "may" be better, but honestly, by far the bigger difference is the mastering.

Even my DAC (which I've had for years) maxes out at 32-bit/384kHz, and you aren't going to notice a difference unless the song has been mastered correctly.
 


17 hours ago, cmndr said:

Less of a feature than you'd think
https://www.audiosciencereview.com/forum/index.php?threads/mqa-deep-dive-i-published-music-on-tidal-to-test-mqa.22549/

 

Now, there are some arguments for "Tidal gets better remasters" than some other places and I'm open to hearing it but the differences in formats are pretty minimal. I will absolutely say how well something is mastered matters though.

I'm a bit confused... what of my statement does your "less of a feature" apply to?


4 hours ago, ShearMe said:

I'm a bit confused... what of my statement does your "less of a feature" apply to?

Tidal promotes MQA as a "huge step up" in quality. There's no real practical difference when you go in and check the FFTs, and there are reasons to believe it's actually inferior as a format.

Tidal also notes that they get artists and studios to remaster some of their work for them. THIS has tangible benefits, though well mastered music ought to work well on any format with sufficiently high fidelity.


