
Apple announces lossless Apple Music is coming in June at no added cost

1 hour ago, avg123 said:

but the 40kHz sampling rate comes from the Nyquist–Shannon sampling theorem, which says the sampling frequency must be greater than twice the maximum frequency. Since the upper limit of human hearing is considered 20kHz, that is where the 40kHz comes from. To give some headroom, 44.1kHz was chosen.

the "twice the maximum frequency" is arbitrary.

he could just as easily have said three times or four times the maximum frequency. he decided on twice because sampling at twice is accurate enough for most human hearing. but the higher the sampling frequency, the higher the accuracy of the reconstructed waveform. there is no end to it: 192kHz is better than 44.1kHz, 384kHz is better than 192kHz, and so on.

and note he says it must be greater than twice; he never said there is no need for greater. twice is the minimum. people keep bringing it up in arguments when they try to say 44.1kHz is more than enough.

44.1kHz is considered the minimum. more is definitely better

Hold up. I think you have misunderstood the theorem. Twice the maximum frequency is not "the minimum that is accurate enough for human hearing".

A higher sampling rate does not mean you capture a signal more accurately. It just means you can capture higher frequencies. 

 

If you have a 15kHz tone playing, then any sample rate above 30kHz captures it flawlessly. A 40kHz sample rate would not capture the 15kHz tone any more accurately than a 31kHz sample rate would.

 

Neither Nyquist nor Shannon ever talked about "greater than twice the maximum frequency" or "x2 is the minimum".

Here is the original Shannon theorem:

Quote

Theorem 1: If a function f(t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/(2W) seconds apart.

 

It doesn't say "you need a minimum of 2x to capture a signal, but higher is better".

It says that by sampling at 2x the highest frequency, you completely capture the signal without any info being lost.
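That "completely determined" claim can be checked numerically. Below is a minimal sketch (my own illustration, not from this thread) that samples a 15 kHz tone at 44.1 kHz and rebuilds it on a denser grid with the Whittaker–Shannon interpolation formula; the only error comes from truncating the infinite sum at the edges of a finite signal, so the middle is checked.

```python
# Sketch: exact band-limited reconstruction per the sampling theorem.
# A 15 kHz tone sampled at 44.1 kHz is rebuilt on a 4x denser grid with
# the Whittaker-Shannon formula x(t) = sum_k x[k] * sinc(fs*t - k).
# Truncating the sum causes edge error, so only the middle half is checked.
import numpy as np

fs = 44_100
f0 = 15_000
k = np.arange(1000)                       # sample indices
x = np.cos(2 * np.pi * f0 * k / fs)       # the captured samples

t = np.arange(0, 999, 0.25) / fs          # 4x denser time grid (seconds)
recon = np.sinc(fs * t[:, None] - k) @ x  # interpolation formula
truth = np.cos(2 * np.pi * f0 * t)

mid = slice(len(t) // 4, 3 * len(t) // 4)
err = np.max(np.abs(recon[mid] - truth[mid]))
print(f"max reconstruction error (middle half): {err:.1e}")  # small
```

The residual error shrinks as the signal gets longer; it comes from using finitely many samples, not from the sample rate itself.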

 

 

Are you perhaps thinking of a digital waveform as a stairstep? And that higher sampling rate results in a smoother curve? Like this:

[Image: LG "192 kHz sampling" marketing graphic showing audio as stairsteps that get smoother at higher sample rates]

 

It's a common misconception. The picture above is marketing BS. It's not how audio works*.

 

*(unless you're using a zero-order hold DAC, which is not used in any device meant for decent-fidelity audio for humans. We use delta-sigma modulation for that, and it does not produce stairsteps regardless of the sample rate or bit depth)

Link to comment
Share on other sites

Link to post
Share on other sites

3 hours ago, Kisai said:

Does it matter? The Dell monitor is still the DAC.

My AMD GPU is the audio device; it can only send the formats it supports. My Onkyo is the DAC, so what matters is the combination of what my audio device can send and what my DAC can receive; there could be a disparity between them. The devices tell each other what they support, and that result is what you see in the pictures you posted. However, to support what you actually said you'd need to show a limitation of the Realtek device, not what your monitor supports. And I still don't think the hardware device sending your monitor an audio signal is your Realtek, simply because you are looking at a DisplayPort device, which means it essentially HAS to be your GPU.

 

Yes, my monitor has a DAC, but it's still not the audio device listed under Sound in Control Panel; as the picture I showed you shows, that is the AMD High Definition Audio device.

 

However, neither of these has anything to do with the original point you made, which is what I addressed.

 

https://linustechtips.com/topic/1338993-apple-announces-lossless-apple-music-is-coming-in-june-at-no-added-cost/?do=findComment&comment=14740020

 

I still had no idea what audio device hardware you were looking at when evaluating what you said. As far as I was concerned, your Realtek audio device was not the audio device for your Asus monitor, so what you showed did not support what you said; it only shows the limitation of your monitor's audio and that's it. Turns out this was the case, since you have now supplied the picture showing it.

 

3 hours ago, Kisai said:

The Intel iGPU is connected to the cheap monitor. The NVIDIA GPU is connected to the 4K via DP? Are you trying to suggest that a monitor can be driven by something other than a GPU here?

 

https://nvidia.custhelp.com/app/answers/detail/a_id/3646/~/nvidia-hd-audio-driver-supports-a-limited-number-of-audio-formats.

 

Yet multiple posts out there, even on NVIDIA's forum show that they only get 16-bit 48K audio over DP.

No, I was suggesting that your Realtek audio device was not the device sending audio to your monitor, so what you said and what you posted have nothing to do with each other and don't validate what you said.

 

You have correctly identified a limitation in your Nvidia hardware and your monitor, but that does not show that Realtek hardware is similarly limited. You don't say a Honda Civic has a slow 0-60 time and then prove it with a Toyota Corolla.

 

Quote

The problem occurs when the system reports invalid data for the audio formats supported by the display. When this happens, the NVIDIA HD audio driver defaults to the following format

The issue is simply your monitor, that's it.


my laptop seems to support 24-bit 192kHz. i don't know if that is accurate or not since the laptop is old. also it has microsoft drivers installed

how do i know if it really supports 24-bit 192kHz or not?


I found a great website to test whether you can tell the difference between lossy and lossless audio or not.

Quote

 

This test checks whether you can tell the difference between a losslessly compressed and a lossily compressed version of a song sample, without asking you to choose which is which. It does this using an ABX test.

You will be presented with two reference samples (A and B) and a target sample (X). You have to decide whether sample X matches sample A or sample B. You will be administered multiple trials for each of the five tracks used in the Tidal test.

The accuracy of the test increases markedly as the number of trials increases. Although 5 trials is sufficient to estimate whether you can tell the difference between lossy and lossless, working out which tracks you can tell the difference on will require 20 trials per sample.

 

In this test, you will be presented with three samples: A, B and X. A and B are consistent, one lossless and one lossy. Each trial, X is randomly set to either A or B. You have to work out which one it is.

 

ABX High Fidelity Test - Spotify HQ edition (digitalfeed.net)
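On the "more trials" point from the quote: under pure guessing, each ABX trial is a fair coin flip, so the chance of a given score arising by luck is a binomial tail probability. A quick sketch (the trial counts are just the ones mentioned above):

```python
# Probability of scoring at least k correct out of n ABX trials by pure
# guessing (p = 0.5 per trial). A small tail probability is evidence
# that the listener really hears a difference.
from math import comb

def guess_p_value(k: int, n: int) -> float:
    """P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(guess_p_value(5, 5))    # perfect 5/5: 1/32 ~ 0.031, could be luck
print(guess_p_value(16, 20))  # 16/20: well under 0.01, stronger evidence
```

With only 5 trials, even a perfect score has about a 3% chance of being pure luck, which is why 20 trials per track are needed for per-track conclusions.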


4 hours ago, leadeater said:

 

 

No, I was suggesting that your Realtek audio device was not the device sending audio to your monitor, so what you said and what you posted have nothing to do with each other and don't validate what you said.

 

That would require turning it on; it's not ON on the desktop. Most laptops I've used simply don't let you configure anything. That goes back to the original point: most of the time, that fancy high-end marketing for the motherboard is completely meaningless, as they don't even have a SPDIF output, and the HDMI output doesn't use it.

 

Pretty much every computer out there is using Realtek parts for the audio because they're cheap. That says nothing about their quality, and even when they market 24-bit/96kHz:

https://www.realtek.com/en/products/computer-peripheral-ics/item/alc892

 

  • DACs with 95dB SNR (A-weighting), ADCs with 90dB SNR (A-weighting)
  • All DACs support 44.1k/48k/96k/192kHz sample rates

Compare that to the 105 or 109 dB the X-Fi has.

 

 

 


... I have no idea what you audiophiles have been talking about for the last 3 pages. But would someone recommend/link me the best possible listening device for the latest iPhone under this lossless "upgrade"? Not a whole page of products, but maybe one recommendation each for over-ear and in-ear.


1 hour ago, lostcattears said:

... I have no idea what you audiophiles have been talking about for the last 3 pages. But would someone recommend/link me the best possible listening device for the latest iPhone under this lossless "upgrade"? Not a whole page of products, but maybe one recommendation each for over-ear and in-ear.

If you can wait, maybe hold out for an Apple accessory. If they are offering this then I would have to guess they have something in the works planned to utilize it; Apple doesn't generally release something like this without their own plans to use it.


6 hours ago, Kisai said:

That would require turning it on; it's not ON on the desktop. Most laptops I've used simply don't let you configure anything. That goes back to the original point: most of the time, that fancy high-end marketing for the motherboard is completely meaningless, as they don't even have a SPDIF output, and the HDMI output doesn't use it.

 

Pretty much every computer out there is using Realtek parts for the audio because they're cheap. That says nothing about their quality, and even when they market 24-bit/96kHz:

https://www.realtek.com/en/products/computer-peripheral-ics/item/alc892

 

  • DACs with 95dB SNR (A-weighting), ADCs with 90dB SNR (A-weighting)
  • All DACs support 44.1k/48k/96k/192kHz sample rates

Compare that to the 105 or 109 dB the X-Fi has.

Anything even remotely new would be using the ALC898 (110dB/104dB SNR) or newer, e.g. the ALC1220-VB (120dB SNR). But SNR is only a small part of a wider picture.
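To put those SNR numbers in perspective: for an ideal converter, SNR and bit depth are tied together by SNR ≈ 6.02·N + 1.76 dB, so a spec-sheet SNR can be translated into "effective bits". A rough sketch (this is the standard ideal-quantization formula; real converters have additional noise sources, so treat the results as upper bounds):

```python
# Rough conversion between SNR (dB) and effective number of bits (ENOB),
# using the ideal quantization formula SNR = 6.02*N + 1.76 dB.
def enob(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

for name, snr in [("ALC892 DAC", 95.0), ("X-Fi", 109.0),
                  ("ALC1220-VB", 120.0), ("ideal 24-bit", 6.02 * 24 + 1.76)]:
    print(f"{name}: {snr:.0f} dB SNR is roughly {enob(snr):.1f} bits")
```

By this measure even a 120 dB part delivers under 20 effective bits, which lines up with the later comment in this thread that no converters do much better than about 20 bits of dynamic range.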

 

Like I agree that all onboard audio is generally garbage, but Realtek devices aren't actually as limited as you are saying, not in audio format/encoding support. They are bad in other ways. However, if you are keeping everything digital and using an external audio device/DAC, then it largely doesn't matter how you get it there, be it your GPU's HDMI/DP, SPDIF, USB, or some other way, as long as it's purely digital end-to-end to that external device. In that scenario all that is being used is the encoding, so the DAC specs are not applicable at all, and neither are any of the other parts of the onboard audio implementation, as they simply are not being used either.

 

Edit:

Also, onboard audio implementations could already be well on their way to being better than your older X-Fi, or already are. Many higher-end boards pair the best Realtek codecs with an ESS ES9018K2M DAC and higher-end op-amps. I still wouldn't call the overall audio quality of these "high end", but I wouldn't say that of your X-Fi either. Not trying to be a downer, but all of these are on the lower end of the spectrum of actually good.

 

There's a good reason why I still use a real Blu-ray player (Oppo 105D) with real physical discs in my home theatre: everything in that room can support it, i.e. Martin Logan speakers and Plinius amps.


1 hour ago, leadeater said:

There's a good reason why I still use a real Blu-ray player (Oppo 105D) with real physical discs in my home theatre: everything in that room can support it, i.e. Martin Logan speakers and Plinius amps.

Nice gear..... ♥️


3 hours ago, lostcattears said:

... I have no idea what you audiophiles have been talking about for the last 3 pages. But would someone recommend/link me the best possible listening device for the latest iPhone under this lossless "upgrade"? Not a whole page of products, but maybe one recommendation each for over-ear and in-ear.

fiio btr5/btr3k, bluetooth hifi receiver

kz zsn pro, in ear monitors

sennheiser hd 6xx/650 58x/580, over-ear open- and closed-back headphones

 

this should be a good starting point



21 hours ago, avg123 said:

Recording studios record at far higher quality than CD quality. Professional AD/DA converters used in studios support 24-bit at 192kHz.

Lynx audio and Burl Audio are some popular brands used in professional studios.

They can cost up to $6000.

 

This is a separate AD converter

B2 Bomber ADC | Burl Audio

 

This is an AD/DA converter

Aurora(n) - Products - Lynx Studio Technology, Inc.

B16 Mothership | Burl Audio

 

They all support 24-bit at 192kHz.

 

The recorded audio is actually processed at an even higher 64-bit, and the end product is then converted to 16-bit or 24-bit, whichever is required.

 

Yes, I am aware of that. Hell, I've owned a Lynx L22 for years and still use it in my lab system (since it needs a PCI port). That said, just because the converters are capable of running at 192 kHz sample rates doesn't mean that people always record at 192k.

 

Regardless, there's a lot of stuff that would require remastering, and a lot of stuff that wasn't recorded at anything higher than 48k or 96k.

 

For a lot of music though, the dynamic range of 16/44.1 isn't a limitation at all. 

Just to keep things in perspective, the usual number given is about 13-14 bits of dynamic range for magnetic tape (obviously that depends on the machine, the configuration, the tape, etc...), and you might get 35 kHz out of a multitrack machine.

 

So by the numbers, a decent 24/96 converter will beat a Studer A827 in every way except sexiness, and a lot of studios back in the day could never have hoped to afford something like that. Back in 1975, none of the machines that existed were able to come close to that performance.

 

I think we can all agree that there are some fantastic records from the analog era. The bigger issue (and why this is good to see) is compression algorithms. 

 

 

Where 24/192 is amazing is for measurements. In fact, converters like the AK5394 (sadly discontinued), the AK5397 and the PCM4220 have been the building blocks for a lot of audio analyzers. 

 

 


As I've mentioned before, the limitation these days is usually not the converter chips, it's in the analog domain or the result of software issues (compression algorithms, etc). I won't touch on the latter since I'm not an SE.

 

192k sample rates should (theoretically) allow for up to 96 kHz frequencies to be reproduced. Aside from being above our hearing range, that's well outside the domain of most power amps and headphone amps. Some can, but a lot of them intentionally roll off at 60k or so.

 

Power amps that are flat to 100 kHz + are often problematic. Some of them break into parasitic oscillation as components age (and blow up), others will blow up if you connect/disconnect the inputs while they're powered up.

 

As I mentioned earlier, I am not aware of any DACs or ADCs that can do much better than about 20 bits of dynamic range.


23 hours ago, LAwLz said:

I'd also like to add that this thread seems full of people who think they know how audio works, but in reality are just making a bunch of (incorrect) assumptions without having a firm understanding of it.

While it was many moons ago, my studies in school involved digital processing for a decent amount of time... which included programming a lightweight form of MP3.

 

To clear a few things up though: sample rate at the ADC vs. the stored/sent format vs. the DAC should really be three different topics. In this case though, it's the stored digital format, so I still stand by what I said: a 48 kHz sample rate is likely more than anyone would need. 96 kHz starts becoming a waste of space, and I still think 192 kHz is pretty much pointless as a stored format.

 

Just as a quick summary of ADC/storage/DAC differences:

ADC - Sample rates above 48 kHz might matter. Analog filters can be quite sloppy; e.g. if one is designed to cut off around 22 kHz, it likely still lowers 20 kHz by a bit, 21 kHz by a significant amount, and 22 kHz and above almost entirely. The issue is that if you chose a 40 kHz sample rate to match roughly what most people can hear, a 20.5 kHz component the filter couldn't fully remove would present as a 19.5 kHz signal. IIRC 48 kHz was one of the chosen values because it allowed enough wiggle room. Once the signal is digital, you can remove frequencies above 20 kHz much more precisely.
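That folding is exact, not approximate: sampled at 40 kHz, a 20.5 kHz tone produces literally the same sample values as a 19.5 kHz tone. A small sketch using the frequencies from the example above:

```python
# Aliasing sketch: at fs = 40 kHz, a 20.5 kHz cosine is indistinguishable
# from a 19.5 kHz cosine, because 20.5 kHz folds down to fs - 20.5k = 19.5 kHz.
import numpy as np

fs = 40_000
n = np.arange(1000)                           # sample indices
above = np.cos(2 * np.pi * 20_500 * n / fs)   # above Nyquist (fs/2 = 20 kHz)
alias = np.cos(2 * np.pi * 19_500 * n / fs)   # its folded image

print(np.max(np.abs(above - alias)))  # ~0 (floating-point noise only)
```

This is why the sloppy analog filter matters: once those samples are taken, no amount of digital processing can tell the two tones apart.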

 

Stored Data - Really, if you assume you can hear only between 20 Hz and 20 kHz, then a 48 kHz SR should be enough (I know it does get more complicated). I still stand by what I said: going from 96 kHz to 192 kHz, I don't really see a point (even 48 kHz seems fine to me).

 

A piccolo's highest note is only about 4.2 kHz, so for the vast majority of the frequencies you actually hear, you are getting over 10x the sample rate.

 

DAC - While the input SR might be 48 kHz, the electronics that output the signal have to operate at a certain frequency as well (i.e. the digital-to-analog conversion part)... so this is where it can be important to have it higher. But a 48 kHz digital signal can be converted to a 96 kHz digital signal (and 96 kHz used for the conversion to analog). Maybe I am overlooking something, but just saying; I'm not approaching this from a novice's eye.



3 hours ago, H713 said:

Just to keep things in perspective, the usual number given is about 13-14 bits of dynamic range for magnetic tape (obviously that depends on the machine, the configuration, the tape, etc...), and you might get 35 kHz out of a multitrack machine.

I think you're being very generous with that bit depth. 

From what I've heard, cassettes were typically only able to achieve about 9 bits of equivalent dynamic range in perfect condition, and if it was some mixtape a friend had made on a cassette deck, you'd be lucky if it even reached 6 bits.

The 13-14 bit figure applies only if you had some of the best recording equipment available near the end of the analog era and applied noise reduction. Then digital tape killed it.


“Enjoy” may be random. I’m remembering a story about a 90’s song where, when it went from vinyl to CD, there was an issue with a toilet flush being audible in the background. Sound mixing is done with the end product in mind. Change the end product and the mix can be off.



11 hours ago, lostcattears said:

... I have no idea what you audiophiles have been talking about for the last 3 pages. But would someone recommend/link me the best possible listening device for the latest iPhone under this lossless "upgrade"? Not a whole page of products, but maybe one recommendation each for over-ear and in-ear.

Don't bother with lossless, people don't hear the difference. 

 

And there have been double-blind tests done on this; since they show that people basically can't hear any difference between an MP3 at 192 kbps (44.1 kHz) and higher-end audio, the audiophile scene has started to say that the tests don't count. (There was a study where the participants even preferred a classical audio file as a 128 kbps MP3 over the higher-end ones.)

 

And don't even get me started on the "vinyl is better at audio reproduction" people. 

 

There absolutely is a point of not getting the cheapest Hi-Fi/audio equipment, but diminishing returns kick in fairly quickly. 

 

Also, a thing I have gathered from the audiophile community, pretty obvious in the vinyl and tube-amplifier discussions, is that in those cases the interest isn't really in reproducing the original audio as well as possible; rather, the "errors" some kinds of equipment introduce are, to many people, subjectively a preferred sound. And that is OK, but please stop saying that the sound in those cases is objectively better.


I hope this will make Spotify release their lossless audio sooner. I can't wait to switch back to them. I'm using Amazon Music now because of the higher audio quality, but the software experience is just so much worse.



31 minutes ago, Spindel said:

Don't bother with lossless, people don't hear the difference. 

 

And there have been double-blind tests done on this; since they show that people basically can't hear any difference between an MP3 at 192 kbps (44.1 kHz) and higher-end audio, the audiophile scene has started to say that the tests don't count. (There was a study where the participants even preferred a classical audio file as a 128 kbps MP3 over the higher-end ones.)

 

MP3 compression is terrible, and the better your listening equipment and hearing, the worse it sounds. I remember when MP3s became the de facto music-sharing format in the late 90's, and every single one of them had warbling noises from CD rips, all due to using the same crappy reference encoder. Some people were like, "320kbps, basically lossless then"; no, no. I can tell immediately when I'm listening to an MP3 because the stereo image has been collapsed, it warbles, the high and low frequencies are destroyed because "nobody can hear it", and there are other artifacts. Ogg Vorbis continued this trend.

 

Unfortunately lossy music always has artifacts to it, almost entirely due to the encoder stripping bits at a filtering stage, which might work for loud audio, but destroys things like podcasts and classical music.

 

So I'm pretty quick to call BS when people go "you can't hear the difference", because not everything is about frequency range or volume. Settle for rubbish headphones or speakers and of course you're not going to hear it. I also hear the difference when people use terrible microphones, because they quite literally sound like they're on an 80's analog telephone.

 

Yeti X - 24-bit, 48kHz or 44.1kHz stereo are the only options.

Jabra call center headset - 16-bit 16kHz mono (tape-recorder quality).

 

Does everyone notice the difference between lossy audio and lossless audio? No, and those people who can't tell, shouldn't be making decisions about it based on their personal experience. You could probably do a double-blind test and find that everyone who has never listened to loud music/been to a concert/club can actually hear the difference.


Just now, Kisai said:

MP3 compression is terrible, and the better your listening equipment and hearing, the worse it sounds. I remember when MP3s became the de facto music-sharing format in the late 90's, and every single one of them had warbling noises from CD rips, all due to using the same crappy reference encoder. Some people were like, "320kbps, basically lossless then"; no, no. I can tell immediately when I'm listening to an MP3 because the stereo image has been collapsed, it warbles, the high and low frequencies are destroyed because "nobody can hear it", and there are other artifacts. Ogg Vorbis continued this trend.

 

Unfortunately lossy music always has artifacts to it, almost entirely due to the encoder stripping bits at a filtering stage, which might work for loud audio, but destroys things like podcasts and classical music.

 

So I'm pretty quick to call BS when people go "you can't hear the difference", because not everything is about frequency range or volume. Settle for rubbish headphones or speakers and of course you're not going to hear it. I also hear the difference when people use terrible microphones, because they quite literally sound like they're on an 80's analog telephone.

 

Yeti X - 24-bit, 48kHz or 44.1kHz stereo are the only options.

Jabra call center headset - 16-bit 16kHz mono (tape-recorder quality).

 

Does everyone notice the difference between lossy audio and lossless audio? No, and those people who can't tell, shouldn't be making decisions about it based on their personal experience. You could probably do a double-blind test and find that everyone who has never listened to loud music/been to a concert/club can actually hear the difference.

People exaggerate how bad MP3 is and praise lossless too much, even on stupidly expensive hardware. A well-encoded MP3 from a lossless-quality source will sound so good that I can safely bet 95% of people won't be able to tell the difference reliably in an AB test. The problem with MP3s is that most of what people get from random sources is multi-re-encoded stuff, so the file claims a 320kbps bitrate but was re-encoded from 128kbps or something.
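One common heuristic for catching that kind of transcode (my own aside, not something claimed above): encoders at ~128 kbps typically discard content above roughly 16 kHz, and re-encoding at a higher bitrate doesn't bring it back, so the spectrum keeps the telltale cutoff whatever bitrate the file header claims. A sketch with synthetic noise standing in for real audio:

```python
# Heuristic transcode check: compare spectral energy above ~16 kHz to total.
# Genuine full-bandwidth audio has some HF content; a 128 kbps transcode
# usually shows an abrupt cutoff there, whatever bitrate the file claims.
import numpy as np

fs = 44_100
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)          # 1 s of full-band "audio"

# Simulate a low-bitrate encode with a brickwall 16 kHz lowpass via FFT.
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1 / fs)
spec_cut = spec.copy()
spec_cut[freqs > 16_000] = 0
transcoded = np.fft.irfft(spec_cut, len(noise))

def hf_ratio(x):
    """Fraction of spectral energy above 16 kHz."""
    s = np.abs(np.fft.rfft(x)) ** 2
    return s[freqs > 16_000].sum() / s.sum()

print(f"original : {hf_ratio(noise):.3f}")       # substantial HF energy
print(f"transcode: {hf_ratio(transcoded):.3f}")  # essentially zero
```

Real files need gentler thresholds (quiet or dark recordings have little HF energy to begin with), but spectrogram viewers rely on the same idea.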


26 minutes ago, RejZoR said:

People exaggerate how bad MP3 is and praise lossless too much, even on stupidly expensive hardware. A well-encoded MP3 from a lossless-quality source will sound so good that I can safely bet 95% of people won't be able to tell the difference reliably in an AB test. The problem with MP3s is that most of what people get from random sources is multi-re-encoded stuff, so the file claims a 320kbps bitrate but was re-encoded from 128kbps or something.

"Good enough" really does need to be the point where everyone stops; that point might be slightly different for different people, but.... when you're buying 0.5m pure-silver RCA cables at like $2k you are WAY past good enough lol.

 

MP3 audio can sound great, FLAC too, or w/e, so just go with whatever you want. If you think the FLAC version of something actually sounds better, then all the power to you; just don't get ripped off in the process, and this is exactly where Apple is doing a good job. I looked at the FLAC market a few years ago; hell nah, not paying that much per song, thank you very much.


28 minutes ago, leadeater said:

"Good enough" really does need to be the point where everyone stops; that point might be slightly different for different people, but.... when you're buying 0.5m pure-silver RCA cables at like $2k you are WAY past good enough lol.

 

MP3 audio can sound great, FLAC too, or w/e, so just go with whatever you want. If you think the FLAC version of something actually sounds better, then all the power to you; just don't get ripped off in the process, and this is exactly where Apple is doing a good job. I looked at the FLAC market a few years ago; hell nah, not paying that much per song, thank you very much.

There have been so many attempts, and yet MP3 is still the most widely supported format. Same issue as with JPEG photos: they are both inferior to more modern codecs, yet nothing comes even remotely close to their level of support. What good is it if Apple Lossless or FLAC is superior when it just isn't supported by much of anything?

 

I'd love to use HEIC photos as standard, but since the only thing that supports it is my iPhone (and preview-only on Windows), it's useless. Same issue with AAC or OGG or whatever audio format, or photo formats: no matter how superior or inferior they are, how widely supported they are is the only factor that matters. Same reason SMS is still around. It's garbage by all standards, but it's the only way to reliably send a text message to anyone without worrying whether they'll be able to see it.


"no additional cost"

 

until they release a "new and improved" version of airpods....with an "optional" cable (because they will have to)

 

Quote

Apple today announced that starting in June, Apple Music songs will be available to stream in Lossless and Hi-Resolution Lossless formats, but lossless audio won't be supported on the AirPods, AirPods Max, or AirPods Pro.

 

I find it funny that even their $500 AirPods Max, which CAN be wired, won't support it...



10 minutes ago, Arika S said:

"no additional cost"

 

until they release a "new and improved" version of airpods....with an "optional" cable (because they will have to)

 

 

I find it funny that even their $500 AirPods Max, which CAN be wired, won't support it...

It's funny that people mock Apple and praise Spotify and Amazon Music for including lossless/higher-quality music, yet everyone is trapped in the very same problem. Remember, everyone is using and shilling TWS gear. It's not just Apple: it's Samsung, Sony, HTC, Nokia, Oppo, OnePlus, Realme, Xiaomi, etc. Everyone has the exact same problem. Mocking and poking Apple for it just makes everyone look like they don't understand any of it. Inb4 I'm accused of being an Apple fanboy, but that's just the reality of the situation.

