"Malcolm Teas" wrote in message
om...
> This AAC format is actually MPEG 4, aka "MP4".
For what it's worth (not much, I know) I find the use of the contraction
"MP4" misleading. "MP3" is really MPEG Layer 3 compression, and using "MP4"
implies incorrectly that there's a "Layer 4" compression being used to
compress audio for AAC, WMA and related formats, when there's not. Also, no
one uses ".mp4" as a file extension, whereas ".mp3" is quite common.
Google agrees with me, showing 15 times as many instances of "MPEG4" and
"MPEG-4" as "MP4". I suppose the subtlety of the distinction is lost on
many people, but hey, what's Usenet for if not to make subtle distinctions?
> These formats are known as lossy compression methods. (Except for
> AIFF, which is raw data, and I don't know about WAV.)
WAV is a generic format in that it's actually a container for multiple audio
formats, each using its own codec. The WAV file header tells the software
what kind of encoding is actually used. But it is almost always completely
uncompressed PCM (by far the most common usage); compressed codecs such as
ADPCM exist for WAV but are rarely used. Beyond that, the only space-saving
method commonly used with WAV that I'm aware of is simply reducing the
sample rate or bit depth, which IMHO is more properly classified as
"downsampling" rather than "compression".
> They save space
> by throwing away information that either can be inferred, isn't
> necessary, or can be represented more compactly.
Actually, in the above statement only the "isn't necessary" applies to lossy
compression specifically. Compression techniques in general ALL rely on
encoding the information so that the original information can be inferred
from a more compact representation, whether lossy or not.
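
A toy example makes the point: run-length encoding (not something any of
these audio formats actually use, just about the simplest lossless scheme
there is) stores a compact representation from which the original is
inferred back exactly.

    def rle_encode(data):
        """Lossless: store each run as [value, count] instead of repeating it."""
        runs = []
        for b in data:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return runs

    def rle_decode(runs):
        """The original is fully inferred back from the compact form."""
        return bytes(b for b, n in runs for _ in range(n))

    data = b"\x00" * 500 + b"\x7f" * 3
    assert rle_decode(rle_encode(data)) == data  # nothing lost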
> There is some
> necessary, but small, loss in fidelity of the sound. However, this
> loss is small enough that unless you have top of the line equipment,
> very good hearing, and a trained ear, you won't miss it.
And the engineers working on lossy compression algorithms believe that
eventually, they will have mapped out human perceptual response well enough
that even with good hearing and a trained ear, you still won't miss the
information tossed out.
That's the whole point of how lossy algorithms like MP3 and MPEG4 work.
They identify portions of the audio signal that are not perceived by the
human ear anyway, and eliminate them. By eliminating some of the information
content of the signal, they reduce the amount of information that needs to
be compressed, which reduces the total size of the compressed signal.
Examples of things that are eliminated are frequencies considered outside
the range of hearing (or near the edge of the range of hearing), and
portions of the signal that are significantly quieter than other portions
and so aren't normally perceived anyway.
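
In Python with NumPy, a crude sketch of the principle might look like the
following. To be clear, this is nothing like the real MP3 psychoacoustic
model, which uses carefully tuned masking thresholds; the cutoff frequency
and amplitude floor here are made-up numbers.

    import numpy as np

    def toy_lossy(signal, rate, max_hz=16000, floor=0.01):
        """Toy illustration: transform to the frequency domain and
        discard components a listener is unlikely to perceive."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
        spectrum[freqs > max_hz] = 0                      # outside the hearing range
        quiet = np.abs(spectrum) < floor * np.abs(spectrum).max()
        spectrum[quiet] = 0                               # drowned out by louder parts
        print(f"kept {np.count_nonzero(spectrum)} of {len(spectrum)} frequency bins")
        return np.fft.irfft(spectrum, n=len(signal))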
Of course, they also allow a sliding scale of what gets tossed out. At the
lower bitrates, portions of the audio signal detectable by the human ear
also get tossed out. But prioritization is used to try to ensure that even
in those cases, it's still the least significant portions of the signal.
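
Continuing the same toy sketch, the sliding scale amounts to keeping only
the most significant frequency components, with the fraction kept standing
in for the bitrate (keep_fraction is purely illustrative):

    import numpy as np

    def toy_lossy_at_quality(signal, keep_fraction):
        """Lower "bitrate" = smaller keep_fraction: keep only the
        largest-magnitude bins, so the least significant parts go first."""
        spectrum = np.fft.rfft(signal)
        n_keep = max(1, int(len(spectrum) * keep_fraction))
        # Rank bins by magnitude and zero everything below the cutoff.
        cutoff = np.sort(np.abs(spectrum))[-n_keep]
        spectrum[np.abs(spectrum) < cutoff] = 0
        return np.fft.irfft(spectrum, n=len(signal))

    # keep_fraction=0.5 is barely distinguishable; 0.02 is obviously
    # degraded, much like the gap between 192Kbps and 32Kbps MP3.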
> In the usual use where there are normal amplifiers, speakers or
> headphones, background noise, etc. then even a trained ear can't
> really hear the difference.
Well, that really depends on the bitrate. It's certainly true that at
192Kbps and greater for MP3, and 128Kbps and greater for AAC and newer
codecs like WMA, the difference is nearly imperceptible. But I assure you
that even some random tone-deaf schmoe would be able to notice the loss in
quality when playing back 32Kbps MP3 (for example). At some point, it gets
so bad, anyone can tell.
Pete