The Nyquist sampling theorem is a cornerstone of analog-to-digital conversion. It says that to faithfully preserve an analog signal when converting to digital, you have to sample at least twice as fast as the highest frequency you want to capture; for audio, that means twice the highest frequency a human can hear. This is part of why 44.1 kHz is considered high-quality audio: even though the mic capturing the sound vibrates faster than that, sampling it about 44,000 times a second produces a signal that, to us, is indistinguishable from one with infinite resolution, since the bandwidth of our hearing peaks at about 20 kHz at best.
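To make that concrete, here’s a tiny numerical sketch (Python with NumPy; the tone frequencies are just illustrative picks, not anything from the photo): a tone below half the sampling rate comes through at its true frequency, while one above it folds back down to a different one.

```python
# A minimal sketch of the idea (not audio-grade code): sample a tone below
# the Nyquist limit and one above it at 44.1 kHz, then check which frequency
# actually shows up in the sampled signal.
import numpy as np

fs = 44_100          # sampling rate in Hz (CD audio)
n = np.arange(fs)    # one second of sample indices

def dominant_frequency(tone_hz):
    """Sample a pure tone at fs and return the strongest frequency bin."""
    samples = np.sin(2 * np.pi * tone_hz * n / fs)
    spectrum = np.abs(np.fft.rfft(samples))
    return np.argmax(spectrum)  # bin index == frequency in Hz for a 1 s window

print(dominant_frequency(5_000))   # 5000  -> below fs/2, captured faithfully
print(dominant_frequency(25_000))  # 19100 -> above fs/2, aliases to 44100 - 25000
```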
I’m no engineer, just a partially informed enthusiast. But this picture of the water moving somehow illustrates the Nyquist theorem to me: how perception of speed varies with distance, and how distance somehow makes things look clearer. The scanner blade samples at about 30 Hz across the horizon.
Scanned left to right, in about 20 seconds. The view from a floating pier across an undramatic patch of the Oslo fjord.
*edit: I swapped the direction of the scan in OP
Just a minor addendum: 44.1 kHz wasn’t really chosen as ideal for human hearing; it’s only barely double the highest audible frequency for a lot of humans. The rate was chosen for audio CDs because the data was recorded onto U-Matic video cassette tapes for duplication (if you play the tapes back on a TV, it looks like a black-and-white checkerboard whose squares flip rapidly and randomly), and 44.1 kHz was the most that could reliably fit on the tape. It worked well enough since most recorded sound is under 22 kHz, but it’s one reason some people say CD audio isn’t as good as a high-quality analog recording. A lot of non-CD digital audio, like the audio track in digital video or higher-resolution formats, prefers 48 kHz or a multiple of it, like 96 or 192 kHz.
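As I understand it, the exact figure falls out of the video geometry: the PCM adaptors stored 3 samples per usable video line, and the commonly cited line counts per field give the same product for both NTSC and PAL. A quick sanity check (the line counts are the usually quoted figures, worth double-checking):

```python
# Commonly cited derivation of 44.1 kHz from the video formats used by
# early PCM adaptors; line counts are the usually quoted figures.
samples_per_line = 3
ntsc = samples_per_line * 245 * 60   # 245 lines per field * 60 fields/s
pal = samples_per_line * 294 * 50    # 294 lines per field * 50 fields/s

print(ntsc, pal)  # 44100 44100
```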
Of course there’s some ancient broadcast standard at the bottom of 44.1 kHz, thanks for the clarification! (I work in film/TV and still struggle with explaining ‘illegal’ values to some clients on certain deliverables.)
I don’t have the time to write a well-thought-out reply, but CDs trump cassettes in terms of both dynamic range and noise floor. Records are worse than cassettes in both categories. I would personally trade a small amount of frequency response for the other two.
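For a rough sense of scale, the theoretical dynamic range of 16-bit PCM (the CD format) works out to about 96 dB, roughly 6 dB per bit; I won’t pin numbers on tape or vinyl here since they vary so much by deck and pressing.

```python
# Back-of-the-envelope dynamic range of 16-bit PCM (the CD format):
# about 20*log10(2**bits), i.e. roughly 6.02 dB per bit.
import math

bits = 16
dynamic_range_db = 20 * math.log10(2 ** bits)
print(round(dynamic_range_db, 1))  # ~96.3 dB
```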
That said, people are… well, people. Do what makes you happy.
I think the people who make those claims are usually referring to vinyl records, not cassettes. That said, I would not be surprised if the people who claim they can hear a difference are mostly imagining it. Or perhaps it’s something that dates back to the early days of CDs when the equipment and mastering techniques were not as good as they became later in the ’90s/’00s, not something applicable today.
I suspect, though it’s hard to find technical data, that the noise floor and dynamic range of vinyl are worse than those of cassettes.
You’re right that there’s a strong degree of interplay with other things, though. Vinyl can really only be listened to indoors, which allows whoever is doing the mix to assume some things about the environment (it’s likely quiet, for example). Portable media really kicked off the loudness war, which means if you listen to a record (or tape or anything else) from, say, the ’60s or ’70s, it will sound a lot different than the same album “digitally remastered” in the ’00s.