Here’s A Digital Conundrum



Primalsea
28-05-2018, 10:20
If you have an analogue waveform, i.e. music, and roughly every 23 microseconds (44,100 times per second) you measure the amplitude of the waveform and assign it to the closest level out of the 65,536 discrete steps that you have available, how much of the waveform, as a percentage, have you actually measured?

Basically this is Red Book standard analogue-to-digital conversion. It sounds like 44,100 times a second and 65,536 steps in voltage (for a 2 volt output that's steps of around 30 microvolts) is quite a lot, but when you try to work out exactly how much of the original waveform you have measured, as a percentage, you can't, can you?

What if we now take the measurement 192,000 times a second and increase the number of steps to 16,777,216 (192 kHz, 24-bit)? We take a lot more measurements, but when I again try to equate this to a percentage of the original waveform I still can't seem to do it.
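As a rough sanity check on those numbers, here is a minimal Python sketch; the 2 V full-scale range is taken from the post, and everything else follows from the sample rates and bit depths.

# A minimal sketch checking the arithmetic behind the figures quoted above.
fs_cd, bits_cd = 44_100, 16
fs_hr, bits_hr = 192_000, 24
v_range = 2.0  # assumed full-scale output range in volts, as in the post

for name, fs, bits in [("16/44.1", fs_cd, bits_cd), ("24/192", fs_hr, bits_hr)]:
    period_us = 1e6 / fs              # time between samples, in microseconds
    levels = 2 ** bits                # number of quantisation levels
    step_uv = v_range / levels * 1e6  # size of one level, in microvolts
    print(f"{name}: {period_us:.2f} us between samples, "
          f"{levels:,} levels of about {step_uv:.2f} uV each")
# 16/44.1: ~22.68 us between samples, 65,536 levels of about 30.52 uV each
# 24/192:  ~5.21 us between samples, 16,777,216 levels of about 0.12 uV each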

It seems that no matter how many times you take a measurement, the vast majority of the original waveform is disregarded.

This has baffled me... Anyone?

jandl100
28-05-2018, 10:39
In terms of sampling rate you have captured ALL of it, literally all of it, for frequencies below half your sampling rate, 22 kHz in the case of Red Book.
The dynamic quantisation I'm not sure about.

Patrick Dixon
28-05-2018, 10:48
It's a meaningless conundrum; if you sample an analogue waveform that is band-limited to less than half the sampling frequency, then you have captured 100% of the information present.

Analogies are always questionable, but think of a bag full of coins: if you squeeze all the air out of the bag, how much money have you lost from your bag?

Primalsea
28-05-2018, 14:42
I’m not so sure that you have captured all of the waveform for frequencies below half of the sampling rate. I don’t see it as a meaningless conundrum either, as it alludes to the fact that you are not recording a significant part of the waveform.

Draw a rough, squiggly horizontal line across a page, pick a few points, and then measure the height of these points from the bottom of the page to the nearest millimetre. How much of that line have you actually recorded? Not a lot. Your measurements are of instants that have no horizontal width. Your line has a definite horizontal width. Okay, you can take more measurements, but they remain just measurements of instants that cannot describe what is happening between each measurement point.

jandl100
28-05-2018, 16:42
I’m not so sure that you have captured all of the waveform for frequencies below half of the sampling rate.

It's not a matter of debate, actually - it's mathematically rigorous. It's the way our universe works. Check out the Nyquist theorem.

You'd need to find a wormhole in space-time to enter a universe with suitably different physical laws than ours in order for it not to be the case. ;)

Start here, maybe -- https://en.wikipedia.org/wiki/Sampling_(signal_processing)#Audio_sampling

sq225917
28-05-2018, 18:31
The Nyquist/Shannon sampling theorem is proven. End of story, it's mathematically perfect. That of course doesn't mean that its implementation in silicon and volts back here in the real world is perfect. Signals need reconstructing (with limited tap-length filters), levels need quantising (with limited voltage levels), timing needs to be recovered (with jittery, imperfect clocks and transmission). There's a whole host of reasons why the practice might not perfectly follow the theory.

Of course there's bugger all good science to prove which of these are audible.
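A rough numpy sketch of two of those real-world imperfections, quantisation and clock jitter, under illustrative assumptions (a full-scale 1 kHz tone, 16-bit rounding, 1 ns RMS jitter):

import numpy as np

rng = np.random.default_rng(0)
fs = 44_100.0
f0 = 1_000.0                      # illustrative 1 kHz test tone
n = np.arange(44_100)             # one second of samples

t_ideal = n / fs                                      # perfect sample instants
t_jitter = t_ideal + rng.normal(0.0, 1e-9, n.size)    # 1 ns RMS clock jitter

x_ideal = np.sin(2 * np.pi * f0 * t_ideal)
x_jitter = np.sin(2 * np.pi * f0 * t_jitter)

# 16-bit quantisation: round each sample to the nearest of 65,536 levels
q = np.round(x_ideal * 32767) / 32767

def err_db(e):
    return 20 * np.log10(np.sqrt(np.mean(e ** 2)))    # RMS error re full scale

print("quantisation error:", err_db(q - x_ideal), "dB")          # about -101 dB
print("jitter-induced error:", err_db(x_jitter - x_ideal), "dB")  # about -107 dB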

Primalsea
28-05-2018, 18:57
No one is saying that the Nyquist or Shannon theorem is wrong; when did I say that?

What I am saying is, how can you just take it for granted that the information that existed between two sampling points was irrelevant?

Macca
28-05-2018, 20:12
No one is saying that the Nyquist or Shannon theorem is wrong; when did I say that?

What I am saying is, how can you just take it for granted that the information that existed between two sampling points was irrelevant?

Because it isn't really 'information', it's just a rise or a fall in voltage. There isn't really a 'gap between the samples' for anything to be missing from. Think about it that way.

sq225917
28-05-2018, 21:12
Paul, the sampling theorem doesn't define the points in isolation; it defines the minimum number of points required to accurately reconstruct the signal curve that passes through any number of consecutive points. Forget the dots: it's the line that passes through them that we're interested in, and how accurately that mimics the originally sampled signal.
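To make the "line through the dots" concrete, here is a small numpy sketch of Whittaker-Shannon (sinc) interpolation; the 5 kHz tone and the short 200-sample run are just illustrative choices.

import numpy as np

fs = 44_100.0
f0 = 5_000.0                              # well below fs/2 = 22.05 kHz
n = np.arange(200)                        # a short run of samples
samples = np.sin(2 * np.pi * f0 * n / fs)

# Evaluate the reconstruction on a fine grid *between* the sample instants,
# keeping away from the ends so the truncated sinc sum stays well-behaved.
t_fine = np.linspace(20 / fs, 180 / fs, 4001)

# Whittaker-Shannon: x(t) = sum_n x[n] * sinc(t*fs - n)
recon = np.array([np.sum(samples * np.sinc(t * fs - n)) for t in t_fine])

truth = np.sin(2 * np.pi * f0 * t_fine)
print("worst-case error between the samples:", np.max(np.abs(recon - truth)))
# small; the residual comes from truncating the infinite sinc sum to 200
# terms, not from any information "missing" between the samples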

George47
29-05-2018, 09:02
There are two parts to this problem. If you analyse a music signal you will find it is made up of a lot of frequencies. In fact, you can make up the music signal from a lot of sine waves of different frequencies and amplitudes. If you analyse the signal there will be sine waves at a whole range of frequencies, all at different amplitudes. However, if it is a music signal then there will be less and less of the very high frequencies, and there will be very little above 20 kHz. So if you then apply a filter to the signal so that nothing above 22 kHz can get through, then there cannot be any sine waves above 22 kHz and it is assumed you have not lost any music.

Right, now let's digitise. If you wanted to sample a 30 Hz bass organ signal then you could sample at 60 Hz, assume the signal is a sine wave, and reconstruct it. OK, if there is a squiggle on it then that squiggle has to be a higher frequency, but that is fine for us as in reality we are sampling at 44.1 kHz, so you can sample finer and finer squiggles all the way up to 22 kHz. Ah, but what happens if there is a squiggle on top of the 22 kHz frequency? There isn't any, because you filtered everything above 22 kHz out.

So music consists of a lot of sine waves of different frequencies and amplitudes. If you filter it so nothing above 22 kHz can get through, then sampling at 44.1 kHz will reproduce the whole music signal, as there can't be any smaller squiggles.

Making that happen in practice is the tricky bit.
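A short numpy sketch of the "music is a sum of sine waves" idea: the FFT splits a made-up one-second signal (a few harmonics plus a little noise, purely an assumption standing in for real programme material) into sinusoids, and summing those sinusoids back returns the original samples.

import numpy as np

fs = 44_100
t = np.arange(fs) / fs
x = (0.6 * np.sin(2 * np.pi * 220 * t)      # toy "fundamental"
     + 0.3 * np.sin(2 * np.pi * 440 * t)    # plus a couple of harmonics
     + 0.1 * np.sin(2 * np.pi * 880 * t)
     + 0.01 * np.random.default_rng(1).normal(size=fs))   # a little noise

spectrum = np.fft.rfft(x)               # amplitude/phase of each sine wave
freqs = np.fft.rfftfreq(fs, d=1 / fs)   # its frequency, 0 Hz up to 22.05 kHz

rebuilt = np.fft.irfft(spectrum, n=fs)  # add every sinusoid back together
print("max rebuild error:", np.max(np.abs(rebuilt - x)))          # ~1e-15
print("fraction of energy above 20 kHz:",
      np.sum(np.abs(spectrum[freqs > 20_000]) ** 2) / np.sum(np.abs(spectrum) ** 2))
# a tiny fraction for this made-up signal, which is the point of the filter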

Primalsea
30-05-2018, 15:46
Okay, so if I have understood correctly, during A/D the line between two samples is not smooth but actually modulated (squiggly); however, the modulation will be due to high frequencies that are far above those of interest and that can be accurately reconstructed during D/A, i.e. aliases two times above the Nyquist limit?

George47
30-05-2018, 18:21
Okay, so if I have understood correctly, during A/D the line between two samples is not smooth but actually modulated (squiggly); however, the modulation will be due to high frequencies that are far above those of interest and that can be accurately reconstructed during D/A, i.e. aliases two times above the Nyquist limit?

Nearly. During A/D you filter out signals above 22 kHz, as there is very little if any data above 22 kHz. Now when you digitise the signal there are only signals up to 22 kHz. If you sample the signal at 44.1 kHz then, according to Nyquist sampling, everything below 22 kHz is digitised correctly. When you then go through the D/A in your DAC, the system knows there are not any signals above 22 kHz and the original signal is reconstructed with all the frequencies up to 22 kHz intact.

StanleyB
31-05-2018, 07:00
Nearly. During A/D you filter out signals above 22 kHz, as there is very little if any data above 22 kHz.
There are harmonics of the audio data within 20 Hz to 20 kHz that extend above 20 kHz.

walpurgis
31-05-2018, 07:58
Harmonics are created by and presumably interact with lower frequencies. If they are missing, could that affect the form of the fundamentals?

Patrick Dixon
31-05-2018, 08:01
deleted

Primalsea
31-05-2018, 08:16
I do wonder about transients also. If they begin or end at a point between samples what happens then?

Patrick Dixon
31-05-2018, 08:22
deleted

StanleyB
31-05-2018, 08:23
They get ignored. But they would really have to be very narrow to get to that stage. I can hear tape noise from the original recording of Crazy by Patsy Cline, and also on the original recordings of many songs from Marvin Gaye. That kind of noise is like a transient, but it is still captured during the A to D process.

Patrick Dixon
31-05-2018, 08:23
deleted

walpurgis
31-05-2018, 08:27
That's not exactly an answer helpful to non-mathematicians, who may appreciate a layman's explanation of the basics.

And large bold is not really necessary (or polite?)!

Patrick Dixon
31-05-2018, 08:30
deleted

jandl100
31-05-2018, 08:31
No, it doesn't appear helpful - but in this case the maths is counter-intuitive - what happens between the gaps, for example?
Well, mathematically, below the "Nyquist frequency" there aren't any gaps - but you can tell people that till the cows come home; unless they have at least some grasp of the maths they go with their flawed intuition.

Patrick Dixon
31-05-2018, 08:33
deleted

Patrick Dixon
31-05-2018, 08:34
No, it doesn't appear helpful - but in this case the maths is counter-intuitive - what happens between the gaps, for example?
Well, mathematically, below the "Nyquist frequency" there aren't any gaps - but you can tell people that till the cows come home; unless they have at least some grasp of the maths they go with their flawed intuition.

You put it much better than me.

walpurgis
31-05-2018, 08:39
You can't explain these things in layman's terms, and unless you are prepared to put in some effort, why should you expect someone expert to entertain nonsensical questions? You don't have to be a mathematician, you just have to put in some effort to research what's already out there.

A patronising tone is not good. The questions you describe as "nonsensical" may be being asked simply because people don't know the right questions or where or how to research the matter.

StanleyB
31-05-2018, 08:43
A patronising tone is not good. The questions you describe as "nonsensical" may be being asked simply because people don't know the right questions or where or how to research the matter.
I would give you five stars for that reply if that was possible :).

Marco
31-05-2018, 08:57
Yeah, I must ask you to re-write/rephrase your post (#19), Patrick, as that's not how we respond to each other here. Everyone (and their opinions) must be treated with tolerance and respect, at all times, no matter how much you disagree with them.

Also, don't presume everyone will want, far less know how, to "understand the maths". I really don't like your rather arrogant and ill-tempered tone on this thread, so please alter it to something friendlier. Cheers!

Marco.

George47
31-05-2018, 09:50
There are harmonics of the audio data within 20 Hz to 20 kHz that extend above 20 kHz.

True, and that is why some argue that higher sampling rates are needed. I was trying not to overcomplicate things, because some have measured audio signals (at a low level) above 40 kHz, which emphasises the need for 96 kHz or 192 kHz for some studio headroom. However, the levels are low and some argue that removing them does not impact the music that much. The advantage of understanding the maths is that it is precise and explains it clearly, but for those not keen it may have to be accepted that if the audio is limited to 22 kHz and is then sampled at 44.1 kHz, then all the frequencies up to 22 kHz can be accurately reconstructed.

Patrick Dixon
31-05-2018, 10:21
Yeah, I must ask you to re-write/rephrase your post (#19), Patrick, as that's not how we respond to each other here. Everyone (and their opinions) must be treated with tolerance and respect, at all times, no matter how much you disagree with them.

Also, don't presume everyone will want, far less know how, to "understand the maths". I really don't like your rather arrogant and ill-tempered tone on this thread, so please alter it to something friendlier. Cheers!

Marco.
OK, I won't post in these threads anymore. I don't usually post on these kinds of threads, precisely because on the internet everything has to be equal even when it's not. Who the hell needs 'experts' anyway when we can rely on intuition and the will of the people?

Marco
31-05-2018, 10:24
Not posting on these threads isn't really a proper solution though, is it? Why not simply alter your attitude and be nicer to people? :)

That would be a good start. I haven't even read the main content of this thread, simply the rude response highlighted, for which, regardless of who's right or wrong about the thread topic, there was simply no need.

Do you address people that way in real life, Patrick - and if so, what type of response would you expect to receive in return? In this instance, it's not what you say, but HOW you say it. Regardless of how much of an 'expert' anyone is, no-one knows it all.

Marco.

Patrick Dixon
31-05-2018, 11:08
Just who is patronising who here?

Marco
31-05-2018, 11:24
Well, if the cap fits... In any case, this matter isn't up for debate. From now on, if you want to post here, then alter your attitude.

Now, I insist we move on. Any further response from you, other than in relation to the thread topic, will be removed, and you'll be out for a week.

Marco.

Beobloke
31-05-2018, 11:45
Just who is patronising who here?

"patronising whom.."

:D ;)

Primalsea
31-05-2018, 15:22
My thought is that music is a complex mix of tones and excitations, similar to gunshot excitations, where there is a sudden burst of a broad spectrum of frequencies. Also, tones do not always last for the full period of their frequency, for instance a 50 Hz tone lasting only 2 ms where the full cycle of 50 Hz is 20 ms (what happens if it were 10 kHz lasting only a fraction of its full cycle?). What happens then in the A/D process? Yes, it works well enough that a recording of a guitar being strummed while someone claps will be recognisable, and therefore some might say accurate, but what defines accurate?

Macca
31-05-2018, 15:39
My thought is that music is a complex mix of tones and excitations, similar to gunshot excitations, where there is a sudden burst of a broad spectrum of frequencies. Also, tones do not always last for the full period of their frequency, for instance a 50 Hz tone lasting only 2 ms where the full cycle of 50 Hz is 20 ms (what happens if it were 10 kHz lasting only a fraction of its full cycle?). What happens then in the A/D process? Yes, it works well enough that a recording of a guitar being strummed while someone claps will be recognisable, and therefore some might say accurate, but what defines accurate?

As has been said before, it captures the whole waveform, gunshot, clapping, doesn't make any difference what the origin of the sound is.

I can understand wanting to understand how it works, I can't understand the point of trying to second-guess it?

NRG
31-05-2018, 18:26
As said previously the analogue signal is Bandwidth limited. This is an old video but still explains it better than anything else imho. https://m.youtube.com/watch?v=cIQ9IXSUzuM

Skip to 17:24 or watch the whole lot.

George47
01-06-2018, 13:51
My thought is that music is a complex mix of tones and excitations, similar to gunshot excitations, where there is a sudden burst of a broad spectrum of frequencies. Also, tones do not always last for the full period of their frequency, for instance a 50 Hz tone lasting only 2 ms where the full cycle of 50 Hz is 20 ms (what happens if it were 10 kHz lasting only a fraction of its full cycle?). What happens then in the A/D process? Yes, it works well enough that a recording of a guitar being strummed while someone claps will be recognisable, and therefore some might say accurate, but what defines accurate?

If there are higher frequencies than 22 kHz then they get filtered out. Fortunately, there is not too much of that present in normal music. Artificial music (synthesisers) can produce some weird and wonderful noises, but they tend not to be too musical and are not used that much. Accurate has been defined here as any frequency up to 22 kHz with a dynamic range of 96 dB. If that does not work for you try 24/96 or 24/192, which should cover every frequency that can be recorded even by modern microphones, with a dynamic range of 144 dB. That is extremely good, and even cynics have said that music digitised at that rate sounds exactly the same as a live microphone feed.
Yes, there are frequencies above 96 kHz but we can't hear or even sense them.

If you had a 50 Hz tone for 2 ms would you hear it and what would it sound like? You may need to have a reasonable part of the cycle to recognise it as hum.
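For what it's worth, a quick numpy sketch of that 2 ms burst (sample rate and burst length as in the post; the 45-55 Hz band is an arbitrary window): gating a 50 Hz tone that short smears its energy across a wide band, so very little of it actually sits near 50 Hz.

import numpy as np

fs = 44_100
t = np.arange(fs) / fs                           # one second of samples
burst = np.zeros(fs)
gate = t < 0.002                                 # a 2 ms gate
burst[gate] = np.sin(2 * np.pi * 50 * t[gate])   # only a tenth of a 50 Hz cycle

spectrum = np.abs(np.fft.rfft(burst))
freqs = np.fft.rfftfreq(fs, d=1 / fs)            # 1 Hz resolution over 1 second

near_50 = (freqs >= 45) & (freqs <= 55)
share = np.sum(spectrum[near_50] ** 2) / np.sum(spectrum ** 2)
print("share of the burst's energy within 45-55 Hz:", share)
# only a few percent; the rest is spread far above (and below) 50 Hz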

NRG
02-06-2018, 23:52
I take it we are all good on how this works now?

alphaGT
03-06-2018, 03:12
In reality, Paul, sampling music at 44.1 kHz is not perfect. Yes there are flaws. Nothing is perfect. And I’m with you on this: what indeed does happen to samples that fall between the 65K levels? They get rounded off to the nearest one. As you pointed out, the steps are around 30 microvolts apart? Forgive me if I misquote you, I can’t see your text as I write. Well, half of that is your possible error. It could be about 15 microvolts off, if the sample fell directly in the middle.
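For the sake of argument, here is a minimal numpy sketch of that rounding error (the full-scale 997 Hz test tone is an arbitrary choice): rounding to the nearest of 2^16 levels leaves at most half a step of error per sample, which is also where the textbook figure of roughly 6.02 * bits + 1.76 dB comes from.

import numpy as np

bits = 16
levels = 2 ** bits
t = np.arange(44_100) / 44_100
x = np.sin(2 * np.pi * 997 * t)    # a full-scale test tone

step = 2.0 / levels                # the signal spans -1..+1, a total range of 2
q = np.round(x / step) * step      # round each sample to the nearest level
err = q - x

print("largest single-sample error:", np.max(np.abs(err)))   # <= step / 2
snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
print("SNR of the quantised tone:", snr, "dB")   # close to 6.02*16 + 1.76 = 98 dB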

Not long ago I drew an analogy on here that a few could appreciate. Say you went out and drew a big analog waveform on a fence, who knows, a million times larger than life, and then took a jigsaw and cut it out. This represents our analog waveform. If you measured every 10 mm along its length, drew a vertical line, measured the height of the wave at that point, and wrote all these numbers in your notebook, then went back to your shop, laid a sheet of plywood on the bench and transferred all of your numbers onto it, you'd have a bunch of stair steps with flat tops on your plywood. So you draw a line across the top edges and cut them out with your jigsaw, and you have a waveform-looking top! But if you carry it to that fence and lay it up against it, you will see that it is not exact; it is very similar, but there are small differences at the top edge. This is a digital recording.

If you take a sheet of plywood and lay it up against the fence, and trace the outline, and then cut it out, then you have a much closer representation of the original waveform, within the width of your pencil. That is an analog recording. So, maybe you increase your resolution, and measure twice as often? Or ten times as often? Yes, each time you’ll get closer to your original waveform shape. But there will always be errors.

The truth is digital recordings are averages of the originals. And while some filter has smoothed all the edges off, and it plays back sounding whole and right, there are tiny inaccuracies. Are these inaccuracies large enough for us to hear? Some say no, any differences that may exist are too small for us to hear. But I say, many people can hear the difference from a digital copy and the analog original. Or we wouldn’t be having this conversation in the first place.

Russell

NRG
03-06-2018, 07:21
It seems not! There are no stair steps; it is not continuous time, it's discrete time, and there is no 'in between' as the analogue signal is bandwidth-limited. Please watch the video I linked to earlier.

Macca
03-06-2018, 07:54
It's actually analogue recorders that fail to capture everything, not digital. The same is true in playback. It doesn't matter how good your TT or R2R is, it will have inaccuracies in playback that even a budget CD player will not exhibit.


People confuse accuracy with a sound they like. Just because you like it better does not mean it is the more accurate version!

alphaGT
03-06-2018, 08:23
It seems not! There are no stair steps; it is not continuous time, it's discrete time, and there is no 'in between' as the analogue signal is bandwidth-limited. Please watch the video I linked to earlier.

As seen in the video, which forgive me for not watching the entire 22 minutes, it clearly shows the steps in the signal on the computer screen. No, we do not see the steps in the final product, because a smoothing filter is built into the D/A converter; otherwise it would sound terrible. As in my analogy, the edges are smoothed over to create a more analog-looking signal. And with a signal generator creating a very simple signal, it's easy for the smoothing to recreate the original. But if each sine wave is a different size and shape, it's a bit more challenging. All his demonstration shows is that the smoothing filter works. And as I said in my analogy, to look at the recreated signal, it all looks very normal, until you put it up next to the original. Music is composed of very complex and irregular signals, and you would really need to zoom in on the very edge of the waveform to see any irregularities, and it would have to be laid over the original signal to see it. So generated signals really don't do it justice. I'm not saying that digital doesn't do a fantastic job of what it does, but one fact in life remains: nothing is perfect, without exception. It was 25 years ago, but I did my thesis in college on pulse code modulation, so I'm not just blowing smoke. I still have a vague idea of how it works.

So Paul has a valid question: what does happen to the information that falls between the samples? And the answer is, it is averaged out. One sample is laid up beside the next, so there is no "time" between them, but all the information that is collected during that clock pulse is averaged into the next sample. The magic happens because the samples are so close together, so fast, that the resulting averages are very close to actual.

To see what is different, and why it's different, is simple enough to explain by using a much lower sampling rate and bit rate. Telephone conversations are only 8 bits, and far slower than CD, I forget the frequency, forgive me. But it's easy to hear how low the fidelity is when hearing music over the phone. Or at least how phones were 25 years ago; I imagine a long-distance call over a land line hasn't changed much? Or maybe it has? Anyway, my point is, is there any comparison between hearing music over a telephone compared to a reel-to-reel tape? I think not. So they've upped the sampling rate and bit rate to a point that it makes a satisfactory version of the original, but it still suffers from the same kind of flaws, just far, far smaller. So there is an argument for high-res music files: it's simply more accurate. It's not that the reconstructed waveform has stepped edges, that is not the issue at all. It's that after they've been averaged and smoothed, they are not exactly in the same place as the original. And while it's a very, very good copy, it cannot be exact; back to that thing about nothing being perfect. And the big question remains, can anyone hear these differences? And as I said before, yes, or we wouldn't be having these discussions in the first place.

Russell

NRG
03-06-2018, 08:38
No it is not a valid question, sorry. There are no steps, watch the video, skip to 17:24 if it helps...

And at 6:00.......

Macca
03-06-2018, 09:39
And the big question remains, can anyone hear these differences? And as I said before, yes, or we wouldn't be having these discussions in the first place.

Russell

They can't hear them though can they? Plenty of tests to show that and none that show that they can. In fact people struggle to tell the difference between 16/44.1 and lower sampling and bit rates when they don't already know what they are listening to.


And as for higher sampling rates, they let you capture higher frequencies; you are not capturing the lower frequencies more accurately.

alphaGT
03-06-2018, 13:09
They can't hear them though can they? Plenty of tests to show that and none that show that they can. In fact people struggle to tell the difference between 16/44.1 and lower sampling and bit rates when they don't already know what they are listening to.


And as for higher sampling rates, they let you capture higher frequencies; you are not capturing the lower frequencies more accurately.

Can they hear higher frequencies? Most can't, I'm sure. I recall reading about a test where the engineer played music with content and harmonics that went as high as 44 kHz. And while test subjects could not consciously hear the higher information, a brain scan showed brain activity in response to such frequencies. All very interesting, but is it of any use? It's hard to say.

And if playback was that perfect, then wouldn’t every DAC in the world sound exactly alike? Or at least a lot alike?

Personally I've heard some hi-res files that I thought sounded better! But that could be because the engineers creating the file were more conscious of sound quality? Used better DACs and chips? If you are creating a more accurate waveform, the bass will be more accurate too; as I described, the smoothing filter is creating the waveform with less error, not just higher content.
Russell

Macca
03-06-2018, 14:16
Can they hear higher frequencies? Most can't, I'm sure. I recall reading about a test where the engineer played music with content and harmonics that went as high as 44 kHz. And while test subjects could not consciously hear the higher information, a brain scan showed brain activity in response to such frequencies. All very interesting, but is it of any use? It's hard to say.

And if playback was that perfect, then wouldn’t every DAC in the world sound exactly alike? Or at least a lot alike?

Personally I've heard some hi-res files that I thought sounded better! But that could be because the engineers creating the file were more conscious of sound quality? Used better DACs and chips? If you are creating a more accurate waveform, the bass will be more accurate too; as I described, the smoothing filter is creating the waveform with less error, not just higher content.
Russell

DACs don't sound the same because of differences in engineering quality and because some are tweaked to sound different. DACs equally well engineered and designed to sound as neutral as possible will all sound the same.


If a hi-rez file sounds different to the exact same file downsampled to 44.1 kHz then it isn't the same file, or something went wrong in the downsampling. It's been demonstrated to destruction that no-one can distinguish between them if they don't know which one they are listening to. This also backs up everything currently known about the abilities of our hearing.


The purpose of high resolution audio was to sell the same thing over again. Sometimes you got a different mastering so that was at least something. But is there any point to it? No. Although it did give those who had painted themselves into a corner re digital being unlistenable an excuse to start listening to digital: 'Now we have hi-rez it is so much closer to analogue!'.
Just ridiculous really.

Marco
03-06-2018, 17:37
DACs don't sound the same because of differences in engineering quality and because some are tweaked to sound different. DACs equally well engineered and designed to sound as neutral as possible will all sound the same.


...providing that they use the same chipsets and technology. Otherwise, for me, no.

In that respect, there are distinct sonic differences between my vintage Sony DAS-R1, which uses multi-bit technology, with TDA-1541s, and a modern DAC using bitstream, with say, Burr-Browns.

Both have their plusses and minuses, but nothing does deep, chunky-sounding bass that you can almost CHEW, like well-implemented TDA-1541s :smoking:

Marco.

Macca
03-06-2018, 17:52
I had a B&O 6500 that used the TDA1541, I've got a Technics that uses the TDA1540, and my Technics SLP1200 uses Burr-Brown; that's a multibit player.


I think it's impossible to assign a DAC chip a character unless you can listen to them with everything else unchanged - so, different chips in the same application. Since that's impossible unless you build it yourself, any rumination on the sonic flavour that different chips impart is sheer speculation. There could be a lot of differences between two CD players besides the chipset.

Marco
03-06-2018, 17:59
Well, I've used/owned a LOT of CD players and DACs over the years with TDA1541s, and all have shared the same characteristic in the bass, to a greater or lesser degree, and I don't think that was an accident - and neither is the fact that no modern DAC I've heard so far does bass like it [although they excel more in areas such as detail retrieval].

Therefore, I guess we'll just have to agree to disagree on this one:)

Marco.

User211
03-06-2018, 18:02
I reckon it is funny what we can lead ourselves to believe about DACs. I think a lot of us are guilty of fooling ourselves into false beliefs about how certain chipsets sound, but there are other far more significant factors like the output stage design and the quality of the components in it.

The only way you can compare chipsets is to have them in the same circuit.

If you haven't made that controlled comparison, all bets are off.

Just had the output stage of my DAC modified and it doesn't sound like the same machine. It simply sounds a lot better. With the same chipset.

Macca
03-06-2018, 18:03
I'm not saying the TDA1540 - or any other chip - does not have a character, just that I would hesitate to be in any way certain about defining it. The B&O 6500 I had didn't do bass differently to any other player, not in any significant way, whereas the SLP1200 does, which I ascribe to the power supplies, but again that is speculation.

Marco
03-06-2018, 18:09
That's fine. I'm of a different opinion, as given the experience I've had over the years comparing CDPs and DACs, with and without TDA-1541s, I wouldn't hesitate in defining their character, as outlined. Simply because I'm convinced that's the case.

Plus, as a general rule, the best vintage players usually tend to sound richer/weightier in the bass (which could well be linear PSU-related), than their modern counterparts, which conversely, to my ears, usually sound lighter and 'fresher'/clearer, but ultimately not as musical.

YMMV :cool:

Marco.

User211
03-06-2018, 18:36
That's fine. I'm of a different opinion, as given the experience I've had over the years comparing CDPs and DACs, with and without TDA-1541s, I wouldn't hesitate in defining their character, as outlined. Simply because I'm convinced that's the case.

Plus, as a general rule, the best vintage players usually tend to sound richer/weightier in the bass (which could well be linear PSU-related), than their modern counterparts, which conversely, to my ears, usually sound lighter and 'fresher'/clearer, but ultimately not as musical.

YMMV :cool:

Marco.

Exactly my point, bro. You're convinced it's the case, but you don't know it is.;)

I'm not claiming he's right, but a DAC designer of some repute says he's tried countless chips and he reckons they don't vary in sound much, quoting a 10% allocation in the overall sound quality of a DAC. I tend to believe him. But nothing more than that.

Marco
03-06-2018, 18:50
Yup, I agree. However, pretty much all the opinions we hold in audio, we're only convinced that we're right about; we don't know for sure. Therefore, you simply have to trust your own judgement on this matter, the same as with any other, which I'm happy to do.

Plus, with DAC chips especially, much of what you hear is down to how well they've been implemented into a circuit. TDA-1541s, in that respect, are notoriously more 'fussy' and expensive to optimise. After all, you can only judge how good something is or isn't, or discern meaningful differences, if you've heard whatever is being judged at its best :)

Who knows whether the designer in question achieved that or not with all the chips he tested?

Marco.

User211
03-06-2018, 18:51
Same planet post, and I think the only right thing to think. Respect.:)

StanleyB
03-06-2018, 19:56
I'm not claiming he's right, but a DAC designer of some repute says he's tried countless chips and he reckons they don't vary in sound much, quoting a 10% allocation in the overall sound quality of a DAC. I tend to believe him. But nothing more than that.
Hmmm :rolleyes:

Gazjam
03-06-2018, 20:47
Stan,
I know you've a long awareness of Spartan FPGA chips, and that's (IMO) where the good DAC stuff is at nowadays.


Any headway on that?

Marco
03-06-2018, 20:52
Not heard of those.....

Marco.

Gazjam
03-06-2018, 21:01
Don't buy an off-the-shelf chip for your DAC, be different and programme your own?
I know Stan's had an interest in this...

Chord Dave uses an FPGA

https://www.xilinx.com/products/silicon-devices/fpga/spartan-6.html

Marco
03-06-2018, 21:09
Interesting - looks like some real progress has been made there, with great potential :)

Marco.

alphaGT
03-06-2018, 22:09
Exactly my point, bro. You're convinced it's the case, but you don't know it is.;)

I'm not claiming he's right, but a DAC designer with some repute says he's tried countless chips and he reckons they don't vary in sound much, quoting a 10% allocation in the overall sound quality of a DAC. I tend to believe him. But nothing more than that.

10% is quite a lot in terms of audio. But the reality is there are a dozen popular D/A chips out there, some cost more than others, and DAC designers have their favorites. All this is based on what’s inside the chip, not the analog circuits that follow. So engineers choose their favorites based on some criteria? If this waveform was so perfect that no one could possibly hear any difference, then it wouldn’t make any difference which chip you chose.

Russell

Marco
03-06-2018, 22:13
10% is quite a lot in terms of audio. But the reality is there are a dozen popular D/A chips out there, some cost more than others, and DAC designers have their favorites. All this is based on what’s inside the chip, not the analog circuits that follow. So engineers choose their favorites based on some criteria? If this waveform was so perfect that no one could possibly hear any difference, then it wouldn’t make any difference which chip you chose.


Precisely, Russell. That's what I mean by "implementation". It's the analogue circuits that follow, which make the biggest difference to the sound you hear!:)

Marco.

Yomanze
04-06-2018, 08:38
The DAC chip makes a huge difference IME. An analogy: the DAC chip is like the cartridge, and the output stage is the phono stage.

The only reason multibit DACs aren’t really around anymore is because they are so expensive to produce, but look around and there are many current production units using them.

Yomanze
04-06-2018, 08:43
Don't buy an off-the-shelf chip for your DAC, be different and programme your own?
I know Stan's had an interest in this...

Chord Dave uses an FPGA

https://www.xilinx.com/products/silicon-devices/fpga/spartan-6.html

Or do what Soekris and ECDesigns are doing with FPGAs and design a discrete resistor array i.e. true multibit.

Marco
04-06-2018, 13:01
The DAC chip makes a huge difference IME. An analogy: the DAC chip is like the cartridge, and the output stage is the phono stage.

The only reason multibit DACs aren’t really around anymore is because they are so expensive to produce, but look around and there are many current production units using them.

Yup, and to my ears they generally sound better than bitstream. So, for reference, could you link to any standalone DACs or CDPs produced now that are multi-bit? :)

Marco.

Macca
04-06-2018, 13:22
Schiit do a multibit DAC, not sure which one it is though.

Yomanze
04-06-2018, 14:39
Yup, and to my ears they generally sound better than bitstream. So, for reference, could you link to any standalone DACs or CDPs produced now that are multi-bit? :)

Marco.

DACs:

Audial Model S - https://www.audialonline.com/model-s/
Metrum Amethyst - https://metrumacoustics.com/product/amethyst-by-metrum-acoustics/ (and I think all Metrum DACs too)
EC Designs Mosaic - https://www.ecdesigns.nl/mosaic-uv.html (discrete DAC design)
Soekris DAC 1541 - http://www.soekris.dk/dac1541.html (discrete DAC design)
Audio Note DAC 4.1x - http://www.audionote.co.uk/products/digital/dac_4.1x_01.shtml (IIRC all Audio Note DACs are multibit)
Zanden Model 5000 Signature - http://www.zandenaudio.com/product/m5000.php
Schiit Yggdrasil - http://www.schiit.com/products/yggdrasil (some reports of glitching, might have been a faulty unit though...)

CD Player:

AMR CD-77 - http://www.amr-audio.co.uk/html/cd_individual.html

alphaGT
04-06-2018, 16:47
Precisely, Russell. That's what I mean by "implementation". It's the analogue circuits that follow, which make the biggest difference to the sound you hear!:)

Marco.

That is the exact opposite of what I just said. The engineer has designed his analog circuit, and this doesn't change, but he decides which chip he will put in front of it based on some criteria; he has a favorite, for some reason. If you ask him why he chose this chip, he will say, "because it sounds better". He will rarely say, "it's the cheapest and they all sound the same".

Does the analog circuit have an even greater effect on the sound? Most likely. But that does not mean that the chip has no effect.

Russell


Sent from my iPad using Tapatalk

Macca
04-06-2018, 17:41
I suspect that cost, ease of implementation and availability are the three main factors that influence the choice of DAC chip.

Maybe with whatever reviewers and enthusiasts are saying is the current front runner bringing up the rear. No salesman wants to have to flog last year's DAC chip, after all. You want to be selling what people are saying is the latest and greatest, or alternatively something from the distant past that has become legendary for no clearly established reason, like the TDA1540/41.

Jimbo
04-06-2018, 18:13
...providing that they use the same chipsets and technology. Otherwise, for me, no.

In that respect, there are distinct sonic differences between my vintage Sony DAS-R1, which uses multi-bit technology, with TDA-1541s, and a modern DAC using bitstream, with say, Burr-Browns.

Both have their plusses and minuses, but nothing does deep, chunky-sounding bass that you can almost CHEW, like well-implemented TDA-1541s :smoking:

Marco.

Err FPGA chips can do.:)

Marco
04-06-2018, 20:24
Lol... Nothing I've heard so far;)

Marco.

User211
04-06-2018, 21:53
I had a Wadia using XILINX FPGAs more than 22 years ago.

I thought it sounded pretty poor after a couple of weeks with it - it was a dealer loan.

Just saying, like:D Whether the sound had much to do with the FPGAs I cannot say.

User211
04-06-2018, 23:11
Just remembering more - it was an X64. I think the FPGA was simply doing the filtering.

I remember it as being clear and detailed but it had a quite an artificial sound to it. Poor is a bit harsh, really.

It was actually about 26-27 years ago. The DAC was introduced in 1990.

Gazjam
05-06-2018, 18:48
FPGAs still need to be programmed, and all the magic works in software.
Like the choice of DAC chip not determining final sound quality, it's all in the implementation.

NRG
05-06-2018, 21:49
I think there is a huge (potential) misunderstanding of what field-programmable gate arrays (FPGAs) are vs a dedicated DAC chip or ASIC (application-specific integrated circuit).

An FPGA offers benefits to small developers and companies because it is a flexible and cost-effective way of producing an integrated circuit to perform a certain function... in low volumes. No tooling is required to make them (you buy off the shelf) so costs are relatively low. The downside is they are not fully customisable and in many instances require external circuitry to add the required functions.

ASICs on the other hand are designed to perform a specific function with little external circuitry and to be produced in large volumes for cost-effectiveness. Tooling and development costs are much higher, though, and therefore so is the required investment... but they are much denser and can implement more than one function, and the 'real estate' required (physical PCB space) is typically much less.

But here's the important bit... neither approach is better than the other in terms of sound quality for the end user! Just because some low-volume manufacturer starts touting an FPGA in their spec does not mean it is going to sound any better than the latest Burr-Brown or ESS chip.

User211
05-06-2018, 21:52
I think there is a huge (potential) misunderstanding of what field-programmable gate arrays (FPGAs) are vs a dedicated DAC chip or ASIC (application-specific integrated circuit).

An FPGA offers benefits to small developers and companies because it is a flexible and cost-effective way of producing an integrated circuit to perform a certain function... in low volumes. No tooling is required to make them (you buy off the shelf) so costs are relatively low. The downside is they are not fully customisable and in many instances require external circuitry to add the required functions.

ASICs on the other hand are designed to perform a specific function with little external circuitry and to be produced in large volumes for cost-effectiveness. Tooling and development costs are much higher, though, and therefore so is the required investment... but they are much denser and can implement more than one function, and the 'real estate' required (physical PCB space) is typically much less.

But here's the important bit... neither approach is better than the other in terms of sound quality for the end user! Just because some low-volume manufacturer starts touting an FPGA in their spec does not mean it is going to sound any better than the latest Burr-Brown or ESS chip.

True, you just need some half-decent VHDL guys who know what they're doing to get a decent result.

Gazjam
06-06-2018, 19:46
True, you just need some half decent VHDL guys who know what they're doing to get a decent result.

Check out Ted Smith....