manisandher wrote: |
Oh yes, I've tried this for sure. There is a difference between the three modes; DIG_IN, REFWCLK and MASTER. The first two seem to add something in the HF that I can imagine many people judging as being 'extra resolution' and therefore better. It's not unpleasant, but I find it distracting. |
|
Interesting. I always thought that the following component must be slaved to the leading component, not vice versa, unless a separate external master clock is used for everything. Since for playback I use the Lavry Gold, which does not offer a clock output (I use the Pacific only to record), I never bothered to consider slaving my DAC, or slaving my Lynx card to my DAC. Perhaps I need to re-listen to my files with the Lynx slaved to the Pacific in MASTER mode, as you do. When I did my listening of Lavry vs. Pacific I was always running the Lynx on its own clock and the Pacific on its own clock (I thought it would be too absurd to let the Lynx clock the Pacific, as I feel the Pacific's clock must be much better, presumably). Also, do not forget that the Model One, with which I made all of my Lavry vs. Pacific experiments, has only a 1X clock output, so when I was running 88K files I had a clock reference for only one wire. I spoke many times with the Lynx and Pacific people about mastering and slaving at "half-speed" and they assured me that it is an OK mode to operate in. That might be so, but I did not feel comfortable, as when I ran a high-resolution sampling-rate monitor on both lines of the 88K dual-wire feed I clearly saw the rate in each wire changing with a slightly different tempo. That was the ONLY reason why I bought the Model Two – to run 88K with an 88K clock during A/D. Still, 99.99999% of my files are my own FM recordings, a source that is HF-challenged to begin with and has nothing above 16K. So, the "added something in the HF" might be something that I would not even recognize. Anyhow, following your advice I will experiment more with slaving the Lynx to the Pacific during D/A.
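Just to show why those two slightly different readings bothered me: even a tiny frequency offset between two nominally identical word clocks adds up quickly. The sketch below is pure back-of-the-envelope arithmetic; the ppm offsets are invented for illustration and say nothing about how the Model One actually derives its two AES lines.

# Back-of-the-envelope sketch: how fast a small offset between two nominally
# identical word clocks accumulates. The ppm values are invented purely for
# illustration; this is not a claim about the Model One's clock distribution.

NOMINAL_RATE_HZ = 88_200  # dual-wire 88.2K: one word-clock reference per AES line

def drift_in_samples(ppm_offset: float, seconds: float) -> float:
    """Samples of slip accumulated between two clocks differing by ppm_offset."""
    return NOMINAL_RATE_HZ * (ppm_offset / 1e6) * seconds

for ppm in (1, 5, 20):
    print(f"{ppm:>2} ppm offset -> {drift_in_samples(ppm, 60):.1f} samples of slip per minute")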
Still, there is a catch in all those arguments. It is absolutely unknown how your slaving interface is implemented, and it is very likely that the specific implementation of YOUR or MINE or SOMEBODY ELSE'S slaving interface is what decides the "quality". I, for instance, have no idea how my Lynx card behaves when it is slaved to an external clock but is not recording. I also do not know whether in this mode my WaveLab playback software uses the Lynx's clock or the Pacific's clock. Theoretically WaveLab should follow the Lynx's clock and the Lynx should be synchronized from the Pacific. Considering that WaveLab reads the data first, I wonder how much of this comes from pure imagination. I wonder which device runs a "delay" of the data until WaveLab gets its own samples clocked out by the Pacific…
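To make the question concrete, here is a toy model of how such a chain is usually arranged (this is NOT a description of WaveLab or the Lynx driver, just the generic ring-buffer scheme): the player fills a buffer whenever the driver asks, and the card drains that buffer at whatever pace its word clock dictates, internal or slaved. The "delay" of data I am wondering about would live in that buffer.

# Toy model of a generic ring-buffer playback chain (NOT how WaveLab or the
# Lynx driver is actually implemented, just the usual arrangement). The player
# pushes decoded samples when the driver requests them; the card pulls them
# out at the pace of ITS clock source, internal or slaved to an external
# reference. The "delay" of data lives in that buffer.

from collections import deque

buffer = deque()              # stands in for the card's hardware FIFO

def player_fills(frames):
    """Playback-software side: push frames when the driver requests them."""
    buffer.extend(frames)     # a real driver throttles this when the FIFO is full

def card_drains(n):
    """Card side: pull n frames per interrupt, paced by the card's word clock."""
    return [buffer.popleft() for _ in range(min(n, len(buffer)))]

player_fills(range(1024))
out = card_drains(256)
print(len(out), "frames clocked out;", len(buffer), "frames still buffered")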
manisandher wrote: |
Thanks for the link. It's interesting that Dan Lavry doesn't see the point in using an external clock unit, believing the DAC's internal clock to be the best. Even when you have many units that you need to sync together, Dan seems to feel that daisy-chaining them is the best way. |
|
Yes, Dan is a proponent of the idea that digital interfaces that carry just clock information bring more jitter and more problems than "properly implemented" internal clocks. He might be right or he might be wrong; most likely he is driven by patronage for his own products, but with his experience on the subject he is a voice that very much should not be discarded. In fact there is a lot of rationale in this view of his. I personally think that there is no right or wrong in this subject, but that it all depends on the level at which the given topological solution is implemented. I am very much not the person to judge how "properly" the slaving or re-clocking mechanisms were made. Unquestionably Dan Lavry, Michael Pflaumer, Michael Ritter, Daniel Weiss, Ed Meitner and a few others are, and I let them argue the point. In the case of Lavry I was very much "sold" on his very sexy idea of a "properly made multibit". You might read about it here:
http://www.lavryengineering.com/white_papers/DA924m.pdf
… starting from page 9, where he describes the theory of operation.
Also, here is another related reading:
http://www.lavryengineering.com/white_papers/jitter.pdf
I do not know if Lavry dismisses the idea of D/A slaving just to promote the ideas that he used in his DA924, or if it is how he truly feels. It is difficult to say when we talk about manufacturers, as they always have an agenda…
manisandher wrote: |
In any event, it looks like I'm stuck with using modes 2. or 3. above for 176.4/192KHz sample rates. This is certainly not a disaster... until many more downloads that I'm interested in are recorded and offered exclusively at these rates (probably not very likely). |
|
Well, I am not a 176.4/192kHz-minded person, and I also have very low expectations that the industry will furnish 176 files of the right quality. (I do not care about the 48x sampling rates at all.) When I talk about "right quality" I mean that audio people are fools: when they see a 176kHz sampling rate they automatically assume some kind of "quality". The industry takes now, and will take in the future, full advantage of that simplicity. In my world the definition of quality is not the sampling rate but the absence of the intermediate idiots who "master" the files. I would much rather have a raw (never resaved!!!) file at 44kHz than an edited file at 176kHz. Digital can't be edited. Any application of after-conversion DSP kills the file – but it looks like the Morons who do, and will, facilitate us with music do not "get" it…
manisandher wrote: |
But another reason why I'd like to use the Model Two in MASTER mode at these rates is that I think it uses a non-oversampling filter. When talking about the Model Two's ADC capability, Ritter wrote:
""If it's going to be a 176.4 or 192kHz DVD-Audio release, then we will not decimate that signal; we use a proprietary filter [non-oversampled] optimized to that sample rate. If it's going to be 88.2/96 kHz, we use 2:1 decimation, and once again we use a filter optimized to that frequency. But in both high-resolution settings, the Nyquist frequency is high enough that we don't use the 'dynamic decimation' process that becomes necessary when we go down to 44.1 or 48 kHz." |
|
Yes, you mentioned it before and it is cool to know. I did not know that the Model Two shuts down oversampling at 4x. Frankly, I personally feel that it should shut down any post-conversion filtration at 176kHz. As Ritter said, the sampling rate is so far above the audible range, so why use any filtration at all? You see, the Pacific DAC is multibit (reportedly, as it does not measure as a "pure" SAR multibit). The "regular" Delta-Sigma or the dynamic Delta-Sigma (they call it Delta-Sigma multibit) has wide-band noise after conversion, due to the Delta-Sigma process. Not so with SAR multibits. Those babies have a completely reconstructed signal, with noise only at the sampling rate minus 20kHz, or minus double 20kHz, I do not remember exactly. So, if our DAC runs at 176kHz then it will put out noise at ~140kHz… who cares… It would be very nice to run this signal out with no filtration of any kind, or to put a first-order L filter in there. My initial sentiment was that the Pacific does something like this. I thought about it when I saw that Pacific provides XLR-to-XLR adapters with chokes in them, as some high-feedback SS equipment would not be able to handle high-intensity 140kHz noise and would oscillate. I think that the same trick should be used at 88kHz, but that is another story… Anyhow, Ritter said that they use another "proprietary filter", probably of lower order, but I would prefer not to see any filters in there at all…
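To put rough numbers on this (generic sampling arithmetic only; the 60kHz filter corner below is an assumption I picked for illustration, not the Pacific's actual output filter): the images of a 20kHz tone at a 4x rate land in the 136-156kHz region whichever of the two offsets above is right, and even a simple first-order filter costs almost nothing at 20kHz while already taking several dB off up there.

# Rough numbers behind the "noise far above the audio band" argument. This is
# generic sampling arithmetic plus a single-pole low-pass response; the 60kHz
# corner is an arbitrary assumption for illustration, not the Pacific's (or
# anybody's) actual output filter.

import math

FS = 176_400            # 4x sampling rate, Hz
AUDIO_TOP = 20_000      # top of the audio band, Hz
F_CORNER = 60_000       # assumed first-order filter corner, Hz

def first_order_db(f, fc):
    """Magnitude of a single-pole low-pass at frequency f, in dB."""
    return -10 * math.log10(1 + (f / fc) ** 2)

for offset in (AUDIO_TOP, 2 * AUDIO_TOP):   # the two offsets half-remembered above
    f_img = FS - offset
    print(f"fs - {offset // 1000}kHz = {f_img / 1000:.1f}kHz, "
          f"where the filter gives {first_order_db(f_img, F_CORNER):+.1f}dB")
print(f"at 20kHz the same filter costs only {first_order_db(AUDIO_TOP, F_CORNER):+.2f}dB")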
The caT
"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche