03-27-2013
Romy the Cat


Boston, MA
Posts 10,166
Joined on 05-28-2004

Post #: 1
Post ID: 19152
Reply to: 19152
The open project: a lateral cross-injection.
A few days back I visited an audio person who demonstrated to me his experiments with extreme lateral deviation of sources. Regardless of whether I agree with what he was doing or trying to do, the demonstration had an unexpected result.

One day I was driving home from work and I asked myself: could the imaging that we get from our stereo be improved, but in different ways? The key to my thinking was my familiarity with the Lamm L1/L2 preamps, which did some amazing imaging trick that no other preamp could approach. Do not get me wrong, there are much more interesting preamps, my current one is among them, but none of the preamps known to me does that imaging trick. I have written about it in the past.

So, if you remember, I proposed that Lamm in his L1 and L2 preamps does some intentional or unintentional cross-injection of phases, mixing some signal from one channel into the signal of the other. It is not something I know for a fact or something that Lamm told me – it is purely my speculation, as the effect on the sound is similar to what I heard when cross-phasing was done in headphone amplifiers.

Looking at what the person I visited did, and considering some results I got from the Lamm electronics, I wonder whether it would be interesting to introduce lateral phase cross-injection in Macondo. Macondo is phenomenally easy for such experiments, as it is multi-amped, which makes doing it very simple. All I need to do is take a 4-5 inch MF driver, attach it to the frame, and source it from the other channel. I do have a pair of small monitors that I can use for that purpose.

For sure such an acoustic cross-channel injection will kill something, and here is the key (I think) for the whole application: the cross-channel injection needs to be a very smart cross-phase injection, very limited in band and very precisely dialed in amplitude. I think I will start with a 300-2000Hz band and at minus 12dB. The time alignment of such an injection might also be a factor, but I have no opinion about it as of now.
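For anyone who wants to preview the effect before wiring up an extra driver, here is a minimal digital sketch of the same idea, assuming Python with numpy, scipy and the soundfile package. It is not the acoustic, speaker-level experiment described above, only a rough simulation of it; the filter order and the file names are my own placeholders, and only the 300-2000Hz band and the minus 12dB figure come from the paragraph above.

```python
# Band-limited, attenuated cross-injection of each channel into the other,
# done digitally as a rough preview of the acoustic experiment proposed above.
# Assumptions: Python with numpy/scipy/soundfile; 4th-order Butterworth band-pass;
# file names are placeholders.
import numpy as np
from scipy.signal import butter, sosfilt
import soundfile as sf

def cross_inject(stereo, fs, band=(300.0, 2000.0), level_db=-12.0, order=4):
    """Add a band-limited, attenuated copy of each channel into the opposite channel."""
    sos = butter(order, band, btype='bandpass', fs=fs, output='sos')
    gain = 10.0 ** (level_db / 20.0)                # -12 dB -> ~0.25
    left, right = stereo[:, 0], stereo[:, 1]
    inj_into_left = gain * sosfilt(sos, right)      # signal borrowed from the right channel
    inj_into_right = gain * sosfilt(sos, left)      # signal borrowed from the left channel
    out = np.column_stack((left + inj_into_left, right + inj_into_right))
    return out / max(1.0, np.max(np.abs(out)))      # simple protection against clipping

if __name__ == "__main__":
    data, fs = sf.read("some_stereo_track.wav")     # placeholder input file
    sf.write("some_stereo_track_injected.wav", cross_inject(data, fs), fs)
```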

So, I declare this an open Club project. I have outlined the idea; feel free to experiment and to post your results. I presume that somewhere during the experiments one of us might (or might not) come across a configuration that takes the whole notion of imaging, instrument positioning, and distance to the first reflective boundaries to a very new dimension.

The Cat


"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche
03-27-2013
Paul S
San Diego, California, USA
Posts 2,664
Joined on 10-12-2006

Post #: 2
Post ID: 19159
Reply to: 19152
Another Approach for a Correct Presentation
I don't think this is what you had in mind, Romy, but I do find it interesting in the context of this thread.  It is the late James Bongiorno's (not un-complicated) idea for "correct" playback.  I spoke with him about it a couple of times.  JB said there are lots of problems with the way regular stereo is "allocated" and processed during the recording process, and he adamantly insisted his device is the only way to correct the problems and correctly present the acoustic space, etc.  There are "reviews" of the item that are linked from the SST site.  One thing that came up (unexpectedly) in a review was that CD and LP were not affected "the same" when processed through this device.  ???

http://www.ampzilla2000.com/trinaural_manual.html



Best regards,
Paul S
03-31-2013
Paul S
San Diego, California, USA
Posts 2,664
Joined on 10-12-2006

Post #: 3
Post ID: 19162
Reply to: 19159
Qol-ly Whomper
Here is an active (I think), "analog" (they specify) product that is said by its promoters to develop something good by way of "phase" information that otherwise remains buried in recordings.  While the "specifications" and marketing are vague (to say the least), the promoters do offer some sort of "nearly no risk" guarantee (qv) to encourage curious punters.  Since most specs and hyperbolic claims are bullshit to begin with, their absence alone should not be troubling.

http://www.bsgt.com/wp-content/uploads/2011/05/qol_signal_completion_stage.pdf

Scant information here; mostly a sort of fishy smell.

http://www.bsgt.com/wp-content/uploads/2011/05/qøl™_owners_manual2012_oct.pdf

The owner's manual is where the phase-related idea is literally espoused.


Maybe someone who ponied up for a PP would also take a calculated risk on this and report back?


Paul S
03-31-2013
Romy the Cat


Boston, MA
Posts 10,166
Joined on 05-28-2004

Post #: 4
Post ID: 19163
Reply to: 19162
What is necessary here is
Thanks, Paul, interesting links.  What is necessary here is to do some critical listening with the idea proposed above. It would not be any measurements, just subjective listening. I am not a huge fan of doing it at line level; I think speaker level with dedicated amplification is the more elegant way to go. It might be played not only with amplitude cross-injection but with injection of the difference between the right and left channels. You can bridge the plus outputs of the right and left amps and drive the injection channels from it – it will carry the delta signal; reverse the polarity for the other channel and you have a ready-to-go test environment. Just add a high-impedance volume control in order not to screw up the signal to the main channels and you are in the game. I wish I could experiment with it, but nowadays I have too much on my plate; someday I certainly will look into it.
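For anyone who prefers to audition the delta-signal idea digitally before touching amplifier outputs, here is a rough sketch of the same arithmetic, assuming Python with numpy. It is not the speaker-level wiring described above, only its digital equivalent; the -12dB default is borrowed from the first post and the function name is my own.

```python
# Digital equivalent of the bridged-amp "delta" feed described above:
# derive L-R, scale it (a stand-in for the high-impedance volume control),
# and return it with opposite polarity for the two injection channels.
import numpy as np

def delta_injection_channels(left, right, level_db=-12.0):
    """Return the two injection feeds: scaled (L-R) and its inverted copy (R-L)."""
    gain = 10.0 ** (level_db / 20.0)
    delta = gain * (left - right)
    return delta, -delta

# Usage with a stereo numpy array of shape (frames, 2):
# inj_left, inj_right = delta_injection_channels(stereo[:, 0], stereo[:, 1])
```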


"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche
03-31-2013
JJ Triode
Posts 99
Joined on 09-12-2007

Post #: 5
Post ID: 19164
Reply to: 19163
Similar ideas
The old Dynaquad system involved using the L-R difference signal, but produced at speaker level across a pair of rear speakers (L-R in one and R-L in the other.)

Decware has one amp (the "Taboo") that can be operated in what they call "lucid mode" where there is cross-feedback between L and R channels.  I guess this is not an option if you are not using feedback to begin with.

There are some tricks with room-acoustic treatments that also enable the user to send some reflected sound diagonally across the listening room to the opposite side.

All these things affect imaging or "the sense of acoustic space" but how to use them to serve the purposes of the music is of course the real and hard question.
03-31-2013
Paul S
San Diego, California, USA
Posts 2,664
Joined on 10-12-2006

Post #: 6
Post ID: 19165
Reply to: 19164
Messing With the Audio Third Rail
The fact is, typical "stereo" L and R separation is mostly an engineering hodge-podge to begin with, and any further messing with it will be very much settled by ear, whether at line or at speaker level.  Yet James Bongiorno set out to "solve" the problems with a practical application of his own, proprietary, theoretical solution to the mess he insisted was/is inherent in "stereo" recording and playback.  And "phase issues" are what he meant to address.

The "theory" is confusing, anyway, whether adding "parts" of left to "parts" of right, and vice-versa, or "simply" adding together/blending the left and right channels to make a center channel.  I have planned for some time to have a main, multi-driver center channel, and I even have a 3rd MA-9S2 sitting there, ready to drive it, and I will do this sooner or later.  Meanwhile, I have been told repeatedly by folks who should know that there is more to summing left and right channels "correctly" than just tapping a channel and/or strapping leads together, whether at line or at speaker level.  In fact, I first approached James Bongiorno about designing a transformer-based stereo channel sum-er.  But by then B was already convinced that he had discovered the only viable answer to this in his Trinaural processor, and he literally would not "waste [my] time and money" on a "non-solution" like I wanted.  He did offer to build a Trinaural processor for me, and he also offered me a money-back guarantee; but at the time it just seemed like "too much" to me.

Sadly, although I have listened intently to all the experts, I have yet to hear a simple explanation for the "channel thing" that I really understand.  Apparently, generic stereo phase "issues" make any sort of summing mix-and-match a fairly wild hit-and-miss proposition, due to electrical and (ultimately) audible summing and cancellation that might or might not be wholly predicted or predictable (unless you're James Bongiorno...).  Naturally, I won't let reality or concerns about it stop me from trying a center channel, and of course I will tell all here, apropos.


Paul S

03-31-2013
zztop7
Edmonds, WA
Posts 40
Joined on 11-02-2012

Post #: 7
Post ID: 19166
Reply to: 19165
Easier on the drivers
Paul S wrote: "typical "stereo" L and R separation is mostly an engineering hodge-podge".  I do not disagree with that statement.  I do feel the major advantage of stereo is not "stereo," but the separation of parts of the information in the signal to the drivers.  Therefore, less information is dumped on each driver.  If a driver has 50 pieces of info to handle in a given time vs. 100, the 50 pieces should be handled more cleanly and accurately.  So the analogy would be MONO = 2 drivers each handling the same 100 pieces /// STEREO = 2 drivers each handling 50 pieces of the total signal of 100.  [I do know there are varying degrees of overlap, and 50/50 is not a perfect real-world scenario.]  So much of Romy's research and work has been based on dividing up the signal to where it belongs: each driver only doing its specialized area with the correct pieces of info.  Contributors, please correct anything wrong in the above statement.  Best to all,  zz
04-01-2013
Paul S
San Diego, California, USA
Posts 2,664
Joined on 10-12-2006

Post #: 8
Post ID: 19167
Reply to: 19166
Channeling Channels
ZZ, clearly, no single driver or pair of drivers can reproduce the range and power of an orchestra, and each of us deals with this according to our own personal demands and resources.  The concern of this thread is trying to juggle/work with "phase" to induce/produce better playback, and for Romy, multiple, DSET "channels" are a given.  Since you brought it up, it is all the more ironic in this context that multiple drivers often exacerbate "phase issues" even as they allow for frequency-specific, "specialty" drivers to do each what it does best in terms of rote sound reproduction.  In this thread, the discussion includes the idea that multiple-drivers-per-channel might be further/better exploited by "cross-pollinating" parts of what is embedded as "fixed" L and R channel information in typical stereo program material.  Just to be clear, Romy also uses the term "channel" to mean a certain frequency band that is separately routed at or near line level in his "multi-channel" DSET amps, which would ostensibly better "facilitate" the cross-pollination of parts of the L and R stereo channels.

Best regards,
Paul S
04-01-2013
decoud
United Kingdom
Posts 247
Joined on 03-01-2008

Post #: 9
Post ID: 19168
Reply to: 19167
Wood for the trees
Is it not strange that we are so fixated on spatial separation in sound? Instruments in an orchestra are spatially separated because you can't have one musician sitting in another's lap, not because the music demands it. Music is intrinsically time- not space-varying. So insisting on spatial separation is an insistence on reproducing an aspect of performance that is incidental to the essence of the music: in an ideal world you would not have it. 
Of course there *are* known frequency/space interactions in human audio perception: for example, people in whom brain damage causes an inability to attend to the left of visual space tend also to be poor in attending to low frequencies. So perhaps this is the mechanism these devices are trying to exploit.
04-05-2013
el`Ol
Posts 225
Joined on 10-13-2007

Post #: 10
Post ID: 19180
Reply to: 19168
Andrea von Salis
I have only heard such a quadraphonic system with a special recording; no idea what the conversion software can do for stereo-base broadening of conventional stereo recordings, but it might be worth a try (with the Tannoys as rears).
http://www.andreavonsalis.com/
04-05-2013
steverino
Posts 367
Joined on 05-23-2009

Post #: 11
Post ID: 19181
Reply to: 19168
Spatial separation an inherent part of music
I have to disagree with the notion that spatial separation is an artifact of music stand geometry and of no material importance to musical expression. Composers themselves have written for specific spatial groupings (e.g., Gabrieli, Bartok, Henry Brant, to name a few). The orchestra itself is arranged spatially to allow groups of instruments to be heard to best advantage. Louder instruments are placed to the rear so they don't overpower softer instruments. In addition, orchestral harmonic balances would be lost if there were random or even too-narrow placement on the soundstage. It would also be a problem if the different instruments had too great a spatial separation, as the harmonies would not blend appropriately.

BTW on a different topic and thread thanks for your noting of the lithium iron battery development. I wasn't aware that they had progressed as far as they had.
04-06-2013
decoud
United Kingdom
Posts 247
Joined on 03-01-2008

Post #: 12
Post ID: 19182
Reply to: 19181
Necessity vs choice
The point is that spatial separation is not necessary in music in the way it is necessary for visual representation. A single musical instrument is -- more or less -- a point source, and it need be no less expressive for that, including in relation to harmonic blending, whereas no one could ever do a one-dimensional painting (though of course one might do a piece of one-dimensional installation art).  Yes, composers might use the spatial separation that an orchestra enforces on them, but they do not have a choice here, and to the extent to which they do have a choice they do not do that much with it. For example, I do not know of any significant composer who shuffles the positions of individuals in an orchestra dynamically to an expressive end, who would move a single performer around a stage, etc. And imagine for a minute that someone came up with a way of generating the sound of an orchestra from a single point: would people really *necessarily* speak of this as a defect? On the contrary, I can imagine many telling us what a revolution it was, the grand unification of music, all of sound channelled through the single portal of God, etc.
If you are interested in LiFePO4 technology, I have found the kit these people http://www.servovision.com/ make cost-effective and reliable, though I am some way off building something that could run a full-range Melquiades. It is, nonetheless, now very possible.
04-06-2013
Romy the Cat


Boston, MA
Posts 10,166
Joined on 05-28-2004

Post #: 13
Post ID: 19183
Reply to: 19168
Fixated on spatial separation?
 decoud wrote:
Is it not strange that we are so fixated on spatial separation in sound? Instruments in an orchestra are spatially separated because you can't have one musician sitting in another's lap, not because the music demands it. Music is intrinsically time- not space-varying. So insisting on spatial separation is an insistence on reproducing an aspect of performance that is incidental to the essence of the music: in an ideal world you would not have it.
  
I had written a long reply to the post above but accidentally deleted it from my MS Word document. That sucks and I do not feel like writing it again. Still, the subject is important and I feel I should briefly comment about it, interrupting my deck re-building project…

I disagree with everyone regarding the spatial separation. I do not think it is unimportant, and I do not think it is as important as "steverino" feels. The fact that many composers and musicians do use spatial location as an expressive tool is irrelevant to me.  If Mahler insisted that the brass section in M3 has to sit off the stage, does it mean that if my playback can't handle it (let's pretend it is mono) then I am not able to hear the M3 at home?
The point that I am trying to make is that in audio there are two spatial separations.

First is the reflection of the original performing event. This is a complicated subject, as practically no one nowadays records properly: poly-microphone techniques and barbaric editing in most cases make the original spatial separation, if not irrelevant, then for sure less imperative. You can search for years for an orchestra where the first violins are able to enter with a delay escalating from chair to chair, but even if you find them, rest assured that it will be killed by multiple microphones.

The second spatial separation is truly an audio trick, and from some perspective it should not be relevant. The trick is that, as irrelevant as it is, the ability of a playback to pull it off says a LOT about the quality of the playback.  The dirty little secret of audio is that we do not truly recognize amplitude, but we do recognize phase.  It is not about "recognition" but rather about the impact that audio has on us – it is not amplitude but phase centric. So, here is that purely irrelevant audio trick: the ability of a playback to do spatial separation becomes like some kind of certification of quality. Of course there are many other evidences of quality, but surprisingly the spatial separation is very, very handy and very easy to use. It is like the pH test of water in a spa – if the pH is off, then all kinds of problems come…

The Cat


"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche
04-06-2013
steverino
Posts 367
Joined on 05-23-2009

Post #: 14
Post ID: 19184
Reply to: 19183
Just to be clear
My point is not that separation per se must always be maintained but that the character of the music determines the degree of optimum separation or co-location. For example, the first violins are grouped together for a reason: so the individual string tones blend. The location of the second violins is more discretionary, but certain compositions benefit from one location or the other (the two possible sites are behind the first violins or across from them on the right side of the stage), two different sonic effects.  The melodic, harmonic, dynamic and instrumentation attributes created by the composer always imply a certain kind of spatial arrangement in live performance to reproduce them satisfactorily. Even on a recording, an artificial arrangement should be based on those factors in addition to any directions from the composer, producer, etc. It's called mixing.
04-06-2013
Romy the Cat


Boston, MA
Posts 10,166
Joined on 05-28-2004

Post #: 15
Post ID: 19185
Reply to: 19184
The art of audio thinking.
 steverino wrote:
My point is not that separation per se must always be maintained but that the character of the music determines the degree of optimum separation or co-location. For example the first violins are grouped together for a reason so the individual string tones blend. The location of the second violins is more discretionary but certain compositions benefit from one location or the other (the two possible sites are in back of the first violins or across from them on the right side of the stage. Two different sonic effects.  The melodic, harmonic, dynamic and instrumentation attributes created by the composer always imply a certain kind of spatial arrangement in live performance to reproduce them satisfactorily. Even on a recording an artificial arrangement should be based on those factors  in addition to any directions from the composer, producer etc. It's called mixing.

Steverino, yes, that is all understood, but I was talking about slightly different things. Pretend that there is no spatial intent in music. Let's pretend that we have a single flute playing in an anechoic chamber; I know it is disgusting, but let's pretend it for methodological reasons. Even then we still have the audio playback doing phase processing, and the virtual imaging of spatial reconstruction will be there. So, what I advocate is that we shall not confuse the musical or actual spatial information with spatial information as an audio debugging tool. Sure, they are related, but I think the "art of audio thinking" is to be able to separate the importance of spatial reality from the specifics of the spatial reconstruction of reality.


"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche
04-06-2013
steverino
Posts 367
Joined on 05-23-2009

Post #: 16
Post ID: 19186
Reply to: 19185
Dazed and confused
I'm sorry Romy usually I follow your train of thought but I'm lost on this one. There is a spatial arrangement that is optimal for the musical content. I think we agree on that. Then there is the actual spatial arrangement whether live or mixed.  Then there is the audio system reproduction of that actual spatial information whether anechoic or reverberant. But you say that should not be used as an audio debugging tool? Because we can't map it precisely from the stage to the speakers..?? But wouldn't that issue be avoided by artificially mixed material? Or am I missing everything?
04-07-2013
Romy the Cat


Boston, MA
Posts 10,166
Joined on 05-28-2004

Post #: 17
Post ID: 19192
Reply to: 19186
My view.
 steverino wrote:
I'm sorry Romy usually I follow your train of thought but I'm lost on this one. There is a spatial arrangement that is optimal for the musical content. I think we agree on that. Then there is the actual spatial arrangement whether live or mixed.  Then there is the audio system reproduction of that actual spatial information whether anechoic or reverberant. But you say that should not be used as an audio debugging tool? Because we can't map it precisely from the stage to the speakers..?? But wouldn't that issue be avoided by artificially mixed material? Or am I missing everything?
  
Well, what I was saying is that the spatial capacity of a playback has nothing to do with the spatial arrangement that is optimal for the given musical content. You see, playback is a brainless, dead substance that has no knowledge or understanding of musical content or of the spatial needs of the played musical content. Playback has, however, its own spatial capacity that has absolutely nothing to do with the music you play. Pretend that we play on our playback ONLY test signals: you will see that different installations demonstrate completely different spatial capacity. In my view, when we talk/think about the spatial capacity of playback, we need to forget about the spatial load of the music and let the playback do the spatial tricks only for the sake of playback.

We can tell each other stories about how a clarinet could magnificently bind string sections located at opposite sides of an orchestra, but it will not help us to figure out why, for instance, Convergent Audio Technology preamplifiers, being fine preamplifiers, have a problem positioning the vertical image properly. CAT preamps have that vertical parabolic curve which positions the violin sections on right and left but elevates the mid image vertically, and it looks like it does not depend on the acoustic system setup. Operating by musical categories, this does not make sense. So I do feel that a playback has to be given some strictly audio test where its spatial capacity would be validated. It might be playing some spatially charged music where the playback would demonstrate its ability to deal with it. Then the most interesting thing happens: each playback has its own way to deal with space. Pay attention: no one would argue with the musical spatial information, but we all would have different opinions and different experiences of how our current or imaginary playbacks deal with space. The interpretation by the playback of the spatial factor has nothing to do with the spatial intent in music, and therefore I do not feel that it is possible to prove the importance of the spatial character in audio by stating that live music has a lot of spatial intent.


"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche
04-07-2013
steverino
Posts 367
Joined on 05-23-2009

Post #: 18
Post ID: 19194
Reply to: 19192
Ok maybe I understand you even when I don't
I think my statement "Because we can't map it precisely from the stage to the speakers..??" was on the right track. Yes, there is a distortion of spatial information from performance to speaker, in the same manner that a curved shiny ball distorts a painting reflected in it. I think the only slight exception would be a recording of musicians playing in the same room, in the space between where the playback speakers stand. Playback then should have a fairly close fidelity of spatial information on a high-fidelity system.
04-07-2013
Romy the Cat


Boston, MA
Posts 10,166
Joined on 05-28-2004

Post #: 19
Post ID: 19195
Reply to: 19194
Try to hear Lamm L1/L2 for start.
 steverino wrote:
I think my statement   " Because we can't map it precisely from the stage to the speakers..??"     was on the right track. Yes there is a distortion of spatial information from performance to speaker in the same manner that a curved shiny ball distorts a painting reflected on it. I think the only slight exception would be a recording of musicians playing in the same room between where the playback speakers stand. Playback then should have a fairly close fidelity of spatial information on a high fidelity system.

Well, frankly, I am less concerned about spatial information as imaging of instruments and sections. Sure it is important, but I am rather interested in a different way to deal with that information. It is hard to explain, but if you try the bypass test with the Lamm L1/L2 you might understand what I mean, as it is VERY different from anything else out there. Still, I consider the most interesting not the spatial information from the imaging perspective but pure spatial information without imaging. I mean, let's say a string quartet is playing: the locations of both violins, viola and cello would be imaging, but the virtual proximity between the players and the boundary of the performing space is pure non-imaging space.  The Lamm preamps did nothing in that direction, and I would like to experiment with it using my idea of cross-phase injection. You see, I think that if we use some kind of smart cross-phase injection and turn the injection channel toward the wall, then we might create some kind of condition where something might happen. It would be nice to play with delay channels while we are at it, but then we would end up with those 200 drivers… :-)
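Purely as an illustration of the "delay channel" thought above, and not something anyone in this thread has tested, a few-millisecond delay on the injection feed can be sketched digitally, assuming Python with numpy; the 3 ms default and the function name are my own placeholders.

```python
# Delay the injection feed by a few milliseconds before mixing it in,
# roughly emulating the extra path length of an injection driver aimed at a wall.
import numpy as np

def delayed_injection(injection, fs, delay_ms=3.0):
    """Delay the injection feed by delay_ms milliseconds (zero-padded at the start)."""
    n = int(round(fs * delay_ms / 1000.0))
    return np.concatenate((np.zeros(n), injection))[:len(injection)]
```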


"I wish I could score everything for horns." - Richard Wagner. "Our writing equipment takes part in the forming of our thoughts." - Friedrich Nietzsche
04-07-2013
steverino
Posts 367
Joined on 05-23-2009

Post #: 20
Post ID: 19196
Reply to: 19195
No lamm to bleat at me
I don't have a Lamm, but I think conceptually there is no real difference between imaging the location of the players and imaging their non-location. Sound waves generated by the musicians are traveling in the space between them, and even if they weren't (as in a pause), some sound information, however low level, should be present at every point of the performance space. How we (or the mics) hear the space between the players, or between them and the room boundaries, if any, all qualifies as imaging, doesn't it? It's just that people tend to use other terms, such as ambiance or hall reflections, to discuss the imaging not associated with the location of the performers. That's why mic'ing each player can sound like simultaneous mono recordings arranged in stereo space.