

In the Forum: Horn-Loaded Speakers
In the Thread: Macondo’s lowest channel.
Post Subject: My ways to mimic Reality.
Posted by Romy the Cat on: 3/13/2011

I am trying to keep the discussion about midbass horns out of the ULF thread. If Rakesh would like, he can create a separate thread dedicated to his specific installation and design ideas. I would like to keep the content of the posts more or less related to the subject of this thread.

Very briefly: I never had doubts about whether to use one midbass horn or two. Sure, the proximity of the horns is a liability, and I know well how they need to be positioned to be "proper". Then there is the real world of making horns that do not wreck the living space and that stay reasonably invisible in the room. This is where I feel any estimates and predictions fail: you build what you want to build and then deal with the consequences.

The monophonic position of my midbass horn does take its own toll, but it is impossible to separate what is right from what is wrong: all the negatives and positives are very much mixed together in one presentation. Also, the system is very much optimized to play with this type of midbass horn: the upperbass horns were made not to work linearly but to overshoot a bit. The result is that the upperbass horns take the attention away from the midbass horns, and listening to the installation without ULF you clearly hear the midbass notes as if they are coming from the front-located upperbass horns. It is very, very nice and in a way unbelievable. Even without using the ULF, the location of the midbass horns appears to be unidentifiable. Does the midbass horns' semi-mono location have an impact on the "width" of the playback? Yes it does, and I clearly expressed it in the posts where I played with the idea of width-modulation channels. However, the width channel that I described was needed not because of the midbass horn but because of some upper-MF cancellations, the cancellations that the ULF has fixed. Still, the midbass horns are where they are, and they are where I would like them to be.

In my view there is no need to predict the sonic results and to use those predictions as encouragement or discouragement for building the horns themselves. A midbass horn is like a child: it is perfectly possible that your son will become a serial killer, but that is not what you think about when you consider having a child. You conceive the concept and then do your best so that your son does not become a serial killer. The very same goes for such a large architectural project as a midbass horn: for the most part it will be the result of your steering.

Now back to the ULF topic. I was thinking about using the R and L delta (I have the sum and the stereo delta in my R&S multiplex decoder) but very quickly discarded this idea. To use phase injection means to interact with the original signal. It does have benefits, but it also degrades the "direct" sound. How does one assess what kind of injection is beneficial and what kind of sound deterioration is still acceptable? I am standing on the position that if the main signal is compromised in ANY way, then whatever is being done is not usable. Macondo's capacity is very high and it is very "clean", if you know what I mean. You hear pretty much what you hear with good headphones, only much more in terms of tone, space and imaging. For instance, I have a Dorrough modulation meter with 2 MΩ input impedance. You understand that hanging a 2 MΩ load across a 16 Ω speaker channel should have absolutely no impact. In fact, I use it on my MF channel and I notice no difference. However, using the same meter on the midbass channel does affect the sound very negatively. Now, adding to my system another device with 2-3 active stages that would do phase processing would, I feel, be too damaging, and I would like to keep the path as short and uncompromised as possible. Do not forget that a few years back I was not able to make a zero-gain buffer that was transparent enough.
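
To put a rough number on that loading point, here is a minimal back-of-envelope sketch: plain parallel-resistance arithmetic on the 2 MΩ and 16 Ω figures above (the Python names are only illustrative, nothing more is assumed):

# How much does a 2 Mohm meter input load a 16 ohm speaker channel?
def parallel(r1, r2):
    # equivalent resistance of two resistances in parallel
    return r1 * r2 / (r1 + r2)

speaker = 16.0        # ohms, speaker channel impedance quoted above
meter = 2_000_000.0   # ohms, Dorrough meter input impedance quoted above

loaded = parallel(speaker, meter)
print(f"loaded impedance: {loaded:.6f} ohm")                   # ~15.999872 ohm
print(f"change: {100 * (speaker - loaded) / speaker:.5f} %")   # ~0.00080 %

Electrically the loading is vanishingly small, well under a thousandth of a percent, which makes the audible penalty on the midbass channel all the more telling.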

I do feel that what Robert proposes might be experimented with and used, but NOT at the design stage. The design concepts need to rely upon dealing with straight signals. Then, after the straight-operating playback is set, one can see what else might be done and how the different intentional additions work against the uncompromised signals.

In the end, a comment about "reproduction of the subaudible signal at the same level as the audible bandwidth". It is not necessarily about that. There is no "same" level, hyper-elevated level or elevated level. When we hear live sound we have no "level of space"; we subconsciously get the messages about "space" from the naturally long reverberation time, from the visual aspect and from other senses. Recordings do not carry this information, and listening rooms have nowhere near the necessary decay time (Symphony Hall in Boston takes about 1.5 seconds to drop 60 dB), not to mention that ULF information is severely distorted on recordings. So, the idea is not to correlate the ULF with the audible signal but to separate them and to run the ULF at the level that creates a SIMILAR sensation to what happens during a live event. The keys to it are to have the ULF not affect the audible signals, to have flexibility of the ULF level, to have the ULF front time-aligned, to have the ULF leading edge as compressed and sharp as possible, and to have a room that is able to dissipate the ULF decay evenly. So, it is not about levels in terms of dB equalization but rather about equity of perception. I do not even know what the objective level of my ULF is. My ULF is higher than it would be with a single linear driver of 20 Hz-20 kHz bandwidth, but who said that linearity is a part of Reality? I do not even mention that in audio we do not deal with Reality; we deal with very barbaric ways of mimicking Reality…

The Cat
