Return to Romy the Cat's Site


In the Forum: Horn-Loaded Speakers
In the Thread: Some horn writing by Thomas Dunker.
Post Subject: Some horn writing by Thomas Dunker. Posted by Romy the Cat on: 1/14/2006

All below is written by Thomas Dunker

Yeah, "sound consistent independent of signal level" is a very precise way to sum up my whole speaker philosophy. Anything that is assumed to be constant, but which actually changes with signal dynamics, is what I call "dynamic nonlinearities". Dynamic nonlinearities of all kinds are practically a defining characteristic of electrodynamic speakers. A buddy of mine says that if a speaker diaphragm moves, it produces distortion. The trick, then, is to help the transducer radiate maximum acoustic power with minimum "effort". The complexity and interdependence of speaker nonlinearities is such that I consider it futile to correct any of this "after the fact". It can only be minimized.

Thanks for the responses. I knew I'd heard that Shindo does field coil mods, but I doubt that what he charges is within the limits of my audio budget. Right, machining a new center pole is something I'd have to pay someone to do anyway, but I might as well have it done here in Norway.

Actually my current "project" (which hasn't quite made it to the drawing board yet) specifically involves "reverse engineering" the WE555 and 15A horn system. I don't mean replicating it, I just have to understand how they pulled it off, I mean the unbelievable EBP, 100Hz with a 2" dome compression driver, everything. It's been bugging me for years. No matter how hard I dig into the horn theory, I just end up with more questions. Wente and Thuras impress me more and more the more I learn, going on the eleventh year.

I've been pursuing this hunch that the axial length of a horn in relation to the cutoff wavelength has got to make a difference to the wavefront propagation within the horn in the first octave above cutoff, and ever since the Shearer system happened the aim has been to make horns as short as possible for easy time alignment. It's like the length of a horn is often left to chance or ideas like, "let's see, if I just quadruple the throat area I can get away with half the length, how nice." But the wavelength at the cutoff frequency is still lambda = c/f; you can't just cut THAT in half? Something *else* is different in this picture.
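To put numbers on that trade, here is a quick sketch of the textbook plane-wave exponential horn relations (the throat and mouth figures are made up for illustration): the flare constant alone sets the cutoff, and the axial length then falls out of nothing but the throat-to-mouth area ratio.

```python
import math

c = 343.0  # speed of sound, m/s

def exp_horn(f_c, s_throat, s_mouth):
    """Plane-wave exponential horn S(x) = S_t * exp(m*x).
    The flare constant m alone sets the cutoff, f_c = m*c/(4*pi);
    the axial length is then dictated purely by the area ratio."""
    m = 4 * math.pi * f_c / c
    length = math.log(s_mouth / s_throat) / m
    return m, length

# 100 Hz cutoff, ~20 cm^2 throat into a 2 m^2 mouth (assumed figures)
m, len_a = exp_horn(100.0, 20e-4, 2.0)
# quadruple the throat area; same cutoff, same mouth
_, len_b = exp_horn(100.0, 80e-4, 2.0)
# the horn gets shorter by ln(4)/m, yet the nominal cutoff is unchanged
print(len_a, len_b)
```

This is exactly the substitution being questioned above: the formulas happily shorten the horn while the cutoff wavelength stays put.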

Someone please explain this to me, I may be dumb, but I find this particular thing really bizarre.

The question comes down to "how much of a horn on the driver" and it's not just about the cutoff frequency.

There's something special I hear in longer horns with larger mouth to throat area ratios, that just isn't there in shorter horns with a smaller mouth to throat ratio, and I want to find out why and what and pursue it a little further. Any ideas on this would be very welcome. I know they all say longer horns have more "horn sound", as if that explains any of what I'm talking about.

If anything is obvious from reading 'too much' horn theory, it's that they all leave me thinking "But what about..." something or other. There's always simplifications, contradictions, approximations, assumptions, omissions etc. I therefore keep finding myself reading ten different papers simultaneously trying to fill in as many gaps as possible, usually ending up even more confused.

Recently I've been reading a very basic yet exceptionally useful and thought-provoking article by Victor Brociner, "The Why and How of Horn Loudspeakers" from Audio, March/June 1971, highly recommended.

Only got a sloppy photocopy with some pages missing; if someone has a copy, I'm very interested in, uh, another copy.

Too late for any more typing tonight,

***********************************************************

When I first got into horns I very soon realized how reduced diaphragm excursions, reduced input power etc. serve to reduce distortion, and improve 'dynamic linearity' hence 'useful dynamic range' (as one would have defined it for amplifiers etc.). Realizing that these relationships apply universally to all moving coil/electrodynamic speaker units, and since I was working on a low distortion high efficiency direct radiator array bass system, I started gathering research papers, articles etc. on all conceivable types of distortion known to take place in speakers. The more I read, the more obvious the conclusion that dynamic problems mount progressively as one moves away from a high efficiency design philosophy. This is by no means difficult to prove and explain.

Some of the initial inspiration wrt. distortion in speakers came from reading articles and JASA papers by Paul Klipsch, who also always insisted that distortion in speakers generally is inversely proportional to efficiency. As my own studies progressed, however, I've come to view Klipsch's explanations for this as somewhat simplistic. Or rather, he could have argued more convincingly by pointing at the fact that something like a dozen significant nonlinear factors are "forcibly" linearized in high efficiency small excursion designs. The cumulative effect of simultaneously reducing nearly all these dynamic nonlinearities in one fell swoop makes for a perceptible improvement in clarity, articulation, dynamics, effortlessness etc. Klipsch, however, tended to emphasize reduced intermodulation distortion in horns, which is ONLY due to reduced diaphragm velocity. This distortion mechanism exists independently of nonlinear parameters in the driver itself, as this generation of nonharmonic sidebands is due to the Doppler effect and is proportional to the product of peak diaphragm velocity and bandwidth. It is therefore not related to efficiency or driver nonlinearity as such. Neither does it explain the improved dynamics and other advantages of high efficiency speaker design.
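The velocity dependence can be put in rough numbers with the standard Doppler FM relation (the excursion figures below are assumed, not measured): the peak frequency deviation of a high tone is the product of that tone's frequency and the peak diaphragm velocity from the low tone, divided by the speed of sound.

```python
import math

c = 343.0  # speed of sound, m/s

def doppler_deviation(f_low, x_peak, f_high):
    """Peak FM deviation (Hz) imposed on a high tone by a diaphragm
    simultaneously reproducing a low tone with peak excursion x_peak (m)."""
    v_peak = 2 * math.pi * f_low * x_peak  # peak diaphragm velocity, m/s
    return f_high * v_peak / c

# long-throw direct radiator: 50 Hz at 5 mm peak excursion, 5 kHz tone
d_direct = doppler_deviation(50.0, 5e-3, 5000.0)   # ~23 Hz of deviation
# horn-loaded driver doing the same job at 0.05 mm peak excursion
d_horn = doppler_deviation(50.0, 0.05e-3, 5000.0)  # ~0.23 Hz
print(d_direct, d_horn)
```

A hundredfold reduction in excursion buys a hundredfold reduction in Doppler sidebands, independent of anything in the motor itself.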

I therefore had to supplement the work of Klipsch with a much broader selection of theory on the numerous other excursion and power/current dependent nonlinearities. It very soon turns out that most of the T/S parameters used to describe drivers are small signal parameters only valid ("constant") for small excursion amplitudes and low input power. An engineer designing speakers using T/S based modeling and simulations might never give any thought to the nonlinear nature of these parameters. Looking at modern trends in speaker design this indeed seems to be the case.

I think it's relatively easy to show that all these problems begin with low radiation efficiency, which must be made up for by increasing diaphragm velocities and excursions. Once you have a large excursion design, efficient motor design becomes impossible both because of the consequences of reduced radiation efficiency AND because of the increased input power necessary to make up for a less efficient motor driving a less efficient radiator. The consequence of this is a huge increase in distortion and a huge drop in efficiency.

This said, the dynamic problems and distortion of such a driver will be reduced by reducing the power input and excursion amplitude, by which the acoustic output is reduced accordingly. But if a large number of such drivers are placed in an array the combined diaphragm area can be made quite large, improving radiation efficiency, raising the system efficiency considerably, and dividing the applied power between a large number of drivers, reducing dynamic problems related to voice coil temperature rise, flux modulation etc.
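A rough sketch of why arraying helps, in the compact-source approximation (hypothetical numbers): at low frequencies the volume velocities of closely spaced drivers add coherently, so radiated power grows with the square of the driver count while the electrical input only grows linearly.

```python
import math

rho, c = 1.2, 343.0  # air density (kg/m^3), speed of sound (m/s)

def radiated_power(n_drivers, u_each, f):
    """Acoustic power of n closely spaced drivers treated as a single
    compact source (ka << 1); their volume velocities add coherently."""
    k = 2 * math.pi * f / c
    q = n_drivers * u_each           # total volume velocity, m^3/s
    return rho * c * k**2 * q**2 / (4 * math.pi)

p1 = radiated_power(1, 1e-3, 50.0)
p4 = radiated_power(4, 1e-3, 50.0)
# 16x the acoustic output for 4x the electrical input: ~4x the efficiency
print(p4 / p1)
```

The same coupling is why the array also gets away with less excursion, and less power per coil, for a given acoustic output.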

There's just no end to the dynamic problems that resulted from insufficient initial radiation efficiency. Every subsequent attempt to "compensate" for the consequences of poor radiation efficiency further reduces efficiency and further increases distortion.

It's funny how the limited power of 1920s power amplifiers motivated speaker designers to focus hard on efficiency, with the serendipitous result that a very low distortion speaker technology resulted. Reading the paper by Wente and Thuras on the design considerations leading to the 555 driver, one realizes that they investigated and understood most of the crucial factors governing dynamic linearity and summed it up in a small list of criteria for high fidelity speakers. If you look at modern "hi-fi" speakers and the amps that drive them, one sees that all these criteria are systematically violated, the ultimate result of high power amps making efficient speakers "obsolete". The only low distortion high efficiency speakers left are found in the pro sound industry, but these speakers are usually being pushed so hard that a lot of their sonic advantages in terms of dynamics and low distortion are compromised due to the extreme acoustic power outputs and high power input required.

Anyway...you bring up the subject of reduced momentary efficiency due to momentary voice coil heat rise. This is a mechanism very similar to that of an electronic dynamics compressor, where the amplifier gain is progressively reduced as the input signal amplitude increases. You can plot the efficiency vs. power input characteristic for a low efficiency speaker and do the same for a high efficiency speaker and the relationship becomes very obvious even for continuous signals. In actual speakers the voice coil has a thermal time constant that depends on the dimensions and design of the voice coil, so that woofers typically have a greater time constant than mids or tweeters, resulting in different dynamic compression characteristics for different frequencies. This gets very messy when you think about musical transients containing wide band energy.
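As a sketch, using the textbook temperature coefficient of copper and some assumed temperature rises: under voltage drive the current, and hence the motor force, falls as the coil resistance climbs with temperature.

```python
import math

ALPHA_CU = 0.0039  # temperature coefficient of copper resistance, 1/K

def compression_db(re_cold, delta_t):
    """Sensitivity loss under voltage drive when the voice coil heats
    by delta_t kelvin: drive current, hence force, falls as Re rises.
    (Electromagnetic damping degrades even faster, roughly as 1/Re^2.)"""
    re_hot = re_cold * (1 + ALPHA_CU * delta_t)
    return 20 * math.log10(re_hot / re_cold)

# hard-driven low-efficiency woofer, coil at +150 K (assumed figure)
print(compression_db(6.0, 150))  # ~4 dB of dynamic compression
# horn driver coil barely warming, +5 K (assumed figure)
print(compression_db(6.0, 5))    # ~0.2 dB
```

Fold in the different thermal time constants of woofer, mid and tweeter coils and the "compressor" acquires a different attack and release time per band.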

It gets even worse in speakers that are supposed to get all their damping from back-EMF shorted by the low output impedance of the power amp, as this short circuit current is limited by the voice coil resistance, with the available momentary 'damping force' inversely proportional to the square of the momentary voice coil resistance. For a system aligned for optimum transient response/damping based on T/S parameters, the transient response progressively worsens with increased power input.

Also, a worst case scenario would involve passive crossovers facing a dynamically variable speaker impedance, causing the crossover frequencies to bob up and down with voice coil temperature shifts.

One could go on forever to explain the ultimate consequences of low efficiency speaker design, but it all really just boils down to establishing a situation where ideally constant parameters are made as constant as possible within the largest possible dynamic range.

Wrt. horn length vs. cutoff frequency, I'm still looking into it. Occasionally the concept of horn cutoff frequency is explained by the horn length being too short for several wavefronts (how many would do?) to form in the horn, screwing up the stable forward wave propagation, and that this causes mouth reflections. But that's not the same as saying that the mouth perimeter and the flare rate define the cutoff frequency. Depending on the selected throat area, for a given mouth area, the axial length of the horn will be defined by the throat area rather than considerations pertaining to wave propagation. This makes no sense to me.

In the Brociner article I mentioned, the theoretical ideal horn is 'developed' starting with an infinitely long pipe of constant area, assuming that the driver has to be forced to form plane wavefronts. Then the pipe is reduced to one having a finite length sufficient for plane wave propagation, and he goes on to theorize about coupling this pipe to one having a larger area, and adding increasingly larger pipes until a large "mouth area" is reached.

He then goes on to show that the number of steps can be made infinite by making each "pipe section" infinitely short, and arrives at a horn having a smooth taper from throat to mouth.

However, throughout the article he makes the point that an ideal horn should have essentially plane waves not just at the throat, but also at the mouth in order to propagate waves as a very long cylindrical pipe would. This is of course impossible except with an infinitely long horn. A step in the right direction would seem to be a long horn with a very gradual taper. This is what we see in the old WE horns, where it turns out that the axial length is close to a wavelength at 100Hz in the case of the 15A horn. The 13A horn has a smaller mouth, but is longer still. There's got to be some explainable reason why they went to all that trouble making the horns so long. If they wanted to make a pure bass horn, they might have used a paper cone on a shorter horn, but these horns have good output as much as 5-6 octaves above cutoff, which is quite amazing to say the least.

One can imagine designing a horn with a flare rate corresponding to a cutoff frequency considerably lower than that defined by the mouth circumference. This would result in a much longer horn with less wavefront curvature at the mouth. An inevitable consequence is reduced dispersion of the radiated sound, but how might such an approach make it feasible to use a driver like the 555 down to 100Hz? To me there seems to be some aspect of the load on the driver at low frequencies (first octave above cutoff) that relates to the mode of wave propagation as governed by horn length, but I can't really explain it.

I have not read a great deal of waveguide theory, but I suspect that part of the explanation might be found by considering the horn as a waveguide, also per Brociner's explanation.

Oh, and on the assumed "horn sound" of longer horns, Keith Holland writes in "Round the Horn" that this is due to something like "temporal distribution of reflections". But are these "reflections" only a matter of "mouth termination" (impedance mismatch at the mouth), or does the length vs. "waveguide operation" matter just as much? Does the wavefront curvature at the mouth affect the degree of "mouth reflections"? Measurements reveal that wavefronts are considerably "flattened" after they have passed the mouth perimeter. In the same article, Holland says that a conical 'waveguide' (with a very "abrupt" mouth termination) of limited length on the other hand has no "horn sound". I feel like calling Keith Holland some time just to ask what this all means.

**********************************************************

Could be that Jean-Michel is on vacation (I haven't had much time for e-mail the past few days, so haven't read the JoeNet mail either).

If I am to understand Jean-Michel's theory (and the resulting horns) the way he expects it to be understood, it would be that the "design" of the horn mouth makes "all the difference" when it comes to reflections, and consequently if reflections at the mouth can be *eliminated* as a matter of design, there would be no requirement for the axial length of the horn in terms of preventing reflections from making it back to the throat, thus permitting a shorter horn. This all sounds very nice in theory, but I wonder...

I know that Jean-Michel has devoted a lot of time to the study of wavefront expansion and propagation in the horn. This is a very difficult subject (at least to me...) in terms of judging a horn design. Except in very rare cases, we don't get to *know* precisely how the waves actually propagate towards the mouth and where/how they "let go" of the horn, and how this might differ greatly with the wavelength of the sound. For instance, there is no evidence that a round tractrix horn is capable of radiating hemispherical waves from the plane of the mouth, although the assumption of "spherical wave" radiation being unique to, and only possible with tractrix horns now seems to be something of a "proclaimed truth", and a very questionable one at that...

I am thinking of the papers by Newell, Holland and Fahy ("Prediction and Measurement of the One-Parameter Behavior of Horns", and "Round the Horn"), where, in accordance with other experimental evidence, it is found that the wavefronts just outside the horn mouth typically assume the shape of "flattened spherical caps". This can be the case even when the wavefronts have considerably greater curvature just before "letting go" of the horn. Voigt assumed the wavefronts to have a constant radius of curvature from throat to mouth, and wavefronts being perpendicular to the inner horn walls, and this produced the tractrix curve. I've been listening to tractrix horns myself for ten years, and I'm not saying they "don't work", just that they are based on some assumptions that aren't necessarily all correct.

One of these assumptions appears to be that the wave "tears loose" at the extreme end of the physical horn, i.e. in the plane defined by the physical mouth perimeter, AND that the curvature of the wavefront at this point is defined by the "opening angle" of the mouth. If this were the case, and for all frequencies transmitted, very wide and constant (with frequency) dispersion from horns just wouldn't be a problem, and it would all simply depend on the horn mouth geometry.

This clearly is not the case. One reason for this, as Jean-Michel points out, is that the "mouth" is not actually a well defined point along the horn axis. It would seem that for frequency independent constant dispersion, a starting point would have to be establishing conditions for the "wavefront" having a constant curvature at the point where it is radiated into free space, and for this to happen at a well defined point near the horn mouth, for all frequencies/wavelengths in question. A further requirement would be that once the wave has "let go" of the horn it is allowed to propagate with the same curvature as that defined by the horn, again, for all frequencies/wavelengths within some predefined frequency band. These particular requirements would seem to be met in modern "wave guides" and constant directivity/constant dispersion horns. These, however, are frequently less than ideal "horn loads" for the driver except at higher frequencies, often having properties similar to those of conical horns, which typically restricts their use to higher frequencies (which is where wide dispersion is the most difficult to achieve).

Recently I have spent a great deal of time examining the design of the early wide range horns from Western Electric, a generation of horns that was made "obsolete" 70 years ago and clearly not very well understood except by their creators. Their exceptionally wide range response is primarily due to the use of a small compression driver being used down to unusually low frequencies (and well below the driver's resonant frequency as well). However, having very large mouths, how could these horns have reasonably good dispersion as much as 5-6 octaves above cutoff? These horns would seem to embody very careful design meeting tough challenges at both extremes of the frequency band they were designed to cover. To avoid excessive diaphragm excursions at the lowest frequencies, the horn would have to have a very high resistive throat impedance, and low reactive impedance right down to the "minimum frequency". There is evidence that these horns were referred to as exponential horns, BUT that Edward Wente specified for the area expansion of the assumed *curved* wavefronts to follow an exponential law, not merely that the horn would have an exponentially increasing cross section (assuming plane waves). Therefore, horns such as the 15A, 13A etc. must have a "modified exponential" expansion taking the wavefront curvature into account. They would therefore appear to expand more slowly than a "plane wave exponential horn", since the increasing curvature of the propagating wave contributes a term of area increase in addition to that provided by the increasing cross section of the horn.
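One way to see how curved wavefronts slow the required cross-sectional expansion is simple spherical-cap geometry. This is only a sketch, under the assumption that the wavefront is a spherical cap meeting the wall at half-angle theta: the cap's area exceeds the flat cross-section it spans by a factor 2/(1 + cos theta), so a horn whose *wavefront* area grows exponentially needs a *cross-section* that grows more slowly, increasingly so towards the mouth.

```python
import math

def cap_over_plane(theta_deg):
    """Area of a spherical-cap wavefront relative to the plane
    cross-section it spans, for wall half-angle theta (degrees).
    Ratio = 2 / (1 + cos(theta))."""
    th = math.radians(theta_deg)
    return 2.0 / (1.0 + math.cos(th))

print(cap_over_plane(5))   # near the throat: wavefront ~ plane section
print(cap_over_plane(60))  # towards the mouth: cap carries ~33% more area
```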

Therefore, a "true exponential horn" is something other than a horn with exponentially increasing *cross section*. Its cross sectional expansion would seem to follow something more in the direction of a "hypex", which would also serve to maximise throat resistance and minimise throat reactance at extreme low frequencies approaching the cutoff frequency. Considering that these horns were used as full range speakers, the axial length did not have to be considered a big problem, rather, I think the length was as "necessary" as every other aspect of these horns. The most obvious problem with a long horn with a very slow initial expansion is harmonic distortion due to the nonlinear compression characteristic of air. But this distortion is of a simple and predictable nature, unlike most of the distortion produced in the driver. Conceivably, the air compression induced harmonic distortion could be canceled out to a great extent by making it complement the distortion of a single ended triode output stage in the amplifier powering the driver, since the nonlinear characteristic of a triode is quite similar to that of air's nonlinear compression/rarefaction characteristic.
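For illustration, the Salmon "hypex" family can be sketched as follows (this is the standard formula; the throat size and cutoff figures are assumed, not WE data). T = 1 recovers the plain exponential, while T < 1 gives the slower initial expansion described above.

```python
import math

c = 343.0  # speed of sound, m/s

def hypex_area(x, s_throat, f_c, T):
    """Salmon 'hypex' area law:
    S(x) = S_t * (cosh(x/x0) + T*sinh(x/x0))^2, with x0 = c/(2*pi*f_c).
    T = 1 is the plain exponential; T < 1 expands more slowly near the
    throat, raising the resistive throat impedance close to cutoff."""
    x0 = c / (2 * math.pi * f_c)
    return s_throat * (math.cosh(x / x0) + T * math.sinh(x / x0)) ** 2

s0 = 20e-4  # ~2" driver throat area (assumed figure)
print(hypex_area(0.5, s0, 100.0, 1.0))  # exponential reference
print(hypex_area(0.5, s0, 100.0, 0.6))  # hypex: smaller area at same x
```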

In this picture, dynamic nonlinearities and mechanisms responsible for intermodulation distortion would compromise the performance of the driver/horn speaker much more than the simple and pure even harmonic distortion introduced by the long throat, which in itself has a far higher subjective audibility threshold than odd harmonics and IMD.

The latter, odd harmonics and IMD, primarily relate to nonlinear mechanisms in the driver, and increase with increasing diaphragm excursions. Odd harmonics are symptomatic of symmetrical nonlinearities, such as that of the diaphragm suspension and, to a varying degree, the magnetic field in the gap, which depending on pole piece geometry may be more or less symmetric. At any rate, an underhung voice coil has no business moving out of the gap and into the nonlinear fringes of the field. If this happens, the coil is too long and/or the excursions are too great.

Intermodulation distortion would seem to be among the worst problems in a wide range single driver horn speaker, since all forms of IMD increase in proportion to the upward bandwidth. The most significant source of IMD in a compression driver is probably the variation of the air volume between diaphragm and phase plug. For LF applications this poses a challenge that was elegantly solved in the WE 555 driver by making the diaphragm-to-phase-plug clearance increase from the center of the diaphragm to the edge, making the change in volume smaller at large excursions than if the clearance had been constant across the whole surface.
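
As a toy model (the piston assumption, the linear taper and all the numbers are mine, not WE 555 data), the benefit of the tapered clearance can be put in rough figures:

```python
import math

def relative_volume_modulation(x, r, d_center, d_edge):
    """Fractional change of the air volume between a flat piston diaphragm
    of radius r and the phase plug when the piston moves in by x.
    Toy model: clearance varies linearly from d_center to d_edge at the rim,
    so the trapped volume is pi*r^2*(d_center + 2*(d_edge - d_center)/3)."""
    k = d_edge - d_center
    volume = math.pi * r**2 * (d_center + 2.0 * k / 3.0)  # integrated clearance
    delta = math.pi * r**2 * x                            # volume swept by piston
    return delta / volume

R = 25e-3   # ~2" diaphragm radius, m (assumed)
x = 0.2e-3  # 0.2 mm excursion (assumed)
uniform = relative_volume_modulation(x, R, 0.5e-3, 0.5e-3)  # constant 0.5 mm gap
tapered = relative_volume_modulation(x, R, 0.5e-3, 1.0e-3)  # gap widening to 1 mm
print(f"uniform clearance: {uniform:.1%} volume modulation")
print(f"tapered clearance: {tapered:.1%} volume modulation")
```

The same excursion modulates a noticeably smaller fraction of the trapped volume when the clearance opens up toward the edge, which is the point of the WE geometry.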

Well, I am digressing, but "everything relates to everything else"...

How then, about the length of these horns, in terms of desired wave propagation/dispersion AND avoiding severe throat impedance fluctuations at the lowest frequencies? There is fortunately some data available on the mouth dimensions and lengths of some of these horns. The mouth circumference is typically 1.2-1.6 times the length of the horn. If we assume that the "mouth cutoff" is placed some way below the "useful cutoff" of the horn (where we'd place the high-pass crossover frequency in a multiway system), we see that for this frequency the horn is about as long as, or slightly longer than, the wavelength of the lowest frequency transmitted.
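
Putting those rules of thumb into numbers (the 70/100 Hz cutoff pair and the 1.4 circumference-to-length ratio are assumed for illustration):

```python
C = 343.0  # speed of sound, m/s

def horn_length_from_mouth(f_mouth, circ_to_length=1.4):
    """Axial length implied by the rules of thumb above: mouth circumference
    equal to the wavelength at the mouth cutoff, and circumference 1.2-1.6x
    the horn length (1.4 assumed here)."""
    circumference = C / f_mouth  # mouth circumference ~ cutoff wavelength
    return circumference / circ_to_length

# Mouth cutoff ~70 Hz placed below a ~100 Hz useful cutoff (assumed figures)
length = horn_length_from_mouth(70.0)
lam_useful = C / 100.0
print(f"length {length:.2f} m = {length / lam_useful:.2f} x the 100 Hz wavelength")
```

With these figures the length comes out right around one wavelength of the lowest transmitted frequency, as observed.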

Another "oddity" of these horns is the "flare reversal" at the point where the curled, and therefore suitably flattened, horn straightens out and the transition to the "mouth" is made. Here there is a quite sudden increase in vertical wave expansion, whereas the gentle horizontal expansion continues without abrupt change. This causes wider dispersion in the vertical plane due to diffraction of the wave emanating from the narrow, flattened "throat". The exact same principle has been used for decades in constant directivity horns, and before that, in "reverse flare" horns of various makes.

As with the CD horns, this feature makes it tempting to divide the horn into a "throat section" and a "mouth section".

In the WE horns, most of the horn's length is in the flattened "throat" part, and it would seem that even this part of the horn is made nearly equal to a wavelength at the lowest frequency (somewhere around 3-3.5 m, or close to a wavelength at 100 Hz). Clearly, this constitutes a kind of wide band wave guide that prevents even the high frequency wave components from "leaving the horn" prematurely, and assures that all frequency components (wavelengths) undergo the same slow, controlled area expansion, giving them a reasonably fixed degree of curvature at the point where the vertical expansion begins to increase.

There's more to be considered, not least the effects of curving the horn upon the wave propagation, as the wavefronts are tilted due to unequal inner and outer horn wall lengths. If this is also taken into consideration when computing the horn for "true exponential wavefront area expansion" (it would have been!), and at the same time minimizing the effects of the bends upon the HF response, it is seen just how much work must have gone into designing these horns - in the pre-computer era!

Yes, these horns were unusually long, with a low initial flare rate due to their low cutoff frequency (and possibly a hyperbolic term in the expansion profile), a large mouth and a small throat. But it is quite easily seen that everything from bass horns to tweeters today is typically considerably shorter than the cutoff wavelength. Tractrix horns don't really have a "flare constant": they are computed from the mouth backwards, and the throat area is defined by the choice of driver, so for a tractrix horn with a given mouth size the length is determined by the throat dimensions. And in most other cases it is made a big priority to keep horns as short as possible! Regardless of the cutoff frequency/wavelength?!
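
A sketch of that constraint: once the cutoff fixes the tractrix mouth radius and the driver fixes the throat radius, the length simply falls out of the profile equation (the 200 Hz cutoff and 1" throat below are assumed figures):

```python
import math

C = 343.0  # speed of sound, m/s

def tractrix_length(f_cut, r_throat):
    """Axial depth of a tractrix horn. The mouth radius a is fixed by the
    cutoff (a = C / (2*pi*f_cut)); with the throat radius set by the driver,
    the length follows from the tractrix curve - it is not a free choice."""
    a = C / (2.0 * math.pi * f_cut)  # mouth radius from cutoff
    if not 0.0 < r_throat < a:
        raise ValueError("throat radius must lie between 0 and the mouth radius")
    s = math.sqrt(a**2 - r_throat**2)
    return a * math.log((a + s) / r_throat) - s

# 200 Hz cutoff with a 1" (25.4 mm diameter) throat - hypothetical figures
print(f"{tractrix_length(200.0, 0.0127):.3f} m deep")
```

A smaller throat (or lower cutoff) makes the horn deeper; there is no knob to turn for "shorter".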

I am sure "horn sound" can be eliminated by removing "most of the horn", but that does give you less of an impedance transformer and more of a direct radiator. Why did we want horns in the first place?

All the classical horn theory is very clear on this: There is the theoretical infinite horn, in which no reflections take place and the impedances taper off without any ripple. The infinite horn is infinitely long and has an infinitely large mouth, we learn.

Maybe it's only Leo Beranek who really "takes the bull by the horns" in stating that:

"If the horn is a number of wavelengths long and if the mouth circumference is larger than the wavelength, we may call the horn "infinite" in length."

(L.L. Beranek, "Acoustics", 1954, page 269)

This says something about the *combination* of horn length and mouth size, but it is left for the reader to interpret the meaning of this. I'm still thinking...
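
Beranek's rule can at least be turned into a quick numerical check; the three-wavelength threshold and the WE-like dimensions below are my assumptions, since he only says "a number of wavelengths":

```python
C = 343.0  # speed of sound, m/s

def effectively_infinite(length, mouth_circumference, freq, min_wavelengths=3.0):
    """Beranek's rule of thumb (Acoustics, 1954, p. 269): treat the horn as
    'infinite' at freq if it is several wavelengths long AND the mouth
    circumference exceeds the wavelength. min_wavelengths = 3 is my guess."""
    lam = C / freq
    return length >= min_wavelengths * lam and mouth_circumference > lam

# Roughly WE-15A-like dimensions (assumed): ~3.6 m long, ~3.5 m mouth circumference
for f in (100.0, 300.0, 1000.0):
    print(f"{f:6.0f} Hz: {'infinite-like' if effectively_infinite(3.6, 3.5, f) else 'finite'}")
```

Even a horn this big only becomes "infinite" in Beranek's sense some way above its cutoff, which says something about everything shorter.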

**********************************************************************

I'm doing some driver/horn mechanical/acoustic impedance and mass reactance calculations for different horns and compression drivers. As a starting point, I would like to compare some standard JBL systems. Here's what I'm looking for:

For JBL 1.75" and 4" diaphragms:

*Moving mass of titanium diaphragm with 8 and 16 ohm coils respectively

*Moving mass of aluminum diaphragm with 8 and 16 ohm coils respectively

*Moving mass of phenolic diaphragm with 8 and 16 ohm coils respectively

*Info on suspension compliance and linearity for the different (diamond, tangential, half roll) suspensions.

Also looking for the diaphragm-phase plug spacing and coil winding height vs. gap height for JBL drivers like 2420, 2470, LE85, 2440/2441, and 2482.

For comparison, the same data for the Altec 288 and 291 series drivers would also be interesting to me.

Also, I want to get in touch with anyone who might have experimented with converting stock large-pot JBLs like the 2440/2441 and 2420/2470 to field coils. I hear there are some people in Japan who rebuild 375/2440s with field coils, but I imagine DIY would be my only economically viable option...

Ever since I got into high efficiency speakers ten years ago, I have been intensely studying the relationships between efficiency, driver load conditions, input power and resulting distortion, as the huge improvement of dynamic linearity in small-excursion systems, together with resistive damping, to me represents a prerequisite for improving the useful dynamic range, overall linearity and transient response of electrodynamic speakers IN GENERAL. To me, there seems to be no way to get to this situation but to make good radiation efficiency a first priority, either by horn loading or by SERIOUSLY increasing the cone area of direct radiator systems.

I have had encouraging results using arrays of 16 8" woofers (in a "Heil AMT" type configuration) in each of my dipole bass systems, which came about as a result of looking for a reasonably compact direct radiator system having high efficiency, dynamic linearity, low distortion and subjective transient response approaching that of horns. With arrays of multiple smaller cone drivers, you get an equivalent larger driver with "impossible" qualities (the stiffness of a single small driver, but the mass and area of the combined drivers), as well as the obvious advantages of increased radiation efficiency, reduced cone velocity and excursion amplitudes, reduced input power per driver, etc. I was forced to give up the idea of bass horns at the time due to a lack of space, but the large-area direct radiator array route proved a worthy alternative, both subjectively and from a theoretical point of view.
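
A back-of-envelope sketch of the excursion sharing (the 8" cone area and the volume-displacement demand are assumed round figures, not measurements from my system):

```python
def per_driver_excursion(total_volume_displacement, n_drivers, cone_area):
    """Peak excursion each driver needs for a given total displaced air
    volume. At low frequencies the displaced volume fixes the SPL, so
    sharing it across n drivers cuts per-driver excursion by 1/n - and
    with it all the excursion-dependent nonlinearities."""
    return total_volume_displacement / (n_drivers * cone_area)

SD_8IN = 0.022   # typical 8" woofer cone area, m^2 (assumed)
V_REQ = 0.7e-3   # 0.7 litres displaced - an arbitrary LF demand

for n in (1, 4, 16):
    x = per_driver_excursion(V_REQ, n, SD_8IN)
    print(f"{n:2d} drivers: {x*1000:5.2f} mm per driver")
```

One 8" driver would need absurd excursion for this demand; sixteen of them stay comfortably in their linear range.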

Above 500 Hz I've been using horns (adaptations of Bruce Edgar's tractrix midrange horn from Speaker Builder to mate with different compression drivers) in different experimental configurations. The Edgar horns have some problems due to my imperfect round-to-square transitions. I'm now looking into designing some new midrange horns, exploring some old but possibly poorly understood approaches.

That'll have to serve as an introduction; what I wanted to add are some comments on subtle distortion phenomena relating to different magnet types. I'm convinced beyond any doubt that the often reported subjective (i.e. audible) advantage of alnico over ferrite is explainable in terms of the very different magnetization characteristics of these materials. The subject is treated in Jean Hiraga's monumental book on speakers, "Les Haut-Parleurs". An acquaintance of mine, who has designed speaker drivers and industrial magnet systems for decades, ran some simulations on one of his actual designs (some Scan-Speak driver, can't recall which) comparing alnico and ferrite on an "all else being equal" basis, and found that for a given voice coil current, ferrite produced several times the distortion of alnico.

The concept of magnet "stiffness" or "internal impedance" might be appropriate, if one looks at the magnetic flux as "current" and the magnet as a "flux source" power supply or battery.

"Flux modulation distortion" was known and understood by Edward Wente and Albert Thuras in the mid 1920s, and their criteria for high fidelity speakers specified that the ratio of gap flux density to voice coil flux variation due to signal current should be as large as possible. Modern speakers, with low efficiency and low gap flux densities, have got the situation reversed: a large amp-turns product in the voice coil produces the required force in a relatively low flux density gap. A voice coil surrounded by, and surrounding, pole piece iron can produce considerable flux densities in this situation, of the same order of magnitude as the gap flux itself, and ferrite only makes matters worse, introducing considerable distortion. As usual, the problem is far worse in low efficiency drivers, as are most distortion problems found in speakers. However, the fundamental mechanisms are the same in ALL electrodynamic drivers.
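
As a crude sketch of the Wente/Thuras ratio (all figures hypothetical, iron reluctance and geometry ignored, so take the absolute numbers with a grain of salt):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def flux_modulation_ratio(b_gap, n_turns, current, gap_length):
    """Rough estimate of voice-coil flux density relative to the static gap
    flux: the coil's MMF (N*I) dropped across the gap length, iron assumed
    infinitely permeable. Smaller ratio = less flux modulation distortion."""
    b_coil = MU0 * n_turns * current / gap_length
    return b_coil / b_gap

# Hypothetical comparison: old high-flux compression driver motor vs a
# modern low-flux, high-MMF woofer motor (all values assumed).
old = flux_modulation_ratio(b_gap=2.0, n_turns=30, current=0.3, gap_length=0.5e-3)
new = flux_modulation_ratio(b_gap=0.9, n_turns=200, current=3.0, gap_length=1.5e-3)
print(f"high-flux driver: {old:.3f}   low-flux driver: {new:.3f}")
```

Even this toy calculation shows the modern low-efficiency motor pushing a far larger fraction of coil flux against its gap flux, which is exactly the reversed situation described above.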

I am no longer EVER surprised when I find that "obsolete" technology turns out to have advantages over more modern designs, knowing that a lot of "obsolescence" is due to concerns about cost-efficient manufacturing, convenience and other non-technical factors.

Could the same be true for field coils? Could field coils, from some point of view, have advantages over at least *most* permanent magnet materials in terms of performance in speakers? I haven't seen any comparative measurements, but apart from losses and nonlinearities in the iron flux path (present in any case), a field coil should be able to deliver a flux as constant as the amp-turns product, with its self inductance strongly opposing any flux change in the (field) coil. This is unlike PMs, where the rate of flux change in the magnet itself upon modulation from the voice coil flux depends on the magnet's operating point on the magnetization curve and the curve's slope/curvature at that point. Of course, the degree of saturation of the iron and its magnetic properties also come into play, but I'm quite amused that there is absolutely no reason to suggest that field coils have any disadvantages versus even the best PMs apart from the need for a power supply (a small price to pay, some would say), and that, on the contrary, FC magnet systems may have distinctly reduced distortion compared to a PM system where everything else is unchanged. This could very well be expected to affect difficult-to-quantify qualities like "articulation" or "detail" that are often only revealed by listening to music through the speakers.

Oh well, comments would be welcome in any case. I haven't had the opportunity to compare for myself, but the idea of field coils appeals to me. I realize that it's been discussed here before (just recently popped in to check this forum out) so maybe some of you guys have some experimental knowledge to share?
