
Playback Listening
Topic: Some midweek morning stimulation…



Posted by Romy the Cat on 04-26-2013

There are many conversations out there about the fact that many music people do not "get" audio. In most cases this is correct. We have had some of those conversations at this site, and the reasons for the phenomenon that were brought up are all valid. I would like to bring another aspect to this subject, one that has never been expressed.

We all know that music notation deals with moderation of pitch – the key signature – raising or lowering a note by a half-tone. This is what musicians call sharp or flat, and it has been there from the beginning of time. There is more complexity in key signatures, but the basic idea is that if a musician plays "too sharp" then he or she hits a slightly elevated pitch relative to the intended tone. There is nothing wrong with that; in audio, however, it is way more complicated.

We need to understand that the note A, for instance, is not a 440Hz pitch but a complicated time/amplitude parabola with its summit located at 440Hz. If the instrument is tuned to 440Hz, the musician plays A, and the summit of the parabola is not at 440Hz, then we say that the musician is too sharp. This is how music people recognize it, as they operate in a world where the shape of the pitch-rolling parabola is fixed in most cases (some instruments and human voices are able to modify it to a degree). In audio, however, we have very little control over the pitch itself, but we have practically unlimited control over the harmonics, and therefore we can easily alter the profile of the parabola with which a tone rolls to its pitch.
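To make this concrete, here is a minimal sketch (Python, purely illustrative; the harmonic weights are invented) of two tones whose pitch is identically 440Hz but whose harmonic profiles differ:

import numpy as np

SR = 44100                                   # sample rate, Hz
F0 = 440.0                                   # the pitch of A never moves
t = np.arange(SR) / SR                       # one second of time

def tone(weights):
    # Sum of harmonics of F0: the pitch stays put, only the profile changes.
    return sum(w * np.sin(2 * np.pi * F0 * (k + 1) * t)
               for k, w in enumerate(weights))

mellow = tone([1.0, 0.3, 0.1, 0.03])         # upper harmonics decay quickly
bright = tone([1.0, 0.7, 0.5, 0.35])         # upper harmonics emphasized

# Both tones peak at exactly 440Hz, yet 'bright' is the one a musician's
# ear may report as "like too sharp."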

Music people mostly do not get it, as this option is not available to them. The harmonic signature, or the parabola's profile, is for musicians factored into the design of musical instruments and into playing techniques. There are some techniques that allow musicians to play with harmonics to a degree, but nowhere near as widely as we do in audio.

For musicians to have the options that we have in audio would be totally ridiculous. You can see some trumpeter bringing his 3-4 versions of D trumpets to play some Baroque piece, and he swears that each of them has its own tonal inflection. Of course his swearing is right, but the trumpeter uses one trumpet at a time to play. Pretend the same musician is sitting in a Mahler orchestra with his 6 C trumpets and blows each note on a different trumpet because each of them delivers a different type of brightness. Sounds absurd? Well, this is what we have in audio.

In audio, playback moderates harmonics very aggressively, and what is most annoying is that the rate of harmonic moderation fluctuates with the given playback setting, the dynamic range, the sound rate change and a zillion other factors, some of them as ridiculous as the state of the local power grid… As a result, a playback not only can screw up the pitch (many of them do) but will screw up harmonics in a non-acoustical way, and by doing that it will change the subjective perception of the sharpness or flatness of the tone. I have seen cases where a playback that sped up the harmonics was perceived by music people as playing too high. Interestingly, music people do not feel it is "too sharp"; they feel it is "like too sharp". The reason is that the pitch reference in their brains suggests to them that the pitch was accurate, but they still feel that "something" is not right. The problem is that this "something" does not exist under normal circumstances in live sound, so they do not know how to react to expedited harmonics. How expedited or prolonged harmonics affect the listening experience is another subject that I would rather not touch here.

I need to admit that there are ways for musicians to moderate harmonics even while playing the same instrument. String players can play false harmonics, but those techniques are not used all the time and are considered more of a delicacy, a sound effect. In audio we could easily implement a playback decision that would permanently make all music play in a perceived "sharp" mode, or make music be perceived as "sharp" in a specific dynamic range, or go sharp in a specific octave…

Rgs, Romy the Cat

Posted by Paul S on 04-26-2013
Funny you should bring this up, since I was just thinking (today) about how some of my favorite conductors seem so adept at presenting a narrative or "thread" that can "unify" otherwise-discursive material, say, Brahms, for example.  Apart from other necessary gifts, I think this ability is based on an exceptional ability to "hear the piece", even by looking at the score, and I know this talent/capacity is shared among musicians, to a greater or lesser degree.

Although we sometimes speak of a piece of hi-fi gear as "intelligent", what we usually mean is that it is not "locked" into a particular mode of expression but that it has a wide range of expression, and I would want this to include pitch.  For just one example, it is often via pitch that we discern the passing of the "narrative torch" from one instrument or section to another.  Without a coherent expression of pitch, we are literally lost in the musical woods.  Of course, our ability to moderate a system's pitch should never eclipse the system's ability to render smart, coherent Music.  I am too tired now to be certain, but I think this means that relative pitch - at least - should remain +/- absolute.

BTW, I hope everyone gets that the definition of "pitch" here has been broadened to include harmonics and tone.


Paul S

Posted by Romy the Cat on 04-28-2013

 Paul S wrote:
Although we sometimes speak of a piece of hi-fi gear as "intelligent", what we usually mean is that it is not "locked" into a particular mode of expression…

Well, to make audio elements adaptable is very complex but possible. The biggest problem in this is not the technical inability of audio to be adaptable, or to demonstrate different behavior under different, let us say, dynamic or frequency conditions.

The biggest problem is that audio hoodlums, in their unfortunate majority, do not recognize it as an issue.

Even to acknowledge that audio does not handle a specific type of music signal appropriately requires a quite high level of listening intelligence. By "to acknowledge" I do not mean saying that it does not sound good, but identifying what specifically is not right. 99.9% of audio people are not only unable to do it, they also consider it (believe it or not) politically incorrect to think about sound in terms of dissatisfaction.

For those few who do acknowledge that something is specifically wrong, it is very hard to figure out why it is so. The reality is that in most cases there is no answer "why". The answer "why" exists only in functional systems that do something correctly, and therefore there is an answer "why" when they do something wrong. Audio is by nature an incorrect functional system, and therefore there is no definitive answer "why". You can give a very precise answer why some voltage is not there, or some pressure is not there, or any other measurable parameter is not there, but you will not necessarily be able to give an answer why some tube, soldering point, cable direction or cartridge VTA changes the subconscious emotional or esthetical feedback a listener gets from listening.

I gave a very simple example, as we are not talking about the full emotional feedback that the Morons love to whore about in audio reviews. I am talking about very definitive and very minute fragments of sound retroaction: a specific micro-sensation that, under a very specific, narrow playback operating condition, the playback does not deliver to the listener. Pretend that you have a very specific reaction of, let us say, "punished sentimental kindness" to a very specific musical fragment. Let us say it is the clarinet and viola duet in the second movement of Penderecki's Clarinet Quartet. You get a recording, and your playback delivers the "similar" feeling at 90dB but does nothing to you if you play it softer. Where will you start? Most audio people do not even acknowledge the problem, but if you do, then what is the audio action you can take to address it? The most complex part would be how to fix that "punished sentimental kindness" at soft volume without hurting any other characteristic behavior in the same dynamic and frequency range. It is very complex, and there is no definite answer in this, or even developed knowledge in this. To make the playback react dynamically and adapt its own operational paradigms with respect to incoming music is truly very complicated, and very few in audio are able to operate at this level, not only at the level of accomplishments but at the level of demands.

I would say that nobody in audio has ever been able to operate at the described level of accomplishment, and I am quite confident that I know about all the more or less serious audio movements ever made public. Just as in physics the Unified Field Theory pursued by Einstein does not have an accepted, agreed-upon conclusion, in audio there is no unified audio design ideology. Not only is there no audio design ideology, there are also no unified audio assessment principles, nor many other basic postulates without which no sensible discipline can exist. Sometime in the past I had a conversation with a guy who was involved somehow in music and did composing, playing and conducting. I explained to him that his music field, from some perspective, is much less complicated than the audio field. It might sound ridiculous to some uninformed people, but in reality it is exactly what it is. In music there are definitive pre-existing answers and definitive ways to reach the answers. You go study music, perhaps do it for years and years, and if you are a talented person and work hard then you become a good instrumentalist. If in addition you are an intelligent and sensitive person, you will also become a good musician. If you are an artistic person, then you might become one of the few chosen from your musical fellows who push the field forward from one perspective or another. In all of that there are well-established and universally recognized aspects of what you do and of what the consequences of your actions are. This does not happen in audio. There is no structured learning, training, educating, evolving or reference point in audio. There is not even a methodology of any kind available: an immensely complex field with zillions of unknowns, with no unified way to deal even with the known, and without even an accepted language to talk about either the known or the unknown…

The Cat

Posted by steverino on 04-28-2013
This is a very interesting issue you raised. I would say that before audio designers could proceed in such a textbook manner they would have to be able to recreate original sounds in any given environment. (I'm limiting the discussion here to acoustic instruments and unamplified voices to avoid more complications.) The original sound event would be re-generated within the audio system so that what comes out of the speakers would be identical to the original sounds at some "ideal" listening spot before they hit the mics. The problem with audio is that the acoustical sounds are altered when they go through the mic, and a regular audio system could never surpass the mic feed. A perfect audio system would be transparent to the mic feed and would also know how to correct the changes that occurred to the sound when it passed through the mic. To do that it would obviously need advanced AI functionality and be able to recreate the sounds in some component which would then be amplified and passed to the speakers. The speakers would have to be able to project all the direct and ambient information. To be practical, the system would also have to be able to translate the soundfield of the original music event for any given listening room. Currently all that audio designers can do is make all kinds of compromises and hope that the resulting sound is pleasing to enough people that they can sell the component regardless of its lack of fidelity to the original sound. Compromises are necessarily an art form, not a science.

Posted by noviygera on 05-01-2013
I've been thinking about this matter for a while but could not find a better excuse to formulate my theory than this topic. My simple theory is that if we reduce our concept of "music" to the more fundamental concept of "sound" as a point of reference, then we can more easily visualize what an audio system should do. Let's say that we are trying to reproduce a "sound" true to its original form, with all the harmonics, timbre, tone, whatever you like to call it; then that reproduction must be consistent in the spectrum of sound coming out. It will not be accurate to the original, as no sound system will ever reach a recreation of the live event or "original sound", but the sound system's coloration or sound signature should be consistent over its entire range of reproduction. What will vary is the scale of the sound coming out, but the signature will be consistent.

That is why I could never, even theoretically, understand how one can expect to reach the above goal when you have, for example, different types of drivers (plastic midranges, aluminum compression tweeters, paper woofers), all made on different principles from different materials, and expect them to match harmonically. This goes back to Romy's example of a trumpet player having different trumpets for all notes. However, a better way to see why this is not possible is to take "the sound" as a point of reference rather than a musical recording.

So if we record the sound of someone spitting, not even a live concert event, we can screw up the reproduction of that sound by inconsistently reproducing it through harmonically unmatched sources (or speaker drivers) and space (time alignment). The way I see to reduce this mismatch is to have harmonically matched speaker sources (drivers and enclosures), by which I mean all of them being variations of one type of driver, and to have them spatially matched. Example: drivers of the same design, same material, only of different scale.

So by reducing the intended final reproduction result to the reproduction of only "the sound" we can better visualize what is required. Agree or disagree?

Posted by steverino on 05-01-2013
Maybe I'm not following noviygera, but isn't that what electrostatic speakers already do? I mean the full-range ones. I thought some of Magnepan's smaller models only had quasi-ribbon drivers also. I agree that it is not likely that one type of speaker driver composition can Reproduce all the distinct sounds from a given recording the way it's done now. The question is whether some AI audio component would be able to figure out how to Generate the sound with particular characteristics that would result in the speaker emitting the source sound exactly. It's difficult to state simply, so I apologize for being confusing. The AI component would be generating a different version of whatever audio source material was entered into it. It would be like a notation program playback in some sense. You would give it the Beethoven Symphony 9 score and tell it to play it back like Stokowski performed it at a certain concert, and it would know how to do that and tailor the sound for your listening room. Currently notation programs like Finale or Sibelius already have moderately sophisticated playback options for scores.

Posted by rowuk on 05-01-2013
Well, as I do play trumpets professionally and own much more than 6 instruments, I thought that I would chime in.

First of all, Romy, this is one of the most lucid descriptions of pitch that I have ever read! It also raises a point for discussion:
If I play a concert A in the staff, the perceived pitch is 440Hz (443 in Germany, 415 if we are playing standard historically informed baroque music). The fundamental on the trumpet is an octave lower. We hear the first harmonic as it is MUCH stronger (look at the size of a trumpet bell and you know why 220Hz, the fundamental, is so much softer......). The rest of the harmonics are balanced slightly differently based on the instrument and player. My point is, when we play concert A in the staff, a percentage of the sound is reproduced by the speaker responsible for the 220Hz fundamental, perhaps another speaker for the first overtones: 440 (2x fundamental), 660 (3x) and 880Hz (4x), and a third for some higher overtones: 1100 (5x), 1320 (6x), 1540 (7x) and so forth. That means that depending on the note that I play, the reproduced structure can vary a great deal depending on which speakers are used and their phase relationship. If I play the A above the staff, as in my example above, the lowest speaker is no longer involved in the reproduction of my "pitch". If the microphone used in the recording is close to the trumpet, we have a much different tonal balance than if it is further away.
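A toy sketch (Python; the crossover points are hypothetical, not those of any particular speaker) of how the partials of one played note scatter across the drivers of a multi-way system:

# Hypothetical 3-way crossover points in Hz; real speakers differ.
BANDS = [("woofer", 0, 300), ("midrange", 300, 2000), ("tweeter", 2000, 20000)]

def split_partials(fundamental, n_partials=8):
    # Report which driver reproduces each harmonic of one note.
    for k in range(1, n_partials + 1):
        f = fundamental * k
        driver = next(name for name, lo, hi in BANDS if lo <= f < hi)
        print(f"partial {k}: {f:7.1f} Hz -> {driver}")

split_partials(220.0)   # concert A in the staff: woofer carries the fundamental
split_partials(440.0)   # A above the staff: the woofer drops out entirely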

In music we have additional issues. When two instruments play different notes at the same time, sum and difference tones are created. I guess this could be called acoustical intermodulation. In any case, it is often a desired trait and many brass sections go as far as to match the instruments to maximize the effect! This accounts for a lot of the brass section sound and essentially spreads the required frequencies over an even larger range. 440Hz(A) and 660Hz(E) as pure tones without harmonics create 220 and 1100 Hz as audible sum and difference tones. Strong overtones also produce audible "sideband" frequencies further increasing the frequency response, phase, and spatial demands - even if only two instruments are playing. Major chords create sidebands that are harmonically related to the chord - there is little dissonance. Minor chords create sidebands that are NOT harmonically related. This creates additional "drama" or "tension".
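A quick numeric sketch of those resultant tones (Python; first-order sum and difference of pure tones only, ignoring the instruments' own overtones):

def resultants(f1, f2):
    # First-order sum and difference tones of two pure tones.
    return abs(f2 - f1), f1 + f2

print(resultants(440.0, 660.0))    # perfect fifth A-E -> (220, 1100),
                                   # both members of A's own harmonic series
print(resultants(440.0, 523.25))   # equal-tempered minor third A-C ->
                                   # (83.25, 963.25), aligned with neither note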



On a side note, I teach trumpet and "weak" players often "sound" sharp - even if a tuning device "proves" that their tone is centered at the correct pitch. I use acoustical intermodulation to teach my students how to tune relative to whatever group or player is considered "dominant".

On a second note: in a symphony orchestra the oboe gives the tuning note. This has to do with the overtone structure that makes that instrument very easy to understand "pitchwise".

There is an excellent treatise on intonation written by a former horn player of the Chicago Symphony Orchestra. I strongly recommend it to anyone who really has any interest in "pitch":

http://www.cherryclassics.com/cherry/leuba-chris-study-musical-intonation-chris-leuba-former-principal-horn-the-chicago-symphony-method-book-for-brass-instruments-p-2128.html

Posted by Romy the Cat on 05-01-2013
 noviygera wrote:
… if we reduce our concept of "music" to the more fundamental concept of "sound" as a point of reference, then we can more easily visualize what an audio system should do. … a better way to see why this is not possible is to take "the sound" as a point of reference rather than a musical recording.
Noviygera, yes, I understand what you are saying, and I disagree with you. The reason why I disagree with you is that you made a conceptual-methodological mistake in your modeling of the situation; your model just does not reflect reality. If we presume that we deal with "sound" as a point of reference, a point that is abstracted from music, then I do agree with you that it has very little usability. The mistake you make is that the abstract sound you are talking about does not come alone; it comes bound with the listener's perception. It is not just the abstract sound but a duplex of the abstract sound and the objective consequences that this abstract sound has for the sound consumer. If we look at it as this combination only, then the musicality of the recording becomes not the mandatory moderating force but a complementary force, surely the complementary force that brings a whole new meaning to the process of listening. Nevertheless, this also allows us to use the abstract sound (and I have spoken about it a lot in the past) as a mandatory gate-keeping entity that might or might not benefit the musicality of a recording or live performance. I can give a lot of illustrations of the above…
 rowuk wrote:
Major chords create sidebands that are harmonically related to the chord - there is little dissonance. Minor chords create sidebands that are NOT harmonically related.
Hm, I did not know that. That is VERY interesting; I need to think about it. Very, very interesting!!!
 rowuk wrote:
On a side note, I teach trumpet and "weak" players often "sound" sharp - even if a tuning device "proves" that their tone is centered at the correct pitch.
Excellent illustration of my original point! I wonder if you detected any differences in perception when your student sounds “fictitiously sharp” in major chords vs. minor chords?

Posted by rowuk on 05-01-2013
As a matter of fact, with weak players it is hard to tune at all. On the other hand, the popularity of brass band or school concert band music is surely helped by overtone-rich instruments producing enough sidebands to mask serious problems! You cannot get away with as much with strings or a "slightly" out-of-tune piano......
Perhaps there is a certain amount of pitch tolerance based on the overtone structures of specific instruments.

Posted by Paul S on 05-01-2013
Herman, I agree with the root idea about "dissimilar" materials for drivers being a problem with respect to "pitch" as this applies over the audio bandwidth, and here are some more thoughts about this subject.

With Robin's explications, it is easy to "expand" this concept of pitch into "scale", which can then be thought of as individual instruments and sections, alike.  It is especially easy to understand the problem of "system variegation" in terms of one instrument that is covered by several drivers, even though any one driver will, in fact, exhibit similar/related problems as its range is extended.  Which is to say, the problem with the hi-fi reproduction of pitch has to do broadly not only with driver pitch potential but also with the "non-linearity" that is part and parcel of hi-fi, from source to speaker, not to mention the room(s), and setting aside the listener, for now.  It would also be interesting at some point to further explore the ideas of "sum and difference" that Robin raised, both in terms of pitch and in terms of phase, since these are not only related but very much at the heart of all this, in terms of both play and playback.  As for "AI", it seems highly unlikely that some omniscient "algorithm" will ever operate apart from problems inherent somewhere else in the recording/playback/reception chain/system.

Close listening to Music via hi-fi definitely tends to turn up faults with both, I think, and not least with "pitch".  For years I have suffered from the relentless elevation and upward striation by hi-fi of what I will term here "native pitch".  And lately, with the electricity so bad here, I am driven nuts by both the absence of mid-down power and color and "ghosting" in the center that includes a sort of washing out of "voice density", which must also be related to native pitch.

Robin's last post speaks to a subject I have considered well for hi-fi, namely that of spreading out problems.  As Robin hints, although there is certainly a price to pay in terms of "precision", if it is done well enough in the "right" ways it can be a viable alternative to being precisely wrong.


Best regards,
Paul S

Posted by rowuk on 05-01-2013
Actually, Paul, I firmly believe that close listening to the MUSIC keeps us from listening to the warts of playback. The more the musical message grabs us, the easier it is to let go. Some of that has to do with the fact that there are some pretty fine recordings from before technical perfection. St. Petersburg with Mravinsky is a fine example of what I mean. Record clicks, wow and flutter and primitive recording techniques do not disturb anything. I particularly enjoy Shostakovich's symphonies played by this band. Even in the loudest sections, pitch and pace set standards often not met today, for the engineers and musicians alike. The solo trumpet player, by the way, only had ONE TRUMPET for everything that he played.
I have also given a lot of thought and ear to the concept of only paper, only mylar, only carbon fiber, and have to say: it ain't the gun, it is he who pulls the trigger. I have heard finely integrated systems that mix technologies, and not-so-integrated-sounding ones even with consistent cone/amp technologies. Hats off to anyone who achieves a major degree of integration. If their personal recipe is rice paper from Katmandu, so be it. That still should not stop those who use space-age techniques. I do not consider "linearity" to be inherent in limiting the choice of material. I think linearity has more to do with finding program material that YOUR setup does well. If that happens to match your listening habits, then fine.

Posted by Paul S on 05-01-2013
Robin, I guess I got off on the wrong foot when I phrased the first paragraph as I did.  I meant to agree with Herman about "dissimilar" drivers, but I also meant to continue thereafter with other thoughts on the initial topic, musical vs. audio pitch, rather than bolstering a case for similar drivers.

Sure, great play carries the day, musically, and this might be well appreciated despite a bad system.  And more power to any listener who has virtually unlimited choices of sources of musical pleasure.


Best regards,
Paul S


Posted by Amir on 05-02-2013
 Romy the Cat wrote:

Just as in physics the Unified Field Theory pursued by Einstein does not have an accepted, agreed-upon conclusion, in audio there is no unified audio design ideology. Not only is there no audio design ideology, there are also no unified audio assessment principles, nor many other basic postulates without which no sensible discipline can exist.


In physics we model objects like mass, fields and so on; all are external objects.
In audio we need to translate human reactions, at lower and higher levels of perception, into objects like drivers, tubes, designs and so on. Here we are in a complicated world: it is our mind.
I think audio is more complicated than physics.

I have a simple theory in my mind that I think is not bad.
I think audio could do better if it reacted more linearly at micro levels.
I have 2 parameters:
1. energy transfer speed
2. linearity

Linearity has two dimensions:
1. micro linearity
2. macro linearity

In mathematics we can expand a function into basic functions, as in a Taylor series: http://en.wikipedia.org/wiki/Taylor_series
I say f(x) = k·x; in the ideal condition of an audio system k should be constant, but in the real world k is not constant and is more complex.
k is the transfer function (for each frequency). If the transfer function is expressed as a Taylor series, I think the value of each basic function is not so important; the better response is the one with fewer high-degree terms.
Example:
f(x) = a0 + a1·x + a2·x² is simpler than f(x) = b0 + b1·x + b2·x² + b3·x³

Micro linearity means fewer high-order basic functions; macro linearity means less average nonlinearity.
http://www.hifi.ir/?p=1764

Tube amplifiers have 1% THD and transistors have 0.001% THD (it means the transistor amp is more linear in macro), but if we describe k with a Taylor series we see that tubes have fewer high-order basic functions and are more linear in micro.
see here:
http://www.hifi.ir/?p=1746
http://www.hifi.ir/?p=1821
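
A rough numeric illustration of the micro/macro distinction (Python; the two transfer curves are invented for the example, not measurements of any real tube or transistor amplifier):

import numpy as np

def thd(curve, n=2**15):
    # Drive a transfer curve with a full-scale sine and measure THD.
    x = np.sin(2 * np.pi * 64 * np.arange(n) / n)   # 64 cycles -> clean FFT bins
    spec = np.abs(np.fft.rfft(curve(x)))
    fund = spec[64]
    harm = np.sqrt(sum(spec[64 * k] ** 2 for k in range(2, 10)))
    return harm / fund

tube_like = lambda x: x - 0.02 * x**2     # larger error, but low order
ss_like   = lambda x: x + 1e-4 * x**9     # tiny error, but 9th order

print(f"tube-like THD: {thd(tube_like):.5f}")   # ~0.01000: worse "macro"
print(f"ss-like   THD: {thd(ss_like):.5f}")     # ~0.00004: better "macro"
# Yet a Taylor view of ss_like shows 9th-order content, i.e. worse
# "micro" linearity in the sense described above.

The THD number alone ranks the first curve worse, while a term-by-term view ranks it better; that gap is exactly the micro/macro distinction.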


I do not claim anything; I just think the right way in audio design is to make components more linear in micro, then to try to speed up their response, and the last parameter for upgrading should be macro linearity.

Excuse my bad English.
 


Posted by rowuk on 05-04-2013
Hi Amir,
your English is just fine.

I think that there is an additional factor that your formula needs. The sense of pitch is not determined acoustically by "Q" unless we are talking about pure sine waves. When we add the overtones found on real musical instruments, funny things happen. Depending on the note played, the first harmonic may not even be a perfect octave, as wind instruments, for instance, change their acoustical length based on frequency. A trumpet consists of a cylindrical and a flared portion. The flare determines the acoustical length of the instrument. If you have a friend who plays trumpet, have him blow a few notes on his instrument and then do the same on a piece of garden hose. The garden hose cannot be used for anything "tonal", even though mathematically it represents a "best case" situation.

When we hear a well-played instrument, the sound is modelled by the player and the result is pleasing. In a speaker, we have "issues" in the phase and relative loudness of each of the speaker drivers used. Also, the directivity of the driver can cause a shift in how the sound is reproduced. Spatial distribution also separates the overtones from the fundamental, which requires a given listening distance to allow the sound wave to integrate.

I guess what I am saying is that even if each driver behaved as a perfect piston and was perfectly integrated with another driver, we would still have issues that make the "cook" responsible for the end result, not the absolute quality of each ingredient. In this respect, I believe that audio, like cooking, is an art form rather than a science. Fast food is the science of eating: calories, salt and sugar, repeatability, profitability. Many audio dealers are very much like McDonalds in this respect. For this type of reproduction, I think the math can become VERY accurate!

Posted by Amir on 05-04-2013
 rowuk wrote:
I think that there is an additional factor that your formula needs. The sense of pitch is not determined acoustically by "Q" unless we are talking about pure sine waves. When we add the overtones found on real musical instruments, funny things happen. …



For a single sine wave (like 1kHz) we have one nonlinear transfer curve, like an (output voltage)/(input voltage) curve.
Its response depends on the level of the input voltage.
For a single-frequency input signal the output signal is not single-frequency: it has harmonics, and each harmonic has its own amplitude and phase. The number of these harmonics, and the complexity of their phases and amplitudes, depends on the order of the Taylor series of the transfer curve.

For the audio band (all frequencies from 20Hz to 20kHz), for each frequency we have a particular transfer curve.
At first look, in audio measurement we use the Fourier transform (https://en.wikipedia.org/wiki/Fourier_transform), and the transfer function says what the amplitude and phase are at each frequency. These two curves should be linear in the ideal condition, and the Fourier transfer function models the system only if we assume the system is linear.

In the real world, because the system is not linear, we should not look only at the Fourier transfer function result and forget the harmonics produced by the nonlinear curve.
I think we need to look at the coherence of the curves (Vout/Vin for each frequency) from 20Hz to 20kHz.
This means: if at 100Hz the Taylor series of the transfer curve is A, and at 1kHz the Taylor series of the transfer curve is B, how similar are A and B?
We should define a coherence function for the audio band such that analyzing that curve would help us.
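
One way this "coherence of curves" could be turned into a number (Python sketch; the device model and the cosine-similarity metric are assumptions of mine, not an established audio measurement):

import numpy as np

def transfer_coeffs(curve, freq, order=5):
    # Fit a polynomial to the Vout/Vin curve measured at one frequency.
    v_in = np.linspace(-1, 1, 401)
    return np.polynomial.polynomial.polyfit(v_in, curve(v_in, freq), order)

def coherence(curve, f1, f2):
    # Cosine similarity between the two per-frequency polynomial fits.
    a, b = transfer_coeffs(curve, f1), transfer_coeffs(curve, f2)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented device whose 3rd-order error drifts with frequency:
device = lambda v, f: v - (0.01 + 0.02 * np.log10(f / 20.0)) * v**3

print(coherence(device, 100.0, 1000.0))   # near 1.0 -> the curves cohere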

Stereophile's measurements are not perfect, but even in some curves we can see the difference between micro and macro linearity.
Lamm, Audio Note, Hovland and Lavry seem to be better in micro linearity, and Boulder, Soulution and Weiss are better in macro linearity.
90% of the parameters audio reviewers use in describing sound are focused on macro linearity, and this shows why they give golden awards to amplifiers like Soulution.

In the past Lamm had an article about his view on linearity, but now I cannot find it on the web?!!






Posted by rowuk on 06-10-2014
 Romy the Cat wrote:
… Excellent illustration of my original point! I wonder if you detected any differences in perception when your student sounds “fictitiously sharp” in major chords vs. minor chords?

Actually, when we look at the natural trumpet as played in Bach's day, we have a very interesting situation. The tones played are called partials, because of the way that the resonance is divided up in the instrument. The lowest tone, one wavelength in the horn, is called the fundamental; an octave higher is 2 wavelengths in the instrument; the next note is 3 wavelengths in the instrument and is an octave and a fifth above the fundamental. 4 wavelengths is two octaves above the fundamental, 5 is a third above that, 6 wavelengths is an octave above the fifth, 7 wavelengths is a very flat 7th, and 8 wavelengths is the third octave above the fundamental.
The point that I am making here is that this "natural" instrument, when played in an ensemble, has all of the notes line up mathematically and acoustically correctly. Major chords produce essentially no dissonant resultants. Modern instruments with valves are much shorter, and we are stuck with the length of the valve slides to adjust intonation and sum and difference tones. The opportunity to get it "wrong" is far greater!
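
A small sketch (Python) of where those partials land, measured in equal-tempered semitones above the fundamental:

import math

def partials(fundamental, n=8):
    # Frequency of each partial and its interval above the fundamental.
    for k in range(1, n + 1):
        semitones = 12 * math.log2(k)
        print(f"partial {k}: {fundamental * k:7.1f} Hz, {semitones:5.2f} semitones up")

partials(110.0)   # e.g. a long natural trumpet with a 110 Hz fundamental
# Partial 7 lands 33.69 semitones up, ~0.31 semitone below the equal-tempered
# minor 7th two octaves up (34 semitones): the "very flat 7th" above.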

Posted by steverino on 06-10-2014
Let's be precise: the overtones (harmonics) from most musical tones contain All the notes of the scale at some point in the series. The major third is there and the minor third is there. The difference is simply their relative position in the series. The existence of these overtones is generally much more of an issue for bass instruments than treble instruments, which makes sense if we plot out where the overtone series falls on the piano. If we are concerned with the trumpet, the high C note (C6) has its first overtone at C7 and its third overtone at C8, the highest note of the piano. We haven't even gotten a major-third overtone yet, and we are beyond the limits of the piano. If we play the low C (C4), then the first overtone is C5, then the fifth (G5), then the third overtone is C6, followed by the major third (E6), then the fifth (G6) again. This time the major third will be well within the range of the piano. Of course, if we look at the cello or string bass, then the entire harmonic series will be present within the range of the piano.

This is a well-known issue of orchestration when composing in the minor scales, since bass notes can emit audible major-third overtones that conflict with the minor third in the I, IV and V chords based on the minor scale. There are techniques to minimize the problem, but it is a basic reason why, historically speaking, minor scales were viewed as more dissonant than major scales. There is also the issue of percussion instruments such as bells, which sometimes do Not have a typical overtone series but have noisy or chaotic overtones. These have to be carefully orchestrated if used in a tonal composition. Another related issue is the so-called inverted chords, where the root of the chord is not the lowest note played, e.g. a C E G triad with E or G in the bass, termed 6/3 and 6/4 chords. In such inversions the overtone series does not map as easily as for the major chord in root position.
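
A sketch mapping overtones to the nearest equal-tempered note names (Python; A4 = 440Hz assumed), which reproduces the low-C series described above:

import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq):
    # Nearest equal-tempered note name for a frequency (A4 = 440 Hz).
    n = round(12 * math.log2(freq / 440.0)) + 57   # index with C0 = 0
    return f"{NAMES[n % 12]}{n // 12}"

c4 = 261.63                                        # middle C
print([note_name(c4 * k) for k in range(1, 7)])
# -> ['C4', 'C5', 'G5', 'C6', 'E6', 'G6']: the major third (E6) arrives
#    as the 5th partial, well within the piano's range.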

Despite all these nice and tidy theoretical considerations, the fact is that people have accepted more and more dissonant relations quite readily, despite some initial turmoil. In the Middle Ages even the major third was considered a slight dissonance. By the time we get to Wagner, the overtone series at any point in time is a cacophony of dissonant intervals sweetened with octave duplication.

Posted by tuga on 06-11-2014
 Amir wrote:
I do not claim anything; I just think the right way in audio design is to make components more linear in micro, then to try to speed up their response, and the last parameter for upgrading should be macro linearity.


Hello Amir,

Could you define "micro" and "macro" linearity in "layman's" terms?
I am interested in your theory but my ignorance leaves me standing at the door…

Cheers,
Ric
