by David (Rudy) Trubitt
This article originally appeared in Electronic Musician, © September 1991.
Few would deny the big advantage of samplers and sample-playback units: Traditional synthesizers rarely reproduce an acoustic instrument more convincingly than a sample of the actual sound. But sooner or later the advantage becomes a limitation: Even with a variety of performance controls to add expression, the soul of the sound is static. Samples of the same pitch with different performance articulations can mask the problem, but they still fail to provide any insight into the nature of the instrument. In other words, having piano and forte horn samples won't get you a mezzo toot, no matter how hard you try to cross-fade the two.
What's missing is a link to the physical characteristics of the instrument itself. Physical modeling uses a mathematical description of an instrument to bridge the gap between the sound an instrument makes and why it makes it. Physical modeling is not new (related work dates back to the sixties), but the current crop of digital signal processing (DSP) chips has made the necessary computing power available at a reasonable cost.
To get an idea of the state of physical modeling I eagerly paid a visit to Stanford University's Center for Computer Research in Music and Acoustics. (For those who haven't heard, CCRMA is the birthplace of FM synthesis, among other things.) CCRMA's physical modeling research is based on waveguide synthesis, a technique developed by Julius Smith. A waveguide is a tube (much longer than it is wide) down which a wave travels. It resonates at frequencies whose wavelengths are related to its dimensions. If the waveguide bends, changes diameter, or intersects with another waveguide, additional resonances are formed. In waveguide synthesis, mathematical models of waveguides and oscillators are used, but their behavior is consistent with their real-world counterparts. "Real" instruments can be simulated by a computer model of interconnected waveguides and oscillators. Perry Cook, a Research Associate at CCRMA, explains: "A clarinet, to its first approximation, is a cylindrical bore with a non-linear oscillator (the reed). If you don't mess it up with evil things like tone holes, you can actually do a pretty cheap simulation of a clarinet."
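The delay-line picture can be made concrete in a few lines of code. This is not CCRMA's software--just a minimal plucked-string sketch in Python of the simplest digital waveguide (the well-known Karplus-Strong loop): the delay line holds one period of the wave, and its length sets the pitch.

```python
import random

def waveguide_pluck(period, n_samples, seed=0):
    """Minimal digital waveguide: a delay line fed back on itself.
    The line holds one period of the wave -- the instrument's entire
    "memory" -- and its length in samples sets the pitch."""
    rng = random.Random(seed)
    # Excite the line with one period of noise (the "pluck").
    line = [rng.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(n_samples):
        s = line.pop(0)  # the wave emerges from the end of the line...
        out.append(s)
        # ...and re-enters, averaged with its neighbor.  The averaging
        # damps high frequencies slightly on every round trip, so the
        # tone decays naturally from bright to mellow.
        line.append(0.5 * (s + line[0]))
    return out
```

With a 44.1 kHz sample rate, a 64-sample line would sound near 689 Hz; changing pitch is just changing the length of the line, much as adding tubing lengthens a brass instrument.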
Cheap in terms of computation time, that is. Despite advances in DSP performance, waveguide synthesis only occasionally runs in real time, with more complex models requiring anywhere from 2 to 10 minutes of computation to produce 1 minute of music. But chips keep getting faster, and what's in the lab today could be in your studio tomorrow. Most of the work at CCRMA is done using NeXT computers, whose built-in Motorola 56000 DSP chip provides an excellent development system.
A unique aspect of CCRMA's work is the development of a graphic "workbench" program for experimenting with physical models. The illustration shows the main screen of Perry Cook's SPASM program (Singing Physical Articulatory Synthesis Model). As you might guess, SPASM simulates a human singer. An oscillator models the glottis (vocal cords) and provides a basic vocal waveform. Our complex vocal cavity is modeled by eight individual waveguide chambers. Each waveguide's diameter is changed to create the basic vowel sounds, while a movable noise source creates sibilance and consonants. With the model in place, "animating" the vocal tract creates speech or song. SPASM sings in a classical style and is capable of producing a wide range of vocal tones.
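The eight-chamber tract is in the spirit of the classic Kelly-Lochbaum tube model (not necessarily SPASM's exact formulation), in which each change of cross-sectional area between adjacent segments sets a reflection coefficient--the fraction of the travelling wave bounced back at that junction. A sketch, with invented areas:

```python
def reflection_coeffs(areas):
    """One common Kelly-Lochbaum convention: at the junction between
    two tube segments, the fraction of the travelling wave that
    reflects is k = (A_next - A_prev) / (A_next + A_prev).
    Equal areas reflect nothing; a big area jump reflects strongly."""
    return [(a2 - a1) / (a2 + a1) for a1, a2 in zip(areas, areas[1:])]

# Illustrative (invented) cross-sectional areas, glottis to lips:
ks = reflection_coeffs([2.0, 1.5, 1.0, 1.5, 3.0, 5.0, 6.0, 5.5])
```

Changing a vowel in such a model means changing the area list, which moves the tract's resonances (formants) without touching the glottal oscillator at all.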
Perry Cook has created another workbench for the creation of brass instruments, which are well suited to the waveguide model. "A brass instrument is the prime case of a waveguide," Cook explains, "because large parts of it are cylindrical, and it's extremely long compared to its width. To change notes you just add hunks of tubing, just like patching in a new piece of waveguide." A mass-spring-damper oscillator models lip position as a function of differential pressure, and by controlling the tension of the lip a variety of trills and falls can be simulated, with timbres highly suggestive of brasses from tuba to tiny trumpet. Cook reiterates the advantages of waveguide synthesis: "You don't need a bunch of memory like you do with a sampler to stash these models in. You really only need about one period of memory, because that's the round-trip delay time within the instrument. What is the memory of a trombone? It's the length of it."
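A mass-spring-damper can be stepped through time with a few lines of arithmetic. The constants below are illustrative, not measured brass parameters, and the integration scheme (semi-implicit Euler) is one simple choice among several:

```python
def lip_step(x, v, pressure_diff, dt=1.0 / 44100,
             mass=1.0, stiffness=3.0e5, damping=30.0, area=1.0):
    """Advance the "lip" one time step.  The lip is a mass on a spring
    with damping, pushed by the pressure difference across it.
    Semi-implicit Euler: update velocity first, then position with
    the new velocity, which keeps the oscillation stable."""
    force = area * pressure_diff - stiffness * x - damping * v
    v = v + (force / mass) * dt
    x = x + v * dt
    return x, v
```

Tightening the lip corresponds to raising `stiffness`, which raises the oscillator's natural frequency (sqrt(k/m)/2pi, about 87 Hz with these made-up numbers)--the handle a model like Cook's uses for trills and falls.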
CCRMA's Technical Director Chris Chafe is also a cellist, and he's devoted considerable energy to modeling bowed string instruments. Of course, the better your understanding of the instrument, the better your chance of making a convincing model. Chafe describes his joint research with physicist Bob Schumacher: "Bob knew from some studies he'd done that the kind of thing that distinguishes a real vibrating system from current synthesizers is its unpredictability at a very micro-time level. Given a straight note you get changes in the wave shape from one period to the next."
It turned out that these irregularities were not random, although their frequency was lower than the fundamental pitch under analysis. Intertwined within the noise were sub-harmonics. "In a bowed string, the string alternates between sticking to the bow and flying back. A noise happens when the string is flying back. The fact that you're pulsing noise into a highly tuned resonator means that a previous noise reverberates and will affect the noise that happens sometime in the future, over a few periods. That's where the sub-harmonics come from." (Sub-harmonics also exist within wind instruments, where higher pitches resonate at fractions of the instrument's total length but still have the full length of the horn to traverse.) Adding synchronized noise to the model improved the realism of the sound significantly. "Pulsing the noise during the slipping part of the bow sounds right and gives you sub-harmonics," Chafe continues. "You also don't need an extra control signal for the noise, like a finger on a 'noise' wheel, because it's there as part of the physical model."
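The pulsed-noise idea can be caricatured in code: feed a tuned delay loop, but inject noise only during a short window of each period, standing in for the slip phase of the bow. This is a toy sketch with invented parameters, not Chafe's model:

```python
import random

def pulsed_noise_loop(period, n_samples, slip_fraction=0.2,
                      noise_amp=0.3, loop_gain=0.97, seed=1):
    """Delay loop with noise injected only during the "slip" window
    of each period.  Because every burst re-enters a tuned loop, an
    earlier burst reverberates and colors the bursts that follow --
    the mechanism Chafe points to for sub-harmonics."""
    rng = random.Random(seed)
    line = [0.0] * period
    slip_len = max(1, int(period * slip_fraction))
    out = []
    for n in range(n_samples):
        s = line.pop(0)
        out.append(s)
        # Noise is "pulsed" in only while the bow is slipping;
        # the rest of the period the loop just recirculates.
        burst = noise_amp * rng.uniform(-1.0, 1.0) if n % period < slip_len else 0.0
        line.append(loop_gain * s + burst)
    return out
```

Note that the noise needs no control signal of its own--it is gated by the loop's own period, which is Chafe's point about the noise being "part of the physical model."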
Chafe tested his cello model with a familiar ritual--tuning it up. "The hardest thing in the world was to make this thing go in tune," he explains. "Every time I had a bug in my program it would express itself as being slightly out of tune!" Additional refinements are hampered somewhat by the lack of real-time control over the model. "Let's say I did a more complete cello," Chafe explains, "and wanted to hear how it responds to a different amount of rosin. It would be very tedious to run that experiment without having a knob that said 'rosin coefficient' to turn until you got it right. It's all very intuitive when listening, but poking numbers into equations is not an easy way to learn."
Nevertheless, Chafe is optimistic about the future of waveguide synthesis and physical modeling. He expects to see commercial applications of the technology appearing soon. "It's technically feasible now, and we'll see this stuff brought to market in short order, I'm sure. The benefits are pretty obvious to players that like to get expressive control. As of yet, no one's finished a piece of music using this approach. It's all kind of speculative, a lot of test tones--I think we're just at that point where there will start to be some interesting music made."
Those interested in further reading should start with "On the Oscillations of Musical Instruments" by McIntyre, Schumacher, and Woodhouse in the Journal of the Acoustical Society of America, November 1983, and "Musical Applications of Digital Waveguides" by Julius O. Smith, Stanford University Department of Music Report STAN-M-39, 1987.