I used to play drums a lot. I never got that good; I started around the age of 16 and stopped around the age of 30. But I like to say that I never really started or stopped, because I’ve been trying to create rhythms by pounding my hands on desks, tables or whatever else nonstop since I was a little kid. Now, I spend a lot of time typing at a keyboard. So you can imagine how excited I was when I found out that, not only are virtual drums available, but many can be accessed directly through most common Web browsers.
Virtual Instruments on the Rise
Aside from drums, I also love guitar, piano and synthesized audio. Sound is basically made up of waves (perceivable fluctuations in atmospheric pressure) that can be combined with one another to form richer, more complex sounds. Audio synthesis is the process of finding the right frequencies (through oscillators or by other artificial means) and combinations of waves (and sometimes filters) in order to create — or synthesize — desirable new sounds. And synthesized music incorporates synthesized sound into musical systems, such as the Western chromatic, diatonic and pentatonic scales.
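To make the scale idea concrete, here is a small sketch of how pitches in the Western chromatic scale relate to frequency under the standard twelve-tone equal temperament convention (each semitone multiplies frequency by the twelfth root of 2, with A4 tuned to 440 Hz). The helper name is my own, for illustration:

```javascript
// Twelve-tone equal temperament: each semitone step multiplies the
// frequency by 2^(1/12). A4 (concert pitch) is conventionally 440 Hz.
// The function name is illustrative, not from any particular library.
function semitoneToFrequency(semitonesFromA4, a4 = 440) {
  return a4 * Math.pow(2, semitonesFromA4 / 12);
}

console.log(semitoneToFrequency(0));   // A4: 440
console.log(semitoneToFrequency(12));  // A5, one octave up: 880
console.log(semitoneToFrequency(-12)); // A3, one octave down: 220
```

MIDI note numbers (discussed below) map onto the same formula: since A4 is MIDI note 69, note n has frequency 440 · 2^((n − 69)/12).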
Music — like animation, narrative and many other disciplines — is a temporal art form. It is generally presented as a sequence of sounds, with some sense of regulated timing (rhythm), as well as transition (melody) and concurrence (harmony) of pitch/frequency. And since timing is fundamental to most music, latency is a key issue for any musical device. Generally speaking, the lower the latency, the more playable the instrument.
Music technology pioneer Max Mathews once said that “there are no theoretical limitations to the performance of the computer as a source of musical sounds.” Throughout his lifetime, he maintained that the computer would become a universal musical instrument — capable of emulating any sound or audio environment, while also able to create completely new and original audio events and experiences.
The Future of Music Production
The primitive building blocks of sound are sine waves — pure tones that oscillate at a single frequency. Audio synthesis can be additive or subtractive: the first method combines pure tones to achieve unique sounds, while the second filters the waves of a richer source sound. A good analogy might be that additive synthesis is like sculpting with clay, while subtractive synthesis is more like carving wood.
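The additive idea can be sketched in a few lines: sum sine waves at harmonic multiples of a fundamental frequency, sample by sample. The 1/n amplitudes (which approximate a sawtooth timbre), the sample rate and the partial count below are all arbitrary illustrative choices:

```javascript
// Additive synthesis sketch: each output sample is the sum of sine
// partials at integer multiples of the fundamental frequency.
// Amplitudes of 1/n roughly approximate a sawtooth wave.
function additiveSample(t, fundamental, partials) {
  let sum = 0;
  for (let n = 1; n <= partials; n++) {
    sum += Math.sin(2 * Math.PI * fundamental * n * t) / n;
  }
  return sum;
}

// Render one second of a 220 Hz tone with 8 partials at 44.1 kHz.
const sampleRate = 44100;
const buffer = new Float32Array(sampleRate);
for (let i = 0; i < buffer.length; i++) {
  buffer[i] = additiveSample(i / sampleRate, 220, 8);
}
```

Subtractive synthesis would start from a buffer like this one (or a noisier source) and carve away frequencies with filters instead of building them up.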
Much of computerized music’s evolution is shared with that of MIDI electronics and protocol. The standard (an acronym for Musical Instrument Digital Interface) originally gave musicians a means to create digital data representing the fundamental qualities of musical phrases, such as pitch and timing. MIDI files can thus be played through different synthesizers or MIDI-compatible applications, delivering the same underlying musical data through variable outputs.
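Those “fundamental qualities” are remarkably compact. In the MIDI 1.0 protocol, sounding a note is a three-byte channel message: a status byte (0x90 plus the channel number for note-on), the note number (0–127, middle C = 60) and the velocity (0–127). A minimal sketch, with an illustrative helper name of my own:

```javascript
// MIDI 1.0 note-on message: three bytes.
//   byte 1: 0x90 | channel   (channel 0-15)
//   byte 2: note number       (0-127, middle C = 60)
//   byte 3: velocity          (0-127, how hard the key was struck)
// The helper name is illustrative, not part of any MIDI library.
function noteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}

// Middle C at moderate velocity on channel 0:
console.log(noteOn(0, 60, 100)); // [ 144, 60, 100 ]
```

Note that no waveform is transmitted at all — only the performance data — which is exactly why the same MIDI phrase can sound completely different on different synthesizers.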
During its early life, MIDI was quite restricted, as it could only interact with a limited number of devices and its function was narrow in scope. Over time, however, MIDI plugins and extensions have enabled the standard to adapt to many different new situations and environments. This has brought the standard much further into the mainstream than it had ever been before.
As audio technology migrates from aging hardware interfaces (such as mixing boards and analog VU meters) into our computers and mobile devices, musicians are relieved of the burden of having to work with excessive equipment, but are also increasingly deprived of the ease of use that traditional instruments tend to provide. This has prompted some movements toward new hardware interfaces that can easily connect with digital workstations such as laptop and desktop computers. (To learn more about the evolution of the music industry in the digital era, see From Vinyl Records to Digital Recordings.)
Beyond ease of use, ease of collaboration is key to digital music’s evolution. This is a big part of what makes the Web Audio API one of the most significant digital audio innovations in the recent past. This JavaScript-based audio solution centers on an interface called AudioContext, in which audio-processing graphs are scripted and rendered as precisely timed synthesized audio.
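As a rough sketch of that graph model: nodes such as oscillators and gain (volume) controls are created from the context, wired together, and scheduled against the context’s clock. The node-creation and scheduling calls below are real Web Audio API methods; the playTone wrapper and its parameters are my own. In a browser you would call it as playTone(new AudioContext(), 440, 1):

```javascript
// Web Audio API sketch: a sine oscillator feeds a gain node, which
// feeds the context's output. The context is passed in so the same
// function works with any object implementing the AudioContext shape.
function playTone(ctx, frequency, duration) {
  const osc = ctx.createOscillator();
  const amp = ctx.createGain();
  osc.type = 'sine';
  osc.frequency.value = frequency;      // pitch in Hz
  amp.gain.value = 0.2;                 // keep the output quiet
  osc.connect(amp);
  amp.connect(ctx.destination);         // the speakers, ultimately
  osc.start(ctx.currentTime);           // precisely timed start...
  osc.stop(ctx.currentTime + duration); // ...and stop
  return osc;
}
```

The sample-accurate scheduling in start() and stop() is what makes the API viable for rhythm-critical instruments like virtual drums.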
The API is still young, but is also very promising — not just to audiophiles, but to programmers and Web developers as well. Compared with older, embedded audio formats, Web Audio API delivers very low latency within the interface. Many different virtual musical instruments are already being developed, and it is believed that Web Audio API will find increased use in online gaming.
While acoustic music will likely never completely go away, audio synthesis in the digital space will continue to play an increasingly significant role in modern recording, production and delivery of digital audio. Web Audio API is accelerating digital audio’s progress by facilitating new DAW interfaces, platforms and audio solutions through enhanced usability and accessibility.