The workbook provides step-by-step activities for classroom-based and independent project work, covering the skills and techniques used in modern music production. The activities are supplemented with basic concepts, hints and tips on techniques, production skills and system optimisation to give students the best possible chance of passing or improving their grade. The book includes screenshots throughout from a variety of software including Cubasis, Cubase SX, Logic and Reason, though all activities are software- and platform-independent.
This title deals with both the practical use of technology in music and the key principles underpinning the discipline. It targets both musicians exploring computers and technologists engaging with music, and does so in the confidence that both groups can learn tremendously from the cross-disciplinary encounter. The Routledge Companion to Music, Technology, and Education is a comprehensive resource that draws together burgeoning research on the use of technology in music education around the world.
Rather than following a procedural how-to approach, this companion considers technology, musicianship, and pedagogy from a philosophical, theoretical, and empirically-driven perspective, offering an essential overview of current scholarship while providing support for future research. The 37 chapters in this volume consider the major aspects of the use of technology in music education: Part I. Examines the historical and philosophical contexts of technology in music. This section addresses themes such as special education, cognition, experimentation, audience engagement, gender, and information and communication technologies.
Part II. Real Worlds. Discusses real world scenarios that relate to music, technology, and education. Topics such as computers, composition, performance, and the curriculum are covered here. Part III. Virtual Worlds. Explores the virtual world of learning through our understanding of media, video games, and online collaboration. Part IV. Developing and Supporting Musicianship. Highlights the framework for providing support and development for teachers, using technology to understand and develop musical understanding.
Noise can have a variety of qualities that are usually described using colors: white noise is very harsh and grating, while pink noise is still noisy but more pleasant.
Representative waveforms for white noise and pink noise are given in Figure 2. These are only representative, because noise does not have a predictable amplitude pattern. For example, the loudness of an accented note rises more quickly from silence and reaches a higher maximum loudness than that of a note that is not accented.
A note that is staccato will have a quick rise and a quick fall-off at the end. Articulation is not just limited to musical notes: the loudness of the non-musical sounds around us also changes over time. A thunderclap has a sudden jump in loudness.
Figure 2 (a and b). Representative noise waveforms; the duration for each is about the same as the period of a Hz periodic waveform.
A motorcycle roaring toward you has a long, slow increase in loudness followed by a long, slow decrease as it passes you and roars on. Each of these sounds has its own articulation. When loudness was discussed above, it was related to the amplitude of the individual cycle of a waveform, whose duration is quite short: the period of A4 (the tuning A, 440 Hz) is just over 0.002 seconds.
The changes in loudness referred to as articulation take place over much larger spans of time. An eighth note, even at a brisk tempo, lasts a sizable fraction of a second; in other words, you could fit a great many cycles of the tuning A's waveform into that eighth note. Even at the lowest frequency that humans can hear, 20 Hz, the period is only 0.05 seconds. The physical property that is related to articulation is referred to as an amplitude envelope because it contains, or envelops, many repetitions of the waveform (see Figure 2).
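To make the comparison concrete, here is a minimal sketch; the 120 BPM tempo and the 440 Hz tuning A are assumed values for illustration, not figures from the text:

```python
# Compare the period of one cycle of a waveform with the duration of an eighth note.
# Assumed illustration values: A4 = 440 Hz and a tempo of quarter note = 120 BPM.
frequency_hz = 440.0                       # tuning A (A4)
tempo_bpm = 120.0                          # hypothetical tempo

period_s = 1.0 / frequency_hz              # duration of one cycle
eighth_note_s = 60.0 / tempo_bpm / 2.0     # an eighth note is half a quarter note

print(f"Period of one cycle:     {period_s:.5f} s")
print(f"Duration of eighth note: {eighth_note_s:.3f} s")
print(f"Cycles per eighth note:  {eighth_note_s / period_s:.0f}")
```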
To represent the amplitude envelope, we will continue to use the waveform view (amplitude vs. time). This envelope also has its roots in analog synthesis and is widely found in various forms on hardware and software synthesizers. Only the top of the envelope is usually shown because many waveforms are the same on the top and on the bottom. The frequency is extremely low (20 Hz), so you can still see the individual waveforms within the envelope.
The difference in amplitude between the peak and the sustain levels reflects the degree of initial accent on the note. A strong accent will have a greater peak and a greater fall off from the peak to the sustain level, whereas a note that is attacked more gently will have a lower peak and a smaller difference between those two levels.
The sustain segment is the most characteristic segment for this envelope model. Only instruments in which the performer continuously supplies energy to the instrument by blowing, bowing, or some other means, will have such a sustain segment. These instruments include flutes, clarinets, trumpets, trombones, violins, cellos, and organs. The release portion of the envelope reflects how the sound falls away from the sustain level to silence.
With a struck or plucked instrument, the performer initially imparts energy to the instrument and then allows the vibrations to damp down naturally. Examples of this type of instrument include drums, cymbals, vibraphones, guitars, and pizzicato violins. The duration of the attack segment reflects the force with which the instrument is activated—how hard the string is plucked or the drum is hit. It can also reflect the materials that impart the initial impulse. A drum hit with a big fuzzy mallet will likely have a somewhat longer attack than one hit with a hard mallet, and a guitar string plucked with the flesh of the finger will have a somewhat longer attack than one plucked by a pick.
The duration of the release portion of the envelope is related to how hard the instrument was struck or plucked: the harder the strike, the longer the release. The size and material of the vibrating object also impact the release: a longer string or larger drumhead is likely to vibrate longer than short strings and small drumheads. Of course, these envelope models are not necessarily mutually exclusive. In struck or plucked notes, the release of one note can overlap the attack of the next, but the envelope will still be articulated for each note.
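As a minimal sketch of the attack-decay-sustain-release model described above (the segment durations, sustain level, and 440 Hz test tone are arbitrary illustration values, not prescriptions from the text):

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, sustain_time, release, sr=44100):
    """Piecewise-linear attack-decay-sustain-release amplitude envelope."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)           # rise to the peak
    d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)  # fall to the sustain level
    s = np.full(int(sustain_time * sr), sustain_level)                    # steady sustain
    r = np.linspace(sustain_level, 0.0, int(release * sr))                # fall away to silence
    return np.concatenate([a, d, s, r])

sr = 44100
env = adsr_envelope(attack=0.02, decay=0.05, sustain_level=0.6,
                    sustain_time=0.4, release=0.15, sr=sr)
t = np.arange(len(env)) / sr
note = env * np.sin(2 * np.pi * 440.0 * t)   # the envelope "envelops" many cycles of a 440 Hz wave
```

A stronger accent would be modeled by raising the peak relative to the sustain level; a struck or plucked sound would omit the sustain segment and let the release begin right after the attack.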
Many hardware and software synthesizers do not have separate controls for these two envelope types. Exponential line segments are common in many synths. Rhythm is made up of multiple notes or sound events unfolding over time; in this, rhythm is similar to melody, which also consists of multiple notes. In addition, rhythm is often perceived in a hierarchical fashion with individual events combining to form beats and beats combining to form meter.
At the level of a group of notes, aspects of rhythm can be seen in the waveform view by identifying patterns in the attacks of the notes, referred to as transient patterns. The term transient is used because the attack-decay portions of an envelope form a short-term transition from no sound to the sustain or release of a sound. Some sound types, such as drums, form patterns that have strong transients and no sustain, whereas other sounds, such as slurred woodwinds, brass, or strings, form patterns in which the transient can be difficult to see.
Viewing transients in the waveform view involves even more zooming out than with amplitude envelopes (see Figure 2). The analysis of transients as a pattern of beats and bars is a standard feature in many recording programs and is generally referred to as beat detection (see Figure 2). This process allows you to manipulate audio as separate logical chunks, the same way you can manipulate notes.
Figure 2. The vertical lines indicate identified beats.
Sound that consists of clearly defined transients in a regular pattern is easier for humans and for software to parse into beats and bars. Legato passages in strings, winds, and brass, where the notes are not re-attacked and hence have fewer transients, are more difficult for software that relies solely on transient detection to parse.
However, our perceptual system is more successful here because, in addition to transient detection, we can also bring pitch detection and other techniques to bear on the problem. Software that also utilizes such a multi-faceted approach will be similarly more successful. This table will be further refined in the following chapter.
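A toy sketch of transient-based detection follows; it simply flags frames whose energy jumps well above the previous frame's energy. The frame size and threshold are arbitrary, and real beat-detection algorithms in recording programs are considerably more sophisticated:

```python
import numpy as np

def detect_onsets(audio, sr, frame=512, threshold=2.0):
    """Flag frame boundaries where the energy jumps sharply (a crude transient detector)."""
    n_frames = len(audio) // frame
    energy = np.array([np.sum(audio[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    onsets = []
    for i in range(1, n_frames):
        if energy[i] > threshold * (energy[i - 1] + 1e-9):
            onsets.append(i * frame / sr)        # time of the detected transient, in seconds
    return onsets

# Example: two short percussive "hits" made of enveloped noise, separated by silence
sr = 44100
silence = np.zeros(sr // 2)
hit = np.random.randn(sr // 4) * np.exp(-np.linspace(0, 8, sr // 4))
audio = np.concatenate([silence, hit, silence, hit])
print(detect_onsets(audio, sr))    # roughly 0.5 s and 1.25 s
```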
The waveform representation is useful in many instances, but it falls short with regard to timbre. The physical property related to timbre found in the waveform view is the waveform itself.
However, the small collection of standard waveforms (sine, triangle, sawtooth, square, pulse) is of limited use when discussing real-world timbres. A better representation of sound for investigating timbre is the spectrum view. To understand the spectrum view it is useful first to consider the more familiar overtone series.
The overtone series represents the frequencies that are present in a single note, including the fundamental, and is usually shown using traditional music notation (see Figure 3). This traditional notation is somewhat misleading, because it implies that every note is really a chord and that all the frequencies are distinct pitches. There is really just one pitch associated with an overtone series—the pitch related to the fundamental frequency. However, there are some useful features about the frequencies in the overtone series that can be seen with traditional notation.
For example, for brass players, the fundamental frequencies of the pitches available at each valve fingering or slide position are found in the overtone series that is built on the lowest note at that fingering or position. So far these frequencies have been called overtones; however, the terms harmonics and partials are also used. The distinction between these terms is subtle. Counting by overtones (following Holmes), a sound consists of the fundamental frequency plus the first overtone, the second overtone, etc. The term harmonic is somewhat ambiguous.
It could refer only to the frequencies above the fundamental: the fundamental, then the first harmonic, the second harmonic, etc. However, it could also include the fundamental as the first harmonic, so the series would be: first harmonic (the fundamental), second harmonic, third harmonic, etc. The term partial implies that all of the frequencies in a sound are just parts of the sound: first partial (the fundamental), second partial, third partial, etc.
This term has some distinct advantages in that not every sound has frequencies that follow the overtone series, so the term partial could also be applied to frequencies of those sounds as well, whereas the terms overtone and harmonic only apply to sounds that follow the overtone series. Most of the sounds in the world are inharmonic, but many of the sounds that we are concerned about in music, such as sounds made by many musical instruments, are harmonic.
Inharmonic sounds will be discussed later in the chapter. This text will primarily use the term partial to describe frequencies in a spectrum and number them accordingly, with the fundamental being the first partial (see Figure 3).
To see what the relationships are between the frequencies in the overtone series, we need to find some point of reference in this notational representation. Since this overtone series is based on A, it includes the familiar tuning A as the fourth partial, which has a frequency of 440 Hz. The only other fact we need to know is that octaves have a frequency relationship of 2 to 1 (see Table 3).
Applying this principle to the other partials in this overtone series, you get the frequencies given in Table 3. If the fundamental is more generically given as some frequency f, then the partial frequencies are 2f, 3f, 4f, 5f, 6f, 7f, 8f, and so on. If you were to play G5 on a piano, the frequency would be noticeably different from that of the corresponding partial in this series. Nevertheless, the frequency relationships found in the overtone series have inspired many different approaches to tuning.
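A quick sketch (assuming a fundamental of A2 = 110 Hz, the note on which the series in the figures here is built) computes the partial frequencies directly:

```python
fundamental_hz = 110.0   # A2, the assumed fundamental for this overtone series

# Partial n has frequency n * f: f, 2f, 3f, ...
for n in range(1, 9):
    print(f"partial {n}: {n * fundamental_hz:.0f} Hz")
```

Partial 4 comes out as 440 Hz, the familiar tuning A, and partial 8 as 880 Hz, two octaves above the fundamental.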
If we continue to look at the overtone series in traditional notation, it is possible to derive ideal ratios for other intervals as well. A perfect fifth is present in the overtone series as the relationship of partial 3 to partial 2, giving a ratio of 3:2. A perfect fourth is found between partial 4 and partial 3, giving a ratio of 4:3. A major third is found between partial 5 and partial 4, giving a ratio of 5:4, and a minor third is found between partial number 6 and partial number 5, giving a ratio of 6:5.
It is important to note that these are ideal relationships. In practice they can present some difficulties. One such difficulty is that it is possible to leap to the same note by different intervals and end up with contradictory frequencies.
As a classic example of this, if you start with one note and go up repeatedly by a perfect fifth, proceeding through the circle of fifths until you reach the beginning note several octaves higher, and then do the same by octaves, you reach different frequencies. Starting on C1, the lowest C on the piano (with C4 being middle C), you can get back to the pitch C by going up twelve fifths, and to that same C by going up seven octaves.
This is C8, the highest C on the piano. The results of this are shown in Table 3. Using the interval ratio for an octave of 2:1, multiplying by 2 generates the frequency for each successive octave. There are a number of other discrepancies to be found by going from one note to another by different intervals, and overtone series built on different fundamentals can generate different frequencies for what is nominally the same note.
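A short calculation shows the size of the discrepancy (the starting frequency of about 32.7 Hz for C1 is an equal-tempered value used only as a starting point for illustration):

```python
c1_hz = 32.70   # approximate frequency of C1, used only as a starting point

by_fifths  = c1_hz * (3 / 2) ** 12   # up twelve perfect fifths at the ideal 3:2 ratio
by_octaves = c1_hz * 2 ** 7          # up seven octaves at the 2:1 ratio

print(f"C8 via twelve fifths: {by_fifths:.1f} Hz")
print(f"C8 via seven octaves: {by_octaves:.1f} Hz")
print(f"Discrepancy ratio:    {by_fifths / by_octaves:.5f}")   # about 1.0136, the Pythagorean comma
```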
However, in isolation, intervals formed from the ideal ratios are said to sound more pure than any of the compromise tuning systems that have been developed. A variety of such compromise systems have been proposed and used over the centuries, including Pythagorean tuning, meantone intonation, just intonation, and equal temperament.
In equal temperament, the ratio between every semitone is exactly the same, so each interval, regardless of the starting note, is also exactly the same. The price is that most equal-tempered intervals deviate slightly from the ideal ratios. In performance, many performers and conductors will adjust their tuning of chords to partially compensate for this error.
For example, a performer holding the major third of a chord will often play it slightly flat relative to equal-tempered tuning to more closely approximate the pure intervals generated by the ideal ratios.
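A brief calculation (using the standard equal-tempered semitone ratio of the twelfth root of 2) shows how far the equal-tempered major third sits from the ideal 5:4 ratio, and why flattening it slightly brings it closer to the pure interval:

```python
import math

semitone = 2 ** (1 / 12)        # equal-tempered semitone ratio
equal_third = semitone ** 4     # a major third spans four semitones
just_third = 5 / 4              # ideal ratio from the overtone series

# Express the difference in cents (100 cents = one equal-tempered semitone)
cents = 1200 * math.log2(equal_third / just_third)

print(f"Equal-tempered major third ratio: {equal_third:.5f}")
print(f"Just (5:4) major third ratio:     {just_third:.5f}")
print(f"The equal-tempered third is about {cents:.1f} cents sharper than the just third")
```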
This brief discussion has really only scratched the surface of tuning issues, both historical and contemporary. There are many books and websites devoted to various tuning systems, particularly just intonation, and a number of contemporary composers have utilized various systems in their works.
In addition, many hardware and software synthesizers contain resources for variable tuning. To discuss timbre more generally, it is necessary to abandon traditional music notation altogether and use the spectrum view of sound. The spectrum view represents sound as a graph of frequency vs. amplitude. The spectrum view for the overtone series starting on A2 is given in Figure 3. The amplitudes of the frequency components are from a spectrum analysis of a trombone note.
Figure 3. The fundamental is A2 (110 Hz) and 16 total partials are shown, with the ellipses indicating that the partials continue. The relative amplitudes of the partials are based on a recording of a trombone; the frequency data is the same as in Figure 3.
The amplitude axis of this view differs from the amplitude axis of the waveform view in that the amplitudes in the spectrum view show the amplitude of each individual partial, whereas the amplitude in the waveform view is the overall amplitude of the sound.
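The spectrum view can be approximated in code by taking the Fourier transform of a waveform. In the following sketch the tone is synthesized with made-up partial amplitudes (they are not the trombone measurements from the figure), purely to show how frequency-vs.-amplitude data is obtained:

```python
import numpy as np

sr = 44100
duration = 1.0
t = np.arange(int(sr * duration)) / sr

# A harmonic tone on A2 (110 Hz) with illustrative partial amplitudes
fundamental = 110.0
amplitudes = [1.0, 0.8, 0.6, 0.5, 0.3, 0.2]          # partials 1-6
tone = sum(a * np.sin(2 * np.pi * fundamental * (n + 1) * t)
           for n, a in enumerate(amplitudes))

# The spectrum view: frequency vs. amplitude via the discrete Fourier transform
spectrum = np.abs(np.fft.rfft(tone)) / (len(tone) / 2)
freqs = np.fft.rfftfreq(len(tone), d=1 / sr)

# Read off the amplitude at each partial frequency (1 Hz per bin for a 1-second signal)
for n in range(1, 7):
    bin_index = int(round(n * fundamental * duration))
    print(f"{freqs[bin_index]:6.0f} Hz  amplitude {spectrum[bin_index]:.2f}")
```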
To see how a spectrum changes over the course of a note or sound event—and it can change quite a bit—you would have to look at successive spectrum views that would provide a time-lapse view of the spectrum. There are variations on the spectrum view that allow three dimensions to be shown at once: frequency, amplitude, and time. One is the spectrogram view, which plots time vs. frequency, with the amplitude of each frequency shown as color or darkness.
Another of these spectrum view variations is the waterfall spectrum, which typically shows frequency vs. amplitude, with successive spectra stacked one behind another over time.
To understand these basic waveforms in some more detail, we can look at their spectra. The sine wave is the simplest possible waveform, having only one partial: the fundamental (see Figure 3). By itself, the sine wave has a pure, pale timbre that can be spooky in the right situation. The triangle wave contains only the odd partials (1, 3, 5, 7, etc.). The square wave also contains only the odd partials, but in greater proportion than the triangle wave, so it sounds brighter than the triangle wave (see Figure 3).
In the octave below middle C it can sound quite a bit like a clarinet, which also has very little energy in the even partials in that register. A guitar can also produce a tone like this by plucking an open string at the twelfth fret. Timbres built from a square wave can be quite penetrating. The sawtooth wave contains both even and odd partials (see Figure 3). As a result, the sawtooth wave is bright and nasal, which allows it to penetrate well when combined with other timbres.
Figure 3. All four vowels are shown left to right; the different amplitudes of the partials for each vowel are shown clearly.
The sawtooth is one of the most common waveforms used for electronic timbres, particularly those meant to mimic analog synthesizer timbres.
Since the spectrum of a sine wave has only one partial, its fundamental, it is the most basic of the waveforms. In fact, each partial in the spectra of the other basic waveforms can be thought of as a separate sine wave. This implies that each of these spectra can be thought of as a sum of sine waves whose frequencies match the partial frequencies and whose amplitudes match the partial amplitudes.
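A minimal sketch of this sum-of-sine-waves idea follows. For a sawtooth-like tone, the amplitude of partial n is taken to be proportional to 1/n; that 1/n weighting is a standard assumption used here for illustration, not something stated in the text above:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr           # one second of sample times
f = 110.0                        # fundamental frequency (A2)

# Build a waveform by summing sine waves at the partial frequencies.
waveform = np.zeros_like(t)
for n in range(1, 41):           # the first 40 partials
    waveform += (1.0 / n) * np.sin(2 * np.pi * n * f * t)

waveform /= np.max(np.abs(waveform))    # normalize to the range -1 to 1
```

Summing only the odd-numbered partials with the same 1/n amplitudes would approximate a square wave instead.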
If enough partials are added like this, a sawtooth wave will be formed. This method of synthesis, along with a variety of others, will be discussed later in the text in the chapter on synthesis methods. Sounds whose partials follow the overtone series in this way have spectra that are termed harmonic spectra. The rest of the sounds in the world have partials that do not follow the overtone series and thus have inharmonic spectra.
These sounds include everyday sounds such as ocean waves, car engines, and jackhammers, but there are also a number of musical instruments that have inharmonic spectra, such as bells and some kinds of percussion. It is worth noting that even sounds whose spectra are essentially harmonic have partials that deviate from the precise ratios.
In general, real pipes, strings, and reeds have subtle physical characteristics that cause the resultant spectrum to deviate slightly from the pure overtone series. This deviation is sometimes termed inharmonicity. Nevertheless, such sounds are still heard as being largely harmonic and belong in a different category from distinctly inharmonic sounds. Notice that, while there are distinct partials, they do not form an overtone series of f, 2f, 3f, 4f, 5f, and so on.
As a result, this spectrum is deemed inharmonic.
Noise Spectra
Noise does not have a harmonic spectrum, nor does it have distinct partials. Instead, the spectra of various kinds of noise are better conceived as a distribution of energy among bands of frequencies. White noise, for example, has a spectrum whose energy is distributed evenly among all the frequencies. This can be described as equal energy in equal frequency bands: white noise has the same amount of energy in any two frequency bands that are the same number of hertz wide, wherever those bands lie in the frequency range.
Figure 3. Note that the partials are not whole number multiples of the fundamental; dashed lines indicate the positions of whole number multiples of the fundamental.
Pink noise, by contrast, distributes equal amounts of energy into each octave rather than into each band of equal width in hertz. Since we perceive these frequency bands as being of equal musical size (octaves), pink noise seems more evenly distributed and somewhat more pleasant to our ears.
We perceive white noise as being louder at higher frequencies because the absolute size in hertz of musical intervals (thirds, fifths, octaves, etc.) grows as the frequencies get higher.
As a result, the white noise distribution contains more energy in higher octaves than in lower octaves.
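The difference can be sketched in numpy. One common way to approximate pink noise is to scale a white-noise spectrum by 1/sqrt(f), which makes the power fall off as 1/f; this is an illustration, not a production-quality noise generator:

```python
import numpy as np

sr = 44100
n = sr * 4                                  # four seconds of noise
rng = np.random.default_rng(0)

# White noise: equal energy in equal frequency bands measured in hertz
white = rng.standard_normal(n)

# Pink noise: shape the white-noise spectrum so power falls off as 1/f
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / sr)
freqs[0] = freqs[1]                         # avoid dividing by zero at 0 Hz
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n=n)

def band_energy(signal, low, high):
    """Energy of the signal between two frequencies, read from its spectrum."""
    s = np.abs(np.fft.rfft(signal)) ** 2
    f = np.fft.rfftfreq(len(signal), d=1 / sr)
    return s[(f >= low) & (f < high)].sum()

# Compare octave bands: white noise roughly doubles in energy every octave up,
# while pink noise stays roughly constant per octave.
for low in [125, 250, 500, 1000, 2000, 4000]:
    w = band_energy(white, low, 2 * low)
    p = band_energy(pink, low, 2 * low)
    print(f"{low:5d}-{2 * low:5d} Hz   white: {w:12.0f}   pink: {p:12.0f}")
```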
However, we actually have quite a bit of experience in manipulating timbre through the tone or equalization (EQ) controls of our home and car stereos. Often stereos will have bass and treble controls, or bass, midrange, and treble controls. For each of these frequency bands, you can cut them (reduce the amplitude), leave them alone (flat), or boost them (increase the amplitude). Definitions of these ranges vary widely from device to device, but bass runs roughly from 20 Hz up to a few hundred hertz, midrange from there up to roughly 5,000 Hz, and the treble range roughly from 5,000 to 20,000 Hz. Many EQs have more than three bands and will often split the midrange up into two or three parts.
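The idea of cutting or boosting bands can be sketched crudely by scaling regions of a signal's spectrum. Real EQs use filters rather than this FFT trick, and the 250 Hz and 5,000 Hz crossover points below are assumed values, but the sketch captures the concept:

```python
import numpy as np

def three_band_eq(audio, sr, bass_gain=1.0, mid_gain=1.0, treble_gain=1.0,
                  bass_top=250.0, mid_top=5000.0):
    """Crude three-band EQ: scale the FFT bins in each band, then resynthesize."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1 / sr)

    spectrum[freqs < bass_top] *= bass_gain                        # bass band
    spectrum[(freqs >= bass_top) & (freqs < mid_top)] *= mid_gain  # midrange band
    spectrum[freqs >= mid_top] *= treble_gain                      # treble band

    return np.fft.irfft(spectrum, n=len(audio))

# Example: boost the bass and cut the treble of one second of white noise
sr = 44100
noise = np.random.randn(sr)
shaped = three_band_eq(noise, sr, bass_gain=2.0, treble_gain=0.5)
```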
Many manufacturers have their own definitions of these frequency bands. Many stereos, other sound playback devices, and pieces of sound software have graphic equalizers that can adjust more than just two or three frequency bands.
As a director at Bell Labs, Pierce championed the early computer music research there. This book is thorough and enjoyable, with just enough math to get the point across, but not so much that it is intimidating.
Measured Tones weaves together physics, music, and the history of music into an engaging and informative narrative. Despite the potentially daunting title, the material is carefully presented without much higher math. An Introduction to the Psychology of Hearing by Brian C.
Moore (5th ed.).
Section I Suggested Activities
This chapter provides some suggested activities relating to sound. You will have to learn the basics of Audacity to complete these activities; the version referred to here is version 1. The documentation is available on the Audacity website. Fortunately, Audacity is a relatively straightforward program. You can substitute sound generated by softsynths for the generated waveforms described here. Save this project to your disk so you can use it for later activities below.
What is their approximate period? Does it match the expected period given the frequency you entered above? How do they differ from each other? How do these differ from the waveforms in steps 1(d) and 2(d)? In what two ways are they different? In Mac OS X, use shift-command-4 and then drag around the area.
In Windows Vista, use the Snipping Tool program. Try to use different articulations and different tone qualities (timbres) when you sing or play. How does this differ from the waveforms from steps 1(d), 2(d), and 3(d)? How do the waveforms for the recorded notes differ from each other? How do the waveforms differ at different points in each note? You may need to adjust some of the settings, such as the size setting, to make the partials clear.
Do they follow the overtone series? Notice the broad distribution of energy for each of the types of noise and how they differ from each other at the higher frequencies.
Look at the spectrum for different parts of each note (closer to the attack and closer to the tail). Use screenshots to compare if necessary. How are they different? How is the spectrum different for each tone quality? How does the spectrum change for different parts of a note? This will create a new track with the same audio as the source track.
Click OK to apply the EQ. Do the same for the unmodified audio. How do the two spectra now differ? Save this project to your disk. This is the frequency of an equal-tempered fifth. How do they sound different? This is the ideal-ratio major third.
This is the equal-tempered major third. If the tone of the sawtooth is too strident, you can select all of the audio and use the equalizer discussed in project 6 to reduce some of the high frequencies.
Though we live in the digital age, analog and digital audio are intertwined and it is impossible to consider one without the other. The audio we record, edit, and mix might come from live voices or instruments (analog), from hardware synthesizers (digital or analog), from software synthesizers and samplers (digital), from sample or loop libraries (digital), or all of the above.
This section is designed to acquaint you with the basics of audio hardware, digital audio, and digital audio software. Though the material in this section makes regular reference to audio recording, which is a specific application of audio technology, the concepts involved here are integral to the remaining sections of this text. MIDI, the subject of Section III, is a protocol designed to control the electronic generation of digital audio, whether through sampling or synthesis.
Section IV describes various sampling and synthesis methods for generating this digital audio. Notation software and computer-assisted instruction software, the subjects of Section V, utilize digital audio extensively for playback, whether from audio files or generated from sampling and synthesis. In addition, even the simplest music technology hardware setup includes various types of audio connections and the use of speaker technology.
More complex systems might also incorporate microphones, preamplifiers, mixers, and audio interfaces. Sound waves are generated by the continuous changes in the physical positions of strings, reeds, lips, vocal cords, and membranes that in turn generate continuous changes in air pressure. In order to be recorded, these continuous changes in air pressure must be converted into continuous changes in an electrical signal.
While this electrical signal will eventually be converted into a non-continuous digital signal, the analog equipment used in the steps leading up to this analog-to-digital conversion is very important to the overall quality of digital audio. Similarly, when the non-continuous digital signal is converted back into a continuous electrical signal and then to continuous changes in air pressure, analog equipment takes center stage again.
Until a way is found to pipe digital signals directly into our brains, all sound will remain analog. Modern audio recording is inherently a mix of analog and digital technologies. It is still possible to record to an analog medium such as reel-to-reel tape, but, even with the current resurgence of interest in all things analog, that would be unusual. Once converted to a digital signal, the audio can be edited, processed, mixed, mastered, and distributed. The digital electrical signal is converted to an analog electrical signal, which is in turn converted into vibrations in the air that eventually reach your ears and brain.
The term transducer refers to a device that converts energy from one form to another, such as a solar panel, which converts solar energy into electrical energy, or an electric motor, which converts electrical energy into physical energy. The transducer that converts acoustic energy into analog electrical energy is a microphone. The electrical energy generated by a microphone is carried down a cable and, because the amount of energy produced by a microphone is usually quite small, it is then connected to a preamplifier, or preamp.
The device that converts an analog electrical signal into a digital electrical signal is an analog-to-digital converter, or ADC (see Figure 4). In the context of digital audio recording, ADCs are built into the audio inputs of your computer or into a specialized audio interface.
At this point, the signal has become a string of binary numbers expressed as an electrical signal (digital audio) that can be stored on some digital storage medium, such as a hard drive, flash drive, digital tape, or CD. Once the audio has entered the digital domain, the possibilities for editing, processing, and mixing are nearly endless. The device that performs the reverse conversion, from a digital electrical signal back to an analog electrical signal, is a digital-to-analog converter, or DAC. A DAC is also a very common device: almost any device that makes sound nowadays starts with digital audio, which must be converted to analog audio to be played back over headphones or loudspeakers.
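As a toy sketch of what an ADC does, the following samples an "analog" waveform at discrete times and quantizes each sample to a whole number. The 44,100 samples per second and 16-bit depth are common values assumed here for illustration:

```python
import numpy as np

sr = 44100                 # samples per second (a common rate)
bits = 16                  # bit depth
duration = 0.01            # ten milliseconds

# The "analog" signal: a 440 Hz sine wave evaluated at the sample times
t = np.arange(int(sr * duration)) / sr
analog = np.sin(2 * np.pi * 440.0 * t)

# Quantize each sample to one of 2**16 integer levels, as a 16-bit ADC would
max_level = 2 ** (bits - 1) - 1                      # 32767 for 16-bit audio
digital = np.round(analog * max_level).astype(np.int16)

print(digital[:10])        # the first few samples as 16-bit integers
```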
In the context of digital audio playback, the DAC is built into the audio output of your computer or into an audio interface. The analog electrical signal output from a DAC is sent to an amplifier to make the small electrical signal larger, and then to a transducer—a speaker in this case—to convert the electrical energy back into acoustic energy. A final conversion by the cochlea (another transducer!) turns that acoustic energy into nerve impulses bound for the brain.
Audio Recording Path Summary
The audio recording path can be summarized as follows:
1. Vibrations in the air are converted to an analog electrical signal by a microphone.
2. The microphone signal is increased by a preamplifier.
3. The preamplifier signal is converted to a digital signal by an ADC.
4. The digital signal is played back and converted to an analog electrical signal by a DAC.
5. The analog electrical signal is made larger by an amplifier.
6. The output of the amplifier is converted into vibrations in the air by a loudspeaker.
The microphone is the first link in this chain, and the quality of everything that follows depends on it. As a result, pound-for-pound, microphones are among the most expensive pieces of equipment in a studio.
While truly high-quality microphones run into the thousands of dollars, it is possible to record with microphones that are merely a few hundred dollars, or even less, to create acceptable demos, podcasts, and other such projects. There are many different kinds of microphones, but there are two primary types that are widely used in audio recording: dynamic mics and condenser mics.
One common type of dynamic microphone uses a diaphragm attached to a coil that is suspended near a magnet (see Figures 4). In this moving-coil design, the diaphragm acts like an eardrum, moving back and forth when hit with sound waves. The back and forth motion of the diaphragm results in a back and forth motion of the attached coil near the magnet, and the motion of the coil within the magnetic field generates a small electrical current that mirrors the sound wave. In practice, the properties of the electrical signal produced by a microphone will always differ somewhat from those of the sound wave.
The size of the diaphragm impacts the resultant sound, so moving-coil dynamic mics are often classed as small diaphragm or large diaphragm mics. Moving-coil dynamic mics tend to be sturdy and many are not terribly expensive, making them ideal for sound reinforcement in live performance. Moving-coil dynamic mics are also used in a studio setting for applications such as miking guitar amps, drums, and vocals.
Another, less common, form of dynamic microphone is the ribbon microphone, which consists of a thin piece of corrugated, conductive metal—the ribbon—placed between two magnets (see Figure 4).
When sound waves hit the ribbon, it moves within the magnetic field, creating a small electrical current in the ribbon. Because the ribbon is very light, it responds readily to the sound waves that strike it; another result of the lightness of the ribbon is that ribbon mics can be more delicate than other dynamic mics, and they have been largely used as studio mics.
Early ribbon mics were easily destroyed by a sudden puff of air or by dropping the microphone. More recent ribbon mic designs are more robust and ribbon mics are used in a wider variety of situations now.
Figure 4 (a, b, c). Microphone diagrams labeled with diaphragm, coil, magnet, backplate, ribbon, magnets, and output leads.
Condenser microphones also use a diaphragm that vibrates back and forth when sound waves hit it. However, in this case, the diaphragm is metal-coated and suspended in proximity to another metal plate (see Figures 4).
When a charge is applied across the plates, they form a capacitor, an electronic element once called a condenser (these mics are also referred to as capacitor microphones).
The capacitor (diaphragm and backplate) and its associated electronics make up the capsule for a condenser microphone. The charge across the plates requires a power source, usually phantom power supplied through the microphone cable by a preamp, mixer, or audio interface. Less often, this power is supplied by a battery in the microphone itself. As the diaphragm moves back and forth, the distance between the two plates changes periodically, causing a periodic change in the voltage across the plates and producing an electrical signal in wires attached to the diaphragm and backplate. As with dynamic mics, the properties of the electrical current will differ somewhat from the properties of the sound wave.
As with moving-coil dynamic mics, the size of the diaphragm impacts the resultant sound, so these mics are also classed according to their diaphragm size. Condenser microphones are used frequently in studio settings and for recording live performances.
They are more sensitive than dynamic mics and tend to reproduce the total frequency range more fully, due in part to the fact that the moving element, the diaphragm, is lighter than the diaphragm-coil unit of a moving-coil dynamic mic. Many condenser mics are more delicate and more expensive than moving-coil dynamic mics, though there are a number of robust, inexpensive condenser microphones available now.
A form of condenser mic called an electret microphone is commonly found in camcorder mics, computer mics, and lapel mics. Electret mics are condenser mics that have a permanent charge between the plates, instead of using phantom power or a battery to create the charge. Electret mics do need power for the rest of their electronics, which can be supplied by a battery or phantom power.
While electret mics used in consumer-quality applications such as camcorders tend to be of mediocre quality, there are also some electret mics that are high-quality studio microphones. Many microphones connect to a preamp, or to an audio interface with a built-in preamp, using a cable with XLR connectors (see Figure 4). Others connect directly to a computer over USB; these microphones have a built-in preamp and ADC. In addition, there are a variety of wireless microphones ranging from lapel mics to regular hand-held or stand-mounted microphones used for live performance (see Figure 4).
Figure 4. Omni, cardioid, and bidirectional polar patterns.
Figure 4. Omni and cardioid polar patterns.
Hosken is pragmatic in his approach and prioritises the importance of the presented material to students operating in the contemporary music production landscape.
An example of this method is the choice of topics for the appendices. The decision to include the discussion of computer hardware and software in the appendices was dictated by the fact that, while important, these topics are intuitively understood by current generations of students who grew up with computer technologies. While a good understanding of computer hardware and software is very helpful for troubleshooting technical problems and for efficient day-to-day work with music technology, the priority in the volume is given to sound-related topics, which is a sensible choice.
The second edition of the book, reviewed here, sees a restructure of some of the content as well as new additions. The new edition offers several references to mobile platforms, particularly iOS-based apps facilitating music creation and performance as well as computer-assisted instruction.
This discussion of mobile apps is a welcome update: since the launch of the iOS App Store in 2008, musicians have gained access to an ever-growing range of tactile apps with unparalleled music capabilities. In addition to the information on iOS apps, the updated text features references to hardware accessories relevant to the iOS platform.
The book helps to facilitate an understanding of the key principles that lie behind the technology, rather than discussing specific software or hardware tools. While the text does not focus on specific software, there are numerous references to popular plugins and digital audio workstation (DAW) programs. These examples offer a fairly broad overview of available software options, with the most frequently mentioned DAW applications being Pro Tools, Reason and Logic Pro, while other popular DAWs such as Cubase and Ableton are mentioned only in passing.
This approach helps to avoid the dangers of analysing minute details of technological tools that change at a rapid pace, which would quickly render such analysis obsolete. In my own tertiary teaching practice, I found the segments of the book discussing the properties of sound, MIDI, synthesis, sampling and bit rates to be of particular value to students new to these aspects of music technology.
Such content has proven to be an excellent resource for introductory information on these topics. The book is designed for makers and creators who want to use technology in their present or future professional activities and for whom, Hosken argues, it is important to understand how music technology works.
He does not discuss technology separately from music, which, I believe, is a step that can help practitioners who frequently wear the hats of musician and sound technician at the same time.
In my practice as a music producer I have found that it is often easy to fall into the trap of reaching for technological solutions to problems encountered in a mix and to forget about the musical ones. An example of a discussion just as important to music performers as it is to music technologists is Chapter Three, in the first section of the book, focused on Sound, where topics such as harmonics, overtones and timbre modification are discussed.
A limitation of the book is the lack of more in-depth explanation of some complex topics or processes that might be challenging for beginners. Examples of such topics, covered rather briefly, include the discussion of tuning and temperament in the section on Sound and the description of compression in the section covering Audio. A topic that could also be expanded in a future update to the section on MIDI and Software Instruments is how MIDI technology and computer software can be used in a live performance context.