Wednesday, January 7, 2009

Fundamentals of Sound (part 2)


Last time we got into some of the basics of sound, discussing the two main attributes:

Frequency and Amplitude.


Today we're going to get a little deeper into this stuff so that we can understand how it relates to the music we make using tools like Reason.

OK. So, frequency is basically dealing with pitch, right? How high or low a sound is. Measured in Hertz (Hz). Let's listen to a few tones to recalibrate our ears:

So what we were listening to were pure tones. Here is a picture of what a pure 440 Hz tone (aka "Concert A") looks like:
What we have been looking at and listening to so far are what are called sine waves. Sine waves are basically pure tones - nice and smooth and even, no additives or preservatives. But, as with so many things in life, reality is almost never so smooth and even...

Here is what Concert A (aka 440 Hz) looks like when played by a grand piano:


Whoa. Lots of stuff going on here. I should point out that this picture is zoomed way out, so we're not seeing the individual waves like we were in the picture of the sine wave. But my point is this: there is a LOT more than just a simple 440Hz sine wave playing when you hit the A key on a piano.

Major point #1: Almost no natural sound contains only one frequency.

You might be asking yourselves then, "What are the other frequencies?"

Short answer: harmonics.

Harmonics are whole number multiples of a specific frequency.

OK, so I just lost about half the class with that last sentence. But it's really not that complicated. Check it out:


So, we're looking at the first 5 harmonics of a vibrating string. The first harmonic is what is called the fundamental frequency. The fundamental is like the "main note" being played. For example, in the picture of the piano note above, 440 Hz is the fundamental, but all that other stuff in the waveform is a bunch of harmonics:

1st harmonic (aka fundamental) = 440 Hz
2nd harmonic = 880 Hz (440 x 2)
3rd harmonic = 1320 Hz (440 x 3)
4th harmonic = 1760 Hz (440 x 4)
5th harmonic = 2200 Hz (440 x 5)
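
If you like to experiment, here is a minimal sketch (in Python, assuming you have numpy installed; the harmonic levels are made-up numbers, not measurements of a real piano) of how stacking sine waves at those harmonic frequencies builds a more complex tone:

```python
import numpy as np

sample_rate = 44100                       # samples per second (CD quality)
t = np.linspace(0, 1.0, sample_rate)      # one second of time points

fundamental = 440.0                       # Concert A
# Made-up relative levels for the first 5 harmonics - every real
# instrument has its own unique recipe of harmonic levels.
levels = [1.0, 0.5, 0.3, 0.2, 0.1]

tone = np.zeros_like(t)
for n, level in enumerate(levels, start=1):
    # nth harmonic = fundamental x n (440, 880, 1320, 1760, 2200 Hz)
    tone += level * np.sin(2 * np.pi * fundamental * n * t)

tone /= np.abs(tone).max()                # scale down so it can't clip
```

Change those level numbers and the character of the tone changes, even though the pitch stays at 440 Hz. Which brings us to the next point...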

Every musical instrument has harmonics, but the amounts and combinations of these harmonics are unique to every instrument. This is why a guitar sounds like a guitar, a snare drum like a snare drum, Mariah Carey like Mariah Carey, etc.

Here is a comparison of a flute, a clarinet, an oboe, and a saxophone all playing Middle C (about 262 Hz):

As you can see, they are similar (they are all instruments from the woodwind family), but it is the unique harmonic content that gives each one a unique sound.

Major Point #2: Even though different instruments (including human voices) have different frequency ranges, most of them overlap.
Here is a chart that shows some instruments and their ranges (click on the picture to see a larger image):

For example, a violin, an MC's voice, and a snare drum may all contain a lot of the same frequencies. This is important to understand when you're mixing music because if you have a bunch of instruments all playing in the same general area of the frequency spectrum, it means that they are all competing for the listener's attention. So, what you want to do is give each one its own special spot in the mix. You do that by cutting certain frequencies and boosting others.

Cutting and boosting - that's the basic concept. Now let's talk about the tools you have to accomplish this. There are basically two types of EQ that you use in mixing:

1. Shelving EQ - This is simple. With a shelving EQ, you're just boosting or cutting everything above or below a specific frequency. This is a more general tool that lets you make adjustments to big sections of your sound. You will generally have one for dealing with the High Frequencies, and one for the Low Frequencies. Here's a chart:
2. Peaking (aka Parametric) EQ - This one lets you zero in on a very specific frequency range to cut/boost. This is a more precise tool for working with really detailed parts of the sound. Generally, you will have a couple of these that are meant to be used in the Low-Mid and Hi-Mid ranges. Here's a chart:


This is what the DigiRack EQ plugin that comes with Pro Tools looks like:


Notice that there are five sets of EQs. The middle three sets are all Peaking EQs. The ones on the far left and far right can be EITHER Peaking or Shelving, depending on how you set them. They will normally be set to Shelving.
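
For the curious, here is a minimal sketch of a peaking EQ in code (Python, assuming numpy and scipy are installed). It uses the standard "Audio EQ Cookbook" biquad formulas - a common textbook design, not necessarily the exact math inside the DigiRack plugin:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, sample_rate, freq_hz, gain_db, q):
    """Boost or cut gain_db around freq_hz; q sets how narrow the peak is."""
    A = 10 ** (gain_db / 40.0)               # amplitude factor from dB
    w0 = 2 * np.pi * freq_hz / sample_rate   # center frequency in radians
    alpha = np.sin(w0) / (2 * q)

    a0 = 1 + alpha / A
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]) / a0
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]) / a0
    return lfilter(b, a, signal)

# e.g. tame 3 dB of mud around 250 Hz on a bass-heavy track:
# cleaner = peaking_eq(track, 44100, 250, -3.0, 2.0)
```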

Really knowing how to EQ is an art form and, just like any other art, takes years of practice to really master. Just remember:

LESS IS MORE. Something that is recorded halfway decently should not need more than a little EQ adjustment. Anything more than +/- 6 dB is a pretty big adjustment (+6 dB roughly doubles the level of a frequency range, and -6 dB roughly cuts it in half).


Here is your assignment for today:
  1. Pick 2 tracks from your song that have a similar frequency range (like 2 low instruments).
  2. Insert EQs on each of them.
  3. Work with one track at a time: solo it and loop it.
  4. Using a Peak EQ, narrow the Q to about 6 and boost the Gain to about 12 dB.
  5. Now sweep the Frequency from high to low and pay attention to what frequency range gets louder when you sweep through it.
  6. Lower the Gain back down.
  7. Now solo the other track and pull up the EQ.
  8. Using the Peak EQ, find the same frequency range and lower the gain on that track just a little bit, like -3dB.
  9. Listen to both tracks together and see if you can hear the first track popping out just a little bit more now. Try adjusting the Q and the Gain to make it sound right.
  10. Pick two more tracks and repeat the same process.

Monday, January 5, 2009

Filtering assignment

Today we're going to start working a bit with frequency using a type of processor called an Equalizer (aka "EQ").

"What is an EQ?" you ask.

Well, an EQ lets you boost or cut specific frequency ranges in your tracks.

"Why would you want to do that?" you ask.

Well, because generally you want certain things to stand out more in your mix, and other things to be more in the background. Cutting/boosting certain frequencies can help you do this. Also, you can make things sound cleaner and clearer. It's sort of like having a toolkit for working on the details of your tracks.

Today we're just going to work with a very basic form of EQ-ing, called filtering.

Filtering lets you completely cut out a certain frequency range and just leave the remaining part. I want you to see what it sounds like when you filter your own voice. Basically, we're going to use filters to make our voices sound like they are coming through a telephone.
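
We'll build this effect with the EQ3 plugin in the steps below, but if you're curious what the same idea looks like in code, here is a minimal sketch (Python, assuming numpy and scipy; "my_voice.wav" is just a placeholder filename, and the 1 kHz / 2 kHz corners match the settings we'll dial in):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# Load a mono recording of your voice ("my_voice.wav" is a placeholder).
sample_rate, voice = wavfile.read("my_voice.wav")
voice = voice.astype(np.float64)

# Keep only roughly 1 kHz to 2 kHz. A 2nd-order Butterworth band-pass
# rolls off at about 12 dB/octave on each side, like the plugin setting.
sos = butter(2, [1000, 2000], btype="bandpass", fs=sample_rate, output="sos")
telephone = sosfilt(sos, voice)

wavfile.write("telephone_voice.wav", sample_rate, telephone.astype(np.int16))
```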

Please do this:
  1. Open your Pro Tools session.
  2. Pick a vocal track that you want to work with. Solo it by clicking the yellow S button.
  3. Pick a specific region and loop it. Hit play.
  4. Go to the Mix Window.
  5. In the dark grey section at the top of the track, click on one of the sets of double arrows.
  6. Click on Plugin>EQ>7-band EQ3
  7. In the sections called HPF and LPF, click the IN buttons.
  8. Now find the spots where it says 6dB/Octave. Turn up the knobs until it says 12dB/Octave.
  9. In the HPF section, turn up the Frequency knob until your voice starts to sound kind of thin. Make a note of what frequency this happens at (about 1kHz).
  10. Now in the LPF section, turn down the Frequency knob until the voice starts to muffle a little and sound more like it's coming through a phone. Note what frequency this happens at (about 2kHz). The frequency screen should look something like this:
  11. Play around with all these knobs and other parts of the EQ and see how they affect the sound of your track.

Fundamentals of Sound - part 1

Welcome back! Hope you had a restful/exciting break and are ready to get back into the wonderful world of audio.

So far we've discussed a bunch of different topics, from music theory to hip hop history to navigating software. Today we're going to talk a little about some fundamental audio concepts.

What is sound?

On the most basic level, sound is the vibration of molecules. Since we live in an air-filled atmosphere, sound for us is usually the vibration of air molecules.

Whenever there is any kind of movement or friction or impact in our air-filled environment, the air molecules get compressed and are pushed out of their normal position. They then react by springing back in the other direction. Same concept as pulling a piece of string tight and then plucking it; the molecules swing back and forth.

It's important to understand that these vibrations don't just stay fixed in one place; as the vibrating molecules get pushed out of place they bump into their neighbor molecules and cause those molecules to vibrate, causing their neighbors to vibrate, then those molecules bump into their neighbors, and so on. Basically, the vibrations spread out in all directions in waves, sort of like dropping a rock in a pool of water. This is how the sound gets to your ears. The waves move outward at a steady rate, but get weaker and weaker as they move farther and farther away from the source...


If we try to draw a picture of a sound vibration, we get something like this:
A picture like this is called a waveform.

If we zoom in really close, then we see something like this:



What this diagram is showing you is a single cycle of a sound, and in this picture we can see the two basic aspects of sound, which brings us to the main point of today's lesson...

FREQUENCY and AMPLITUDE!!!

On the most basic level, here is what you need to understand:

Frequency = pitch (Hz)

Amplitude = loudness (dB)

Now, more specifically, frequency is the number of cycles that happen in a single second. The faster the vibrations are, the more cycles happen per second, and the higher the pitch. In the waveform diagram above, the horizontal axis actually shows time: the closer together the cycles are packed, the more of them fit into each second, so the higher the pitch, and vice versa.

The unit of measurement for cycles per second is the Hertz (Hz). For example, Concert A vibrates at 440 cycles per second (440 Hz), so a single cycle takes 1/440 of a second, or about 2.3 milliseconds.

Amplitude is a little trickier to explain, but basically it is the amount of energy that goes into making the sound. In a waveform diagram like the one above, the height of the wave shows you how loud the sound is.

The unit of measurement of amplitude is the decibel (dB).
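
One thing that makes decibels tricky is that they compare two levels instead of counting something directly. Here is a quick sketch (Python) of the standard conversion between an amplitude ratio and dB:

```python
import math

def amplitude_ratio_to_db(ratio):
    """How many dB louder (or quieter) an amplitude change is."""
    return 20 * math.log10(ratio)

print(amplitude_ratio_to_db(2.0))    # doubling the amplitude -> about +6 dB
print(amplitude_ratio_to_db(0.5))    # halving the amplitude  -> about -6 dB
print(amplitude_ratio_to_db(10.0))   # 10x the amplitude      -> +20 dB
```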

Last and SUPER IMPORTANT thing to know for today:

The human range of hearing is approximately 20Hz to 20,000Hz.

With this information, we can start to get into really working with sound. Tune in next time for the wonderful world of harmonics, folks!

Wednesday, December 10, 2008

Recording with Pro Tools (part 2)


Probably the single most important part of an audio engineer's job is getting good levels BEFORE he/she starts recording.

Why is this so important?

For one thing, if the performer does a really good take, but you didn't bother to set things up right, then you are responsible for the quality not being as good as it could have been.

For another thing, once you have recorded, you are pretty much stuck with the performance. Sure, you can add all kinds of processing and do crazy stuff with it, but none of that can make it sound as good as if you really took the time to set things up right.

So what can you do to get the quality as good as possible? Couple things:
  1. Proper microphone placement.
  2. Setting a proper level at the preamp.
For setting a good recording level, the rule of thumb is this:

Try to get the level as loud as possible without ever clipping (hitting the red).

What is clipping?

Clipping is when a sound distorts digitally. It happens when the level of your track is louder than the computer can handle. Usually it is really obvious and it sounds like things are crackling in a really ugly way. Sometimes, though, it's hard to hear while it's happening, especially when you're listening on cheap headphones or monitors. But then when you listen back to your music on a better system, you suddenly realize that it's there. At that point you're stuck with it. This is why Pro Tools has a little red light at the top of each track meter to tell you when you are clipping.
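
Here is a tiny sketch (Python, assuming numpy) of what clipping actually does to a waveform - everything past the maximum gets flattened off, and those flat tops are the ugly crackle you hear:

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0, 0.01, int(sample_rate * 0.01))   # 10 ms of time points
sine = np.sin(2 * np.pi * 440 * t)                  # a clean 440 Hz tone

too_hot = 2.0 * sine                     # "recorded" at twice the maximum
clipped = np.clip(too_hot, -1.0, 1.0)    # the converter chops off the peaks

# The tops and bottoms of the wave are now flat. That distortion is baked
# into the recording - turning the track down later won't remove it.
```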

Always take the time to set levels and make sure you're not clipping before you start recording!!!

Wednesday, December 3, 2008

Recording with Pro Tools (part 1)


So, hopefully today we will start to actually do some recording with Pro Tools. From here on out, every time you do any recording, I want you to think of yourself as an engineer. As an engineer, you are responsible for the quality of the recording. In this class, you are going to follow some basic steps to make sure you get good quality.

Here is my basic recipe for setting up a recording session.
  1. First, transfer your Pro Tools session to the instructor station and make sure it opens correctly. Note: All PT sessions should be saved to the folder called Student PT Sessions
  2. Create a new track (mono, audio) to record your performer on and label it something like "Vox 1".
  3. Create a second new track and label it TB (for the talkback mic).
  4. Attach the microphone to the mic stand properly.
  5. Adjust the mic stand so that the mic is the appropriate height, angle and distance from the performer. Always make sure that the front of the mic is facing the performer!!!
  6. Plug all XLR (microphone) cables and headphones into the appropriate spots on the Digi 003 (or mBox).
  7. Attach a talkback microphone so that you can communicate with the performer.
  8. Turn on phantom power (if appropriate).
  9. Record enable your tracks by clicking on the little red R button.
  10. Adjust the levels of the two mics with the microphone input (aka preamp) knobs on the Digi 003 so that you are getting a strong signal but not clipping.
  11. Adjust the performer's headphone level so that he/she can hear both him/herself and the beat.
  12. Adjust the level of your own headphones.
  13. Check with your performer to make sure he/she is ready and start recording!

Monday, December 1, 2008

Intro to Pro Tools - part 4

Pro Tools organizes all of the information and files that go into a session in a very specific way. It is important for you to know a little bit about it because you're going to be moving your recording sessions between computers and you may run into a situation where a certain file is missing and you will have to go and find it. Where do you look?

First, let's ask ourselves what happens when we create a Pro Tools session...

When you first start a new session, this window pops up:

What you are doing in this window is deciding a couple of things:
  1. The name of the session
  2. What quality (Sample Rate, Bit Depth) you want your session to be (higher resolution = more memory required - see the quick calculation after this list)
  3. What file format (AIF, WAV, MP3, etc.) you want the audio to be
  4. Where on your hard drive you want to store the session
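
To see what "more memory" actually means, here is a quick back-of-the-envelope calculation (Python; uncompressed audio, ignoring file headers):

```python
def session_audio_mb(sample_rate, bit_depth, minutes, tracks=1):
    """Rough uncompressed audio size in megabytes (MB)."""
    bytes_per_second = sample_rate * (bit_depth // 8) * tracks
    return bytes_per_second * 60 * minutes / 1_000_000

# One 3-minute mono vocal at CD quality (44.1 kHz / 16-bit):
print(session_audio_mb(44100, 16, 3))    # ~15.9 MB
# The same 3 minutes at 96 kHz / 24-bit:
print(session_audio_mb(96000, 24, 3))    # ~51.8 MB
```
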
Then, once you hit "Save", Pro Tools creates a new folder containing some specific things. Take a look...
Here is the folder for my new session. If I go inside this folder, this is what I will see.

So what is all this stuff?
  • Pro Tools session file - this is the actual "file" that you open to work with Pro Tools. However, it doesn't actually have any audio in it. It's just a window that lets you work with files that are actually located somewhere else - sort of like one of those remote controlled robots that astronauts use.
  • Audio Files folder - this is where all your recordings and imported audio are actually saved. This folder is actually more important than the Pro Tools session file.
  • Session file backups - Pro Tools periodically does an automatic save for you, just in case your computer crashes, or some catastrophe hits. You can load these files to recover your work.
  • Wavecache - this file stores the waveform overviews (the drawn pictures of your audio) so Pro Tools doesn't have to recalculate them every time you open the session.
  • The other folders (Region Groups, Video Files, Fade Files) you don't need to worry about at this point. They all store information related to specific processes in Pro Tools that we will get to later. Just be aware that they are there!
The most important thing that you need to understand is this: a Pro Tools session file is nothing by itself.

Key Point
When you are trying to move or back up a copy of your Pro Tools session, you must copy the entire folder that contains all the other folders inside it, especially the Audio Files folder!!!
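
If you ever script your backups, the same rule applies in code. A minimal sketch (Python; both folder names are placeholders for your own paths):

```python
import shutil

# Copy the WHOLE session folder - session file, Audio Files, fades,
# backups, everything - never just the session file by itself.
shutil.copytree("My Song", "/Volumes/Backup Drive/My Song")
```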



Recording Projects - prep work

Today I want you to do a few things in preparation for working on your recording projects.
  1. Go into the Media Share>Class Materials>Project Worksheets and find the file called "Project Worksheet.odt". Copy it to your folder and open it. Please fill out this form so that Corina and I know what you are working on and can help you plan your project. IMPORTANT: be sure to save this file as "(your name)_Project Worksheet.odt", then copy it back to the Project Worksheets folder
  2. If you have a Reason beat that you are going to be recording vocals over, you need to get the beat into Pro Tools. To do this, you will have to Export audio files of all the individual instruments out of Reason and Import them into a Pro Tools session. (See below for instructions on how to do this.)
  3. If you need to finish writing lyrics, then please take time to do that today.
  4. If you still need to finish a beat, then do that.
So, everyone should have stuff to work on. If you need suggestions, ask Chris or Corina...

Click on the links for info on how to do the following:
Exporting Audio out of Reason
Importing Audio into Pro Tools