Twenty Questions about using Microphones

Posted on Updated on

Q#1 – Where should you place a microphone (ideally) to record vocals?

Firstly, if you clap your hands and hear an echo, you should consider deadening the room using carpet, blankets, drapes, or other sound-absorbing materials. Move your mic setup toward the room’s center to avoid reflective surfaces (walls, glass, etc.). The vocalist should be roughly 6 – 8 inches away from the microphone. Getting too close to the microphone tends to increase bass response and can create problems with plosive sounds – those popping Ps, Bs, Ds, and Ts. Moving much farther away increases the risk of picking up room ambience and making the vocalist sound like they are “in a bowl”. A constant distance from the microphone will provide the most consistent tonal balance

Q#2 – What’s a simple way to prevent unwanted noise?

Use a pop shield between the mic and the vocalist to prevent popping on “B” and “P” sounds. A nylon stocking stretched over a wire (or wooden) hoop works. Place the shield midway between the mouth and the microphone

Q#3 – Is there a way to test microphone placement?

To find the best location, wear fully enclosed headphones to monitor the microphone output while you move the mic around the performer. As he/she works through the material, you can choose the best microphone position by ear

Q#4 – What is “close miking”?

Close miking means placing the mic at a distance of 1 inch to about 1 foot from the sound source. This technique generally provides a tight, present sound quality and does an effective job of isolating the signal and excluding other sounds in the acoustic environment

Q#5 – What is “distance miking”?

Distant miking refers to the placement of microphones at a distance of 3 feet or more from the sound source. This technique allows the full range and balance of the instrument to develop and it captures the room sound. This tends to add a live, open feeling to the recorded sound, but careful consideration needs to be given to the acoustic environment

Q#6 – What is “ambient miking”?

Ambient miking means placing the microphones at such a distance that the room sound is more prominent than the direct signal. This technique is used to capture audience sound or the natural reverberation of a room or concert hall

Q#7 – How do you reduce the risk of phase anomaly when “stereo miking”?

The risk of phase anomaly can be reduced by using the X/Y method, where the two microphones are placed with the grilles as close together as possible without touching, at an angle of 90 to 135 degrees to each other. This technique uses only amplitude, not time, to create the stereo image, so phase discrepancies are unlikely
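
To see why coincident capsules matter, here’s a rough back-of-the-envelope sketch of my own (not from any microphone spec) of the phase offset a path-length difference creates between two mics, assuming sound travels at roughly 343 m/s:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def phase_shift_degrees(mic_spacing_m: float, frequency_hz: float,
                        extra_path_m: float) -> float:
    """Phase difference between two mics when the source's path to one
    mic is longer by extra_path_m (worst case: extra_path_m = spacing)."""
    delay_s = extra_path_m / SPEED_OF_SOUND
    return (delay_s * frequency_hz * 360.0) % 360.0

# Spaced pair 60 cm apart, source off to one side: big offset at 1 kHz
spaced = phase_shift_degrees(0.6, 1000.0, extra_path_m=0.6)

# X/Y pair with capsules ~2 cm apart: the offset stays small
coincident = phase_shift_degrees(0.02, 1000.0, extra_path_m=0.02)

print(round(spaced, 1), round(coincident, 1))
```

With the spaced pair, a source off to one side arrives at the far mic nearly 270 degrees out of phase at 1 kHz; with near-coincident X/Y capsules the offset stays small enough to be harmless.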

[Diagram: Spaced Pair vs. X/Y Method]

Q#8 – How can you use microphone placement to reflect different sound character?

When you are finding a microphone placement for your singer, make sure to move the mic around sideways and up & down to see if you can get a better sound. Get closer and farther away. Change the angle and experiment with different polar patterns (pickup patterns). When you do that, you’ll notice the sound changing character: a breathy sound up close, a more natural sound farther away. There are a lot of subtle changes in voice character in relation to the mic’s position. Keep in mind the style and spirit of the song. Some songs need a different character of voice (i.e. bright and bold vs. soft and dreamy). The singer can also change positions and vocal techniques during the song to change the character. This is the real art in mic placement and technique. There is no shortcut to this other than experience

Q#9 – What is “bleeding” and how do you avoid it?

Bleeding occurs when the signal is not properly isolated and the microphone picks up another nearby instrument. This can make the mixdown process difficult if there are multiple voices on one track. Use the following methods to prevent leakage:
 
  • Place the microphones closer to the instruments
  • Move the instruments farther apart
  • Put some sort of acoustic barrier between the instruments
  • Use directional microphones

Q#10 – Do foam wind shields prevent popping?

Some microphones come with foam wind shields that fit over the microphone grille, but in practice they tend to be ineffective against anything more than a gentle breeze, and they are no match for a full-on plosive. Furthermore, the thickness of foam invariably absorbs some high frequencies, causing the sound to become noticeably duller than it should be. Wind shields can be handy in live performance to stop the mic filling with drool, but they have a very limited effect on popping

Q#11 – How do you mike an acoustic guitar?

There are two optimum points for microphone positioning – either near the bridge or by the twelfth fret. Placing the microphone in front of the instrument’s sound hole usually increases low frequency response to the point of making the instrument sound “boomy.”
 
Twelfth Fret Placement: Placing the microphone roughly 2 – 4 inches from the twelfth fret and aimed directly at the strings will generally produce a warm, full bodied sound with good tonal balance. Using this technique, the sound hole’s contribution will be moderated since the microphone is not pointed directly at it.
Bridge Placement: Similarly, you can position the microphone so it is 3 – 6 inches from the guitar’s bridge. This will generally produce a somewhat brighter tonal quality. You should also be prepared to experiment positioning the microphone slightly off-axis should you find yourself capturing too much low frequency response from the guitar’s sound hole

Q#12 – How do you mike a piano?

Ideally, you’ll want a minimum of two microphones. Usually, the microphone capturing the higher strings is assigned to the left channel and the microphone capturing the lower strings is assigned to the right channel in the final stereo mix, though the stereo spread generally is not hard left and right. While a single microphone can be used, the lower and upper extremities of the instrument will likely be compromised.

If you are using a single microphone to record a grand piano, position the microphone approximately 8 inches from the piano hammers (to reduce mechanical noise) and 8 – 11 inches above the strings – centered over the piano’s mid point. Pan position should be centered and the piano’s lid should be at full stick

[Diagram: grand piano mic positioning]

Using a single microphone for an upright piano, it is generally recommended that you record from above, as placement of the microphone in the lower center may interfere with the performer’s ability to access the pedals and the microphone will likely pick up excessive pedal and other mechanical noise. Position the microphone just over the open top, centered over the instrument:

[Diagram: upright piano mic positioning]

Q#13 – How do you mike a drum kit?

Stereo Overhead Pair: Position the two microphones approximately 16 – 20 inches above the performer’s head – separated laterally by roughly 2 – 3 feet and placed 5 – 6 feet out in front of the drum kit. Adjust the two microphone’s Pan position so that you achieve a good stereo spread, though generally not hard left and right
Single Overhead Microphone: Position the microphone approximately 16 – 20 inches above the performer’s head – centered in front of the drum set, and placed 5 – 6 feet out in front. The microphone’s Pan position should be centered for mono drums

[Diagram: drum kit mic positioning]

Q#14 – How do you mike an amplified speaker?

The mic should be placed 2 to 12 inches from the speaker. Exact placement becomes more critical at a distance of less than 4 inches. A brighter sound is achieved when the mic faces directly into the center of the speaker cone and a more mellow sound is produced when placed slightly off-center. Placing off-center also reduces amplifier noise. A bigger sound can often be achieved by using two mics. The first mic should be a dynamic mic, placed as described above. Add to this a condenser mic placed at least 3 times farther back (remember the 3:1 rule), which will pick up the blended sound of all speakers, as well as some room ambience. Run the mics into separate channels and combine them to your taste
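
As a quick illustration of the 3:1 rule mentioned above (a hypothetical calculation of my own, assuming the simple inverse-distance law for a point source):

```python
import math

def level_drop_db(distance_ratio: float) -> float:
    """Level reduction (dB) for a source at distance_ratio times the
    reference distance, using the inverse-distance (6 dB/doubling) law."""
    return 20.0 * math.log10(distance_ratio)

# The 3:1 rule: a second mic at 3x the distance picks the source up
# roughly 9.5 dB quieter than the close mic does.
print(round(level_drop_db(3.0), 1))
```

That ~9.5 dB difference is usually enough to keep leakage between the two mics from causing audible comb filtering when their signals are blended.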

Q#15 – How does a Dynamic microphone work?

A microphone is a transducer, a device that changes information from one form to another. Sound information exists as patterns of air pressure; the microphone changes this information into patterns of electric current. In the magneto-dynamic, commonly called dynamic, microphone, sound waves cause movement of a thin metallic diaphragm and an attached coil of wire. A magnet produces a magnetic field which surrounds the coil, and motion of the coil within this field causes current to flow. The principles are the same as those that produce electricity at the utility company, realized in a pocket-sized scale. It is important to remember that current is produced by the motion of the diaphragm, and that the amount of current is determined by the speed of that motion. This kind of microphone is known as velocity sensitive

Q#16 – How does a Condenser microphone work?

In a condenser microphone, the diaphragm is mounted close to, but not touching, a rigid backplate. (The plate may or may not have holes in it.) A battery is connected to both pieces of metal, which produces an electrical potential, or charge, between them. The amount of charge is determined by the voltage of the battery, the area of the diaphragm and backplate, and the distance between the two. This distance changes as the diaphragm moves in response to sound. When the distance changes, current flows in the wire as the battery maintains the correct charge. The amount of current is essentially proportional to the displacement of the diaphragm, and is so small that it must be electrically amplified before it leaves the microphone

Q#17 – What are the various microphone pick-up patterns?

The most common polar (pick-up) patterns are: omnidirectional (picks up sound equally from all directions), cardioid (a heart-shaped pattern that is most sensitive at the front and rejects sound from the rear), supercardioid and hypercardioid (a tighter front pick-up with a small rear lobe), bidirectional or figure-8 (picks up front and back while rejecting the sides), and shotgun (highly directional, used at a distance)

Q#18 – What is a “ribbon” microphone?

 
Also known as a ribbon velocity microphone, a ribbon microphone uses a thin, electrically conductive ribbon placed between the poles of a magnet to generate voltages by electromagnetic induction. Ribbon microphones are typically bidirectional, meaning they pick up sounds equally well from either side of the microphone

Q#19 – What is “reflection” & “absorption” of sound waves referring to?

Sound waves are reflected by surfaces if the object is at least as large as the wavelength of the sound. Reflection is the cause of echo (simple delay), reverberation (many reflections causing the sound to continue after the source has stopped), and standing waves (where the distance between two parallel walls is such that the original and reflected waves reinforce one another in phase). Sound waves can also be absorbed by materials rather than reflected. This can have both positive and negative effects depending on whether you desire to reduce reverberation or retain a live sound
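
The parallel-wall standing waves mentioned above occur at predictable frequencies. A small sketch of my own (assuming 343 m/s for the speed of sound) of the axial room-mode formula f_n = n·c/(2·L):

```python
def axial_mode_frequencies(wall_spacing_m: float, count: int = 3,
                           speed_of_sound: float = 343.0) -> list:
    """First few axial standing-wave (room mode) frequencies in Hz
    between two parallel walls: f_n = n * c / (2 * L)."""
    return [round(n * speed_of_sound / (2 * wall_spacing_m), 1)
            for n in range(1, count + 1)]

# A room with walls 4 m apart reinforces ~43 Hz and its multiples
print(axial_mode_frequencies(4.0))
```

This is why small untreated rooms often sound boomy at specific bass notes: those notes line up with a room mode.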

Q#20 – What is “diffraction” & “refraction” of sound waves referring to?

Objects that may be between sound sources and microphones must be considered due to diffraction. Sound will be stopped by obstacles that are larger than its wavelength, so higher frequencies are blocked more easily than lower frequencies. Sound waves also bend (refraction) as they pass through mediums of varying density; wind or temperature changes can make sound seem to move in a different direction than expected
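
Since diffraction depends on wavelength, it helps to know how big sound waves actually are. A trivial sketch using lambda = c/f (again assuming 343 m/s):

```python
def wavelength_m(frequency_hz: float, speed_of_sound: float = 343.0) -> float:
    """Wavelength of a sound wave in air: lambda = c / f."""
    return speed_of_sound / frequency_hz

# A 100 Hz bass note is ~3.4 m long and flows around most obstacles;
# a 10 kHz overtone is ~3.4 cm and is easily blocked.
print(round(wavelength_m(100), 2), round(wavelength_m(10_000), 4))
```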

Recording #3 (x3) (+1 to pick up the slack from last week)


Ok so i wanted to understand why nailing the vocal track is so problematic. i chose to “learn” some songs from a CD Album received a month ago as a Christmas gift, and record the “sing-along” in the kitchen. Songs are newish to me so as to not rely on long-term memory.

A lot tougher than i thought! Gave up trying for perfect lyrics (& staying on key). Ended up with a mixed bag of recordings from one song i could live with:

 
Artist:  Shane & Shane/ Song Title:  Your Love

Recorded songs as they were playing on CD through a home stereo in the kitchen using a Zoom H6 Recorder with the MS mic recording the room sound in stereo and a second dynamic microphone recording the sing-along vocals on a separate mono track.  Imported files into Audacity and experimented with basic editing & a few effects:

Recording #3 Take 1:   tried to hide the nastiness of the mono vocal track with effects

#3 Take 1 link to Soundcloud here:     https://soundcloud.com/sue-cunha/zoom0004-kitchen-blend       

Recording #3 Take 2:  used the “Paulstretch” effect on a section of the recording to create a haunting sound

#3 Take 2 link to Soundcloud here:    https://soundcloud.com/sue-cunha/the-haunting

*****************************************************************************************************************************************

Lets try something else:

An oldie:   The Allman Brothers  “Soulshine”

Recording #3 Take 3:  Just singing along with the Allman Brothers in my kitchen

#3 Take 3 link to Soundcloud here:    https://soundcloud.com/sue-cunha/zoom0013-allman-bros

Note: Soundcloud was not happy with the “copyright infringement” of this guerrilla recording. Re-uploaded using “private” settings: https://soundcloud.com/sue-cunha/zoom0013-soulshine-singalong but still NFG so y’all will have to miss out on my rendition of “Soulshine”

– probably for the best that it stays in the kitchen lol (but so much for picking-up-the-slack)

How about recording sound at the floor level?

Recording #3 Take 4:  Flamenco Shoes (warm-up): 

#3 Take 4 link to Soundcloud here:    https://soundcloud.com/sue-cunha/zoom0025-flamenco-warmup

I have no idea who the Artist is (to my shame)…Recording was part of a summer party playlist assembled by others

Made the recording as the song was playing on CD through a home stereo in the kitchen using a Zoom H6 Recorder.  The MS mic was recording the room sound in stereo and a second dynamic microphone was set up on a stand contorted so that the mic was recording the foot action on a separate mono track at about two feet above floor level

 

Findings!!

– using a handheld dynamic microphone for vocals allows for physical movement and artistic vocal expression – but takes practice for the artist to achieve the desired sound
– feels like your performance is improved when you are singing along with the sound of prerecorded music pumping-it-up in the room, BUT end result is closer to laziness (off key/poor tone/timing issues/and vocals often “not connected”)  if artist is relying on the recorded lyric track to carry the song
– editing & effects may cover a multitude of sins

Recording #2 on the ground @ iFly Toronto


Top left: Foundations complete
Bottom left: Structural Steel complete
Right: Looking up at the base of the wind tunnel from the basement floor

Made a visit to one of our Projects underway in Oakville this week to take site photos of the progress of the Work

“Skyventure” iFly Toronto indoor Sky-Diving facility

It is an interesting building with unique structural & architectural features, and i thought it would be cool to record sound in the basement plenum space. The foundation is constructed of 14" concrete walls reinforced with 1" re-bar in a 12" grid pattern, and the building footprint is 100 ft long by 50 ft wide with a basement depth of 25 ft.

Standing in the centre of the basement looking up, you see an opening in the base of a 14 ft diameter, 45 ft long cylindrical tube. The opening appears to be roughly 1/10th of the diameter of the floating chamber where you chill until the plenum is fully pressurized by four massive motorized louvers & propellers, generating an airstream capable of reaching wind speeds of up to 300 kph to simulate the free fall experience.

The distance from the basement floor to the service deck is 80ft (8 stories high)

i used the Voice Note Recording feature on a BlackBerry (Torch), emailed the file to myself and imported it into Audacity

discovered that the file format was not recognized and was prompted to download the FFmpeg library

exported it as a WAV file

listen to the recording here:

http://soundcloud.com/sue-cunha/ifly-basement-plenum-sound

*************************************************************

So how does one download & install the FFmpeg Import/Export Library?

The optional FFmpeg library allows Audacity to import and export a much larger range of audio formats including AC3, AMR(NB), M4A, MP4 and WMA and also audio from most video files.

  • Because of software patents, Audacity cannot include the FFmpeg software or distribute it from its own web sites. Instead, use the following instructions to download and install the free and recommended FFmpeg third-party library.
Warning: FFmpeg 0.6.2 for Windows and Mac listed below should be used with the latest version of Audacity from http://audacity.sourceforge.net/download/.

Windows

  1. Go to the external download page
    Left-click this link, do not right-click.
  2. Look for “For FFmpeg/LAME on Windows”, then a few lines under that, left-click the link FFmpeg_v0.6.2_for_Audacity_on_Windows.exe and save the file anywhere on your computer.
  3. Double-click “FFmpeg_v0.6.2_for_Audacity_on_Windows.exe” to launch the installer (you can safely ignore any warnings that the “publisher could not be verified”).
  4. Read the License and click Next, Next and Install to install the required files to “C:\Program Files\FFmpeg for Audacity” (or “C:\Program Files (x86)\FFmpeg for Audacity” on a 64-bit version of Windows).
  5. Restart Audacity if it was running when you installed FFmpeg.
  6. If you have problems with Audacity detecting FFmpeg, follow the steps to manually locate FFmpeg.
  • Alternative zip download for FFmpeg 0.6.2
  1. Download http://lame3.buanzo.com.ar/FFmpeg_v0.6.2_for_Audacity_on_Windows.zip from the external download site.
  2. Extract the contents to a folder called “FFmpeg_v0.6.2_for_Audacity_on_Windows” anywhere on your computer, then follow the instructions below to locate avformat-52.dll using the Libraries Preferences.

BROVOS…CV Trailer


click link to view:      BROVOS

click link to view:      FILM PRODUCTION PLANNING DOCUMENT

Ten Questions about Audacity (DAW)


Question #1     What is Audacity’s default bit recording quality?  32 bit

  • Supports 16-bit, 24-bit and 32-bit (floating point) samples (the latter preserves samples in excess of full scale).
  • Sample rates and formats are converted using high-quality resampling and dithering.
  • Tracks with different sample rates or formats are converted automatically in real time.

Recording in 32-bit quality takes a lot more work than recording in 16-bit, and a slower computer may not be able to keep up, with the result being lost samples. If you are recording for immediate export without editing, 32-bit recording may offer no advantage over 16-bit recording if you only have a 16-bit sound device. Most built-in consumer sound devices on computers are only 16-bit (including cheap sound cards)

You can change the default sample format Audacity records at to 24 bit or 16 bit by going to the Quality tab of Preferences.
Record at 24-bit depth on Windows (using Windows WASAPI host), Mac OS X or Linux (using ALSA or JACK host).

also:

  • Record at very low latencies on supported devices on Linux by using Audacity with JACK
  • Record at sample rates up to 192,000 Hz (subject to appropriate hardware and host selection). Up to 384,000 Hz is supported for appropriate high-resolution devices on Mac OS X and Linux
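
A small numpy sketch of my own (an illustration, not Audacity’s internals) of why 32-bit float storage “preserves samples in excess of full scale” while 16-bit integer storage cannot:

```python
import numpy as np

# A gain boost pushes one sample past full scale (1.0).
signal = np.array([0.5, 0.9, 1.4], dtype=np.float32)

# 32-bit float keeps the overshoot intact; it can be turned down later
# in editing without any loss.
assert float(signal.max()) > 1.0

# 16-bit integer storage must clamp at full scale (32767), so the
# over-range peak is permanently clipped.
int16 = np.clip(signal * 32767, -32768, 32767).astype(np.int16)
print(int16)  # the 1.4 sample is stuck at 32767
```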

Question #2     How can I split a long recording into multiple files or CD tracks? 

Silences between tracks can be automatically detected and labeled using Analyze > Silence Finder (Audacity legacy 1.2.6 Plug-in Pack)

Manually:

  1. Click to place the cursor at the start of the first song
  2. Choose “Add Label at Selection” from the Project menu (or Tracks menu in Audacity Beta). If you wish, you can type the name of the song
  3. Repeat steps 1 and 2 for each song
  4. When you are finished, choose “Export Multiple” from the File menu. When you click the “Export” button, Audacity will save each song as a separate file, using the format and location you choose
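
For the curious, the silence-detection idea behind Silence Finder can be sketched in a few lines of numpy. This is a rough stand-in of my own, not the actual plug-in code; the threshold and gap length are arbitrary choices:

```python
import numpy as np

def split_on_silence(samples: np.ndarray, rate: int,
                     threshold: float = 0.01, min_gap_s: float = 1.0):
    """Return (start, end) sample indices of regions louder than
    threshold, separated by at least min_gap_s of near-silence."""
    loud = np.abs(samples) > threshold
    regions, start, quiet_run = [], None, 0
    gap = int(min_gap_s * rate)
    for i, is_loud in enumerate(loud):
        if is_loud:
            if start is None:
                start = i
            quiet_run = 0
        elif start is not None:
            quiet_run += 1
            if quiet_run >= gap:           # long enough silence: close region
                regions.append((start, i - quiet_run + 1))
                start, quiet_run = None, 0
    if start is not None:                  # recording ended mid-song
        regions.append((start, len(samples)))
    return regions

# Two 'songs' separated by 2 s of silence, at a toy 100 Hz sample rate
rate = 100
audio = np.concatenate([np.ones(300) * 0.5, np.zeros(200), np.ones(150) * 0.5])
print(split_on_silence(audio, rate))
```

Each returned region would become one label, and Export Multiple would then write one file per labeled region.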

Question #3     How do you capture a quality voice recording?

Audacity can record live audio through a microphone or mixer, or digitize recordings from cassette tapes, records or minidiscs

 

  • Record from microphone, line input, USB/Firewire devices and others
  • Timer Record and Sound Activated Recording features
  • Dub over existing tracks to create multi-track recordings

Dealing with Technical Issues

Dynamic Range

One feature of recording people speaking is uncertainty of recording level. Speakers vary in volume, and may not be aware of the best microphone techniques, so may, for example, stand in different positions relative to the microphone. In some cases, such as meetings and conference recording, there may also be remote participants who are being heard through a radio or television receiver. The result is wide recording level variation

Rather than record at the final bit depth wanted (let’s say 8 bits), with digital recording one can record at greater bit depth and set the recording level relatively low (say 10 dB to 20 dB below the 0 dB distortion level). This retains plenty of dynamic range but avoids the risk of speakers who are louder than others causing clipping, which results in unpleasant sound quality
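
Checking how much headroom a recording actually leaves is a one-line calculation. A minimal sketch of my own (assuming samples normalized to ±1.0, with 0 dBFS as the clipping point):

```python
import math

def peak_dbfs(samples) -> float:
    """Peak level of a block of samples (normalized to +/-1.0) in dBFS,
    where 0 dBFS is the digital clipping point."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak)

# Peaks around 0.3 leave roughly 10 dB of headroom for an
# unexpectedly loud speaker.
print(round(peak_dbfs([0.05, -0.2, 0.3]), 1))
```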

Sample Rates

There are also pros and cons about recording at different sample rates. The sample rate of the recording determines the highest frequencies that can be captured. Generally, lower sample rates are acceptable in speech recording where they are not in music, because voices (especially male) have a lower upper range of fundamental frequencies than instruments. Also, by the nature of the different sounds made when speaking and singing, it’s less important for quality reasons to capture the higher overtone frequencies in speech. In any case, the higher the sample rate you do record at, the more CPU time and disc space will be used
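
The link between sample rate and “the highest frequencies that can be captured” is the Nyquist limit: half the sample rate. A trivial sketch:

```python
def highest_capturable_hz(sample_rate_hz: int) -> float:
    """Nyquist limit: the highest frequency a sample rate can represent."""
    return sample_rate_hz / 2

# 22,050 Hz is plenty for speech; 44,100 Hz covers the full audible range
print(highest_capturable_hz(22_050), highest_capturable_hz(44_100))
```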

Multi Channel Recording

Where speakers don’t stand close to the microphone, multi-channel recording helps to keep all speakers above the room noise level, and clearly audible

For meetings it may be useful to place several microphones around the room, recording each microphone on its own channel. The multiple channels can be mixed down to mono later, selecting for each speaker whichever channel gives the highest ratio of speech to room noise. When post-processing, simply choose one channel for each speaker, mute the others, then mix down
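
The per-speaker channel selection described above can be sketched with numpy. This is an illustrative toy of my own (fixed one-second segments, highest-RMS channel wins), not a production mixdown tool:

```python
import numpy as np

def best_channel_mixdown(channels: np.ndarray, rate: int,
                         segment_s: float = 1.0) -> np.ndarray:
    """Mix a multi-mic recording to mono by, for each segment, keeping
    only the channel with the highest RMS (the mic nearest the talker)."""
    seg = int(segment_s * rate)
    out = np.zeros(channels.shape[1])
    for start in range(0, channels.shape[1], seg):
        block = channels[:, start:start + seg]
        rms = np.sqrt((block ** 2).mean(axis=1))
        out[start:start + seg] = block[int(rms.argmax())]
    return out

# Toy example: speaker A on mic 0 for 1 s, then speaker B on mic 1
rate = 100
mic0 = np.concatenate([np.ones(100) * 0.5, np.ones(100) * 0.05])
mic1 = np.concatenate([np.ones(100) * 0.05, np.ones(100) * 0.5])
mono = best_channel_mixdown(np.stack([mic0, mic1]), rate)
print(mono[0], mono[150])
```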

Once again, more channels mean greatly increased CPU use and greater use of disc space. It’s important to test the hardware in multi-channel mode in advance, as running out of CPU capacity could cause recordings to have drop-outs or fail completely

Where simplicity is required, using only two microphones in different positions can still significantly improve the end result

Reliability

The generally more stable nature of Linux or Unix operating systems may mean a reduced chance of a recording failure if you record with these systems rather than with Windows (other things being equal).  Up to date sound card drivers specific to your hardware are more reliable than generic drivers when recording. Be prepared so you can quickly reinstall sound drivers between events if necessary

Shutting down unnecessary applications and processes so that the recording has most of the available CPU to itself is important – especially on slower and older machines with less RAM. Don’t make text notes on the PC that is recording, or do anything but record with it. Such actions are likely to cause recording skips. Consider making a checklist for any important recording. You may want to do a last minute check that you’ve got power settings set to always on, screensaver off, levels set right and so on before you record. Backing up audio files as soon as possible using cloud based storage reduces the chance of data loss

Microphones

Never use a built-in microphone that comes with a laptop, MP3 recorder or tape deck. Such microphones pick up lots of noise from the device’s drive or from the deck motors or tape

While microphones are usually set on stands for formal events, for meetings of a handful of people held round a table, microphones on the table may be sufficient. Always place the microphone on something soft and squashy so that sounds and vibrations transmitted through the table – of which there are usually many – are not picked up directly. The microphone lead itself should sit on the squashy item before it reaches the table, as some sound and vibration can be passed up a short length of cable.  The squashy items should be stable however; sponges fail in this respect! Folded clothing works fine, and the informal appearance helps put speakers a bit more at ease.  Alternatively, microphones hung overhead avoid vibration and disturbance

Speaking Technique

Generally the less attention speakers pay to microphones the better their talk, and a way to minimize awareness is to not even mention the subject. Hidden microphones can put speakers more at ease. They still know the microphone is there, but not being repeatedly reminded of it helps. A basic way to create a hidden microphone is to cover a black-coloured microphone and its visible section of lead with a layer of lightweight open-weave cloth

Question #4     How do i improve recording quality?

The built-in sound card that comes with many computers is quite poor. It may be satisfactory for playing sound effects, but not good enough for high-quality recording. Worst offenders are the built-in sound on your motherboard, or any audio device on a laptop.  Some tips to reduce noise on your current system:

  • Mute playback of devices that you don’t use for recording – such as MIDI Synth, CD Audio, TAD-In, Auxiliary, Microphone, Line In. Only “un-mute” devices to be used.
  • Update sound drivers
  • Consider shielding your soundcard 
  • If possible, insert your soundcard into a PCI slot which has a dedicated “Interrupt Request (IRQ) Channel”, as described in your motherboard handbook. Except for dual processor motherboards, there will probably be 4 electronic IRQ channels used to assign IRQs. (This is not the same thing as the 16 virtual IRQs we usually talk about.) For example, on my ASUS CUV4X mobo, Interrupt Request channel “A” is shared by AGP (reportedly noisier than PCI video cards) & PCI-slot1 (leave blank if AGP is in use) & PCI-slot5 (empty). Int-“B” is shared by AGP & PCI-slot2 (NIC – noisy). Int-“C” is a dedicated electronic channel, taking hardwire interrupt pulses generated solely by the device installed in PCI-slot3 (soundcard). Int-“D” is shared by PCI-slot4 (SCSI – noisy) and USB-controller (mouse, keyboard, etc. – very noisy). If I install my soundcard in any slot other than PCI-slot3, the result is a scratching sound, like a loose connection at an input jack. But, it comes from the mouse pulses (slot controlled by Int-“D”) or from video rewrites (slot controlled by either int-“A” or “B”)
  • Even if you are using a ‘silent PC’ (one with passive cooling rather than a fan) you will still need sound insulation between it and your mic (a piece of felted board will do) as the hard drives are not silent
  • If you are using any outboard (externally powered) audio hardware, make sure all the equipment is plugged into the same power strip. Grounding issues can cause ground loops, which will appear in your recording as a hum
  • Cheap sound hardware, anywhere in the analog chain, will result in poor quality recorded sound

Sound cards

Buy a new sound card – especially if you were using your computer’s built-in audio capabilities before. The sound card’s ADC, or analog-to-digital converter, is the final step in your analog audio chain. Consider buying an external USB audio interface. The main advantage is that the A/D converter is then outside your computer’s case, which keeps electrical noise to a minimum – only the digital signal gets transmitted back to your computer. Another advantage is that you don’t have to open up your computer to install anything, just plug it in and go (after installing any software drivers required by the device). It’s easier to share it between multiple computers, too. Make sure other USB devices are unplugged if not being used, as USB 1.x in particular has limited bandwidth. Even things like network cards can interfere with USB audio, so disable them

Microphones, more than any other single piece of hardware, will impact the quality of your recorded sound

  • If you are doing studio work, a condenser microphone (rather than a dynamic) will probably be the most suitable. They have greater accuracy and dynamic and transient response compared to dynamic microphones. For live recordings, professional dynamic microphones may be preferable – they will be less prone to picking up extraneous stage and audience noise
  • If you use a professional microphone, you’ll need a good preamp. The “mic” input on a sound card has a preamp behind it, but it’s usually not very good quality and will usually not provide sufficient amplification for the low outputs of pro microphones
  • Note also that built-in computer 1/4 inch mic ports are almost always mono and unbalanced. Built-in computer line-in ports are almost always unbalanced. Unbalanced inputs mean you must keep the cable short to prevent interference and muffling, but this increases the interference risk from being too close to the computer. For this reason, many external USB and firewire recording interfaces will provide balanced inputs and outputs
  • Don’t forget accessories like microphone stands and cables. Use XLR cables 

Question #5     How do you set recording levels?

The level at which you record your audio is very important. If the level is set too low, your audio will have background noise when you turn the volume up to hear it properly. If the level is too high, you will hear distortion. The process of testing the recording level without actually recording is called monitoring. To do this in Audacity, you need to use the Meter Toolbar:

Metertoolbar.png
If the meters are not visible, click View > Toolbars > Show Meter Toolbar
or in legacy 1.2.x go to the Interface tab of Preferences and check Enable Meter Toolbar

In the image above, the left-hand VU Meter with the green bars measures the playback level, and the right-hand meter with the red bars measures the recording level. Assuming you are recording a stereo track, the upper bar marked “L” refers to the left-hand channel, and the lower bar “R” refers to the right-hand channel. The values on the meter are negative decibel values below the distortion level, where the distortion level has a value of zero (0). Hence the smaller the negative values become, and the closer the meter reads to the right-hand edge of the scale, the closer you are to the maximum possible level without distortion

To start monitoring, look at the right-hand recording meter just to the right of the recording symbol, and click the downward-pointing arrow:

Metertoolbarmonitor.jpg

This reveals a dropdown menu:

Monitormenu.jpg

The menu has several options. “Vertical Stereo” will rotate the meter so that the zero level is at the top, and “Linear” will change the meter scale so that the values read from zero to 1.0 where the distortion level has a value of 1.0

Now you can start singing into your microphone, playing your guitar (or your record or tape), and you will see the red recording bands move in real time with the loudness of the input signal. If you can’t hear what you are recording, this doesn’t matter for the purposes of the level test because the meters will indicate the level accurately

For most purposes, an optimal recording level is such that when your input is at its loudest, the maximum peak on the meters is around –6.0 dB (or 0.5 if you have your meters set to linear rather than dB). This will give you a good level of signal compared to the inherent noise in any recording, but without creating distortion. Distortion is often referred to as clipping, because samples above the maximum representable level are simply cut off at that point
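
The dB and linear meter scales are related by dB = 20·log10(linear). A couple of helper functions make the –6 dB ≈ 0.5 equivalence concrete:

```python
import math

def db_to_linear(db: float) -> float:
    """Convert a dBFS value to the 0..1 linear meter scale."""
    return 10 ** (db / 20.0)

def linear_to_db(linear: float) -> float:
    """Convert a 0..1 linear meter value to dBFS."""
    return 20.0 * math.log10(linear)

# The -6 dB target on the dB meter sits at about 0.5 on the linear scale
print(round(db_to_linear(-6.0), 3))
print(round(linear_to_db(0.5), 2))
```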

Enlarging the Meter Toolbar by clicking and dragging may help gauge levels more accurately

To adjust the input level itself, use the right-hand slider (by the microphone symbol) on the Mixer Toolbar:

Mixertoolbar input slider.png

Move the slider left or right until the recording level settles at about -6 dB. If the meter bars drift so far to the right of -6 dB that they touch 0, a red indicator will appear to the right of the meter bar, as in this image where the left-hand channel has at some stage peaked beyond the distortion level:

Metertoolrec.png

As soon as you see the red indicator you’ll know you have increased the input level too far, so you will need to move the input volume slider back to the left. Note that the achieved recording level is a combination of both the input level you record at and the output level of the source. If you find you achieve near-maximum levels on the recording meter with only a very low setting on the input slider, the recording may sound excessively close and unnatural; in this case you may want to cut back the output level somewhat if you can. Similarly, if you can’t get close to maximum levels on the meter even with the input slider at maximum, try turning up the output level

If you find the meters don’t respond at all to the input slider, double-check that you are recording from the correct input source as selected in the dropdown box to the right of the input slider. If you still have problems, try setting the levels in the system mixer instead. On Windows machines this is done through the Control Panel, and on Mac OS X systems in the Audio MIDI Setup utility

There is no reason you can’t use a standalone recording meter if you prefer

Monitoring the audio using playthrough

The simplest way to hear what the monitored input sounds like is to go to the Audio I/O tab of Audacity Preferences and enable Software Playthrough. Don’t enable this option if you are recording sounds the computer is playing via the sound device’s “Stereo Mix” or similar option, because this will lead to echoes or even failure of the recording

Question #6     How do you reduce noise?

Noise can be reduced during post-production, by use of various plugins. Typically, they are fed a sample of the noise alone and then subtract that noise from the rest of the recording. To facilitate this, be sure to record a second or two of “silence” before you start the actual performance. This gives you a clean sample of the noise. This works extremely well with low-level background sound like air conditioning
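That second or two of recorded “silence” can also tell you how loud your noise floor actually is. A minimal sketch of the measurement (pure Python; the sample values and function name are illustrative, not part of any tool’s API):

```python
import math

def noise_floor_db(samples):
    """Return the RMS level of a noise-only sample, in dB relative to full scale.

    `samples` is a list of floats in the range -1.0..1.0, e.g. the first
    second of "silence" recorded before the performance begins.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A hypothetical noise sample hovering around +/-0.001 (very quiet hiss):
noise = [0.001 * (-1) ** i for i in range(1000)]
print(round(noise_floor_db(noise)))   # -60, i.e. a noise floor of about -60 dB
```

The further this figure sits below your -6 dB recording peaks, the less audible the noise will be after mixing.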

PREVENTION IS BETTER THAN CURE

  • Avoid noise in the first place instead of trying to remove it afterwards
  • Use a sound card with balanced inputs, and use shielded cable, sufficiently long so that you can move the microphone right away from the computer
  • An inexpensive external USB sound card should (other things being equal) be much quieter than the motherboard sound device that came with the computer
  • Place the microphone on a floor stand and ground it separately from the computer, or use a ceiling-suspended microphone
  • Keep all microphone cables away from mains electricity cables

To prevent noise entering your microphone recording:

  • Set the correct input level of your sound sources. Set it as high as possible to increase the distance between signal and noise (the signal-to-noise ratio), but as low as necessary to prevent clipping
  • Use balanced audio connections and shielded cable
  • If you have one at hand, use a hardware limiter, since Audacity cannot process sound in real time. Personally, when recording a band I use a simple foot-pedal limiter on the vocals, because these seem to be the most dynamic sound source. The input level of instruments can be adjusted quite easily
  • When you have found the optimal levels, decrease them by a dB or two when recording music, just to be sure. When actually performing, musicians tend to be a little louder than in rehearsal
  • Shut off every unused sound channel and sound source. Mute unused channels on your mixing board, switch off unused amps and keyboards, and don’t forget to shut the door and the window
  • Especially in a home-recording environment, avoid switching lights or electrical appliances on or off during recording, because a spark can cause a click on the track
  • Avoid fluorescent lighting and keep cell-phones a good distance away from any equipment
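The limiter mentioned above caps sudden peaks so a loud note cannot clip the recording. A toy software sketch of the idea (a hardware unit does this in the analogue domain, before the signal reaches the sound card, and applies smooth gain reduction rather than the hard cap shown here):

```python
def limit(samples, threshold=0.9):
    """Hard-limit samples to +/-threshold so peaks can never reach clipping.

    A real limiter reduces gain smoothly as the signal approaches the
    threshold; this sketch simply caps the peaks to show the effect.
    """
    return [max(-threshold, min(threshold, s)) for s in samples]

# A signal with one peak that would clip at full scale (1.0):
signal = [0.2, 0.5, 1.3, 0.4]
print(limit(signal))   # [0.2, 0.5, 0.9, 0.4]
```

Only the over-threshold peak is affected; quieter material passes through untouched, which is why a limiter lets you record hotter without risking clipping.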

Noise tends to stay at the same low level and generally cannot be controlled, while the recorded source is dynamic and can be. That is where your real potential for noise control lies. Noise that is initially inaudible often becomes apparent when a low signal is amplified and/or normalized, because normalizing and amplifying increase both the wanted signal and the unwanted noise. The measures described here therefore prevent many of the typical noise problems later

Environmental Noise

Room Noise

  • Cheap PC mics, besides having rotten sound quality, are nearly omnidirectional. Use a directional microphone: if the business end is not pointing at the noise source, it won’t pick up the noise. It may still pick up ambient noise, including any sound originating elsewhere in the room and bouncing off the walls; that will be reduced if you put the microphone right on top of the sound source. When recording vocals, the performer’s lips should almost touch the mic, singing straight into it (not across the top), so that ambient noise is blocked by the singer’s head
  • Use a noise-blocking stand and a long enough cable to distance the mic from the computer. Often the vibrations from a computer’s fans and drives will vibrate the computer desk and the surrounding area. If the microphone and its stand are on the computer’s desk, the microphone will often pick up the vibrations and produce a noise on the audio track (often referred to as a “warble”: a soft, repeating hum). To help prevent this, use a ceiling-suspended microphone stand or a full-size, height-adjustable floor stand. If these (pricey) options are not available, an alternative is to support a desk stand on a sound-insulating lift, such as a flimsy cardboard shoe box or a stack of newspapers. These insulate rather well, making it difficult for vibration noise to reach the microphone. Almost any lift made of non-rigid, flexible material will do
  • Direct connection. If recording instruments like keyboards and electric guitars, feed their signal directly into your sound card’s line input, or into a sound board and then into your PC. Guitars will need preamps. If you’re recording acoustic instruments, use directional microphones placed close to the instrument, or use a pickup with a preamp and connect directly
  • Get the desired signal as loud as possible (without clipping) into the microphone. This allows you to reduce the gain, which will also reduce the low-level noise. The further a microphone is away from the source, the more you have to amplify the mic’s input signal to get to a usable level. But, boosting the gain amplifies everything, including background sounds and even the internal electrical noise of the amplifier. Ideally, the microphone should be right on top of the source, with the gain no higher than necessary to get peaks around -3dB. If you are doing multitrack recording, record each individual track as loudly as possible. Set the final volume of each track during post-production mixdown
    Note: placing the microphone “right on top of the sound source” might not be ideal when recording certain sources (such as bowed instruments like violins and cellos). Instead of placing the mic right on top of your sound source without regard to factors other than noise, you should experiment with different kinds of microphone placement until you find one that provides the best sound. If the “optimal” placement is too noisy you can look for other ways to reduce the noise. In the end, nothing beats an ideal recording environment
  • Don’t forget the possibilities of non-technical noise reduction:
    • Turn off your refrigerator and furnace / air conditioner during the recording session
    • Watch out for telephones, cell phones, pagers, ticking clocks, lawnmowers, and the like
    • Avoid locating the recording session near airports, train tracks, and fire stations
    • Hang blankets on the walls, to dampen a live room. Or record in a room with wall-to-wall carpeting
    • If you can, record in a basement, to help isolate your session from outside noise. You’ll probably need to use the blanket-on-the-wall trick here, since concrete walls make good sound reflectors
    • Record late at night to reduce traffic noise leaking in from outside

Electrical noise

60/50 Hz hum and/or crackle

  • A common problem. Make sure all your recording equipment is connected to the same ground. This is easiest to accomplish by plugging everything into the same power strip. Then ground the computer separately from the recording equipment
  • Keep microphone cables well away from mains electricity cables (including those behind walls)
  • Try to use incandescent light bulbs (including halogen lamps); avoid using fluorescent lamps near a signal path (cables and equipment), especially for low-power signal lines such as microphone cables.  Fluorescent lamps often generate a significant amount of high-frequency RF noise, which may then be captured by the cable or the equipment.  Lamps on the ceiling do not usually induce buzzes (because they are far away), but if used in a group of 4 or more, they may introduce buzzes into the power line, which may affect other equipment on the same power circuit.  Power conditioners may be used to alleviate this problem
  • If all else fails, get rid of the hum during post-production by using a de-noise plugin or an extremely narrow notch filter. Crackle will be much more difficult to remove

Remove noise

On virtually any recording you can find noise. It is not always necessary to get rid of it completely. First of all, it is often audible only in very low-volume passages. Second, the average not-too-picky listener will become accustomed to the noise level of your recording. In this regard it is comparable to the odor of a room: when you enter, you become aware of it, but once you have been in there for a while you probably cannot smell it anymore. Third, the listener may be hearing your program in the car or while washing dishes, and so may not be in a position to hear the noise at all

Sometimes a completely silent passage, e.g. between sequential parts of a program, can irritate a listener more than a constant low-level noise throughout the mix, because complete silence may disrupt the ambiance of the material. There are even situations where you actually want to add noise (e.g. in film production, between cuts of the same scene)

So you may want to change your attitude towards noise here: it is not just dirt that needs to be removed, but a natural part of all listening experiences that has to be dealt with appropriately. In general, we need to accomplish two things: the noise has to stay at a roughly similar level throughout the material, and it should be unobtrusive enough to be ignored

Hi-Band Noise

Removing hi-band noise is optional. If you apply it, do so at the most basic level you can reach: on unprocessed tracks (for example, before adding reverb), on single tracks, or even on single passages

On multiple tracks:

  • Before doing anything else, increase the tracks to a working level with the Audacity Normalize function. Leave a little bit of headroom when amplifying
  • Mute all passages where there is nothing to be used in the mix. You may use the envelope tool to make silent passages of a track really silent
  • Fade-ins and fade-outs are much better than sharp cut-and-pasted edges, which are very likely to cause clicks

On a single noisy track, you may want to use the Noise Removal feature of Audacity. You accomplish this task in two steps:

1) Pick a “noise-only” part of the track’s signal. This part should not be longer than approximately half a second. It is a sample of the noise that will be used to compute the changes needed to remove just the noise from the track (though in practice the removal is never perfect)

You have to be careful in selecting your sample. If you pick a sample that contains not only noise but also a slight part of – let’s say – a reverb tail, you’ll remove that, too. As another example, the sound of breathing can be quite similar to noise, but it provides a lot of the vitality of a vocal track. Now, select a small portion of noise, call “Noise Removal” and select “Get Noise Profile”. If you are unhappy with your selection, you can repeat this step; each new sample overwrites the previous one

2) Now select the portion of the track that needs noise reduction (in most cases this will be the complete track). Call the “Noise Removal” function a second time. You can click “Preview” to listen to the first seconds of your selection, or “Remove Noise” to execute the noise filtering. The less/more slider controls the amount of reduction and can be adjusted while testing and applying the effect

If you’re unhappy with the result you can Undo all changes on your track and try again.  Noise Removal often helps to reduce hi-band noise such as hissing and (to a certain extent) crackling
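Audacity’s Noise Removal works on the frequency spectrum, but the profile-then-reduce idea can be sketched in a simplified time-domain form as a noise gate: measure the noise level from the profile selection, then attenuate everything below it. This is only an illustration of the concept, not how the actual plugin is implemented:

```python
import math

def gate(samples, noise_profile, margin_db=6.0, reduction=0.1):
    """Attenuate samples quieter than the measured noise floor plus a margin.

    `noise_profile` is a noise-only selection (the "Get Noise Profile" step);
    `reduction` is the gain applied to sub-threshold samples
    (0.1 = roughly -20 dB, akin to the less/more slider).
    """
    rms = math.sqrt(sum(s * s for s in noise_profile) / len(noise_profile))
    threshold = rms * 10 ** (margin_db / 20)
    return [s if abs(s) > threshold else s * reduction for s in samples]

profile = [0.01, -0.01] * 100          # noise-only selection, RMS = 0.01
track = [0.005, -0.008, 0.5, -0.6]     # quiet hiss followed by real signal
print(gate(track, profile))            # hiss attenuated, signal untouched
```

The same caveat from the text applies here: anything in the profile (a reverb tail, breathing) raises the threshold and gets reduced along with the noise.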

Subsonic Noise

While removing hi-band noise might be considered optional, removing subsonic noise seems mandatory to the writer. In contrast to hi-band noise you can apply subsonic reduction as a step in the mastering process, on the mixed and processed material, before a final Normalize

Subsonic or low-band noise can enter your recorded material in many ways, such as through physical vibrations during the recording or noise from the tape machine (if you still use one of those)

Everything below 20 Hz is called “subsonic” because the human ear is unable to perceive it as recognizable sound. You can often recognize subsonic noise by eye: the zoomed-in waveform in Audacity is not symmetrical along the time axis. If you have already applied normalization in Audacity as recommended above, this should have removed any DC offset in the recording. The reasons for removing subsonics are the same as with DC offset – they reduce the headroom available on the recording by taking up dynamic range, and can introduce clicks when editing

To remove subsonics from your track you may filter it with:

  • Audacity’s built in Equalizer under the Effect menu
  • Audacity’s built-in High Pass Filter under the same menu – set the cutoff frequency to around 25 Hz. You can repeat this same effect a couple of times if a sharper cutoff slope is desired

After removing subsonic noise you can generally re-normalize your track, and it will appear louder yet much more defined in the bass
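The High Pass Filter step can be sketched as a simple one-pole filter. Audacity’s built-in filter has a steeper slope; cascading this sketch (like repeating the effect, as suggested above) sharpens the cutoff:

```python
import math

def high_pass(samples, cutoff_hz=25.0, sample_rate=44100):
    """One-pole high-pass filter: removes DC and attenuates subsonics.

    Recurrence: y[n] = a * (y[n-1] + x[n] - x[n-1]),
    with a = RC / (RC + dt) and RC = 1 / (2*pi*cutoff).
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(a * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A constant (DC / subsonic) input decays towards zero after filtering:
dc = [0.2] * 44100                     # one second of pure offset
filtered = high_pass(dc)
print(abs(filtered[-1]) < 0.01)        # True: the offset has been removed
```

Because only the change between successive samples passes through, steady or very slow movements of the waveform (DC offset, subsonic rumble) die away while audible frequencies are left largely intact.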

Question #7     How do you repair “popping” vocals?

Manually fixing breath and plosive sounds on a vocal recording can take an inordinate amount of time – so if possible avoid them in the first place. A pop shield between the vocalist and microphone can help, and can be made cheaply. Should you still have popping or percussive vocals, here’s how to repair them, though the result will never be as good as a clean original recording:

1) Make sure the recording’s DC offset is zeroed. (This in itself will eliminate one possible cause of clicks generated by subsequent edits and silences and should be done before you do any editing). Do this by selecting the whole track and choosing the Normalize effect.  In the resulting box make sure you’ve only selected “Remove DC offset”
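“Remove DC offset” simply shifts the waveform so that its average value is zero, which can be sketched as:

```python
def remove_dc_offset(samples):
    """Centre the waveform on zero by subtracting its mean (the DC offset)."""
    offset = sum(samples) / len(samples)
    return [s - offset for s in samples]

# A waveform sitting 0.1 above the zero line (mean = 0.1):
offset_wave = [0.1, 0.6, 0.1, -0.4]
print(remove_dc_offset(offset_wave))   # approximately [0.0, 0.5, 0.0, -0.5]
```

With the waveform centred, subsequent cuts and inserted silences start from zero rather than from a constant offset, which is why this step prevents clicks at edit points.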

2) Zoom in on the percussive sound. Plosives are easy to spot: they look like a single large waveform just before the rest of the sound

3) Select this waveform and then apply the Fade In effect. This will soften the percussive attack and hopefully solve your problem

4) Since these percussive sounds are mostly very low in frequency, some users have reported great success using the High Pass Filter effect instead of the Fade In effect suggested in step 3. Note that the High Pass Filter can be repeated multiple times on the same selection. This approach has the additional advantage of not reducing the level of higher-frequency sounds – useful when the vocal plosive was recorded along with other instruments or sounds
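The fade-in in step 3 just multiplies the selected plosive by a ramp rising from 0 to 1, which can be sketched as:

```python
def fade_in(samples):
    """Apply a linear fade-in across the selection, softening a plosive's attack."""
    n = len(samples)
    return [s * i / (n - 1) for i, s in enumerate(samples)]

# A selected plosive "thump" at full amplitude:
plosive = [1.0, 1.0, 1.0, 1.0, 1.0]
print(fade_in(plosive))   # [0.0, 0.25, 0.5, 0.75, 1.0] - the attack is tamed
```

Because the ramp starts at exactly zero, the repaired selection also joins cleanly with the silence before it, avoiding a new click at the edit boundary.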

Question #8     What file types may be imported/exported?

  • Audacity does not care what the extension of the file is. If the file is well-formed, libsndfile will correctly detect the format and import the file appropriately. Audacity fully supports 24-bit and 32-bit samples and almost unlimited sample rates.
  • Import and export WAV, AIFF, AU, FLAC and Ogg Vorbis files
  • Fast “On-Demand” import of WAV or AIFF files (letting you start work with the files almost immediately) if read directly from source
  • Import and export all formats supported by libsndfile such as GSM 6.10, 32-bit and 64-bit float WAV and U/A-Law
  • Import MPEG audio (including MP2 and MP3 files) using libmad
  • Import raw (headerless) audio files using the “Import Raw” command
  • Create WAV or AIFF files suitable for burning to audio CD
  • Export MP3 files with the optional LAME encoder library
  • Import and export AC3, M4A/M4R (AAC) and WMA with the optional FFmpeg library (this also supports import of audio from video files).
  • Audacity also supports virtually any uncompressed format using the Import Raw function. With this function you can also import Sound Designer II files (used in the Mac world).
  • You can import multiple files at once by shift-clicking or control-clicking on multiple files in the Open or Import dialog boxes. Alternatively, drag multiple files to your Audacity window. (On Windows, drag the files to the Audacity project window, and on Mac OS X, drag the files to Audacity’s icon in the Finder or in the Dock)
    • Hint: Audacity may not realize a file is an MPEG file unless it has an appropriate extension. To be sure, try renaming it so that it ends in “.mp3”; then, if libmad can decode it, Audacity will open it.
    • Audacity imports ID3 tags from MP3 files, which give the Artist, Title, Album, and other song info, using libid3tag. You can see these tags by selecting “Edit ID3 Tags…” from the Project menu. Audacity will let you save these tags if you export an MP3 file. You can write either ID3V1 tags or ID3V2.3 tags.

Managing projects

  • Audacity projects consist of a project file (.aup) and a corresponding data directory. Audacity project files are just XML, so you can read them using any text editor or XML reader. The file format is intended to be specific to Audacity, but open so that others can work with it if they’re interested. Audacity projects store everything that you see in the Audacity window. They open and save very quickly, so you can continue your work where you left off.

Audacity project files are not intended to be used as a portable format, or as the primary way to store your audio. Export as a common supported format like WAV, AIFF, or MP3

  • When you create a new project, Audacity writes data to a temporary directory. You can set the location of this directory (folder) in the Preferences dialog.
  • To save time, Audacity doesn’t make a copy of files when you import them – instead, it saves a reference to the original file in the project. If you prefer Audacity projects to be self contained, you can choose to always make copies in the Preferences dialog on the File Formats tab.

Question #9     What type of editing tools are available?

Editing audio

  • Unlimited Undo lets you revert actions all the way back to when you first opened the project.
  • Undo History window lets you see all of the changes you’ve made, and quickly jump back to a previous point.
  • Audacity splits tracks into small blocks internally, so large cut and paste operations are quick because they don’t require rewriting the entire track each time a change is made. This is different than the Edit Decision List system used by many other editors, but the effect is similar: editing is quick, and it’s easy to Undo.
  • Audacity displays the current cursor position or selection bounds in a status bar in the bottom of the project window. You can change the units using the “Set Selection Format” option in the View menu.
  • Lots of basic editing operations:
    • Save/restore selection
    • Cut
    • Copy
    • Paste
    • Trim (delete everything except selection)
    • Delete
    • Silence
    • Split
    • Duplicate
    • Find Zero Crossings
  • Modify cursor and selection using arrow keys (modify with Shift and Control)
  • Envelope editor lets you adjust the relative volume of tracks over time. Just select the envelope tool (the one with the two white diamonds pointing towards a center control point) and click on a track
  • Drawing tool lets you edit individual samples (zoom in first):
    • Click: change samples
    • Alt-click: smooth the area around the clicked sample
    • Ctrl-click (and drag): change just one sample
  • Multi-mode tool lets you select, modify envelopes, edit individual samples, and zoom, all from one tool. Which function is active is based on the exact location of the mouse

Mixing, panning, and warping

  • Audacity automatically mixes when you have more than one track open. It automatically resamples as necessary.
  • Each track is designated as either Left, Right, or Mono. When you see a stereo track (two tracks joined together), the top one is the Left Channel, and the bottom one is the Right Channel. To change this, use the track pop-down menu.
  • Each track has a gain control that you can use to adjust its volume.
  • Each track also has a panning control that lets you give it relatively more volume in the left or the right channel.
  • Adding a Time Track lets you warp the speed of playback over time.

Question #10….What built-in special effects are available?

  • Audacity has many built-in effects and also supports plug-in effects in the LADSPA, VST, and Nyquist formats.
    • Change the pitch without altering the tempo (or vice-versa)
    • Remove static, hiss, hum or other constant background noises.
    • Alter frequencies with Equalization, Bass and Treble, High/Low Pass and Notch Filter effects.
    • Adjust volume with Compressor, Amplify, Normalize, Fade In/Fade Out and Adjustable Fade effects.
    • Remove Vocals from suitable stereo tracks.
    • Create voice-overs for podcasts or DJ sets using Auto Duck effect.
    • Run “Chains” of effects on a project or on multiple files in Batch Processing mode.
    • Other built-in effects include:
      • Echo
      • Paulstretch (extreme stretch)
      • Phaser
      • Reverb
      • Reverse
      • Truncate Silence
      • Wahwah

 

Class #2 Listening Exercise…the stuff and substance of music


Kitchener Studio Project
Monday, Jan 20/2014
Sounds of a group of singers dancing & drumming in a communal celebration. Thoughts that came to mind while listening:
WHERE DOES THE SOUND RESONATE IN THE BODY?
THE GROUP IS THE INSTRUMENT
THE SOUND IS OPEN TO THE SKY
BUT IS CONTAINED WITHIN A CIRCLE OF MOVING PEOPLE
INTERSECTION OF HIGHER NOTES
USING THE VOICE AS AN INSTRUMENT
USING THE BODY AS AN INSTRUMENT
EVERYONE PROVIDING A CONTRIBUTION
FEELS LIKE A HEARTBEAT
SINGING DRUMS IN THE CENTRE
STORY UNITES
RHYTHM IS A POETIC LANGUAGE
CHANTING IS A GROUP RHYTHM
ORCHESTRA OF VOICES, CHOIR OF SOUNDS
 

RANDOM CLASS NOTES FOR FOLLOW-UP:

DEPTH OF FIELD IS SUPER IMPORTANT TO RECORDING.  DEPTH OF FIELD IS CAPTURED…NOT CREATED IN POST

LISTEN TO:  TOWER OF POWER (METERS?)  NEW ORLEANS STYLE DRUMMING THAT CREATES A SENSE OF SPACE

IF YOU CAN MASTER “ONE MIKE” RECORDING, THE REST IS EASY
THE ORIGINAL TRACKS TO LAY DOWN ARE REFERRED TO AS “BED TRACKS”

THE SCAFFOLDING THAT HOLDS THE MUSIC TOGETHER:

CONCEPT OR THEME

MELODY – (2ND MOST IMPORTANT QUALITY & WHAT MOST SOUND ENGINEERS STRIVE FOR)

RHYTHM – (#1)

  • IF YOU MAKE A MISTAKE IN DRUMMING, IT IS HARD TO RECOVER, OTHER INSTRUMENTAL ERRORS ARE FORGIVABLE
  • THEREFORE DRUMMER MUST BE ABLE TO KEEP TIME  TO MAKE THE RECORDING PROJECT WORK (& AVOID TIME CONSUMING REMEDIES)
  • KICK AND SNARE USED TO KEEP A NATURAL FLOW
  • “TIGHTNESS” IS RECOGNIZED AS A PRO QUALITY, & IS RADIO FRIENDLY

 HARMONY – (3RD)

  • ANOTHER MELODY
  • PADS THE NOTE, MAKING IT RICHER, FLUFFIER; DYAD/CHORD/CHORD EXTENSIONS
  • INTERWEAVING, COMPLEMENTING IN THE SAME KEY

 LYRICS

  • THEY TAKE THE MOST TIME TO PRODUCE (FOLLOWED BY RHYTHM)

 DENSITY

  • HOW MUCH IS GOING ON?
  • TECHNIQUES INCLUDE FLATTENING (OFFSET 3-8 MILLISECONDS)  & ADDING MORE VOCALS
  • FLANGING & DELAY (DIVERSION OFF THE ORIGINAL SOURCE)

 INSTRUMENTATION

  • YOU EVALUATE YOUR PERFORMER (FATIGUE/STRENGTH) AND YOU MAKE INSTRUMENT CHOICES FROM THERE
    • THE MATERIALS THAT INSTRUMENTS ARE CONSTRUCTED WITH HAVE CHANGED (MAHOGANY TO MAPLE, ETC)
    • PICK THE RIGHT INSTRUMENT…IT CAN CHANGE EVERYTHING

SONG STRUCTURE

LISTEN TO TOM PETTY – WALLFLOWER
 
PERFORMANCE

QUALITY OF THE GEAR & THE RECORDING

THE MIX

MASTERING

  •  TAKING YOUR FINAL MIX AND MANIPULATING IT, MAKING IT “LARGE”
  • LOUDNESS WARS?

PLAYBACK – WHAT IS THE END MEDIUM THAT THE PROJECT IS GOING TO BE LISTENED TO ON? WORK BACKWARDS FROM THERE

RECORDING BIT AND SAMPLE RATE?

  • LOSSY (MP3 & AAC) VS LOSSLESS (WAV & AIFF)
  • LOSING QUALITY OF AUDIO WHEN FILES GET COMPRESSED
  • CD QUALITY IS 16 BITS AT A 44.1 kHz SAMPLE RATE, VERSUS MP3S, WHICH ARE 128 & NOW 320 kbps
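The CD-versus-MP3 comparison in these notes comes down to simple arithmetic: uncompressed CD audio streams at 16 bits × 44,100 samples/s × 2 channels, far more data per second than an MP3 keeps. A quick check:

```python
def cd_bitrate_kbps(bit_depth=16, sample_rate=44100, channels=2):
    """Data rate of uncompressed PCM audio, in kilobits per second."""
    return bit_depth * sample_rate * channels / 1000

print(cd_bitrate_kbps())               # 1411.2 kbps for CD-quality audio
print(round(cd_bitrate_kbps() / 320))  # ~4x the data of a 320 kbps MP3
print(round(cd_bitrate_kbps() / 128))  # ~11x the data of a 128 kbps MP3
```

The discarded data is what makes MP3 and AAC lossy: the encoder throws away detail the psychoacoustic model judges inaudible, and it cannot be recovered.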

REVERB & ECHO:

SOUND IS DRAWING OUT THE ROOM (LIKE AN ARTIST) LETTING YOU KNOW WHAT THE SURROUNDING IS

WORKING WITH ROOMS IS AS IMPORTANT AS MICROPHONE TECHNIQUE

  • TEST OUT THE MID RANGE IN A ROOM
  • DO YOU WANT TO USE THE ROOM FULL SIZE?  HOW DO YOU DEADEN THE ROOM?
  •  COUNT OUT THE REVERB TIME (1SEC, 1.5 SEC?)
  • PADDING THE FLOOR DEADENS SOUND
  • BACKGROUND NOISE?  UNWANTED SOUNDS? LISTEN FOR FINE DETAILS

HUMIDITY & TEMPERATURE:

  • SOUND TRAVELS FASTER IN HEAT
  • HUMID SOUND IS WARMER, THICKER, LOOSER…INSTRUMENTS MAY SOUND A BIT OUT OF TUNE
  • AT 500Hz AND UNDER – HUMIDITY ROUNDS OUT THE SOUND BUBBLE

500-5000Hz ARE THE MID-RANGE FREQUENCIES THAT ARE MOST PROBLEMATIC; ABOVE 5000 YOU ENCOUNTER VOCAL ISSUES; BELOW THAT ARE THE SUBS (ROUGHLY 40 – 100 Hz)

1000Hz IS THE HOLY GRAIL OF FREQUENCIES – EASY TO HEAR

ASSIGNMENTS FOR NEXT CLASS:

  1. BLINDFOLDED;  SPEND 15 MIN A DAY  LISTENING TO AMBIANCE AND TAKE MENTAL NOTES – WRITE ABOUT WHAT YOU HEAR
  2. QUIZ– RESEARCH 10 FACTS/FINDINGS ABOUT DAW AND POST ON WORDPRESS SITE BEFORE NEXT CLASS
  3. DOWNLOAD MIXCRAFT
  4. PERSONAL BLOG AND RECORDING DUE NEXT CLASS ALSO

First Recording! Othello Act 5 Scene 2 – Desdemona’s Death_Take 1


SSS Sunday Evening Live!
The Death Of Desdemona - Recording_1
What’s this? Another Gorilla Repertory Theatre offering?  Take One
 
 
 
 
Desdemona’s Death:   https://soundcloud.com/sue-cunha/othello-act-5-scene-2
Trying to simulate a podcast environment using two microphones through a Behringer 1204USB mixer connected to an Asus Eee running Audacity. Microphone One is at chin level on a stand, recording Krissy B reading from Othello; Microphone Two is suspended from the ceiling approximately 5 ft from Microphone One and at roughly the same height, picking up the accompanying guitar at 3 ft from the sound hole.