
Twenty Questions you have been dying to ask about Re-amping


[Image: re-amp box]

The goal of the reamp box is to make the amplifier react in exactly the same way a live guitar would, but with a pre-recorded audio source.

Q.1  What is re-amping?

Re-amping is the process whereby the direct signal from a guitar, bass or keyboard is recorded — usually on a separate track alongside the signal captured simultaneously with a microphone from an amp — and later routed to an amp in a studio to be miked up and recorded again.

Q.2  Why is it used?

This approach allows the choice of amp or amp settings, or mic and mic position, to be changed after the initial recorded performance, but without the compromises and limitations inherent in trying to process an already recorded amp sound. It is a popular and widely used technique, although it is more common in the production of some musical genres than others.

Q.3  Does Re-amping save time?

Re-amping can be both a time saver and a time waster, depending on how and why it is employed! As a way of modifying a guitar part to better suit the track as the mix progresses, it is an invaluable technique, saving the time and effort of having to record a new performance. However, if used to avoid committing to a sound during tracking, it can be an enormous time waster.

Q.4  What does a re-amping box do?

There are various products available with integrated facilities for re-amping, as well as dedicated re-amping units, although the latter approach seems the more popular. There is nothing complicated about a re-amp box, which, in most cases, is essentially a passive DI box used in reverse.

A re-amping box accepts a balanced line‑level signal (nominally +4dBu) and converts it to an unbalanced instrument‑level signal (nominally ‑18dBu), usually via a transformer. A variable level control is often provided to optimize the level fed to the amp, along with a ground‑lift facility to separate the balanced source and unbalanced output grounds, thus avoiding ground‑loop hum problems.

A passive DI box can often be used reasonably well in this role, although it is normally necessary to attenuate the line‑level input significantly, to avoid saturating the transformer and generating an excessive unbalanced output level. Alternatively, the kind of line-level balanced/unbalanced interface intended for connecting domestic equipment to professional systems can be used, and the original ART CleanBox is often recommended in this role. However, for only a slightly greater outlay, a dedicated re-amp box, such as the Radial ProRMP, is rather more convenient to use.

Thanks to SOS for the above four answers!
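To put rough numbers on the level conversion described in Q.4, here is a small sketch of the arithmetic (0 dBu is defined as 0.775 V RMS; the -18 dBu target is simply the nominal figure quoted above, and real boxes vary):

# Level arithmetic behind a re-amp box (illustrative only, not a spec for any particular product)
import math

def dbu_to_vrms(dbu):
    # 0 dBu is defined as 0.775 V RMS
    return 0.775 * 10 ** (dbu / 20)

line_level_dbu = 4.0          # nominal balanced pro line level
instrument_level_dbu = -18.0  # nominal instrument level quoted above

print(f"Line level:       {dbu_to_vrms(line_level_dbu):.3f} V RMS")
print(f"Instrument level: {dbu_to_vrms(instrument_level_dbu):.3f} V RMS")
print(f"Required attenuation: {line_level_dbu - instrument_level_dbu:.1f} dB")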

 

Q.5  What musical changes are achieved by re-amping?

Examples of common re-amping objectives include musically useful amplifier distortion, room tone, compression, EQ/filters, envelopes, resonance, and gating.

Q.6  What is meant by “warming up” dry tracks?

Re-amping is often used to “warm up” dry tracks, which often means adding complex, musically interesting compression, distortion, filtering, ambience, and other pleasing effects. By playing a dry signal through a studio’s main monitors and then using room mics to capture the ambience, engineers are able to create realistic reverbs and blend the wet signal with the original dry recording to achieve the desired amount of depth.

Q.7  What are some advantages of re-amping?

Re-amping allows guitarists and other musicians to record their tracks and go home, leaving the engineer and producer to spend more time dialing in “just right” settings and effects on pre-recorded tracks. When re-amping electric guitar tracks, the guitarist need not be present while the engineer experiments for long hours with a range of effects, mic positions, speaker cabinets, amplifiers, effects pedals, and overall tonality – continuously replaying the pre-recorded tracks while trying new settings and tones. When a desired tone is finally achieved, the guitarist’s dry performance is re-recorded, or “re-amped,” with all the added effects.

Q.8  Who were some of the early ‘pioneers’ of re-amping?

Les Paul and Mary Ford recorded layered vocal harmonies and guitar parts, modifying prior tracks with effects such as ambient reverb while recording the net result together on a new track. Les Paul placed a loudspeaker at one end of a tunnel and a microphone at the other end. The loudspeaker played back previously recorded material – the microphone recorded the resulting altered sound.

Thanks to Wikipedia for answering questions #5 to #8!

Q.9  Why is re-amping so popular?

Re-amping is a technique that has gained a lot of popularity over the last 15 years. The technique’s obvious advantages are numerous:

  • Direct recording is an ideal way to reserve tonal flexibility for mixing (especially useful in the DIY world);
  • Instrument amplifiers and stomp boxes offer virtually limitless opportunities to create the right sound with a not-so-virtual interface;
  • It’s fun, which is still allowed.

Sometimes the re-amping goal is simple. An electric guitar can be recorded direct while monitoring a software amp simulator. During mixing the direct guitar track (sans faux amp) will be re-recorded through an actual amp.

Q.10  What does the workflow look like?

[Diagram: Re-Amp Signal Flow]

Other popular uses include adding some grit to a direct bass track, rescuing underwhelming keyboard sounds, or using your favorite stomp boxes as outboard processing. In any case, if the goal of the process is amp-related, you can be sure it is also more or less distortion-related.

[Diagram: Stompbox Outboard Workflow]

Q.11  How do you manage the gain staging of a re-amped signal?

As pictured above, the re-amp process requires us to adapt the balanced, relatively low impedance output from our DAW to the unbalanced, high impedance input of the amp or pedal(s) in question. The biggest factor in managing the gain staging of your re-amping signal chain is your choice of adapter.

The two choices are:

  1. A purpose-built adapter, such as those made by Reamp or Radial Engineering; or
  2. A passive direct box (so say some).

The purpose-built re-amping devices (most of which are derivative of John Cuniberti’s early-1990s design) have the distinct advantage of being designed to operate in the amplitude and impedance ranges typically found in the +4dBu pro audio world and in instrument amplifiers. The same cannot be said of a typical passive direct box. These are important characteristics for inductive (transformer-based) systems.

That’s not to say you can’t get the signal flow happening with a passive DI and an adequate amount of attenuation. However, a passive DI often fails to supply a properly adapted signal. If your goal is to use the amps and pedals as signal processors, the adapter ought to facilitate that work, not pile on its own distortion.

Q.12  How do you address the relative phase of the original & the re-amped signal?

Relative Phase

In applications where the originally recorded signal and the re-amped signal will be used in the mix together, their relative phase is an important tone-shaping factor.

There are two great options for addressing the relative phase of these two signals (options that put a polarity switch to shame):

  1. Speaker to mic distance. For many signals, moving the microphone back and forth along the pick-up axis will reveal a dramatic range of tonal difference. This can be particularly apparent with signals that have complex midrange harmonic content.
  2. Phase ‘alignment’ tools, like the IBP from Little Labs. Used as the re-amping adapter or inserted after the mic preamp on the return, these devices provide sweepable electronic control over relative phase. This allows the mic to stay in the spot you liked the most.

Regardless of your choice of tool, remember that relative phase is a subjective tone control in this setting. Don’t think about what’s right or wrong.
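To make the first option concrete, here is a rough sketch of why mic distance matters when the two signals are mixed at similar levels: the extra acoustic path delays the re-amped signal, and summing it with the original produces comb filtering whose notches move as the mic moves. The numbers are illustrative only and ignore converter and plug-in latency.

# Illustrative only: path delay and first comb-filter notch versus mic distance
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def delay_ms(distance_m):
    return distance_m / SPEED_OF_SOUND * 1000.0

def first_notch_hz(distance_m):
    # When the delayed re-amped signal is summed with the original at equal
    # level, the first deep cancellation sits where the delay is half a cycle.
    return 1.0 / (2.0 * (distance_m / SPEED_OF_SOUND))

for d in (0.1, 0.3, 0.6, 1.0):  # metres from speaker cone to mic
    print(f"{d:0.1f} m -> {delay_ms(d):4.2f} ms delay, first notch near {first_notch_hz(d):6.0f} Hz")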

Q.13  What do you do if the two signals do not sound right together?

Sometimes it can be difficult to decide whether the original signal should be used in combination with the re-amped signal. In these cases there’s usually something unique about each signal, but they may not be working together well.

This conflict can often be resolved by creating more contrast between the original and re-amped signals. On keyboard tracks, for example, I will frequently make significant, crossover-style EQ choices that allow me to more subtly combine the unique elements of each signal type.

Another technique that can be used with remarkable ease is one I dubiously call “Sum and Amp-ness”. I think it kills for gritty bass, particularly with tight, close drums.

  1. Use a DI bass right up the middle of your mix. Get it sounding great, and set up a re-amp path;
  2. Set up a nicely overdriven bass tone on an amp. Somewhere in the signal flow, HPF this path in the 300 – 500Hz neighborhood. I like to do it before the amp;
  3. Use the return from the amp just as you would use the ‘side’ component of a mid-side mic array. For maximum sum and difference effect, mic the amp off-axis.

This set-up leaves you with strong, centered low frequency focus, but adds an interesting distorted ‘width’ component. Try it out in mono-tending drum and bass situations.

Finally, don’t be afraid to let the re-amp path hang out in input monitoring while you mix. There’s no real reason to record it until you’re getting close to printing mixes. It’s incredibly easy to make changes as long as it’s all still live.
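For readers who like to prototype offline, here is a minimal sketch of steps 1–3 above, assuming the DI bass and the re-amped (overdriven) return have already been printed as mono float arrays at the same sample rate; the 400 Hz corner, the width amount, and the function name are assumptions to tune by ear, not a fixed recipe.

import numpy as np
from scipy.signal import butter, sosfilt

def sum_and_ampness(di, reamped, sr, hpf_hz=400.0, width=0.7):
    # Step 2: high-pass the re-amped path so only the gritty upper content survives
    sos = butter(2, hpf_hz, btype="highpass", fs=sr, output="sos")
    grit = sosfilt(sos, reamped)
    # Step 3: keep the DI centred (the "mid") and use the grit like a "side" component
    left = di + width * grit
    right = di - width * grit
    return np.stack([left, right], axis=0)   # stereo result, the DI focus stays in the middle

# Usage idea: stereo = sum_and_ampness(di_track, reamped_track, sr=44100)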

Thanks to Pro Audio Files for answering questions #9 to #13!

 

Q.14  How do you load down a guitar?

THE LOADING ISSUE

To capture a characteristic guitar sound, you need to record the same thing you would hear if the guitar connected directly to an amp. Although many people like the “high-fidelity” sound of a guitar feeding an ultra-high impedance input, others prefer the slight dulling that occurs with a low-impedance load (e.g., around 5-100 kohms) as found with some effects boxes, older solid-state amps, etc. This is especially useful when the guitar precedes distortion, as distorting high frequencies can give a grating, brittle effect that resembles Sponge Bob on helium.

There are several ways to load down your guitar:

  •  Find a box that loads down your guitar by the desired amount, then split the guitar to both the box and the mixer or audio interface’s “guitar” input.
  • If your recorder, mixer, or sound card has a guitar input, try using one of the regular line level inputs instead.
  • Use a box with variable input impedance (e.g., the “drag control” on Radial products)
  • Create a special patch cord with the desired amount of loading by soldering a resistor between the hot and ground of either one of the plugs. A typical value would be 10 kohms.
  • If you’re going through host software with plug-ins, insert an EQ and roll off the desired amount of highs before feeding whatever produces distortion (e.g., an outboard amp that feeds back into the host, or an amp simulator plug-in). However, this doesn’t sound quite as authentic as actually loading down the pickup, which creates more complex tonal changes.

 Note that you need to add this load while recording, as it’s the interaction between the pickup’s inductance and load that produces the desired effect. Once the dry track is recorded, the pickup is out of the picture.
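As a rough feel for that interaction, the pickup can be treated as an inductance driving the cable capacitance, with the load resistance damping the resulting resonance. The sketch below uses typical assumed values and ignores the pickup’s own winding resistance (which damps things further), so treat the numbers as a trend rather than a measurement.

import math

L = 2.5        # henries - a typical-ish pickup inductance (assumed)
C = 500e-12    # farads - cable plus input capacitance (assumed)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant peak frequency
print(f"Resonant peak around {f0:.0f} Hz")

for R in (1e6, 100e3, 10e3):        # load resistance in ohms
    q = R * math.sqrt(C / L)        # higher Q = brighter, peakier top end
    print(f"Load {R/1000:6.0f} kohm -> Q ~ {q:5.2f}")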

But just because we have a signal doesn’t mean we can go home and collect our royalties, because this signal now goes through a signal path that may include pedals and other devices. As guitarists are very sensitive to the tone of their rigs, even the slightest variation from what’s expected may be a problem. For example, the transformers in some direct boxes or preamps color the sound slightly, so the guitarist might want to send the signal through the transformer, even though transformer isolation is usually not necessary with a signal coming from a recorder.

Q.15  What are some plug-ins and interfaces available?

Traditional re-amping can be replaced by virtual re-amping using guitar-amp plug‑ins, many of which offer remarkably good quality and enormous versatility. The process is essentially the same, but without having to physically route the signal out of the DAW and into a real amp in a real studio, miked up with real mics.

Plug-ins and low-latency audio interfaces have opened up “virtual re-amping” options. Guitar-oriented plug-ins include IK Multimedia AmpliTube, Native Instruments Guitar Rig, Line 6 POD Farm, Scuffham Amps, Waves G|T|R|, iZotope Trash, Peavey ReValver, Overloud TH2, McDSP Chrome Tone, and others.

The concept is similar to hardware-based re-amping: Record the direct signal to a track, and monitor through an amp. The key to “virtual re-amping” is that the host records a straight (dry) guitar signal to the track. So, any processing that occurs depends entirely on the plug-in(s) you’ve selected; you can process the guitar as desired while mixing, including changing “virtual amps.” When mixing, you can use different plug-ins for different amp sounds, and/or do traditional hardware re-amping by sending the recorded track through an output, then into a mic’ed hardware amp.

Q.16  What are the limitations when using a plug-in ?

Using plug-ins has limitations. If feedback is part of your sound, there’s no easy way to create a feedback loop with a direct-recorded track. This is one reason for monitoring through a real amp, as any effect the amp has on your strings will be recorded in the direct track. Still, this isn’t as interactive as feeding back with the amp that creates your final sound. And plug-ins themselves have limitations; although digital technology does a remarkable job of modeling amp sounds, picky purists may pout that some subtleties don’t translate well.

Furthermore, monitoring through a host program demands low-latency drivers (e.g., Steinberg ASIO, Apple Core Audio, or Microsoft’s low-latency drivers like WDM/KS). Otherwise, you’ll hear a delay as you play. Although there will always be some delay due to the A/D and D/A conversion process, with modern systems total latency can often be under 10ms. For some perspective, 3 ms of latency is about the same delay that would occur if you moved your head 1 meter (3 feet) further from a speaker—not really enough to affect the “feel” of your playing.
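Those round numbers follow directly from buffer size and the speed of sound; a quick back-of-the-envelope sketch (buffer sizes and sample rate assumed, converter delay ignored):

SPEED_OF_SOUND = 343.0  # m/s

def buffer_latency_ms(buffer_samples, sample_rate):
    return buffer_samples / sample_rate * 1000.0

sample_rate = 44100
for buf in (64, 128, 256, 512):
    round_trip = 2 * buffer_latency_ms(buf, sample_rate)   # input + output buffers
    equivalent_m = round_trip / 1000.0 * SPEED_OF_SOUND
    print(f"{buf:4d} samples -> ~{round_trip:4.1f} ms round trip "
          f"(like standing {equivalent_m:3.1f} m further from the speaker)")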

If latency is an issue, there are other ways to monitor, like ASIO Direct Monitoring. Input signal monitoring (often called zero-latency monitoring) is essentially instantaneous; the signal appearing at the audio interface input is simply directed to the audio interface out, without passing through any plug-ins. With this method you can also feed the output to a guitar amp for monitoring, while recording the straight signal on tape.

In any event, regardless of whether you use hardware re-amping, virtual re-amping, or a combination, the fact that the process lets you go back and change a track’s fundamental sound without having to re-record it is significant. If you haven’t tried re-amping yet, give it a shot—it will add a useful new tool to your bag of tricks.

Q.17  When did the term “re-amping” come into use?

Background: A History of Re-Amping

by Peter Janis, Radial Engineering

As with so many aspects of audio, it’s hard to pin down exactly when a technique was first used, and that goes for re-amping. While Reamp made the first commercial box designed expressly for this purpose, engineers had already been creating re-amping setups for years.  Recording historian Doug Mitchell, Associate Professor at Middle Tennessee State University, comments that “The process of ‘re-amping’ has actually been utilized since the early days of recording in a variety of methods. However, the actual process may not have been referred to as re-amping until perhaps the late ’60s or ’70s. From the early possibilities of recording sound, various composers and experimenters utilized what might be termed ‘re-amping’ to take advantage of the recording process and to expand upon its possibilities.

The first commercially available box for re-amping has been tweaked and revised over the years.

In 1913 Italian Futurist Luigi Russolo proposed something he termed the ‘Art of Noises.’ Recordings of any sound (anything was legitimate) were made on Berliner discs and played back via ‘noise machines’ in live scenarios and re-recorded on ‘master’ disc cutters. This concept was furthered by Pierre Schaeffer and his ‘Musique Concrète’ electronic music concept in the ’30s and ’40s. Schaeffer would utilize sounds such as trains in highly manipulated processes to compose new music ideas. These processes often involved the replaying and acoustic re-recording of material in a manipulated fashion. Other experimenters in this area included Karlheinz Stockhausen and Edgard Varèse.

With the possibilities presented by magnetic recording, the process of what might be termed re-amping was utilized in other ‘pop’ music areas. Perhaps the first person to take advantage of this was Les Paul. His recordings with Mary Ford often utilized multiple harmonies all performed by Mary. Initially these harmonies were performed via the re-amping process. Later, Les convinced Ampex to make the first 8-track recorder so that he might utilize track comping to perform a similar function. Les is also credited with the utilization of the re-amping process for the creation of reverberant soundfields, by placing a loudspeaker at one end of a long tunnel area under his home and a microphone at the other end. Reverberation time could be altered with the placement of the microphone with respect to the loudspeaker playing back previously recorded material.

Wall of Sound pioneer Phil Spector is perhaps the most widely credited with use of the re-amping process and, because of his association with the Beatles, is sometimes regarded today as the developer of the process. However, Phil was actually refining a process and exploring its possibilities for use in rock music.

Thanks to Harmony Central for answering questions #14 to #17!

Q.18  Can you build your own re-amping box?

Yes!  How to Build a DIY Reamping Box

One of the most powerful tools for expanding your sonic palette in the studio is a reamping box – a box that converts the output from your mixer/interface/tape machine to an instrument-level signal. Suddenly, all of your guitar amps, effects pedals, and synthesizers become effects for any signal you can throw at them.

A reamping box is a great first project for DIY beginners: it’s totally passive (you can’t shock yourself), there are a limited number of solder joints to make, and there’s plenty of room to make those joints. For a better idea of what’s involved in this build, check out this video on how to make a simple reamping box: here

LINE2AMP DIY kit

Full kits are currently available, including everything needed to complete the project:

  • High-quality transformer by Edcor USA
  • Pre-drilled, diecast aluminum case
  • TRS jacks by Neutrik
  • Xicon metal-film resistor
  • Toggle switch
  • All hookup wire needed
  • Nut, bolt, and lock washer for ground connection

 

Q.19  What does guitar re-amping sound like?

In a nutshell, it sounds real. Follow this link for examples from Pure Mix Advanced Audio. Thanks to Ben Lindell for this awesome tutorial.

Quite a difference, right? I love how the reamped track is crunchy but still has some life to it. So what was my signal path? This track started with my Telecaster running into the Demeter Tube DI box, connected to a DBX 386 pre and into my Digidesign 192. Then for reamping, the signal traveled out of the Digidesign 192 into the Little Labs Redeye, then into a Mesa Boogie Nomad 55 on the clean channel with the ‘Pushed’ switch flipped. I recorded it with a Beyerdynamic M 201 TG about a foot away, going through one of my custom germanium preamps, and combined that with a Neumann U87 in omni a bit further away, running through an API 3124 preamp and into an 1176 to maximize the room tone.

 

Reamping Guitar

Q.20  What are some of the applications promoted by “REAMP”?

Courtesy of http://www.danalexanderaudio.com/reamp.html:

1. Change amplifier make, tone settings, and effects at any time after the original performance. A flat direct safety track is recommended, though not always necessary, for the best results. Preserve the inspired first takes, always knowing you can REAMP later if you are unhappy with the amplified sound.

2. Engineers and producers can experiment with mic placement and room ambiance without asking the musician to keep playing over and over. Record a scratch direct signal on tape/disk and feed it to the musician’s amp via the REAMP and experiment.

3. Insert instrument effects at any time during production. REAMP from tape/disk to any stomp box (such as a wah-wah, or overdrive), then take the effects output and return it to the console via a direct box.

4. Live recordings direct from the instrument’s output to tape/disk can later be REAMP’ed in the studio. This solves many problems related to remote recording. You can match the sound for a punch in by using the original instrument and direct box, thereby making only small repairs. After the repairs are done, REAMP to any amplifier instead of re-recording the entire performance. The REAMP is the cheapest insurance policy going!

5. Insert studio pre-amps, equalizers, signal processors, and dynamics control before reaching the instrument amplifier.

6. Synthesized guitar and bass tracks can sound more live-like by RE-AMPing into an instrument amplifier and using mics. Send drum tracks to various instrument amps and mic the room for ambiance.

7. No need to record instrument amps during a tracking session if space or leakage is a problem; perfect for late night home recording. The next day plug in your amp, turn it up and REAMP the previous night’s performance. Record the bass direct and REAMP later to tape/disk or during the mix.

8. REAMP the same performance with different amps and stack the sounds for various textures and panning. You can use the REAMP to overdrive the front end of a guitar amp for intense distortion by turning up the REAMP’s trim control to eleven!!

 

Ten Questions about Audacity (DAW)


Question #1     What is Audacity’s default recording bit depth?  32-bit (floating point)

  • Supports 16-bit, 24-bit and 32-bit (floating point) samples (the latter preserves samples in excess of full scale).
  • Sample rates and formats are converted using high-quality resampling and dithering.
  • Tracks with different sample rates or formats are converted automatically in real time.

Recording in 32-bit quality takes a lot more work than recording in 16-bit, and a slower computer may not be able to keep up, with the result being lost samples. If you are recording for immediate export without editing, 32-bit recording may offer no advantage over 16-bit recording if you only have a 16-bit sound device. Most built-in consumer sound devices on computers are only 16-bit (including cheap sound cards).
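A quick way to see what “preserves samples in excess of full scale” means in practice: boost a signal past 0 dBFS, store it both ways, then pull the gain back down. This is a toy sketch, not how Audacity stores its data internally.

import numpy as np

signal = np.sin(np.linspace(0, 2 * np.pi, 100)) * 0.9   # peaks at 0.9 of full scale
boosted = signal * 2.0                                   # now peaks at 1.8 - over full scale

# 16-bit integer path: anything past full scale is flattened for good
as_int16 = np.clip(boosted * 32767, -32768, 32767).astype(np.int16)
back_from_int16 = as_int16.astype(np.float64) / 32767 / 2.0

# 32-bit float path: values above 1.0 are stored as-is, so halving restores them
as_float32 = boosted.astype(np.float32)
back_from_float32 = as_float32.astype(np.float64) / 2.0

print("max error, 16-bit path:", np.max(np.abs(back_from_int16 - signal)))
print("max error, float path :", np.max(np.abs(back_from_float32 - signal)))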

You can change the default sample format Audacity records at to 24-bit or 16-bit by going to the Quality tab of Preferences. You can record at 24-bit depth on Windows (using the Windows WASAPI host), Mac OS X or Linux (using the ALSA or JACK host).

also:

  • Record at very low latencies on supported devices on Linux by using Audacity with JACK
  • Record at sample rates up to 192,000 Hz (subject to appropriate hardware and host selection). Up to 384,000 Hz is supported for appropriate high-resolution devices on Mac OS X and Linux

Question #2     How can I split a long recording into multiple files or CD tracks? 

Automatically: silences between tracks can be detected and labeled using Analyze > Silence Finder (from the legacy Audacity 1.2.6 Plug-in Pack); a scripted sketch of the same idea appears after the manual steps below.

Manually:

  1. Click to place the cursor at the start of the first song
  2. Choose “Add Label at Selection” from the Project menu (or Tracks menu in Audacity Beta). If you wish, you can type the name of the song
  3. Repeat steps 1 and 2 for each song
  4. When you are finished, choose “Export Multiple” from the File menu. When you click the “Export” button, Audacity will save each song as a separate file, using the format and location you choose
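For long recordings, the Silence Finder idea can also be scripted outside Audacity. The sketch below assumes the recording has been loaded as a mono float array at sample rate sr; the threshold, gap length and window size are assumptions to tune by ear.

import numpy as np

def find_song_starts(audio, sr, silence_db=-40.0, min_gap_s=2.0, window_s=0.05):
    win = max(1, int(window_s * sr))
    frames = audio[: len(audio) // win * win].reshape(-1, win)
    rms_db = 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)  # crude level envelope
    quiet = rms_db < silence_db

    starts, silent_run = [0.0], 0
    for i, q in enumerate(quiet):
        if q:
            silent_run += 1
        else:
            if silent_run * window_s >= min_gap_s:
                starts.append(i * window_s)   # a new song begins here
            silent_run = 0
    return starts                             # times (seconds) at which to place labels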

Question #3     How do you capture a quality voice recording?

Audacity can record live audio through a microphone or mixer, or digitize recordings from cassette tapes, records or minidiscs

  • Record from microphone, line input, USB/Firewire devices and others
  • Timer Record and Sound Activated Recording features
  • Dub over existing tracks to create multi-track recordings

Dealing with Technical Issues

Dynamic Range

One challenge of recording people speaking is the uncertainty of the recording level. Speakers vary in volume, and may not be aware of the best microphone techniques, so they may, for example, stand in different positions relative to the microphone. In some cases, such as meetings and conference recording, there may also be remote participants who are being heard through a radio or television receiver. The result is wide variation in recording level

Rather than record at the final bit depth wanted (let’s say 8 bits), with digital recording one can record at a greater bit depth and set the recording level relatively low (say 10 dB to 20 dB below the 0 dB distortion level). This retains plenty of dynamic range but avoids the risk of speakers who are louder than others causing clipping, which would result in unpleasant sound quality
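The trade-off is easy to quantify: each bit is worth roughly 6 dB of dynamic range, so leaving 20 dB of headroom still leaves plenty when you record at a higher bit depth. A quick worked example (theoretical converter figures; real devices do somewhat worse):

def dynamic_range_db(bits):
    return 6.02 * bits + 1.76   # theoretical figure for an ideal converter

for bits in (8, 16, 24):
    total = dynamic_range_db(bits)
    print(f"{bits:2d} bits: ~{total:5.1f} dB total, ~{total - 20:5.1f} dB left "
          f"after 20 dB of recording headroom")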

Sample Rates

There are also pros and cons about recording at different sample rates. The sample rate of the recording determines the highest frequencies that can be captured. Generally, lower sample rates are acceptable in speech recording where they are not in music, because voices (especially male) have a lower upper range of fundamental frequencies than instruments. Also, by the nature of the different sounds made when speaking and singing, it’s less important for quality reasons to capture the higher overtone frequencies in speech. In any case, the higher the sample rate you do record at, the more CPU time and disc space will be used

Multi Channel Recording

Where speakers don’t stand close to the microphone, multi-channel recording helps to keep all speakers above the room noise level, and clearly audible

For meetings it may be useful to place several microphones around the room, recording each microphone on its own channel. The multiple channels can be mixed down to mono later, selecting for each speaker whichever channel gives the highest ratio of speech to room noise. When post-processing, simply choose one channel for each speaker, mute the others, then mix down
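A sketch of that mixdown step, assuming the multitrack recording is an array shaped (channels, samples) and the speaker turns are already known as (start, end) times; picking the loudest channel here is a crude stand-in for "highest ratio of speech to room noise", and regions outside the listed segments are simply left silent in this toy version.

import numpy as np

def mixdown_by_loudest_channel(multitrack, sr, segments):
    mono = np.zeros(multitrack.shape[1])
    for start_s, end_s in segments:             # one (start, end) pair per speaker turn
        a, b = int(start_s * sr), int(end_s * sr)
        rms = np.sqrt((multitrack[:, a:b] ** 2).mean(axis=1))
        mono[a:b] = multitrack[np.argmax(rms), a:b]   # keep the strongest channel, mute the rest
    return mono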

Once again, more channels mean greatly increased CPU use and greater use of disc space. It’s important to test the hardware in multi-channel mode in advance, as running out of CPU capacity could cause recordings to have drop-outs or fail completely

Where simplicity is required, using only two microphones in different positions can still significantly improve the end result

Reliability

The generally more stable nature of Linux or Unix operating systems may mean a reduced chance of a recording failure if you record with these systems rather than with Windows (other things being equal).  Up to date sound card drivers specific to your hardware are more reliable than generic drivers when recording. Be prepared so you can quickly reinstall sound drivers between events if necessary

Shutting down unnecessary applications and processes so that the recording has most of the available CPU to itself  is important – especially on slower and older machines with less RAM. Don’t make text notes on the PC that is recording, or do anything but record with it. Such actions are likely to cause recording skips.  Consider making a checklist for any important recording. You may want to do a last minute check that you’ve got power settings set to always on, screensaver off, levels set right and so on before you record. Backing up audio files as soon as possible using cloud based storage reduces the chance of data loss

Microphones

Never use a built-in microphone that comes with a laptop, MP3 recorder or tape deck. Such microphones pick up lots of noise from the device’s drive or from the deck motors or tape

While microphones are usually set on stands for formal events, for meetings of a handful of people held round a table, microphones on the table may be sufficient. Always place the microphone on something soft and squashy so that sounds and vibrations transmitted through the table – of which there are usually many – are not picked up directly. The microphone lead itself should sit on the squashy item before it reaches the table, as some sound and vibration can be passed up a short length of cable.  The squashy items should be stable however; sponges fail in this respect! Folded clothing works fine, and the informal appearance helps put speakers a bit more at ease.  Alternatively, microphones hung overhead avoid vibration and disturbance

Speaking Technique

Generally the less attention speakers pay to microphones the better their talk, and a way to minimize awareness is to not even mention the subject.  Hidden microphones can put speakers more at ease. They still know the microphone is there, but not being repeatedly reminded of it helps.  A basic way to create a hidden microphone is to cover a black-coloured microphone and its visible section of lead with a layer of lightweight open weave clothing

Question #4     How do I improve recording quality?

The built-in sound card that comes with many computers is quite poor. It may be satisfactory for playing sound effects, but not good enough for high-quality recording. Worst offenders are the built-in sound on your motherboard, or any audio device on a laptop.  Some tips to reduce noise on your current system:

  • Mute playback of devices that you don’t use for recording – such as MIDI Synth, CD Audio, TAD-In, Auxiliary, Microphone, Line In. Only “un-mute” devices to be used.
  • Update sound drivers
  • Consider shielding your soundcard 
  • If possible, insert your soundcard into a PCI slot which has a dedicated “Interrupt Request (IRQ) Channel”, as described in your motherboard handbook. Except for dual processor motherboards, there will probably be 4 electronic IRQ channels used to assign IRQs. (This is not the same thing as the 16 virtual IRQs we usually talk about.) For example, on my ASUS CUV4X mobo, Interrupt Request channel “A” is shared by AGP (reportedly noisier than PCI video cards) & PCI-slot1 (leave blank if AGP is in use) & PCI-slot5 (empty). Int-“B” is shared by AGP & PCI-slot2 (NIC – noisy). Int-“C” is a dedicated electronic channel, taking hardwire interrupt pulses generated solely by the device installed in PCI-slot3 (soundcard). Int-“D” is shared by PCI-slot4 (SCSI – noisy) and USB-controller (mouse, keyboard, etc. – very noisy). If I install my soundcard in any slot other than PCI-slot3, the result is a scratching sound, like a loose connection at an input jack. But, it comes from the mouse pulses (slot controlled by Int-“D”) or from video rewrites (slot controlled by either int-“A” or “B”)
  • Even if you are using a ‘silent PC’ (one with passive cooling rather than a fan) you will still need sound insulation between it and your mic (a piece of felted board will do) as the hard drives are not silent
  • If you are using any outboard (externally powered) audio hardware, make sure all the equipment is plugged into the same power strip. Grounding issues can cause ground loops, which will appear in your recording as a hum
  • Cheap sound hardware, anywhere in the analog chain, will result in poor quality recorded sound

Sound cards

Buy a new sound card – especially if you were using your computer’s built-in audio capabilities before. The sound card’s ADC or analog > digital converter is the final step in your analog audio chain. Consider buying a USB audio external interface. The main advantage is that the A/D converter is then outside your computer’s case, which keeps electrical noise to a minimum – only the digital signal gets transmitted back to your computer. Another advantage is that you don’t have to open up your computer to install anything, just plug it in and go (after installing any software drivers required by the device). It’s easier to share it between multiple computers, too. Make sure other USB devices are unplugged if not being used, as especially USB 1.x has a limited bandwidth. Even things like network cards can interfere with USB audio so disable them

Microphones, more than any other single piece of hardware, will impact the quality of your recorded sound

  • If you are doing studio work, a condenser microphone (rather than a dynamic) will probably be the most suitable. They have greater accuracy and dynamic and transient response compared to dynamic microphones. For live recordings, professional dynamic microphones may be preferable – they will be less prone to picking up extraneous stage and audience noise
  • If you use a professional microphone, you’ll need a good preamp. The “mic” input on a sound card has a preamp behind it, but it’s usually not very good quality and will usually not provide sufficient amplification for the low outputs of pro microphones
  • Note also that built-in computer 1/4 inch mic ports are almost always mono and unbalanced. Built-in computer line-in ports are almost always unbalanced. Unbalanced inputs mean you must keep the cable short to prevent interference and muffling, but this increases the interference risk from being too close to the computer. For this reason, many external USB and firewire recording interfaces will provide balanced inputs and outputs
  • Don’t forget accessories like microphone stands and cables. Use XLR cables 

Question #5     How do you set recording levels?

The level at which you record your audio is very important. If the level is set too low, your audio will have background noise when you turn the volume up to hear it properly. If the level is too high, you will hear distortion. The process of testing the recording level without actually recording is called monitoring. To do this in Audacity, you need to use the Meter Toolbar:

[Image: Meter Toolbar]
If the meters are not visible, click View > Toolbars > Show Meter Toolbar, or in legacy 1.2.x go to the Interface tab of Preferences and check Enable Meter Toolbar

In the image above, the left-hand VU meter with the green bars measures the playback level, and the right-hand meter with the red bars measures the recording level. Assuming you are recording a stereo track, the upper bar marked “L” refers to the left-hand channel, and the lower bar “R” refers to the right-hand channel. The values on the meter are negative decibel values below the distortion level, where the distortion level has a value of zero (0). Hence the smaller the negative values become, and the closer the meter reads to the right-hand edge of the scale, the closer you are to the maximum possible level without distortion

To start monitoring, look at the right-hand recording meter just to the right of the recording symbol, and click the downward-pointing arrow:

[Image: recording meter with dropdown arrow]

This reveals a dropdown menu:

[Image: monitor dropdown menu]

The menu has several options. “Vertical Stereo” will rotate the meter so that the zero level is at the top, and “Linear” will change the meter scale so that the values read from zero to 1.0 where the distortion level has a value of 1.0

Now you can start singing into your microphone, playing your guitar (or your record or tape), and you will see the red recording bands move in real time with the loudness of the input signal. If you can’t hear what you are recording, this doesn’t matter for the purposes of the level test because the meters will indicate the level accurately

For most purposes, an optimal recording level is such that when your input is at its loudest, the maximum peak on the meters is around –6.0 dB (or 0.5 if you have your meters set to linear rather than dB). This will give you a good level of signal compared to the inherent noise in any recording, but without creating distortion. Distortion is often referred to as clipping, because at that point there are not enough bits available to represent the sound digitally, so peaks above it are simply cut off
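The dB and linear meter scales mentioned above are just two views of the same number; a two-line conversion shows why -6 dB and 0.5 line up:

import math

def db_to_linear(db):
    return 10 ** (db / 20)

def linear_to_db(linear):
    return 20 * math.log10(linear)

print(db_to_linear(-6.0))   # ~0.501 on the linear meter scale
print(linear_to_db(0.5))    # ~-6.02 dB on the dB meter scale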

Enlarging the Meter Toolbar by clicking and dragging may help gauge levels more accurately

To adjust the input level itself, use the right-hand slider (by the microphone symbol) on the Mixer Toolbar:

[Image: Mixer Toolbar input slider]

Move the slider left or right until the recording level settles at about -6 dB. If the meter bars drift so far to the right of -6 dB that they touch 0, a red indicator will appear to the right of the meter bar, as in this image where the left-hand channel has at some stage peaked beyond the distortion level:

[Image: Meter Toolbar showing the clipping indicator]

As soon as you see the red indicator you’ll know you have increased the input level too far, so you will need to move the input volume slider back to the left. Note that the achieved recording level is a combination of both the input level you record at and the output level of the source. If you find you achieve near-maximum levels on the recording meter with only a very low setting on the input slider, this may lead to the recording sounding excessively close and unnatural. In this case you may want to cut back the output level somewhat if you can. Similarly, if you can’t get close to maximum levels on the meter even when the input slider is on maximum, try turning up the output level

If you find the meters don’t respond at all to the input slider, double-check that you are recording from the correct input source as selected in the dropdown box to the right of the input slider. If you still have problems, try setting the levels in the system mixer instead. On Windows machines this is done through the Control Panel, and on Mac OS X systems in Apple Audio-MIDI Setup

There is no reason you can’t use a standalone recording meter if you prefer

Monitoring the audio using playthrough

The simplest way to hear what the monitored input sounds like is to go to the Audio I/O tab of Audacity Preferences and enable Software Playthrough. Don’t enable this option if you are recording sounds the computer is playing with the sound device’s “Stereo Mix” or similar option, because this will lead to echoes or even failure of the recording

Question #6     How do you reduce noise?

Noise can be reduced during post-production, by use of various plugins. Typically, they are fed a sample of the noise alone and then subtract that noise from the rest of the recording. To facilitate this, be sure to record a second or two of “silence” before you start the actual performance. This gives you a clean sample of the noise. This works extremely well with low-level background sound like air conditioning

PREVENTION IS BETTER THAN CURE

  • Avoid noise in the first place instead of trying to remove it afterwards
  • Use a sound card with balanced inputs, and use shielded cable, sufficiently long so that you can move the microphone right away from the computer
  • An inexpensive external USB sound card should (other things being equal) be much quieter than the motherboard sound device that came with the computer
  • Place the microphone on a floor stand and ground it separately from the computer, or use a ceiling-suspended microphone
  • Keep all microphone cables away from mains electricity cables

To prevent noise entering your microphone recording:

  • Set the correct input level for your sound sources. Set it as high as possible to increase the separation between signal and noise, but as low as necessary to prevent clipping
  • Use balanced audio connections and shielded cable
  • If you have one at hand, use a hardware limiter, since Audacity cannot process sound in real time. Personally, when recording a band I use a simple foot-pedal limiter on the vocals, because these seem to be the most dynamic sound source. The input level of instruments can be adjusted quite easily
  • When you have found the optimal levels, decrease them about a dB or two when recording music just to be sure. When actually performing, musicians tend to be a little bit louder than in rehearsal mode
  • Shut off every unused sound channel and sound source. Mute unused channels on your mixing board, switch off unused amps and keyboards, and don’t forget to shut the door and the window
  • Especially in a home-recording environment, avoid switching lights or electrical machines on or off during recording, because a spark can cause a click on the track
  • Avoid fluorescent lighting and keep cell-phones a good distance away from any equipment

Noise tends to stay at the same low level and generally cannot be controlled, whereas the recorded source is dynamic and can be controlled, so this is where your potential for noise control lies. Noise that is initially inaudible often comes to attention when a low-level signal is amplified and/or normalized, because normalizing and amplifying increase both the wanted signal and the unwanted noise. The measures described here therefore prevent many of the typical noise problems later

Environmental Noise

Room Noise

  • Cheap PC mics, besides having rotten sound quality, are nearly omnidirectional. Use a directional microphone. If the business end is not pointing at the noise source, it won’t pick up the noise. It may still pick up ambient noise, including any sound originating elsewhere in the room and bouncing off the walls. That will be reduced if you put the microphone right on top of the sound source. When recording vocals, the performer’s lips should almost be touching the mic, and they should sing straight into it (not across the top). Ambient noise will be blocked by the singer’s head
  • Use a noise-blocking stand and a long enough cable to distance the mic from the computer. Often, the vibrations from a computer’s fans and drives will vibrate the computer desk and the surrounding area. If the microphone and its stand are on the computer’s desk, the microphone often will pick up the vibrations and produce a noise on the audio track (often referred to as a “warble” sound: a soft, repeating hum). To help prevent this, use a ceiling-suspended microphone stand or a full-size floor stand that can have its height adjusted. If these (pricey) options are not available, an alternative is to support a desk stand using a sound-insulating lift, such as a flimsy cardboard shoe box or a number of newspapers. These things insulate the noise rather well, making it difficult for any vibration noises to flow through to the microphone. Almost any lift made of non-rigid, flexible material will do
  • Direct connection. If recording instruments like keyboards and electric guitars, feed their signal directly into your sound card’s line input, or to a sound board and then into your PC. Guitars will need preamps. If you’re recording acoustic instruments, use directional microphones placed close to the instrument, or use a pickup with preamp and connect direct
  • Get the desired signal as loud as possible (without clipping) into the microphone. This allows you to reduce the gain, which will also reduce the low-level noise. The further a microphone is away from the source, the more you have to amplify the mic’s input signal to get to a usable level. But, boosting the gain amplifies everything, including background sounds and even the internal electrical noise of the amplifier. Ideally, the microphone should be right on top of the source, with the gain no higher than necessary to get peaks around -3dB. If you are doing multitrack recording, record each individual track as loudly as possible. Set the final volume of each track during post-production mixdown
    Note: placing the microphone “right on top of the sound source” might not be ideal when recording certain sources (such as bowed instruments like violins and cellos). Instead of placing the mic right on top of your sound source without regard to factors other than noise, you should experiment with different kinds of microphone placement until you find one that provides the best sound. If the “optimal” placement is too noisy you can look for other ways to reduce the noise. In the end, nothing beats an ideal recording environment
  • Don’t forget the possibilities of non-technical noise reduction:
    • Turn off your refrigerator and furnace / air conditioner during the recording session
    • Watch out for telephones, cell phones, pagers, ticking clocks, lawnmowers, and the like
    • Avoid locating the recording session near airports, train tracks, and fire stations
    • Hang blankets on the walls, to dampen a live room. Or record in a room with wall-to-wall carpeting
    • If you can, record in a basement, to help isolate your session from outside noise. You’ll probably need to use the blanket-on-the-wall trick here, since concrete walls make good sound reflectors
    • Record late at night to reduce traffic noise leaking in from outside

Electrical noise

60/50 Hz hum and/or crackle

  • A common problem. Make sure all your recording equipment is connected to the same ground. This is easiest to accomplish by plugging everything into the same power strip. Then ground the computer separately from the recording equipment
  • Keep microphone cables well away from mains electricity cables (including those behind walls)
  • Try to use incandescent light bulbs (including halogen lamps); avoid using fluorescent lamps near a signal path (cables and equipment), especially for low-power signal lines such as microphone cables.  Fluorescent lamps often generate a significant amount of high-frequency RF noise, which may then be captured by the cable or the equipment.  Lamps on the ceiling do not usually induce buzzes (because they are far away), but if used in a group of 4 or more, they may introduce buzzes into the power line, which may affect other equipment on the same power circuit.  Power conditioners may be used to alleviate this problem
  • If all else fails, get rid of the hum during post-production by using a de-noise plugin or an extremely narrow notch filter. Crackle will be much more difficult to remove

Remove noise

On virtually any recording you can find noise. It is not always necessary to get rid of it completely. First of all, it is often audible only in very low-volume passages. Second, the average not-too-picky listener will become accustomed to the noise level of your recording. In this regard it is comparable to the odor of a room: when you enter you become aware of it, but once you have been in there for a while you probably cannot smell it anymore. Third, the listener may be hearing your program in the car or while washing dishes, and so may not be in a position to hear the noise at all

Sometimes a completely silent passage, e.g. between sequential parts of a program, can irritate a listener more than a constant low-level noise throughout the mix. This is because complete silence may disrupt the ambiance of the material. There are situations where you actually want to add noise (e.g. in film production, between cuts of the same scene)

So you may want to change your attitude towards noise here: it’s not just dirt that needs to be removed, but a natural part of all listening experiences that has to be dealt with appropriately. In general, we need to accomplish two things: the noise has to stay at a roughly similar level throughout the material, and it should not be so obvious that it cannot be ignored

Hi-Band Noise

Removing hi-band noise is an option you can take or leave. If you apply it, do it at the most basic level you can reach: on unprocessed tracks (for example, before adding reverb), on single tracks, or even on single passages

On multiple tracks:

  • Before doing anything else, increase the tracks to a working level with the Audacity Normalize function. Leave a little bit of headroom when amplifying
  • Mute all passages where there is nothing to be used in the mix. You may use the envelope tool to make silent passages of a track really silent
  • Fade-ins and fade-outs are much better than sharp cut-and-paste edges, which will very likely cause clicks

On a single noisy track, you may want to use the Noise Removal feature of Audacity. You accomplish this task in two steps:

1) You pick a “noise-only” part of the track’s signal. This part should not be longer than approx. half a second. This is a sample of the noise that will be used to compute the necessary changes to the track to remove just the noise (though this idea is always merely theoretical)

You have to be careful in selecting your sample. If you pick a sample that contains not only noise but also a slight part of – let’s say – a reverb tail, you’ll remove that, too. To give another example, the sound of breathing can be quite similar to noise, but it provides a lot of the vitality of a vocal track. Now, select a small portion of noise, call “Noise Removal” and select “Get Noise Profile”. If you are unhappy with your selection, you can repeat this step; each time, the previous sample is overwritten with the new one

2) Now, select the portion of the track that needs noise reduction (in most cases this will be the complete track). Call the “Noise Removal” function a second time. You can click on “Preview” to listen to the first seconds of your selection, or on “Remove Noise” to apply the noise filtering. The less/more slider is fairly self-explanatory and can be adjusted while testing and applying the effect

If you’re unhappy with the result you can Undo all changes on your track and try again.  Noise Removal often helps to reduce hi-band noise such as hissing and (to a certain extent) crackling
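The same two-step idea (learn a profile from a noise-only selection, then attenuate everything that is not clearly above it) can be sketched with simple spectral gating. This is a toy stand-in, not Audacity’s actual Noise Removal algorithm; the threshold, FFT size and the half-second profile are assumptions.

import numpy as np
from scipy.signal import stft, istft

def reduce_noise(audio, sr, noise_clip, threshold_db=6.0, reduction=0.1):
    # Step 1: learn a per-frequency noise profile from the noise-only clip
    _, _, noise_spec = stft(noise_clip, fs=sr, nperseg=1024)
    noise_profile = np.abs(noise_spec).mean(axis=1, keepdims=True)

    # Step 2: attenuate any time/frequency bin not clearly above the profile
    _, _, spec = stft(audio, fs=sr, nperseg=1024)
    keep = np.abs(spec) > noise_profile * 10 ** (threshold_db / 20)
    gain = np.where(keep, 1.0, reduction)
    _, cleaned = istft(spec * gain, fs=sr, nperseg=1024)
    return cleaned[: len(audio)]

# Usage idea: noise_clip = audio[: sr // 2]   # the "silent" half-second before the performance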

Subsonic Noise

While removing hi-band noise might be considered optional, removing subsonic noise seems mandatory to the writer. In contrast to hi-band noise you can apply subsonic reduction as a step in the mastering process, on the mixed and processed material, before a final Normalize

Subsonic or low-band noise can enter your recorded material in many ways, such as through physical vibrations during the recording or noise from the tape machine (if you still use one of those)

Everything below 20 Hz is called “subsonic” because the human ear is unable to perceive it as recognizable sound. You can recognize subsonic noise by eye when the zoomed-in waveform in Audacity is not symmetrical along the time axis. If you have already applied normalization in Audacity as recommended above, this should have removed any DC offset in the recording. The reasons for removing subsonics are the same as with DC offset – they reduce the headroom available on the recording by taking up dynamic range, and can introduce clicks when editing

To remove subsonics from your track you may filter it with:

  • Audacity’s built in Equalizer under the Effect menu
  • Audacity’s built-in High Pass Filter under the same menu – set the cutoff frequency to around 25 Hz. You can repeat this same effect a couple of times if a sharper cutoff slope is desired

After removing subsonic noise you can generally re-normalize your track, and it will appear louder yet much more defined in the bass
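For completeness, the same clean-up can be scripted: a gentle high-pass around 25 Hz applied twice, mirroring the advice above to repeat the effect for a steeper slope. The function name and parameters are assumptions; audio is a mono float array at sample rate sr.

from scipy.signal import butter, sosfilt

def remove_subsonics(audio, sr, cutoff_hz=25.0, passes=2):
    # Second-order high-pass; run it more than once for a steeper overall slope
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    for _ in range(passes):
        audio = sosfilt(sos, audio)
    return audio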

Question #7     How do you repair “popping” vocals?

Manually fixing ‘breath sounds’ on a vocal recording can take an inordinate amount of time – so if possible avoid them in the first place. A ‘pop shield’ between the vocalist and the microphone can help, and can be made cheaply. Should you still have popping or percussive vocals, here’s how to repair them, though the result will never be as good as a clean original recording:

1) Make sure the recording’s DC offset is zeroed. (This in itself will eliminate one possible cause of clicks generated by subsequent edits and silences and should be done before you do any editing). Do this by selecting the whole track and choosing the Normalize effect.  In the resulting box make sure you’ve only selected “Remove DC offset”

2) Zoom in on the percussive sound. They’re easy to spot: they look like a single large waveform just before the rest of the sound

3) Select this waveform and then apply the Fade In effect. This will soften the percussive sound and hopefully solve your problem

4) Since these percussive sounds are mostly very low in frequency, some users have reported great success using the ‘high pass’ effect instead of the ‘fade in’ effect as suggested in step 3), above. Note that the ‘high pass’ effect can be repeated multiple times on the same selection. This approach has an additional advantage of not interfering with or reducing the level of higher frequency sounds, an advantage when the vocal percussive sound was recorded along with other instruments or sounds
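As a rough code equivalent of steps 1) and 3), assuming the vocal has been loaded as a mono float array and the plosive’s sample positions are known (both assumptions for illustration):

import numpy as np

def remove_dc_offset(audio):
    # Step 1: centre the waveform so later edits don't click
    return audio - audio.mean()

def fade_in_region(audio, start, end):
    # Step 3: ramp the selected plosive from silence up to full level
    fixed = audio.copy()
    fixed[start:end] *= np.linspace(0.0, 1.0, end - start)
    return fixed

# For the step 4) alternative, a repeated high-pass like the subsonic sketch
# earlier in this post would serve the same purpose.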

Question #8     What file types may be imported/exported?

  • Audacity does not care what the extension of the file is. If the file is well-formed, libsndfile will correctly detect the format and import the file appropriately. Audacity fully supports 24-bit and 32-bit samples and virtually unlimited sample rates.
  • Import and export WAV, AIFF, AU, FLAC and Ogg Vorbis files
  • Fast “On-Demand” import of WAV or AIFF files (letting you start work with the files almost immediately) if read directly from source
  • Import and export all formats supported by libsndfile such as GSM 6.10, 32-bit and 64-bit float WAV and U/A-Law
  • Import MPEG audio (including MP2 and MP3 files) using libmad
  • Import raw (headerless) audio files using the “Import Raw” command
  • Create WAV or AIFF files suitable for burning to audio CD
  • Export MP3 files with the optional LAME encoder library
  • Import and export AC3, M4A/M4R (AAC) and WMA with the optional FFmpeg library (this also supports import of audio from video files).
  • Audacity also supports virtually any uncompressed format using the Import Raw function. With this function you can also import Sound Designer II files (used in the Mac world).
  • You can import multiple files at once by shift-clicking or control-clicking on multiple files in the Open or Import dialog boxes. Alternatively, drag multiple files to your Audacity window. (On Windows, drag the files to the Audacity project window, and on Mac OS X, drag the files to Audacity’s icon in the Finder or in the Dock)
    • Hint: Audacity may not realize the file is an MPEG file unless it has an appropriate extension. To be sure, try renaming it so that it ends in “.mp3”; then, if libmad can open it, Audacity will import it.
    • Audacity imports ID3 tags from MP3 files, which give the Artist, Title, Album, and other song info, using libid3tag. You can see these tags by selecting “Edit ID3 Tags…” from the Project menu. Audacity will let you save these tags if you export an MP3 file. You can write either ID3V1 tags or ID3V2.3 tags.

Managing projects

  • Audacity projects consist of a project file (.aup) and a corresponding data directory. Audacity project files are just XML, so you can read them using any text editor or XML reader. The file format is intended to be specific to Audacity, but open so that others can work with it if they’re interested. Audacity projects store everything that you see in the Audacity window. They open and save very quickly, so you can continue your work where you left off.

Audacity project files are not intended to be used as a portable format, or as the primary way to store your audio. Export as a common supported format like WAV, AIFF, or MP3

  • When you create a new project, Audacity writes data to a temporary directory. You can set the location of this directory (folder) in the Preferences dialog.
  • To save time, Audacity doesn’t make a copy of files when you import them – instead, it saves a reference to the original file in the project. If you prefer Audacity projects to be self contained, you can choose to always make copies in the Preferences dialog on the File Formats tab.

Question #9     What type of editing tools are available?

Editing audio

  • Unlimited Undo lets you revert actions all the way back to when you first opened the project.
  • Undo History window lets you see all of the changes you’ve made, and quickly jump back to a previous point.
  • Audacity splits tracks into small blocks internally, so large cut and paste operations are quick because they don’t require rewriting the entire track each time a change is made. This is different than the Edit Decision List system used by many other editors, but the effect is similar: editing is quick, and it’s easy to Undo.
  • Audacity displays the current cursor position or selection bounds in a status bar in the bottom of the project window. You can change the units using the “Set Selection Format” option in the View menu.
  • Lots of basic editing operations:
    • Save/restore selection
    • Cut
    • Copy
    • Paste
    • Trim (delete everything except selection)
    • Delete
    • Silence
    • Split
    • Duplicate
    • Find Zero Crossings
  • Modify cursor and selection using arrow keys (modify with Shift and Control)
  • Envelope editor lets you adjust the relative volume of tracks over time. Just select the envelope tool (the one with the two white diamonds pointing towards a center control point) and click on a track
  • Drawing tool has three options to edit individual samples (zoom in first):
    • Click: change samples
    • Alt-click: Smooth
    • Ctrl-click (and drag): change just one sample
  • Multi-mode tool lets you select, modify envelopes, edit individual samples, and zoom, all from one tool. Which tool is active is based on the exact location of the mouse

Mixing, panning, and warping

  • Audacity automatically mixes when you have more than one track open. It automatically resamples as necessary.
  • Each track is designated as either Left, Right, or Mono. When you see a stereo track (two tracks joined together), the top one is the Left Channel, and the bottom one is the Right Channel. To change this, use the track pop-down menu.
  • Each track has a gain control that you can use to adjust its volume.
  • Each track also has a panning control that lets you give it relatively more volume in the left or the right channel.
  • Adding a Time Track lets you warp the speed of playback over time.

Question #10     What built-in special effects are available?

  • Audacity has many built-in effects and also supports plug-in effects in the LADSPA, VST, and Nyquist formats.
    • Change the pitch without altering the tempo (or vice-versa)
    • Remove static, hiss, hum or other constant background noises.
    • Alter frequencies with Equalization, Bass and Treble, High/Low Pass and Notch Filter effects.
    • Adjust volume with Compressor, Amplify, Normalize, Fade In/Fade Out and Adjustable Fade effects.
    • Remove Vocals from suitable stereo tracks.
    • Create voice-overs for podcasts or DJ sets using Auto Duck effect.
    • Run “Chains” of effects on a project or multiple files in Batch Processing mode.
    • Other built-in effects include:
      • Echo
      • Paulstretch (extreme stretch)
      • Phaser
      • Reverb
      • Reverse
      • Truncate Silence
      • Wahwah