Wait, both at the same time? Why on Earth would you do something like that? I'm happy to tell you.
There's a lot of information you can include with podcast episodes. Some shows publish just the title, the date, and a brief description of the contents. Others provide a lot of detail about each episode, including a list of relevant resources and references. If you want to be comprehensive, you provide a full transcript for the show.
Let's take a step back... why provide a full transcript? As an advocate of accessibility, I believe in making the show usable by as many people as possible. For most users, listening to the show is sufficient, but what if you can't listen to the show? Closed captioning for podcasts is a limited technology, and it's typically used with videocasts rather than audio-only podcasts. Therefore, I provide a full transcript for each "The Testing Show" episode.
Transcribing a podcast is slow, time-consuming work. What about speech recognition software? Yes, I've tried quite a few options. In most cases, I get partial results mixed with a lot of stalling and large areas of text that need correcting. I've experimented with using Soundflower to direct WAV audio to a text file. When it's just my voice, speaking slowly, I get a good hit rate of spoken words to transcribed text. The more speakers on a recording, the lower that hit rate gets. Between editing and fixing the errors that appear in the transcript, I don't see any real time savings. Therefore, I kick it old school and manually transcribe the shows.
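If you want to experiment with the automated route yourself, here's a minimal sketch using the SpeechRecognition Python package against a WAV export. The file name and the choice of the free Google Web Speech recognizer are assumptions for illustration, not what I use for the show.

```python
# transcribe_sketch.py -- rough automated transcription of a WAV file.
# Assumes: pip install SpeechRecognition, plus a WAV export of the episode.
# The file name below is a placeholder.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("episode_segment.wav") as source:
    # Read the whole file into memory; for a full episode you would
    # likely chop it into shorter segments first.
    audio = recognizer.record(source)

try:
    # Free Google Web Speech API; works best on short, clearly spoken clips.
    text = recognizer.recognize_google(audio)
    print(text)
except sr.UnknownValueError:
    print("Could not make out the speech in this segment.")
except sr.RequestError as err:
    print(f"Recognition service unavailable: {err}")
```

As mentioned above, expect a single slow-speaking voice to fare far better with this approach than a multi-speaker panel recording.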
"Dude, you can totally farm that work out to other people". I've done exactly that on more than a few occasions. When I am far ahead of the deadline and I feel the conversation is clear and concise, I am willing to have other people (read: pay) do the transcription. To have that be effective, I need to complete audio editing at least a week before the deadline. Sometimes, that's easy to do. Other times, not so much. Real life finds ways to take away from podcast production time, especially since I don't do this full-time. If I can't guarantee a long enough lead time to have a service do the transcription, I do it myself. If you do decide to have a service do your transcription, I give high marks to "The Daily Transcriber".
Waveform editor on the left of me. Text editor on the right. Here I am, stuck in the middle with you ;).
Along with a transcript, I also provide what I refer to as a "grammatical audio edit" for each show. What's a grammatical audio edit? It's where I go through each statement from each speaker and remove elements that would not flow well in a written paragraph. That includes verbal tics (the "um", "ah", "like", "you know"), repeated sequences, tangents, semantic bleaching, etc. Realize, I cannot magically fix the way people speak. At a certain point, I have to let them say what they will say in their own style, and any transcript will, of course, reflect this. I do a word-for-word scrubbing of the recorded audio. Since I'm already editing second by second, transcribing at the same time is a reasonable approach. I listen to a section of dialogue, edit and sequence the conversation with a reasonable cadence, and while I'm doing that, I type out (or use Apple's Dictation option, activated by pressing "fn" twice) the words as they were recorded.
To this end, it's important that you have already done a rough edit of the podcast. You should know which sections you are going to keep and which ones you are going to "leave on the cutting room floor," silence out the sections you're dropping, and then run "Truncate Silence" to squeeze everything together. This way, you know the sections you are editing and transcribing will be in the finished podcast. You can always add a section back later if you change your mind, but removing a section you've already fully edited and transcribed is frustrating. Minimize this if you can.
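For the script-minded, here's a minimal sketch of the same "silence it out, then squeeze it together" idea using the pydub library. This is an analogue of Audacity's Truncate Silence rather than the feature itself, and the file names and thresholds are assumptions you would tune for your own recording.

```python
# tighten_sketch.py -- squeeze long silenced gaps out of a rough edit.
# Assumes: pip install pydub (plus ffmpeg for non-WAV formats).
from pydub import AudioSegment
from pydub.silence import split_on_silence

# Load the rough-edited WAV (sections to drop have already been silenced).
episode = AudioSegment.from_wav("rough_edit.wav")

# Split on the silenced gaps, keeping a little padding around kept speech.
chunks = split_on_silence(
    episode,
    min_silence_len=2000,   # treat gaps of 2+ seconds as "silenced out"
    silence_thresh=-50,     # dBFS level below which audio counts as silence
    keep_silence=300,       # keep 300 ms of padding so speech isn't clipped
)

# Stitch the kept sections back together, squeezing out the long gaps.
tightened = sum(chunks, AudioSegment.empty())
tightened.export("tightened_edit.wav", format="wav")
```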
GEEK TRICK: If you use Audacity, you can use the Transcription tool. It slows down or speeds up the audio to a rate that you determine, and it has its own playback button that plays the audio at that designated speed. It also lowers or raises the pitch of the audio, which can be an annoyance. Still, for making sense of a fast passage, or for listening at the pace that you type, this feature is helpful. In fast playback mode, the Transcription tool is also handy for checking levels between speakers.
Audacity's Transcription Tool. Slow down or speed up audio.
"Dude, that's overkill". It certainly might be. If you don't want to provide a full transcript, you don't have to. Clear and interesting show notes and a catchy embedded description with the show will do a lot to help get the point across about each episode. Some cool examples of embedded show notes for episodes are the "Back 2 Work" and "CodeNewbie" podcasts, in that they include almost all of the details of the show and resource links. Some shows include timestamps along with their show note links ("Greater Than Code" and "Ruby Rogues" are both good examples of this).
Something else I would encourage, if you want to go the route of detailed show notes, is to develop the notes while the show is happening. That's hard if you are the only person recording the show or you are doing a one-on-one interview. It's easier if you have a panel of speakers. As the show runner, I try my best to keep track of what people are talking about. If I hear a comment about a talk, a video, an article, or something else I think might be helpful to reference, I jot down a quick note in my schedule sheet so I know generally where to look for it later.
GEEK TRICK: Here's my basic method for transcribing and writing show notes.
1. Create a header. In that header, make a list of everyone speaking on the show. Confirm name spelling and pronunciation, etc. This way, it's easier to know who you are listening to and how to tag each line of speech.
2. Create a macro that expands the speaker tags for your regular contributors, and add new names as you go. For this, I put an initial and a colon in front of each line of speech, such as "ML: " (yes, preserve the space ;) ). When I'm finished editing, I run the macro and it does a find and replace on all of the "ML: " tags, swapping them out for "MICHAEL LARSEN: ". Same for all of the other names I've gathered. One run and done (see the script sketch after this list for the same idea in code).
3. I use "Insert Endnote" option each time I come across something I want to provide as a show note reference/resource. This creates a running list of resources at the end of the document. If I have the link to the reference, I include it while I am in edit mode. If I don't or I'm offline at the time (often since I do a lot of the editing and transcribing while I'm sitting on a commuter train) I make the list with as much detail as I can, then fill in the link later after I've had a chance to look it up.
Every show should start with a descriptive paragraph of copy. It should be fun, interesting, and hopefully engaging. As I stated in the first post of this series, sometimes I find this to be the most difficult part.
A few final details: I add metadata tags to the podcast. At this point in time, I keep them very basic; I list the name of the show, the title of the episode, the episode number, the year published, and a genre tag designating it as a podcast. Also, to preserve the audio, I export the final podcast in Ogg Vorbis format and then convert it to MP3 using Max (which I like because it makes it simple to tag with metadata and to add cover art). From there, I upload it to the shared folder we all use and alert the folks at QualiTest that we have an episode ready to publish; they handle updating their website and posting to iTunes, Libsyn, and their RSS feed.
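If you'd rather tag from a script than through a GUI, here's a minimal sketch using the mutagen Python library to write the same basic fields. The file name and tag values are placeholders, and mutagen is my illustration here, not part of the workflow described above.

```python
# tag_episode.py -- write basic ID3 metadata to the exported MP3.
# Assumes: pip install mutagen. File name and values are placeholders.
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

path = "the_testing_show_episode.mp3"  # hypothetical file name

try:
    tags = EasyID3(path)
except ID3NoHeaderError:        # file was exported without any ID3 header yet
    tags = EasyID3()

tags["album"] = "The Testing Show"     # name of the show
tags["title"] = "Episode Title Here"   # episode title (placeholder)
tags["tracknumber"] = "42"             # episode number (placeholder)
tags["date"] = "2017"                  # year published (placeholder)
tags["genre"] = "Podcast"              # designate it as a podcast
tags.save(path)
```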
Next time, let's talk about ways to encourage people to download, listen to and share your podcast.