Once again, just about all of these definitions came from textbooks, but I’ll provide more insight and examples as we go along. I know terms like these can seem daunting, but understanding them will give you a better grasp of the craft as a whole.
This is extra material beyond the in and out points. If you did not have this extra footage, you would not be able to perform dissolves at the beginning and end of the clip.
Really, this is something that has to be done on the set during filming. If it’s not, you can’t do much about it in editing; by then it’s already too late. Your only real option is to do a freeze frame and drag it out to use as your handles, but that’s a “no other option” kind of solution. It’s not preferred.
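If you want to put a number on it: a dissolve that straddles a cut needs half its length in handle frames from each clip. Here’s a quick back-of-the-envelope sketch in Python (the function name is mine, not any editing API):

```python
# Rough sketch: frames of handle needed for a centered dissolve.
# A dissolve that straddles a cut draws half its length from each
# clip beyond the in/out points. Function name is hypothetical.
def handle_frames_needed(dissolve_seconds, fps):
    """Frames of extra footage required on each side of the cut."""
    return int(round(dissolve_seconds * fps / 2))

# A one-second dissolve at 24 fps needs 12 extra frames on each clip.
print(handle_frames_needed(1.0, 24))  # 12
```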
The space between the top of the character’s head and the top of the frame.
Again, this is something that is dealt with during filming, but it can have consequences in editing. Even though you’re handling post-production, you need to be aware of the production aspects as well. You have to be on the lookout for things like poor headroom and framing while you’re examining the footage. This way you know which takes to use and which to toss.
Colors present in a video signal that are not supported by the current video playback system. This can result in the image being displayed incorrectly and is especially important when preparing content for TV broadcasts, as NTSC televisions have limited color support.
This is yet another reason why color correction is necessary on just about every scene. If you don’t pay attention to which colors are legal and which are illegal, your final product might not look the way you want it to. Some colors and frames will appear off. Use color balancing to fix this problem.
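To make the idea concrete, here’s a crude sketch of “legalizing” 8-bit luma levels, assuming the common Rec. 601 studio range of 16–235. Real scopes and broadcast-safe filters are far more sophisticated; this just shows the clamping idea:

```python
def clamp_broadcast_safe(value, lo=16, hi=235):
    """Clamp an 8-bit level into the studio-swing range used by
    SD broadcast (Rec. 601 keeps luma between 16 and 235)."""
    return max(lo, min(hi, value))

frame = [0, 128, 255]  # one scanline of 8-bit luma values
safe = [clamp_broadcast_safe(v) for v in frame]
print(safe)  # [16, 128, 235]
```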
The timecode value at which a clip begins.
The process of inserting a clip onto a timeline and pushing content aside to make room for it. In this method, no content is overwritten.
The type of edit you use all depends on the context. If you’re just getting the scene laid down on the timeline for the first time, you may use an insert edit to get everything down and then mix it all up. It can also be used to sneak cutaways in between your shots.
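If it helps to picture it, a timeline behaves a lot like a list, and an insert edit is a list insertion: everything downstream shifts later, and nothing is lost. A toy sketch (clip names and the function are made up):

```python
# Toy timeline as a list of clip names; an insert edit pushes
# everything after the insert point later instead of overwriting.
timeline = ["shot_A", "shot_B", "shot_C"]

def insert_edit(timeline, index, clip):
    timeline.insert(index, clip)  # nothing is lost, clips shift right
    return timeline

print(insert_edit(timeline, 1, "cutaway"))
# ['shot_A', 'cutaway', 'shot_B', 'shot_C']
```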
The process of creating an image from two fields that combine into a full picture. This is the opposite of progressive scanning, in which the image is composed of single complete frames, resulting in increased visual quality.
This is fairly common in the digital age now, and for the most part it won’t be something you’ll mess with in editing. It’s automatic and unless the footage is shot in a certain way, de-interlacing won’t do anything extra for you.
This is used in animation to calculate the motion in between two user-generated keyframes so that each frame does not need to be animated manually. This speeds up the process and makes the resulting animation smoother.
This is also called ‘tweening,’ which refers to the frames between keyframes. It makes life easier when animating in Flash or After Effects, since you won’t have to animate everything frame by frame. However, there are times when doing it manually is the only way to get the effect you want.
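Under the hood, basic tweening is just linear interpolation between two keyframes. A minimal sketch (real apps layer easing curves on top of this; the function name is my own):

```python
def tween(kf_start, kf_end, frame):
    """Linearly interpolate a value between two (frame, value) keyframes."""
    f0, v0 = kf_start
    f1, v1 = kf_end
    t = (frame - f0) / (f1 - f0)  # 0.0 at the first keyframe, 1.0 at the second
    return v0 + t * (v1 - v0)

# Opacity animated from 0 at frame 0 to 100 at frame 10:
print([tween((0, 0), (10, 100), f) for f in (0, 5, 10)])  # [0.0, 50.0, 100.0]
```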
Titles that appear on their own between footage. Commonly seen in silent movies to substitute for dialogue, but sometimes used as chapter headings, such as in Kill Bill.
This isn’t something that you see very often anymore, outside of things like chapter headings (Scott Pilgrim vs. The World is another great example of their use). As with all things in editing, make sure that if you do use them, they fit the needs of the story and aren’t there just to look cool.
To move forward or backward through video by playing it one field or frame at a time.
As an editor, you’ll actually find yourself doing this a lot (or at least you should). Sometimes the perfect edit comes down to a single frame. So when you’re fine-tuning an edit or transition, it pays to go over it back and forth one frame at a time. This will help you spot any errors you meant to cut out. It also comes in handy for syncing up sound effects and things like that. Trust me, get used to seeing your project one frame at a time.
A cut in which the action does not completely match that of the preceding shot, causing characters to “jump” to a slightly different position. This is generally a mistake but is sometimes used for creative effect, such as to simulate the passing of time.
If you’re going to intentionally use a jump cut, make sure that it’s an obvious style choice for the film. Don’t just throw it in randomly without some sort of precedent showing it’d be a stylistic choice. Otherwise it will look like a mistake instead of a deliberate move on your part.
The horizontal spacing between textual characters. This deals with typography. It’s a design aspect and understanding good type layout can make your titles pop out and be effective. Not understanding typography design aspects can make your titles look cheaply done.
A frame that contains a record of specific settings (e.g. scale, rotation, brightness, etc). By setting multiple keyframes, you can adjust these parameters as the video plays to animate certain aspects. For example, you could set a keyframe for brightness at 100% and then set one at 50% when the camera enters a bright area.
Keyframes are very important when it comes to animating. And I’m not just talking about cartoons. If you want to have full control over fades/transitions, you’re going to be using keyframes. Using a matte to remove something from your scene or putting it in there? You’re going to be using keyframes in order to get it to look right.
An informal term for compositing two images together using mattes created from color information (chroma key), brightness information (luma key), the difference between the two images (difference key) or by using a manually-created matte.
We’ve been over this when we talked about compositing. Once again these are things that require attention to detail and patience in order to make the effect work.
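For a feel of what a keyer is doing, here’s a crude chroma-key sketch: pixels close to the key color become transparent, everything else stays opaque. Real keyers use much smarter math, and the distance threshold here is an arbitrary assumption:

```python
def chroma_matte(pixel, key_color, tolerance=100):
    """Return 0.0 (transparent) for RGB pixels close to the key color,
    1.0 (opaque) otherwise. A squared-distance threshold is a crude
    stand-in for what real keyers do."""
    dist2 = sum((p - k) ** 2 for p, k in zip(pixel, key_color))
    return 0.0 if dist2 < tolerance ** 2 else 1.0

green = (30, 200, 40)  # the screen color behind the actor
print(chroma_matte((35, 195, 45), green))  # 0.0 -> keyed out
print(chroma_matte((200, 60, 50), green))  # 1.0 -> kept
```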
A cinematic montage experiment in which shots of a person’s face were cut between various other shots, giving the impression that he was emoting to all of these events but in actuality the shots of his face were identical each time.
This is an important study for us editors. It truly shows the power editing has when it comes to shaping the story and overall tone of a film. Keep it in mind when you’re arranging your shots and juxtaposing images. You want to be sure you’re giving off the right vibe and leading the audience where YOU want them to go.
The vertical spacing between lines of text. Also referred to as line spacing. Like kerning, this deals with typography. Just good things to keep in mind.
The process of fitting a 16:9 image on a 4:3 screen by placing black lines at the top and bottom.
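The arithmetic behind the black bars is simple: fit the image to the screen’s width, then split the leftover height between top and bottom. A quick sketch (the function is my own, not any library’s):

```python
def letterbox_bars(screen_w, screen_h, aspect_w=16, aspect_h=9):
    """Height in pixels of each black bar when fitting a widescreen
    image onto a narrower screen."""
    scaled_h = screen_w * aspect_h / aspect_w  # image height after fitting the width
    return (screen_h - scaled_h) / 2

# A 16:9 image on a 640x480 (4:3) screen:
print(letterbox_bars(640, 480))  # 60.0 pixels of black top and bottom
```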
A form of video editing in which cuts are laid out sequentially, one by one, to produce the final scene. This is in contrast to non-linear editing in which cuts can be performed in any order.
This is pretty much extinct now. This is the original form of editing where you had to lay out your footage in the correct order from start to finish. Now with non-linear editing, editors are free to work on scenes and cuts that are outside the sequential order of the film. While this is where editing started from (so it’s good to remember) it’s something you’ll almost never see anymore.
A record of start and end timecode, reel numbers, scene descriptions and other information for a specified clip.
This is some of the most important paperwork an editor can have. One log is usually made on the set during production, and you’ll make another as you watch the dailies. If you don’t have the luxury of dailies, you’ll need to insist to your director on having a well-kept log. This will save you hours of searching through clips for a specific shot or performance. Keep this log on your desk pretty much at all times.
The process of matching the motion of a computer generated object with the motion of the camera or an object in the scene in order to blend it seamlessly within the scene.
Being off by even a frame will make the footage appear flawed and out of place. Matchmoving is incredibly important when it comes to integrating your composited elements. It requires painstaking detail and a lot of patience and practice. Fortunately, there are entire programs dedicated to easing this process, but a hands-on approach will be needed for fine-tuning.
An image mask that is used in visual effects to control which parts of the image the effect will be applied to.
A matte determines what you block out of a frame and what you keep in it. This is used to eliminate a piece of the scene, like the sky, and can then be used in order to fill it back in with something; like a different view of the sky with clouds, something in the background, etc. Working with these can be tedious but can also add a lot to your footage.
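Once you have a matte, the composite itself is a per-pixel blend: the matte decides how much of the original foreground versus the replacement background shows through. A simplified single-channel sketch (names and values are illustrative):

```python
def composite(fg, bg, matte):
    """Per-pixel blend: matte 1.0 keeps the foreground, 0.0 shows
    the background (single-channel values for simplicity)."""
    return [f * m + b * (1 - m) for f, b, m in zip(fg, bg, matte)]

sky_replacement = composite(
    fg=[200, 200, 50],      # original frame (last pixel is sky)
    bg=[0, 0, 90],          # replacement sky plate
    matte=[1.0, 1.0, 0.0],  # 0.0 where the sky was matted out
)
print(sky_replacement)  # [200.0, 200.0, 90.0]
```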
The process of creating 3D objects inside a computer, similar in many ways to the process of sculpting.
If you’re going to use CGI in any way, shape, or form, you’re going to encounter the modeling process. This is where the CGI object is created. Whether it’s a ball, car, box, or even a full person, it will have to be modeled. This is an entire job all on its own. So unless you plan on taking a few years for your post-production, don’t try to edit and model everything on your own. Get some assistance from a person who is skilled at it.
Shooting without recording sound. Originated from a German to English mistranslation.
This is something done on the set obviously, but hopefully this type of footage will be marked on the log for you. Pay close attention to any scenes shot MOS, since you’ll need to add in background noise, roomtone, and other sound effects in order to still make it feel a part of the scene.
Visual interference caused by a mismatch between the camera’s frame rate and the motion of the object. The most common example is filming a computer or television screen: the screen flickers, or a line scans down it, because the camera and the display are not synchronized.
More than likely you’ve encountered this at some point, if you’ve ever tried to capture any footage from a computer or TV screen. It’s impossible to disguise, and if you have a motion artifact in your footage you shouldn’t use that footage. It’ll just look amateurish and cheap. There are ways to adjust the frame rate on your camera to compensate and get rid of the artifacts, or you can film a blank screen and composite an image on top of it later.
The “streaking” effect caused when an object passes quickly across the screen. This is because the object is in many positions during the exposure of one frame of film.
Important to remember, since many people now associate motion with this type of blur. People expect it, and the eye is generally used to it. High-speed cameras don’t show motion blur, and sometimes that’s the look you want. The main thing to remember about motion blur is when it comes to adding in other objects (CGI, animations, etc.). Without some artificial motion blur where necessary, an added object will stand out from the rest of the frame and appear ‘wrong’ to the eye.
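The amount of blur is easy to reason about: it’s just how far the object travels while the shutter is open. A tiny sketch, assuming the classic 180-degree shutter at 24 fps (so each frame exposes for 1/48 of a second); the function name is my own:

```python
def motion_blur_streak(speed, shutter_open):
    """Length of the streak left by an object moving at `speed`
    (pixels per second) during one exposure of `shutter_open` seconds."""
    return speed * shutter_open

# At 24 fps with a 180-degree shutter, each frame exposes for 1/48 s:
print(motion_blur_streak(speed=480, shutter_open=1 / 48))  # 10.0 pixel streak
```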
Motion Capture (Mo-Cap)
The process of digitally recording an actor’s movement in order to apply this movement to a computer-generated object.
You’ve probably heard this term a lot these days, since it’s now a highly used technique for digital characters. Instead of just animating a character in full CGI (expressions and all) mo-cap is now used in order to digitally record an actor’s movements and facial expressions. Digital artists can then map their CG textures on top of these. This gives CGI characters a far greater depth of realism and emotion. Still fairly expensive though, so on a smaller set you probably won’t encounter this.
The process of controlling the motion of the camera by computer in order to obtain precise control over its movement. Commonly used to match up a model with a live-action shot in order to composite the two together later.
This is a fun thing to do, but you always have to be careful. Sometimes the director or director of photography sets up a shot and the camera in a specific way for a specific goal. So don’t go too overboard with adjusting the camera. Be subtle with it.
Extra information in a video or audio signal that is not intended to be present.
This could also be called white noise or distortion. Any kind of noise on a picture or audio track is bad news and nearly impossible to cover up. Unless your film’s visual style calls for some noise on the footage every once in a while, you might as well count the footage or audio as a loss. There are some ways to limit it, though, so give them a try before tossing it out. Just don’t waste all of your time trying to recover footage if you don’t have to.
Non-Drop Frame Timecode
Timecode that counts every frame and does not compensate for the inaccuracies that occur when 29.97 fps is converted to 30. Thus the frames are accurate but the time is inaccurate.
Just be careful if you find this setting on your editing platform. You’ll end up throwing off your actual run-time and it can cause all kinds of havoc when it comes to syncing. Just be aware of it.
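Here’s a sketch of the drift: non-drop frame timecode labels every 30 frames as one second, but at NTSC’s real 29.97 fps those frames take slightly longer, so a labelled hour runs about 3.6 seconds long (the function name is my own):

```python
def ndf_timecode(frame, fps_label=30):
    """Non-drop frame timecode: every frame is counted, labelled as
    if the rate were exactly `fps_label`."""
    ff = frame % fps_label
    ss = frame // fps_label
    mm, ss = ss // 60, ss % 60
    hh, mm = mm // 60, mm % 60
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

frames_in_an_hour = 30 * 60 * 60        # 108000 frames labelled as one hour
print(ndf_timecode(frames_in_an_hour))  # 01:00:00:00
# ...but at the real NTSC rate those frames take longer than an hour:
print(frames_in_an_hour / 29.97 - 3600)  # ~3.6 seconds of drift
```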
An editing system in which edits can be performed at any time, in any order. Access is random, which means that the system can jump to specific pieces of data without having to look through the whole footage to find it. Computer software such as Avid and Final Cut Pro are examples of non-linear editing systems.
This is the standard of editing now, and pretty much all you will be working with, especially on a consumer (at-home) level. It truly is the best way to edit, as it provides unlimited possibilities instead of confining you to a linear style. It allows for on-the-go editing: you can start working on footage the day it’s shot instead of having to wait on the rest of the film.
Standard United States broadcasting system for standard-definition television. It is broadcast at 29.97 fps (usually rounded to 30) at various resolutions including 640×480, 648×486, 720×486 and 720×540. The differences in resolution are based on whether the image is displayed with square or rectangular pixels.
Broadcast standards are something you need to be familiar with. Whatever format you plan on showing your project in (TV, theater, film fest, DVD, etc.), you need to know which standards you’ll be delivering at. You always want it to look the best it can for each format. So if you’re going to put it on a big screen, adjust the resolution accordingly; same for TV and the Internet.
The process of editing a project at a lower resolution than the final output, in order to cut equipment costs or reduce disk space – or in the case of film projects, to preserve the original negative.
While not the ideal way to edit things together (you won’t be able to see the finer details to help match stuff up), it’s a great way to save some disk space while doing the rough cut of the film. For the fine cut and compositing, you’ll want to boost it back up.
After an offline edit, the sequence is then reassembled using high resolution media for the final output, normally using an EDL as a reference. This means that only the footage used in the final output needs to be recaptured, thus saving on storage space.
Again, sometimes you’ll want to do an online edit before the ‘final’ output, but even doing one just from a rough cut can save tons of space.
A measure of the transparency (or lack thereof) in an image, which is important when compositing. Opacity information is stored in an image’s alpha channel.
When you need to manually control a fade or dissolve, you’ll be messing with the opacity of an image. Want to superimpose an image on top of your footage (like the old-school music videos did)? Opacity is what you’ll need to adjust. Sometimes you’ll use it just to help line things up for compositing, when you need to see one image through another to get something in the right place.
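A fade or dissolve boils down to a weighted blend driven by opacity. A minimal single-value sketch (the function name is my own):

```python
def dissolve(pixel_a, pixel_b, opacity_b):
    """Blend two pixel values; as opacity_b ramps from 0.0 to 1.0
    the image dissolves from clip A to clip B."""
    return pixel_a * (1 - opacity_b) + pixel_b * opacity_b

# Halfway through a dissolve from a bright pixel to a dark one:
print(dissolve(200, 40, 0.5))  # 120.0
```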
The timecode value at which a clip ends.
An editing method in which existing data is overwritten when dragging a new clip onto a timeline.
This is the beauty of non-linear editing! Suddenly don’t like the cutaway or shot you used for one of your scenes? No problem. Merely find a good replacement shot and drop it in on top. Your previous shot will be gone and you didn’t have to mess with the rest of the scene. This gives you the power to use simple placeholder footage so that you can continue to move on with your edit and not have to wait on more footage to get to you.
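In list terms, an overwrite edit is simple assignment: the clip at the target slot is replaced, and nothing else moves. A toy sketch (clip names and the function are made up):

```python
# Toy timeline as a list of clip names; an overwrite edit replaces
# what was at the target position instead of shifting clips over.
timeline = ["shot_A", "placeholder", "shot_C"]

def overwrite_edit(timeline, index, clip):
    timeline[index] = clip  # the old clip at this slot is gone
    return timeline

print(overwrite_edit(timeline, 1, "replacement_shot"))
# ['shot_A', 'replacement_shot', 'shot_C']
```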
Phase Alternating Line. A standard definition broadcast standard in Europe. Similar to NTSC but with a higher resolution and running at 25 fps instead of 30.
Again, as with NTSC, it’s just good to know your broadcast standards. Even though PAL is the one used in Europe, it’s still worth having that knowledge.
Pan and Scan
A method of converting widescreen images to a 4:3 aspect ratio. The video is cropped so that it fills the entire screen and is panned into position to show the most essential part of the scene.
I swear this is the bane of my editing existence. For the most part, you won’t have much to do with this, as that’s handled by people after you’ve already finished the film. The most common use of this is when it comes to putting the film onto TV. While the age of HDTVs is making this practice less common, it still pops up.
Similar to motion capture but with an emphasis on capturing the intricacies of the actor’s hand movements, facial expressions, etc, rather than simply their overall motion.
Most people no longer distinguish between mo-cap and performance capture. These days the terms are used pretty much interchangeably, although performance capture would be used exclusively for close-up type shots.
Reshooting a certain shot or scene due to errors on a previous day.
Most of the time, pickup shooting is a result of what happens in the editing bay. Some filmmakers just don’t shoot enough coverage (different angles, cutaways, etc.) to make a scene dynamic and visually appealing. On set, that’s partly an effort to keep the production moving and not take too much time; with younger filmmakers it’s inexperience, and sometimes a scene just isn’t working no matter what. These are all things discovered in the editing bay. While pickups can be costly and time-consuming, never hesitate to speak up to your director when you feel you don’t have the footage to do what’s needed. You won’t always get the pickups, but it’s worth asking when necessary.
Pixel Aspect Ratio (PAR)
The ratio of the width of a pixel to its height. Often, rectangular pixels are used for anamorphic sequences to maximize the available resolution.
This is something to keep in mind for your resolution quality. If you want a high-quality output you need to pay attention to what the PAR is. You also have to be aware of what it’s at when the footage comes into you.
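The conversion is a single multiplication: display width equals stored width times the PAR. A quick sketch using the commonly cited ~0.9091 PAR for NTSC DV (the function name is my own):

```python
def display_width(stored_width, par):
    """Width in square pixels after accounting for the pixel aspect ratio."""
    return stored_width * par

# NTSC DV stores 720-pixel-wide frames with ~0.9091 PAR,
# so they display narrower than they are stored:
print(display_width(720, 0.9091))  # ~654.6
```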
The display of large, blocky pixels in an image, caused by over-enlarging it.
This happens when you force a zoom that just isn’t there. Sometimes you’ll want a shot that zooms in on an object or person in order to bring more focus to it. While this could help the flow of a scene and story when editing, you don’t always have that shot. You could resort to pickups (though for something like that, it’s not likely), or you could force a zoom in editing. What often results is some degree of pixelation, which will scream “amateur” to your audience. Higher-resolution footage can help with this and give you some leeway, but be careful.
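Pixelation is essentially what nearest-neighbour enlargement looks like: each source pixel becomes a run of identical pixels, which reads as “blocky” when a zoom is pushed too far. A one-scanline sketch (the function name is my own):

```python
def blow_up(row, factor):
    """Nearest-neighbour enlargement of one scanline: each source
    pixel becomes `factor` identical pixels in a row."""
    return [p for p in row for _ in range(factor)]

print(blow_up([10, 200], 4))  # [10, 10, 10, 10, 200, 200, 200, 200]
```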
An empty shot of the background with no foreground elements, used for removing certain foreground elements from the scene such as light stands, wires, etc.
Plate shots can also be used when you’re compositing in more CG elements. Star Wars used plate shots (created from models) of various locations and the interior of the ships and space stations in order to put the CGI characters and technology inside. Pretty much any time you plan on adding or subtracting something from a scene, do your best to get a plate shot for reference.
The final stage of the filmmaking process, normally involving picture editing, sound design, visual effects and outputting the film to a format suitable for release.
The whole reason why we exist!
Frame scanning technology that processes each frame as one complete image, as opposed to two separate fields as with interlacing.
This was part of the advent of HD-quality footage. Without the need to interlace, the resolution of the frames skyrockets, giving a much more realistic look and feel to the footage.
All right, well that about wraps it up for this round of Post-Production Terms. Be sure to stay tuned for our final batch of post-production phrases to help keep you in the know.