3D CGI animation terminology  

  • 2D Two dimensional  [To be expanded]
  • 3D Three dimensional  [To be expanded]
  • 4D Four dimensional  [To be expanded]
  • Animatic A more advanced storyboard using proxy models to rough out basic animation and camera shots. (Film industry term)
  • Anti-aliasing Over-sampling methods for avoiding the unwanted visual effects or artefacts caused by limited display resolution. These aliasing effects include ‘jaggies’ (stair-casing along diagonal lines), moiré effects, and temporal aliasing (strobing) in animated scenes.
  • Alpha channel A view of an image that represents the presence and degree of opacity of objects. The channel associated with each pixel determines the degree of opacity of that pixel using a greyscale range. In video production, the alpha channel is used to determine layer masking (mask channel).
  • Animation The process of developing the actions (poses, timing, motion) of objects. Animation methods include key-frame animation, path animation, non-linear character animation, and motion capture animation. Animations are sequences of frames.
  • Aspect ratio The proportions of an image expressed as the ratio between the horizontal and vertical dimensions. Because pixels are not necessarily square, the aspect ratio is independent of the number of pixels in the X and Y directions. For example, both NTSC and PAL television screens are 4 x 3 (aspect ratio 1.33). However, a CCIR601 NTSC image is 720 x 486 pixels, while a PAL image is 720 x 576 pixels.
  • Atmosphere In rendering, the environment that surrounds the objects in a scene. For example, the simulation of fine particles (fog, smoke, or dust) in the air. When an object is photographed in the real world, it is usually within an atmosphere (for example, air) and can be surrounded by other background objects.
  • Axis One of three vectors (X, Y, and Z) that define the three dimensions of a scene. Axes may be defined relative to local space (object space) or world space.
  • Bezier curve In modelling, a curve with at least four control points that control the shape of the curve. This term may also refer to a NURBS curve.
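The four-control-point case described above is the cubic Bezier curve. As a minimal sketch (the function name `cubic_bezier` is illustrative, not from any particular package), the curve can be evaluated at a parameter t using the Bernstein polynomial form:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    p0 and p3 are the curve's endpoints; p1 and p2 pull the
    curve towards them without (in general) lying on it.
    """
    u = 1.0 - t
    # Bernstein polynomial weights for the four control points
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# The curve starts at p0 (t = 0) and ends at p3 (t = 1)
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.0))  # (0.0, 0.0)
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5))  # (2.0, 1.5)
```

Sampling t from 0 to 1 in small steps traces out the full curve.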
  • Bezier patch In modelling, a parametric surface, approximately rectangular, that is quilted together with other Bezier patches to form a large, curved surface. The shape of a Bezier patch is controlled by 16 control points distributed uniformly over the surface. Also known as patch surface.
  • Bitmap Image comprising pixels (as opposed to vector artwork such as EPS).
  • CAD Computer Aided Design. Designing 2D and 3D work using a computer as a tool.
  • Camera Like a real-world camera, the 3D camera frames the view of a scene by tracking, tumbling, panning, and zooming. Unlike a real-world camera, the 3D camera does not automatically capture lighting, motion blur, and other effects - these effects must be explicitly created and tuned for realistic output.
  • Caustics Light pattern created by specular reflection or refraction of light, such as the patterns of light on the bottom of a swimming pool, or light through a glass of wine.
  • Cartesian coordinate A mathematical representation of Euclidean space. Every point can be described by three coordinates (X, Y, Z) representing the position along the orthogonal X, Y, and Z axes. The point (0, 0, 0) is called the origin, which is the global centre of the 3D world.
  • CG Computer generated. Design output via a computer.
  • CGI Computer-generated imagery. Design output via a computer.
  • CODEC Abbreviation of “compressor/de-compressor”. The term refers to the way software programs handle different movie formats, such as QuickTime, AVI, etc. The choice of CODEC controls image quality and determines the file size of the movie.
  • Colour depth The number of bits used to represent a colour. For example, an 8-bit image can use 2^8 = 256 colours. The bits are divided among the three primary colours red, green and blue. The following table indicates the number of colours an image can contain.

    8-bit = 2^8 = 256
    16-bit = 2^16 = 65536
    24-bit = 2^24 = 16.7 million
    32-bit = 2^32 = 4.3 billion (incl. alpha channel)
    Also see Floating Point, or HDR images.
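The table above can be reproduced directly, since each extra bit doubles the number of representable values:

```python
def colour_count(bits):
    """Number of distinct values representable at a given bit depth."""
    return 2 ** bits

# Reproduce the colour depth table
for bits in (8, 16, 24, 32):
    print(f"{bits}-bit = {colour_count(bits):,}")
```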

  • Compositing (‘Comping’) The process of combining two or more images to form a new image. In video, compositing is the process of combining two or more video sequences to form a new video sequence.
  • CMYK Cyan/Magenta/Yellow/Black. The four ink colours used in 4-colour process printing.
  • Depth channel The distance of objects from the camera. Also known as Z-depth or Z-buffer channel.
  • Depth of field (DOF) A photographic term for the range of distances within which objects are sharply focused. Objects outside this range appear blurred or out of focus.
  • Device aspect ratio The aspect ratio of the display device on which you view the rendered image. The device aspect ratio represents the image aspect ratio multiplied by the pixel aspect ratio.
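The relation stated above (device aspect = image aspect × pixel aspect) can be checked against the NTSC and PAL figures quoted under Aspect ratio. A small sketch (the function name `device_aspect` is illustrative):

```python
def device_aspect(width, height, pixel_aspect):
    """Device aspect ratio = image aspect ratio x pixel aspect ratio."""
    return (width / height) * pixel_aspect

# Both broadcast standards fill a 4:3 (1.33) screen despite differing
# pixel counts, because their pixel aspect ratios differ.
print(round(device_aspect(720, 486, 0.9), 2))     # NTSC: 1.33
print(round(device_aspect(720, 576, 1.0667), 2))  # PAL:  1.33
```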
  • Diffuse Describes surfaces that reflect (or scatter) light and colour at many angles. This type of surface causes light and colour to spread evenly.
  • Dynamics A branch of physics that describes how objects move using physical rules to simulate the natural forces that act upon them. Dynamic simulations are difficult to achieve with traditional key-frame animation techniques, but dynamics solvers let you set up the conditions and constraints that you want to occur, and then automatically solve how to animate the objects in the scene.
  • Encoding The process of converting uncompressed images to a new, usually compressed, format, e.g. MPEG, MP4, QuickTime, WMV, H.264.
  • File texture A bitmap image that can be mapped to shading attributes.
  • FPS Frames per second. The number of single frames needed to be displayed in a second to achieve smooth animation (usually 25 fps).
  • Fractal A three-dimensional random function with a particular frequency distribution. Fractal textures are useful for simulating many natural phenomena, such as rock surfaces, clouds, or flames.
  • Frame In animation, a still image and the basic unit of time measurement. Typically, 25 frames of animation are required for one second of PAL video.
  • Frustum A volume of space that includes everything currently visible from a given camera viewpoint. A frustum is defined by planes arranged as a four-sided pyramid, truncated by the near and far clipping planes, with proportions that correspond to the film aspect ratio.
  • Geometry [Aka mesh] In general, a NURBS surface, NURBS curve, subdivision surface, or polygonal surface. Geometry is a mathematical description of an object. [Must expand]
  • Hardware render An interactive rendering method that uses the capabilities of a computer’s graphics card (GPU) to create lighting and texturing effects.
  • HDTV High definition TV. Most commonly available in two resolutions:
    HD720 - 1280x720, Aspect ratio = 1.777
    HD1080 - 1920x1080, Aspect ratio = 1.777
  • HSV Hue, Saturation, and Value. A colour mode that determines the shading and tint of a colour. Hue corresponds to the pure colour; saturation to the amount of white mixed with the hue; and value to the amount of black mixed with the hue.
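Python's standard library includes a `colorsys` module that converts between HSV and RGB, which makes the relationship described above easy to explore:

```python
import colorsys

# Hue, saturation and value are each expressed in the range 0.0 - 1.0.
# A fully saturated, full-value hue of 0.0 is pure red.
r, g, b = colorsys.hsv_to_rgb(0.0, 1.0, 1.0)
print(r, g, b)  # 1.0 0.0 0.0

# Dropping saturation to zero mixes in white regardless of hue
print(colorsys.hsv_to_rgb(0.0, 0.0, 1.0))  # (1.0, 1.0, 1.0)
```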
  • HDRI – High dynamic range imaging An HDRI image stores a floating point value for each pixel, recording the intensity of light at that point. Until recently, a high-dynamic range image was created by combining several digital photographs taken at different exposures to capture the full range of light. Nowadays, specialised cameras can capture a very wide dynamic range of exposure directly. [Must expand]
  • IBL – Image based lighting The simulation of light emitted from an infinitely distant (environment) sphere to create photo-realistic images. With image-based lighting, an environment texture (an image file, ideally HDRI) is needed to illuminate the scene and provide the necessary environment reflections.
  • Key-framing The process of assigning values to parameters at specific moments in time to create an animation. The most important parameters to be key-framed are the transformations of models (objects), the camera, and lights. Thus all objects in the scene can be scaled (resized), rotated and translated (moved) over the course of the animated sequence. The lights can be translated and rotated. The rendering camera can also be translated and rotated. The surface material characteristics of an object, the colour or intensity of a light, the zoom ratio of the camera, and even the geometry of objects can be key-framed. The 3D application interpolates between the key-frames, creating the frames in between the key-frames when rendering. Interpolation can occur in both space and time. Animation curves are used for full control.
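The interpolation step described above can be sketched in a few lines. Linear interpolation is the simplest case; real packages offer richer options (stepped, spline, ease-in/out) via animation curves. The function name `lerp_keyframes` is illustrative, not from any particular package:

```python
def lerp_keyframes(keys, frame):
    """Linearly interpolate a key-framed parameter at an arbitrary frame.

    keys: list of (frame, value) pairs sorted by frame number.
    Frames before the first key or after the last hold the end value.
    """
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# Key the X position at frames 1 and 25; the in-between
# frames are generated automatically at render time.
keys = [(1, 0.0), (25, 12.0)]
print(lerp_keyframes(keys, 13))  # 6.0 (halfway between the keys)
```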
  • LOD Level of detail. A system used to handle high definition models in a real-time environment. Multiple instances of a model are created at varying levels of detail and swapped based on distance from the camera. (Games industry term)
  • Lighting model Mathematical formula describing the interaction of light with CG surfaces. Physically accurate lighting models require great computing power.
  • Light probe A tool used to create custom HDR environment maps.
  • Light source In rendering, an object that provides illumination to a scene. In the real world, the surfaces of objects are illuminated by light rays emitted from various light sources (for example, light bulbs, torches, the sun).
  • Match moving The process of matching the camera or object movement from live action footage with a computer-generated (CG) scene.
  • Model A computer-based description and representation of a three-dimensional object. See Geometry also.
  • Motion blur The simulation of the blurring that occurs when a fast-moving surface is captured by a camera.
  • Motion capture In animation, the recording of joint positions and rotations from movements performed typically by a human actor. This information is then applied to a CG skeleton to simulate real-life motion on a character.
  • Motion path In animation, the use of a curve to control the motion of an object.
  • Normals In modelling, the directional line perpendicular to a surface. Polygon normals indicate the orientation of polygonal faces.
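The face normal of a polygon can be computed from the cross product of two of its edge vectors. A minimal sketch for a triangle (the function name `face_normal` is illustrative):

```python
import math

def face_normal(a, b, c):
    """Unit normal of a triangle with vertices in counter-clockwise order."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # The cross product of two edge vectors is perpendicular to the face
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

# A triangle lying flat in the XY plane has a normal pointing up the Z axis
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Reversing the vertex order flips the normal, which is why winding order matters when modelling.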
  • NTSC National Television System Committee. The standard for composite video in North America, Japan, and most of South America. 30fps (more precisely 29.97), 720x486 with a pixel aspect of 0.9
  • NURBS Non-uniform rational B-spline. These are surfaces described by parametric curves. CAD software usually outputs geometry as NURBS compliant data (e.g. IGES).
  • OpenGL A widely used 3D graphics language.
  • PAL Phase Alternating Line. The industry standard for composite video in most of Europe. 25fps, 720x576 with pixel aspect ratio of 1.0667.
  • Particles In dynamics, a point displayed as a dot, streak, sphere, or other effect. You can animate the display and movement of particles with various techniques. Typically used in large quantities to create effects like rain and explosions.
  • Pixel A picture element. The smallest controllable segment of computer or video display or image.
  • Polygon Cross-platform industry standard for constructing geometry. An N-sided facet defined by 3 or more vertices in space. A polygonal object can be closed, open, or made up of shells, which are disjoint pieces of geometry. Often referred to as a mesh.
  • Pixel aspect ratio The aspect ratio of each pixel, which may be square (1.0) or non-square.
  • Procedural texture A texture that is calculated based on some algorithm or mathematical formula.
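As a minimal sketch of the idea, a checkerboard pattern can be computed from UV coordinates rather than read from a bitmap file (the function name `checker` is illustrative):

```python
def checker(u, v, repeats=8):
    """A simple procedural texture: a checkerboard computed on the fly
    from UV coordinates in the range 0-1, not stored as an image."""
    return (int(u * repeats) + int(v * repeats)) % 2

# Sample across one row of squares: black (0) and white (1) alternate
row = [checker(u / 8 + 0.01, 0.01) for u in range(8)]
print(row)  # [0, 1, 0, 1, 0, 1, 0, 1]
```

Because the pattern is a formula rather than stored pixels, a procedural texture has no fixed resolution and can be sampled at any scale.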
  • Projection map A technique of projecting a 2D image onto 3D geometry, useful for creating textures or icons on a rendered object.
  • Ray tracing A rendering technique, based on complex mathematical algorithms, that simulates the paths of light rays to accurately represent light behaviour, including reflection and refraction, across a wide range of surface forms and materials.
  • Rendering Creating a 2D image from a 3D scene is a process known as rendering. To create a rendered image, the scene must first be constructed within the dedicated 3D graphics software on the computer workstation; this software allows the artist to describe geometry, lighting, surface properties, special effects and animation (time-based changes). 3D rendering is a creative process similar to photography or cinematography. The camera is defined at a location in 3D coordinate space, pointing in a given direction. Unlike traditional photography, everything appearing in a 3D rendering needs to be created in the 3D space before it can be rendered - allowing an almost infinite amount of creative control over what appears in the scene and how it is depicted. The rendering output can be set up for photo-realism or designed to appear stylised. As an animation requires as many as 30 images for every second, rendering time is an extremely important consideration in all 3D animation. Rendering time is a function not only of the power of the computer used, but also of the complexity of the scene, the lighting model, and the presence of computationally intensive elements [to mention only a few]. The properties of rendered image files can be controlled according to post-production or presentation requirements. Also known as software rendering.
  • Render-farm A computer network set up to render frames at a fast rate by distributing tasks between dedicated machines. Rendering is highly CPU intensive, requiring 100% access to the CPU, therefore dedicated machines must be used at render time.
  • Resolution For images, the total pixel size of a bitmap image.
  • RGB(A) Red, Green, Blue, Alpha. A colour space most commonly used in computer graphics and component video, where the three additive colour components are mixed to create a colour. Alpha is the extra component that is used to indicate transparency. RGBA is the base for most computer-generated images formats.
  • Rushes The first renders before final compositing. (Film industry term)
  • Scene A scene is a file containing all the information necessary to identify and position all of the models, lights and cameras for rendering. A scene can be identified with the 3D coordinate space in which rendering takes place. This space is often called the “global” coordinate space, as opposed to the “local” coordinate spaces associated with each individual object in the scene.
  • Shader The specification of properties and lighting for a surface. Surface properties must be defined in respect of their colour, reflectivity, surface bump texture, specularity, transparency, translucence etc. Shaders usually comprise a network of connected nodes that control specific aspects of the shading effect. Shading networks define how various colour and texture nodes work with associated lights and surfaces. The placement of textures on surfaces is also controlled by nodes within the network. Nodes in a shading network can be connected in a non-linear way to create the desired effect.
  • Skeleton In animation, a structure that consists of joints and their bones, used to create hierarchical, articulated deformation effects on deformable objects.
  • Stereoscopic Most of the animation we create uses 3D computer technology to render sequential images to 2D video. Stereoscopic viewing [often referred to as '3D' video], by contrast, creates the illusion of depth by supplying each eye with a separate image simultaneously. We have the experience to create full pre-rendered CGI stereo footage. This is not the red/green anaglyph technique that was popular in the last century, but the full-colour stereo 3D that is now becoming popular. Different technologies exist, but the principle remains largely the same, and polarising glasses are usually required.
  • Storyboard A series of drawings used in the early planning of an animation.
  • Texture mapping The process of projecting a two-dimensional image onto a three-dimensional surface.
  • Volumetric fog In rendering, the simulation of light shining through fine particles (fog, smoke, or dust) in the air. Also known as light fog.