The Beginner's Guide: Camera Lenses
- harryshaper
- Feb 10
- 14 min read
Updated: Jun 1
Introduction: The lens—arguably one of the most fundamental tools in filmmaking and VFX!
Virtually everything in a film has either been captured through a lens or meticulously crafted to appear as though it has been. Understanding lenses is vital not just for those working behind the camera, but can benefit anyone working within the visual effects (VFX) pipeline. Whether you’re capturing data on-set or re-creating reality in post-production, a solid grasp of how lenses function can elevate your work. When I first started learning photography for Visual Effects, I lacked a clear roadmap and tried to learn everything all at once. This approach was counterproductive and quickly became overwhelming!
If you’re in a similar position—struggling to grasp those initial concepts or unsure of what’s most relevant—this article is here to provide clarity and a starting point.
In this article, we’ll begin by breaking down the anatomy of a lens, examining its physical components and their roles in shaping an image. Next, we’ll delve into how these components influence fundamental concepts, such as focal length, F-stops, and depth of field, which form the backbone of lens behavior. Finally, we’ll discuss some of the artefacts & imperfections inherent in lenses—distortion, vignetting, and chromatic aberration—exploring how these introduce inaccuracy during capture, and are necessary to recreate in post-production.
Section 1: Anatomy
Although lenses may look different from one another, they almost always have the same key parts. The components shown below (Figure 1) are what we’ll be exploring today.

1.1) Filter Thread
Around the entrance of the lens is a threaded edge (Figure 1.1.1). This thread allows external filters to be securely attached to the front element, enabling us to manipulate incoming light for creative or technical purposes. Note that a filter cannot be attached to just any lens—their thread diameters must match, as measured in millimetres.

Some common filter examples include a 'Neutral Density (ND) Filter', which reduces the amount of incoming light by a specific number of stops; or a 'Circular Polariser' (CPL), which allows us to remove glare and surface reflections from an object—this is very useful for VFX artists! (Figure 1.1.2)

1.2) Glass Elements
A lens functions as it does because of these glass elements. By layering a series of convex and concave glass pieces, the incoming light bends and is redirected onto the camera sensor, forming our image. The process of light bending through a medium like glass is called 'refraction', and is the core principle behind camera lenses.
Although we'll be exploring light more in future articles, I will briefly introduce the concept of 'refraction'.
In its natural state, light travels in straight lines. However, when passing through a material, its path is redirected. Additionally, the steeper the angle at which light enters the material, the stronger the redirection of the light waves. (Figure 1.2.1)
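To make this concrete, here's a small Python sketch of Snell's law, the formula that governs refraction. The refractive indices below are typical textbook values for air and crown glass, used purely for illustration:

```python
import math

def snell_refraction_angle(theta_incident_deg, n1=1.0, n2=1.52):
    """Angle of the refracted ray via Snell's law: n1*sin(t1) = n2*sin(t2).
    Defaults assume air (n1=1.0) entering typical crown glass (n2=1.52)."""
    t1 = math.radians(theta_incident_deg)
    return math.degrees(math.asin(n1 * math.sin(t1) / n2))

# A steeper entry angle produces a larger change in direction:
for angle in (10, 30, 60):
    refracted = snell_refraction_angle(angle)
    print(f"{angle} deg in -> {refracted:.1f} deg inside glass "
          f"(bent by {angle - refracted:.1f} deg)")
```

Running this shows exactly the behaviour in Figure 1.2.1: the 60-degree ray is redirected far more strongly than the 10-degree ray.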

Because of this principle, the shape of a glass element is very important in controlling how much the light bends. A converging glass element channels the beams of light onto a smaller surface area, while a diverging glass element spreads the beams across a wider surface area (Figure 1.2.2).

By using multiple converging elements in the front portion of our lens, light is condensed into a small area and passes through a hole known as the 'aperture'. After the aperture, we can stack multiple diverging elements, thereby expanding those light waves and directing them onto our camera sensor. (Figure 1.2.3)

Most of a lens's glass elements are fixed and do not move. However, every lens has at least one moving element for focusing, and zoom lenses have an additional moving element to adjust the focal length (Figure 1.2.4). (More details on focal lengths are provided below.)

1.3) Iris Diaphragm & Aperture
In between the glass elements is the iris diaphragm. Much like the iris in the human eye, this mechanism is designed to control the amount of incoming light. Made up of a series of overlapping iris blades, the mechanism expands and contracts, widening or narrowing the central hole, known as the 'aperture'. (Figure 1.3.1)

The larger the aperture, the more light is allowed into the entrance pupil. The smaller the aperture, the less light is allowed to enter, thereby darkening our image. (Figure 1.3.2)

The aperture plays a crucial role in the resulting image. Besides brightness, the aperture size affects how much of the image appears in focus—a concept known as 'Depth of Field', which will be discussed further in Section 2.4.
1.4) Lens Mount
At the base of the lens is the mounting point. This 'lens mount' allows the lens to be fixed to the camera body. Unfortunately, not all lenses are designed to mount to all camera bodies, so compatibility is something you have to consider when buying equipment.
The types of mount are usually tied to specific camera brands and/or sensor sizes. The tell-tale sign of compatibility will usually be near the joining point between lens and camera—for example, Canon's EF mount lenses show a red dot, while EF-S mount lenses have a white square (Figure 1.4.1).

In some cases, camera bodies can accept multiple mount types—such as the Canon 700D. By displaying both a red dot and a white square, it clearly indicates compatibility with both mount types (Figure 1.4.2).

Luckily for us, lens mount adapters exist. This small external attachment allows you to connect lenses to otherwise incompatible cameras (Figure 1.4.3). This is a great addition to your kit if you're like me, and often shoot with both Canon and Sony cameras.

1.5) Drive Motor (STM Focus Vs. USM Focus)
A lot of beginner photographers have the misconception that autofocusing depends solely on the quality of your camera body. In reality, both your camera and lens affect the process. Within the camera lens is a drive motor that adjusts the position of the focusing element, known as a 'focusing doublet'. When capturing data on-set with a DSLR or mirrorless camera, the majority of lenses will use either a Stepper Motor (STM) or an Ultra-Sonic Motor (USM); this is indicated on the lens (Figure 1.5.1 & 1.5.2).


Simply put, USM focussing is smoother, faster, and preferred for capturing in fast-paced environments. However, between two otherwise identical models, the lens with the USM will generally cost more.
Section 2: Fundamental Properties
2.1) Focal Length
A lens' focal length is a number presented in millimetres (e.g. 24mm, 35mm, 100mm) and is usually displayed in text on the top of the lens (Figure 2.1.1).

This number is a measurement between the nodal point of the lens (also known as the optical centre) and the camera's sensor. (Figure 2.1.1)

The focal length determines your camera's field of view (FoV), measured in degrees. A shorter focal length means you will see more of your surroundings, while a longer focal length gives a narrower FoV. (Figure 2.1.2)
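If you'd like to see that relationship in numbers, the field of view can be computed from the focal length and the sensor's width with a little trigonometry. The 36mm default below assumes a full-frame sensor; swap in your own sensor width for other formats:

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view in degrees, from the standard
    FoV = 2 * atan(sensor_width / (2 * focal_length)) relation.
    Default sensor width assumes a full-frame camera (36mm wide)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal lengths see far more of the surroundings:
for f in (14, 24, 50, 100):
    print(f"{f}mm lens -> {horizontal_fov_deg(f):.1f} deg horizontal FoV")
```

A 14mm lens covers over 100 degrees horizontally on full frame, while a 100mm lens sees only around 20 degrees.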

Adjusting the focal length allows photographers to capture full-frame images of both large and small objects. In VFX data capture, this is a technical decision aimed at maximizing sensor usage and achieving optimal image quality. In cinematography, however, it becomes a creative decision about what best serves the story and frames the subject aesthetically.
One of the biggest considerations of focal length is how it affects 'lens distortion'. This is a fundamental topic throughout the VFX pipeline and is discussed further in Section 3.1.
Zoom vs Prime:
Some lenses can cover a range of focal lengths; these are called 'Zoom Lenses'. This ability gives the user more control over what is visible in the image. Instead of a single number being displayed on your lens, it could display two numbers—indicating that it is a zoom lens (Figure 2.1.3).

These numbers indicate the range of focal lengths available, with the smaller number providing your widest possible Field of View (FoV), and the larger number being your narrowest FoV. As mentioned earlier, zoom lenses have an additional series of moving glass elements. These elements move further apart or closer together to change how they magnify the incoming light, thereby changing the focal length (Figure 2.1.4).

Zoom lenses are fantastic in time-sensitive environments. Having a range of focal lengths at your disposal means you can quickly switch from taking full-frame pictures of a lamp to a picture of an entire set, without taking time to physically exchange lenses. Although zoom lenses provide flexibility and can save time, they often have a drawback in quality. These additional glass elements often introduce a lack of sharpness, which becomes much more noticeable at the minimum/maximum focal lengths (Figure 2.1.5). More noticeably, the moving parts and additional glass elements exacerbate chromatic aberration (discussed further in Section 3.2).

When grabbing reference photography of a film set, we'll often use a wider-angle zoom lens, such as a 14mm-40mm. This is mostly due to the flexibility and speed it offers, as well as the ability to see more of our surroundings. However, if asked to grab texture photography on a high-detail hero asset, it might be a better idea to switch over to a prime lens with fewer artefacts.
Equivalent focal length:
The topic of equivalent focal length isn't discussed much but can be incredibly important for those in match-move and compositing roles. Even though two cameras might be shooting with the exact same lens, the two images might appear to have completely different fields of view. Depending on the sensor size of your camera, the image the lens produces might become cropped, essentially magnifying the view and giving the appearance of a longer focal length. For example, a 100mm lens on a full-frame sensor has an equivalent focal length of 100mm. However, a Four Thirds sensor has a crop factor of 2.0x, giving the same lens an equivalent focal length of 200mm (Figure 2.1.6).

Your image may also become digitally cropped for a number of other reasons, which likewise affects your perceived focal length. For example, on many high-end consumer cameras, increasing the image resolution or enabling image stabilization can result in a 1.2x crop. This means your 50mm lens would have an equivalent focal length of approximately 60mm.
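The maths here is simply multiplication by the crop factor. A quick sketch, using common published crop-factor values:

```python
def equivalent_focal_length(focal_length_mm, crop_factor):
    """Full-frame equivalent focal length for a given sensor crop factor."""
    return focal_length_mm * crop_factor

# A 100mm lens behaves like a 100mm on full frame (crop factor 1.0),
# but like a 200mm on a Four Thirds sensor (crop factor 2.0):
print(equivalent_focal_length(100, 1.0))  # -> 100.0
print(equivalent_focal_length(100, 2.0))  # -> 200.0
# A 1.2x digital crop turns a 50mm into roughly a 60mm:
print(equivalent_focal_length(50, 1.2))
```

Keeping a table of your camera bodies' crop factors on hand makes it easy to compare footage shot across mixed formats.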
2.2) Aperture (F-Stop)
Earlier in the article, we introduced the area known as the aperture. In photography, we represent the size of this aperture with an F-stop number, such as f/2.8, f/5.6, f/8, etc. A lens's maximum aperture size is usually indicated in writing on the lens barrel.
The F-stop value is calculated by dividing the focal length by the diameter of the aperture. Knowing this calculation isn't essential, but it is helpful for those wondering why the numbers are represented this way.
The wider the aperture, the smaller the F-stop value; and as the aperture becomes smaller, the F-stop number increases (Figure 2.2.1). For many beginners, this inverse relationship is a bit confusing, but just remember: smaller number = more light; larger number = less light.
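That division is easy to sketch in Python; the lens and aperture sizes here are just illustrative numbers:

```python
def f_stop(focal_length_mm, aperture_diameter_mm):
    """F-stop number: focal length divided by aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

# A 50mm lens with a 25mm-wide aperture is an f/2 lens:
print(f_stop(50, 25))    # -> 2.0
# Halving the aperture diameter doubles the f-number (smaller opening, less light):
print(f_stop(50, 12.5))  # -> 4.0
```

This also explains why the same f-number looks physically different on different lenses: f/2 on a 200mm lens needs a 100mm-wide aperture, while f/2 on a 50mm lens only needs 25mm.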

When capturing data, it’s generally beneficial to have as much light as possible. However, each adjustment to your camera's exposure settings will involve a trade-off in either image quality or practical limitations. For instance, in VFX work, a larger aperture allows more light into our image, but could introduce issues with camera tracking. In future lessons, we’ll dive into ‘The Exposure Triangle’ and how to balance these settings for optimal results. For now, remember that aperture size plays a crucial role in the type of shot you’re trying to capture, and this decision can greatly affect both your real-world footage and VFX integration.
2.3) Focus (Light Convergence)
Although focus might seem like a fairly straightforward principle, there are many misconceptions that make concepts like 'depth of field' harder to properly understand.
When light bounces off an object and enters a lens, it doesn't just hit one small part of the lens—it spreads across the entire surface. These light beams then come together and converge at a single point on the sensor (Figure 2.3.1).

If the beams converge exactly on the sensor, the image will be sharp and in focus. However, if the convergence point is slightly in front of or behind the sensor, the image will be out of focus (Figure 2.3.2).

A lot of people believe focus is a large area of depth, when in reality, true focus is an infinitely thin 2D plane in space called the 'Plane of Focus' (Figure 2.3.3). This concept is explored more in our next topic of discussion, 'Depth of Field'.
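For the mathematically curious, the convergence point described above follows the classic thin-lens equation, 1/f = 1/d_o + 1/d_i. This is a simplified single-element model, not an exact description of a real multi-element lens, but it captures the behaviour well:

```python
def image_distance_mm(focal_length_mm, object_distance_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i: where light from an object at distance d_o converges
    behind the lens."""
    return 1 / (1 / focal_length_mm - 1 / object_distance_mm)

# A 50mm lens focused on a subject 2m away converges its light roughly
# 51.3mm behind the lens. If the sensor sits anywhere other than that
# convergence point, the subject renders soft:
print(f"{image_distance_mm(50, 2000):.1f}mm")  # prints "51.3mm"
```

Notice that moving the subject closer pushes the convergence point further back, which is exactly why the focusing element has to physically move.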

2.4) Depth of Field
Now that we understand how aperture and focus work, we can begin to learn about Depth of Field (DoF).
The term 'Depth of Field' refers to the range of distance within our image that appears acceptably sharp and 'in focus'. Although some items might sit slightly ahead of or behind our point of focus, the deviation in convergence is so minimal that the image is still acceptably sharp. An interesting phenomenon is the distribution of acceptable focus around the plane of focus. A general rule of thumb is that one-third of the depth of field lies in front of the plane of focus, while two-thirds falls behind it. This ratio varies slightly with camera settings and distances, but it is typically easiest to assume a 1:2 ratio. Practically, this means we should position the plane of focus roughly one-third of the way into the subject we're capturing, biased towards its front.
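As a quick sketch of that rule of thumb in practice (the subject measurements below are made up for illustration):

```python
def focus_plane_position(subject_front_mm, subject_depth_mm):
    """Place the plane of focus one-third of the way into the subject,
    so the rough 1:2 front/back depth-of-field split covers it evenly."""
    return subject_front_mm + subject_depth_mm / 3

# A head roughly 300mm deep, with its front 1000mm from the camera:
print(focus_plane_position(1000, 300))  # -> 1100.0 (focus 100mm into the subject)
```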
What I found particularly fascinating is that the human eye is positioned roughly two-thirds towards the front of the face. This is great news for texture projection workflows on the face and perhaps a fortunate coincidence. Not only are we able to capture an even distribution of focus across the front of the head, but our focus will also be at its sharpest on the most complex feature—the eye. (Figure 2.4.A)

The smaller our aperture (a higher number, like f/22), the more of our image's depth appears in focus. This is referred to as having 'a large depth of field', as the focus covers a larger range. (Figure 2.4.1)

The opposite case would be if we opened our aperture to something large, like f/2.0. Less of the image appears in focus, and our image would have a 'shallow depth of field'. (Figure 2.4.2)

For those curious as to why this happens, here is the best way I can explain it: when light comes through our aperture, the distance from the aperture to the sensor is always constant. However, the area the light comes from changes with our aperture size, meaning light from a wide aperture must travel at a steeper angle to converge at the same point (Figure 2.4.3). Comparing light paths from different aperture sizes, at any given depth, the beams in scenario A sit further apart than in scenario B, so any point that drifts off the plane of focus blurs more quickly.
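If you want to put numbers on this, the standard hyperfocal-distance approximation estimates the near and far limits of acceptable sharpness. This is a textbook approximation rather than an exact model of any particular lens, and the 0.03mm 'circle of confusion' is a commonly quoted full-frame value:

```python
def depth_of_field_mm(focal_length, f_number, focus_distance, coc=0.03):
    """Near and far limits of acceptable sharpness (all distances in mm),
    via the hyperfocal-distance approximation. coc is the 'circle of
    confusion'; 0.03mm is a common full-frame value."""
    h = focal_length**2 / (f_number * coc) + focal_length  # hyperfocal distance
    near = focus_distance * (h - focal_length) / (h + focus_distance - 2 * focal_length)
    far = focus_distance * (h - focal_length) / (h - focus_distance)
    return near, far

# Stopping a 50mm lens down from f/2.8 to f/11, focused at 2m,
# widens the in-focus range considerably:
for n in (2.8, 11):
    near, far = depth_of_field_mm(50, n, 2000)
    print(f"f/{n}: sharp from {near:.0f}mm to {far:.0f}mm")
```

Note this formula only holds while the focus distance stays below the hyperfocal distance; beyond that, the far limit runs off to infinity.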

Section 3: Imperfections and Artefacts
3.1) Lens Distortion
As mentioned earlier, when light passes through a material, its direction can change. With a uniform surface, like a prism, the change in angle is easy to predict (Figure 3.1.1).

However, light interacts with a lens in a more complex way due to its curved shape. While the centre of a lens is relatively flat, the edges have a steeper angle. This means that light rays passing through the outer parts of the lens are bent at a greater angle, leading to distortion. (Figure 3.1.2)

With shorter focal lengths, such as 14mm, lens distortion becomes obvious, but it is almost unnoticeable at longer lengths, such as 55mm, where the distortion is minimal (Figure 3.1.3). An easy way to visualize this is by overlaying straight lines in a grid pattern.

There are two primary types of lens distortion that we'll be exploring: barrel and pincushion.

Barrel lens distortion occurs when using shorter focal lengths, generally 50mm and under. This type of distortion becomes more prominent at the edges of frame, where lines will begin to curve outwards (Figure 3.1.4).

Pincushion lens distortion appears at longer focal lengths, starting to become apparent around 70mm and upwards. While barrel distortion makes lines curve outwards, pincushion does the opposite, making them curve inwards towards the centre of frame (Figure 3.1.5).

The below figure is an easy way to visualize the presence of lens distortion (Figure 3.1.6). However, be aware this diagram is exaggerated for illustration purposes. All lenses have a unique distortion profile and won't mimic this diagram exactly.

Lens distortion is something we must profile and correct in VFX workflows. By understanding the lens's unique degree of distortion, we can correct it for tasks such as match-move, or even apply the distortion to CG renders to correctly integrate VFX into live-action plates. If you want to learn more about lens distortion, I'd highly recommend Andrei Sharapko's writings, available from Matchmove Machine. His documentation is incredibly in-depth and should be your go-to on the topic! https://matchmovemachine.com/grid/
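As a taste of how this is handled numerically, here is a minimal sketch of the polynomial radial model (the radial terms of the common Brown-Conrady formulation) that many tools use to describe barrel and pincushion distortion. The coefficient values are purely illustrative, not from any real lens profile:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Apply polynomial radial distortion to normalized image coordinates
    centred on (0, 0): each point is scaled by (1 + k1*r^2 + k2*r^4).
    k1 < 0 produces barrel distortion; k1 > 0 produces pincushion."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the corner of frame moves inward under barrel distortion:
x, y = apply_radial_distortion(0.8, 0.6, k1=-0.1)
print(round(x, 3), round(y, 3))  # -> 0.72 0.54
```

Real lens profiling solves for these coefficients (and often tangential terms as well) from photographs of a known grid, which is exactly what the lens-grid workflows linked above are about.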
3.2) Chromatic Aberration
Light is made up of different wavelengths, which the human eye interprets as colour (Figure 3.2.1).

Shorter wavelengths, like violet and blue, carry more energy. As white light refracts, these shorter, high-energy wavelengths bend more sharply than the longer wavelengths, causing the colours to spread apart. This dispersion of light waves results in a phenomenon known as 'Chromatic Aberration' or 'Colour Fringing' (Figure 3.2.2). Pink Floyd fans have probably seen something like this already...
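This wavelength dependence can be approximated with Cauchy's equation, n(λ) = A + B/λ². The coefficients below are typical published values for BK7-like crown glass; treat them as illustrative rather than tied to any camera lens:

```python
def refractive_index_cauchy(wavelength_nm, a=1.5046, b=4200.0):
    """Cauchy's approximation n = A + B / wavelength^2 for the
    wavelength-dependent refractive index of glass. Defaults are
    typical published values for BK7-like crown glass."""
    return a + b / wavelength_nm**2

# Shorter (blue) wavelengths see a higher index, so they bend more:
for colour, wl in (("blue", 450), ("green", 550), ("red", 650)):
    print(f"{colour} ({wl}nm): n = {refractive_index_cauchy(wl):.4f}")
```

The index difference between blue and red looks tiny on paper, but spread across a sensor's worth of pixels it is exactly the colour fringing you see at high-contrast edges.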

As briefly mentioned above in "Zoom vs Prime", the additional glass elements often lead to increased chromatic aberration. This is due to the increased instances of refraction, which compound each time the light passes through a new layer of glass. Although more prevalent in zoom lenses, the amount of glass is not the only factor promoting chromatic aberration; others include glass quality and focal length. High-end camera lenses often include higher-quality glass elements, as well as additional optics that try to counteract colour fringing. Cheaper lenses lack these specialised elements and do very little to mitigate chromatic aberration. (Figure 3.2.3)

Chromatic aberration also appears more often in lenses with a longer focal length. As these lenses are designed to amplify small differences in light-wave convergence, those small colour shifts are also amplified. So, for example, although your 400mm lens may be a prime with high-quality glass, the sheer amount of refraction will likely leave you with a bit of colour fringing at the edges of frame. Within a VFX context, chromatic aberration is a large consideration for those on-set as well as those in post-production. Lots of on-set data (HDRIs, texture photography, photogrammetry, etc.) has to undergo pre-processing to have artefacts such as chromatic aberration removed. Meanwhile, in post-production, compositors will have to add chromatic aberration back onto their CG renders to make it seem as though it was captured by a film camera.
3.3) Vignetting
Vignetting is a phenomenon that darkens the image further from the centre of the lens. Below is an illustration of what heavy vignetting looks like (Figure 3.3.1).

This can happen for many different reasons. Due to the geometry of light, even in a perfectly designed lens, light reaching the edges of the sensor has to travel farther and is spread over a larger area, making it less intense. Vignetting can also be influenced by human factors. Accessories like filters and lens hoods can physically block more light at the edges of the lens compared to the centre. Similarly to our discussion on depth of field, using a wide aperture causes light to enter at steeper angles, making it harder for the light to reach the outer areas of the sensor, which can further darken the edges. Even on good-quality lenses, subtle vignetting is still present. If we analyse the pixel brightness/luminance values across an evenly lit lens grid, notice how the values fall off when moving outwards (Figure 3.3.2).
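The geometric part of this fall-off is often approximated by the 'cos⁴ law': illumination drops with roughly the fourth power of the cosine of the off-axis angle, even in an optically perfect lens. A tiny sketch:

```python
import math

def natural_vignetting_falloff(angle_deg):
    """Relative illumination under the cos^4 approximation: light arriving
    at angle theta off the lens axis is attenuated by cos(theta)^4."""
    return math.cos(math.radians(angle_deg)) ** 4

for theta in (0, 20, 40):
    print(f"{theta} deg off-axis: {natural_vignetting_falloff(theta):.0%} brightness")
```

At 40 degrees off-axis (roughly the corner of a wide-angle frame), barely a third of the on-axis light survives, which is why wide lenses vignette so visibly.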

Manually profiling your lens' vignette is fairly simple. By capturing an image of a uniformly lit surface, such as a white sheet of paper, you can easily observe the darkening towards the edges. By subtracting the expected uniform colour from the image, you can isolate and visualise the vignetting effect. (Figure 3.3.3)
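Here's a minimal sketch of that profiling idea in Python with NumPy, using a synthetic flat-field image in place of a real photo of a white sheet:

```python
import numpy as np

def vignette_profile(flat_field):
    """Estimate a lens vignette map from a photo of an evenly lit white
    surface: normalize the image by its centre brightness, so 1.0 means
    'no fall-off'. flat_field is a 2D greyscale array."""
    img = flat_field.astype(np.float64)
    h, w = img.shape
    return img / img[h // 2, w // 2]

# Synthetic flat-field with darkened corners, standing in for a real capture:
yy, xx = np.mgrid[-1:1:101j, -1:1:101j]
fake_capture = 200.0 * (1 - 0.3 * (xx**2 + yy**2))
profile = vignette_profile(fake_capture)
print(round(float(profile[50, 50]), 2), round(float(profile[0, 0]), 2))  # centre vs corner
```

Dividing a real plate by a profile like this flattens the vignette out; multiplying a CG render by it puts the vignette back in.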

Conclusion:
Lenses can be a complex topic, but understanding them is well worth the effort! This was a more technical exploration, but if you're interested in their artistic qualities and use in film and television, I’ve included a list of my favorite resources below.
At first, these terms and concepts may feel intimidating, but don’t rush—learning takes time. In my experience, the best way to improve is by getting hands-on with a camera and seeing the results for yourself. Remember, progress is about direction, not speed.
Best of luck, and happy capturing!
—Harry
Additional Resources
Painting with Light - John Alton
The Filmmaker's Eye: The Language of the Lens - Gustavo Mercado
The Five C's of Cinematography - Joseph V. Mascelli
Cinematography: Theory and Practice - Blain Brown