The natural evolution of photography

Photography has always been a blend of technology and art, a way to document the world while also allowing personal expression. From the start, photographers have tried to make their images better, using whatever tools and techniques they could. Today, artificial intelligence (AI) and advanced digital tools dominate the field, leading some to question whether this represents a break from traditional photography. However, history shows us that photographers have always adjusted, enhanced, and interpreted their images. Far from being a radical change, modern tools are simply the next step in this ongoing evolution.

In the early days of photography, cameras were far from perfect. They often captured images that were too dark, too light, or lacked the details the photographer wanted. To fix this, photographers developed techniques in the darkroom, the place where they processed their photos from negatives. Two popular techniques were “dodging,” which lightened certain parts of the image, and “burning,” which darkened others. These methods helped balance the light and shadow in a photo and made it more visually appealing.

Photographers also took manipulation further when necessary. For example, in 1857, Oscar Rejlander created a famous image called The Two Ways of Life by combining more than 30 separate negatives into one seamless picture. This was no quick process; it required hours of work, but it allowed him to tell a story that a single photo could not. He also used the same technique to create double self-portraits, appearing twice in a single frame!

Around the same time, other photographers used combination printing and similar darkroom techniques to create artistic images full of emotion and meaning. These early works show that even back then, photography wasn’t just about recording reality; it was about interpreting and shaping it.

As photography became more advanced, so did the methods for improving and altering photos. In the late 19th and early 20th centuries, some photographers embraced a style called Pictorialism, which was all about making photos look more like paintings. They used special lenses to make their images softer and dreamier and added textures to the printed photos to give them an artistic feel. These photographers didn’t see their images as simple records; they saw them as art, created through skill and vision.

Fast forward to the digital age, and things have changed dramatically. Where photographers once spent hours in darkrooms, they can now make adjustments in seconds on a computer or smartphone. Software allows users to fine-tune every detail of a photo, from colours to sharpness, with incredible precision. Today, AI tools go even further, offering features that can automatically brighten a dark photo, fix blurry images, remove unwanted objects from a composition, or even add entirely new ones.

Some smartphones can pull together a composite group photo so that everyone in the frame has just the right expression and is looking at the camera, something a single-frame group shot inevitably lacks. Other tools can build high dynamic range (HDR) photos from a series of exposure-bracketed photos, so that the dark parts of the image are not crushed to black and the bright areas are not blown out, allowing detail to come through across the whole image.
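To make the HDR idea concrete, here is a minimal Python sketch of the merging step using OpenCV’s Debevec merge and a simple tone-mapper. The filenames and exposure times are invented for the example, and a real smartphone pipeline would also align the frames and compensate for any motion between them.

```python
import cv2
import numpy as np

# Hypothetical exposure-bracketed frames and their shutter speeds
# (the filenames and times here are placeholders, not real data).
files = ["bracket_dark.jpg", "bracket_mid.jpg", "bracket_bright.jpg"]
times = np.array([1 / 200, 1 / 50, 1 / 12.5], dtype=np.float32)

# Load the frames; they must share the same resolution and framing.
images = [cv2.imread(f) for f in files]

# Merge the exposures into one high-dynamic-range radiance map.
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times)

# Tone-map the radiance map back to a displayable 8-bit image so that
# detail survives in both the shadows and the highlights.
tonemap = cv2.createTonemap(gamma=2.2)
ldr = tonemap.process(hdr)
cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```

The merge step only recovers the full range of light; it is the tone-mapping choice at the end that decides how much of that range you actually see in the final image.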

For many people, these tools make photography more accessible and creative, but they also raise questions about how far we should go when altering an image.

One famous example from the past helps put this into perspective. Ansel Adams, known for his striking black-and-white (monochrome) landscapes, was a master of the darkroom. He would spend hours adjusting his negatives (the original images captured on film) to bring out dramatic details and create the mood he wanted. While his photos were based on real places, the way he presented them was highly selective. He chose what to emphasize and what to downplay, shaping how viewers experienced the image. Today’s AI processing tools do something similar, but much faster and with far greater precision.

Some people worry that these tools, especially AI, cross a line by making it too easy to change a photo. For instance, is it still truthful to replace a cloudy sky with a sunny one using AI? Or to remove distractions like wheelie bins or overhead cables from a street scene? These are not entirely new debates; photographers have faced similar questions for over a century, masking and cloning parts of their photos to clean them up in incredibly imaginative ways. What is new is how simple these changes have become and how hard they can be to spot.

At its core, though, photography has never been just about capturing what’s in front of the camera. Even choosing what to include in the frame or how to position the camera changes how the scene is represented. The world beyond the lens is not a 1/200th-of-a-second freeze-frame; it is ever-changing. Modern tools like AI add more ways to adjust and refine an image, helping photographers fix mistakes or enhance their creative vision.

Instead of rejecting these tools, photographers and viewers can approach them as part of the medium’s evolution. Used thoughtfully, AI can be a powerful partner. It helps solve technical problems, like reducing grainy noise in low-light, high-ISO photos or sharpening an image that’s slightly out of focus, and it opens up more creative possibilities for more people. Rather than replacing creativity, AI should be seen as a new set of tools that expand what photographers can do. Ironically, while a photograph freezes time in a frame, photography itself remains dynamic and ever-changing.

Photography is not, of course, simply about the processing of the images one captures. It’s much more than that: it’s the planning, the serendipity, the sometimes being in the wrong place at the right time. It’s choosing your camera settings, framing the capture, pressing the shutter, and the anticipation of working through the photos you’ve taken to find the one bright spark among the unfixables.

Footnote

It is worth mentioning that the thrust of this discussion is about processing tools rather than the generative AI that lets you create weird fantasy images from a text prompt. That said, there will be blurred lines (pardon the pun) between the various tools as we move forward. One that fixes an image by removing a distracting object, like a wheelie bin, might equally be able to generate a more appropriate replacement for that object, such as a tree. There are many more examples where processing tools and generative AI tools are beginning to overlap.