The advent of powerful image-manipulation software, such as Adobe Systems’ ubiquitous Photoshop, makes it easier than ever for photographers and journalists to tailor their pictures to look just the way they want them. Everything from subtle alterations to color saturation and shadow highlighting to the removal of dust specks and red-eye has become simple enough for anybody with the right software, a little bit of training, and a few minutes of free time to accomplish.
Not everything in life is sunshine and roses, though. Just as it has become easier to produce powerful, evocative images through artistic expression, it has also become increasingly easy to radically alter the content of an image. In short, it is no longer true that the camera never lies. Even worse, as the tools get better and better, it becomes increasingly difficult to tell when an image is lying to you.
Of course, doctored photographs are nothing new. David King, in his book The Commissar Vanishes, produced the seminal work on photo manipulation in the Soviet Union. Stalin was notorious for having pictures doctored, particularly to airbrush out people who had been photographed with him and subsequently fallen out of favor. More than one once-favored subordinate vanished from the official record at about the same time they were rounded up and killed for crimes against the state.
As time has gone on, the venerable airbrush has been superseded by fancy math and software almost magical in its capabilities. One side effect is that almost any photographer can now produce believable alterations single-handedly, where once a team of experts was required. The issue has been thrust back into the news by the recent discovery of a badly doctored photograph of smoke rising over the site of an Israeli air strike in Lebanon. The photograph had been widely distributed by the Reuters news service.
In the resulting media feeding frenzy, Reuters announced that they had updated their rules for photo editing of images they carry on the news wire. In conjunction with these changes, Wired Magazine is reporting an announcement by Adobe that they are working on a new set of tools for an upcoming version of Photoshop that can detect manipulation of a photograph performed with the very kinds of techniques their software has pioneered. These new plug-ins might be important tools for news organizations trying to catch these kinds of incidents before publication, but they also raise a number of thorny issues.
There are three primary aspects to the new technology. The first, and most obvious, is a set of tools that attempts to identify areas of a photograph that exhibit “excessive sameness”. This means, in effect, that the software does a huge quantity of math and attempts to find patches of the photo that look like they used to contain something else, and then were glossed over. The tools to perform these kinds of edits tend to work by repeating a selected portion from elsewhere in the photo and then “stamping” this sample over the top of the offending content. Because the “new” content is really a copy of existing content, the software can detect areas of the doctored photo that are similar enough to other parts of the image that they are highly unlikely to have happened naturally.
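The core idea behind this kind of “excessive sameness” detection can be sketched in a few lines of Python. This is a toy, not Adobe’s actual algorithm: real copy-move detectors match robust block features (such as DCT coefficients) so they can catch near-duplicates, while this sketch only finds exact pixel-for-pixel copies, and the function names and fabricated image are purely illustrative.

```python
# A minimal sketch of copy-move ("clone stamp") detection by exact block
# matching. The image is a plain 2D list of grayscale values.

def find_cloned_blocks(image, block=4, min_distance=4):
    """Return pairs of (row, col) positions whose block x block pixel
    patches are identical and lie at least min_distance apart."""
    rows, cols = len(image), len(image[0])
    seen = {}       # patch contents -> positions where they occur
    matches = []
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            patch = tuple(tuple(image[r + i][c + j] for j in range(block))
                          for i in range(block))
            for (pr, pc) in seen.get(patch, []):
                # Ignore nearby hits: identical neighbouring blocks can
                # occur naturally in flat regions of a photo.
                if abs(pr - r) >= min_distance or abs(pc - c) >= min_distance:
                    matches.append(((pr, pc), (r, c)))
            seen.setdefault(patch, []).append((r, c))
    return matches

# Fabricate a tiny "image": random noise, with one 4x4 patch cloned elsewhere.
import random
random.seed(1)
img = [[random.randrange(256) for _ in range(16)] for _ in range(16)]
for i in range(4):
    for j in range(4):
        img[10 + i][10 + j] = img[2 + i][2 + j]   # the "clone stamp"

print(find_cloned_blocks(img))   # reports the (2, 2) -> (10, 10) pair
```

On a real multi-megapixel photograph this brute-force approach would be far too slow and fragile, which is exactly why the production tools need the “huge quantity of math” described above.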
Alongside detection of clone-stamped areas of an image, new tools will look for places where an area of an image does not match the areas around it. Because of the way that images are saved as computer files, most photographic file formats produce patterns of relationships between adjacent pixels in the image. This is done as the image is created in order to smooth out the image, produce more realistic-looking color effects, and eliminate jagged edges where different objects overlap in the picture. As a result of these relationships, it is almost impossible to alter an image without destroying or altering some of the relationships between the pixels at the edge of the altered area and the original pixels next door.
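To illustrate the flavor of this second technique, here is a deliberately crude sketch. The real tools model the specific correlations left by a camera’s color interpolation and by file compression; this toy uses a much blunter proxy, the variance of adjacent-pixel differences in each block, which collapses when a region has been artificially smoothed. All names and thresholds are made up for the example.

```python
# A toy illustration of inconsistency detection: estimate "noise" per block
# and flag blocks whose estimate is wildly out of line with the rest.

def block_noise(image, block=4):
    """Map each block position to the variance of horizontal adjacent-pixel
    differences, a rough stand-in for local sensor noise."""
    rows, cols = len(image), len(image[0])
    scores = {}
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            diffs = [image[r + i][c + j + 1] - image[r + i][c + j]
                     for i in range(block) for j in range(block - 1)]
            mean = sum(diffs) / len(diffs)
            scores[(r, c)] = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return scores

def flag_outliers(scores, factor=10.0):
    """Flag blocks whose noise estimate is far below the typical value."""
    typical = sorted(scores.values())[len(scores) // 2]   # median
    return [pos for pos, s in scores.items() if s * factor < typical]

# Random noise stands in for photo texture; one block is "airbrushed" flat.
import random
random.seed(2)
img = [[random.randrange(256) for _ in range(16)] for _ in range(16)]
for i in range(4):
    for j in range(4):
        img[8 + i][8 + j] = 128

print(flag_outliers(block_noise(img)))   # the smoothed block stands out
```

The point of the sketch is simply that local statistics betray local edits; the production tools look at far subtler statistical fingerprints than raw pixel variance.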
The third new capability that Adobe is planning to add will allow the software, with help from camera manufacturers, to uniquely identify the camera that took the picture in the first place. This would, in theory, make it easier to construct an “audit trail” for an image, tracing it back to its original source. It is less clear to me how this capability directly affects the detection of alterations to a photograph, but I can see how organizations like Reuters or the Associated Press might find these capabilities useful in managing their images and allowing them to address issues of alteration if and when they are discovered.
As powerful as they might be, these tools are not without their drawbacks. Perhaps the greatest danger is the unacceptably high rate of false positives the manipulation-detection tools produce. They are highly effective at detecting gross manipulation of an image, but they are also likely to be triggered by perfectly legitimate retouches, such as removing dust spots from an original. It is not reasonable to expect professional news photographers to perform no photographic manipulation at all, and these tools have no mechanism for distinguishing legitimate photo editing from outright deception.
Even in the case of images that have not been altered at all, the tools still produce false indications of manipulation on a fairly regular basis. The Wired article throws out a number as high as ten percent. This is staggeringly high, but even if it is vastly reduced, the danger remains considerable. The Associated Press, for example, handles roughly three quarters of a million photographs each year. Even if the rate of false positives can be hammered down to below one percent, it would still incorrectly flag roughly 7,500 images a year as fake. That’s more than twenty images every single day.
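The arithmetic behind that estimate is worth spelling out (the volume and rate are the illustrative figures above, not measured data):

```python
# Back-of-the-envelope: false positives at wire-service volume.
images_per_year = 750_000       # roughly the AP's annual photo volume
false_positive_rate = 0.01      # an optimistic one percent

flagged = images_per_year * false_positive_rate
print(flagged)                  # 7,500 authentic images wrongly flagged a year
print(flagged / 365)            # more than twenty a day
```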
Currently, photo manipulation is primarily detected by sharp-eyed photo editors noticing something out of place. These new tools might well help them to do a better job at finding inconsistencies in the images they handle. On the other hand, the tools might also serve to lessen the responsibility of photo editors to make these kinds of calls based on their own judgement and years of experience. They might also lead to a general policy of “better safe than sorry”, resulting in huge numbers of photographers having their images rejected, and perhaps even their careers ruined, as the result of a false positive.
Say that the AP decides to discontinue their relationship with a photographer if they find two images in a row that show evidence of tampering. Sure, with a one percent false-positive rate, the chance of any given pair of consecutive images both being wrongly flagged is only one in ten thousand. However, if you handle 750,000 images, that means, by those numbers, that you’d be unfairly blacklisting roughly 75 people a year. In those terms, that’s suddenly a completely unacceptable rate of failure.
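The same back-of-the-envelope reasoning, spelled out (the blacklist rule and the one percent rate are hypotheticals from the scenario above, not real AP policy, and the calculation assumes flagging errors are independent):

```python
# Hypothetical: blacklist after two consecutive flagged images.
fp_rate = 0.01                        # assumed false-positive rate
p_two_in_a_row = fp_rate ** 2         # 1 in 10,000 per consecutive pair
images_per_year = 750_000

print(images_per_year * p_two_in_a_row)   # roughly 75 unfair blacklistings
```

The lesson is the usual base-rate one: a per-pair probability that sounds tiny becomes a steady stream of ruined careers when multiplied by wire-service volume.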
I’m certain that the rate of false positives exhibited by the alteration-detection tools is the largest risk involved in these new plug-ins. However, I’m also certain that the ability to specifically identify the camera that took a picture poses a much more subtle threat. What effect will this have on people who are not professional photographers? If I’m a photojournalist, I wouldn’t have any problem with a photo editor being able to confirm that I was the person who took a given picture. On the other hand, what if I’m just an average person?
Sometimes the only evidence we have for atrocities being committed is provided by regular people who just happened to witness something happening. Would you be comfortable submitting a photograph to the media of a soldier shooting a civilian in anger, or the police beating a suspect, if you knew that the image could be traced back to an individual camera? I worry that this could have a profound chilling effect on accidental witnesses, preventing them from becoming whistleblowers. Perhaps worse, they might not even realize that the image can be connected to them, and suffer persecution as a result of their exposure of someone else’s misdeeds.
Overall, I’m interested to see how these tools work. I think that it will be a lot of fun to play around with them, and that in the right hands, they can be a great asset to those who seek to ferret out lies perpetrated on us, intentionally or otherwise, by government and the media. The internet, as I’ve stated previously, is a spectacularly sensitive BS detector. The ability to apply things other than the naked eyeball to suspicious images will make it even more so. At the same time, however, any powerful tool can be misused. These tools are no exception, and caution, sound judgement, and restraint will be required on the part of many of the people involved.