Building a better Commissar detector

The advent of powerful image-manipulation software, such as Adobe Systems’ ubiquitous Photoshop, makes it easier than ever for photographers and journalists to tailor their pictures to look just the way they want them. Everything from subtle alterations to color saturation and shadow highlighting to the removal of dust specks and red-eye has become simple enough for anybody with the right software, a little bit of training, and a few minutes of free time to accomplish.

Not everything in life is sunshine and roses, though. Just as these tools have made it easier to produce powerful, evocative images through legitimate artistic expression, they have also made it increasingly easy to radically alter the content of an image. In short, it is no longer the case that the camera doesn’t lie. Even worse, as the tools get better and better, it becomes increasingly difficult to determine when an image is lying to you.

Of course, doctored photographs are nothing new. David King, in his book The Commissar Vanishes, has produced the seminal work on photo manipulation in the Soviet Union. Stalin made himself famous for having pictures doctored, particularly to airbrush out people who had been photographed with him and subsequently fallen out of favor. More than one once-favored subordinate vanished from the official record at about the same time they were rounded up and killed for crimes against the state.

As time has gone on, the venerable airbrush has been superseded by fancy math and software almost magical in its capabilities. One side effect is that almost any photographer can now produce a believable alteration single-handedly, where a team of experts was once required. The issue has been thrust back into the news by the recent discovery of a badly doctored photograph of smoke rising over the site of an Israeli air strike in Lebanon. The photograph had been widely distributed by the Reuters news service.

In the resulting media feeding frenzy, Reuters announced that they had updated their rules for the editing of photos they carry on the news wire. In conjunction with these changes, Wired magazine is reporting an announcement by Adobe that they are working on a new set of tools for an upcoming version of Photoshop that can be used to detect manipulation of a photograph using the types of techniques that their software has pioneered. These new plug-ins might be important tools for news organizations to try to catch these kinds of incidents before they happen, but they also raise a number of thorny issues.

There are three primary aspects to the new technology. The first, and most obvious, is a set of tools that attempts to identify areas of a photograph that exhibit “excessive sameness”. This means, in effect, that the software does a huge quantity of math and attempts to find patches of the photo that look like they used to contain something else, and then were glossed over. The tools used to perform these kinds of edits, such as Photoshop’s clone stamp, tend to work by sampling a selected portion from elsewhere in the photo and then “stamping” this sample over the top of the offending content. Because the “new” content is really a copy of existing content, the software can detect areas of the doctored photo that are similar enough to other parts of the image that they are highly unlikely to have happened naturally.
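
To make that concrete, here’s a toy sketch of duplicate-region detection in Python. To be clear, this is my own illustration of the “excessive sameness” idea, not Adobe’s algorithm: the function name, block size, and distance cutoff are arbitrary choices of mine, and it only looks for exact matches.

    import numpy as np

    def find_cloned_blocks(gray, block=16, min_distance=32):
        """Report pairs of block origins whose pixels match exactly
        but which lie far apart in the image."""
        h, w = gray.shape
        seen = {}      # block contents -> where we first saw them
        matches = []
        for r in range(0, h - block + 1, block // 2):
            for c in range(0, w - block + 1, block // 2):
                key = gray[r:r + block, c:c + block].tobytes()
                if key in seen:
                    r0, c0 = seen[key]
                    # Distant exact duplicates rarely occur naturally.
                    if abs(r - r0) + abs(c - c0) >= min_distance:
                        matches.append(((r0, c0), (r, c)))
                else:
                    seen[key] = (r, c)
        return matches

    # Demo: plant a cloned 16x16 patch and watch it get flagged.
    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    img[64:80, 64:80] = img[0:16, 0:16]     # simulate a clone stamp
    print(find_cloned_blocks(img))          # -> [((0, 0), (64, 64))]

A real detector has to be far more forgiving than this, since JPEG compression and retouching perturb pixel values, so published techniques compare robust block features rather than raw bytes. The core trick is the same, though: distant regions of a natural photograph should almost never agree this well.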

Alongside detection of clone-stamped areas, the new tools will look for places where part of an image does not match its surroundings. Because of the way digital photographs are captured and saved, the pixels in an image end up statistically related to their neighbors: as the image is created, color values are interpolated and smoothed across adjacent pixels to produce more realistic-looking color and to eliminate jagged edges where different objects overlap. As a result, it is almost impossible to alter an image without disturbing these relationships between the pixels at the edge of the altered area and the original pixels next door.
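
Adobe hasn’t published how this test works, so take the following only as a crude illustration of the principle: if untouched pixels are highly predictable from their neighbors, a pasted-in region should show up as a patch whose prediction errors look statistically different from the rest of the frame. The four-neighbor predictor, tile size, and outlier threshold are all my own simplifications.

    import numpy as np

    def prediction_error_map(gray):
        """Error between each interior pixel and the mean of its 4 neighbors."""
        g = gray.astype(float)
        pred = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]) / 4.0
        return np.abs(g[1:-1, 1:-1] - pred)

    def suspicious_tiles(gray, tile=16, z=2.0):
        """Tiles whose mean prediction error is an outlier for this image."""
        err = prediction_error_map(gray)
        coords, means = [], []
        for r in range(0, err.shape[0] - tile, tile):
            for c in range(0, err.shape[1] - tile, tile):
                coords.append((r, c))
                means.append(err[r:r + tile, c:c + tile].mean())
        means = np.array(means)
        return [xy for xy, m in zip(coords, means)
                if abs(m - means.mean()) > z * means.std()]

    # Demo: paste a block of alien content into a smooth gradient.
    image = np.tile(np.linspace(0, 255, 128), (128, 1))
    image[40:72, 40:72] = np.random.randint(0, 256, (32, 32))
    print(suspicious_tiles(image))   # tiles overlapping the pasted region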

The third new capability that Adobe is planning to add will allow the software, with help from camera manufacturers, to uniquely identify the camera that took the picture in the first place. This would, in theory, make it easier to construct an “audit trail” for an image, tracing it back to its original source. It is less clear to me how this capability directly affects the detection of alterations to a photograph, but I can see how organizations like Reuters or the Associated Press might find these capabilities useful in managing their images and allowing them to address issues of alteration if and when they are discovered.
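
The article doesn’t explain how such identification would work, whether by camera-signed metadata or by something in the image itself. Purely for illustration, one approach from the image-forensics literature is sensor pattern noise: every individual sensor imprints a faint, fixed noise pattern on its pictures, and correlating a photo’s noise residual against a camera’s known pattern can finger the source. Here is a toy version, with a crude box-blur denoiser and a threshold I picked out of the air.

    import numpy as np

    def noise_residual(gray):
        """Image minus a crude 3x3 box-blur estimate of its content."""
        g = gray.astype(float)
        pad = np.pad(g, 1, mode="edge")
        blur = sum(pad[r:r + g.shape[0], c:c + g.shape[1]]
                   for r in range(3) for c in range(3)) / 9.0
        return g - blur

    def matches_camera(image, fingerprint, threshold=0.05):
        """Normalized correlation of the photo's residual with a
        camera's known noise pattern."""
        res = noise_residual(image).ravel()
        fp = fingerprint.ravel()
        res, fp = res - res.mean(), fp - fp.mean()
        corr = np.dot(res, fp) / (np.linalg.norm(res) * np.linalg.norm(fp))
        return corr > threshold

    # Demo: a smooth scene shot on a "camera" with a known noise quirk.
    rng = np.random.default_rng(0)
    fingerprint = rng.normal(0, 2, (64, 64))     # this camera's pattern
    scene = np.add.outer(np.linspace(0, 200, 64), np.linspace(0, 55, 64))
    print(matches_camera(scene + fingerprint, fingerprint))  # True
    other = scene + rng.normal(0, 2, (64, 64))   # a different camera
    print(matches_camera(other, fingerprint))    # False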

As powerful as they might be, these tools are not without their drawbacks. Perhaps the greatest danger is the unacceptably high rate of false positives the manipulation-detection tools produce. They are highly effective at detecting gross manipulation of an image, but they are also likely to be triggered by perfectly legitimate retouches, such as removing dust spots from an original. It is not reasonable to expect professional news photographers to forgo retouching entirely, and these tools have no mechanism for distinguishing legitimate photo editing from outright deception.

Even in the case of images that have not been altered at all, the tools still produce false indications of manipulation on a fairly regular basis. The Wired article throws out a number as high as ten percent. This is staggeringly high, but even if it is vastly reduced, the danger remains considerable. The Associated Press, for example, handles roughly three quarters of a million photographs each year. Even if the rate of false positives can be hammered down to below one percent, it would still be incorrectly identifying more than seven thousand images a year as being fake. That’s nearly two dozen images every single day.
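
For anyone who wants to check my math, here’s the back-of-the-envelope version (the one percent rate is hypothetical, and the AP volume is the rough figure quoted above):

    images_per_year = 750_000    # rough AP volume, as above
    false_positive_rate = 0.01   # the hypothetical "hammered down" rate

    flagged = images_per_year * false_positive_rate
    print(flagged)           # 7500.0 photos wrongly flagged per year
    print(flagged / 365)     # ~20.5 per day -- "nearly two dozen"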

Currently, photo manipulation is primarily detected by sharp-eyed photo editors noticing something out of place. These new tools might well help them to do a better job at finding inconsistencies in the images they handle. On the other hand, the tools might also serve to lessen the responsibility of photo editors to make these kinds of calls based on their own judgement and years of experience. They might also lead to a general policy of “better safe than sorry”, resulting in huge numbers of photographers having their images rejected, and perhaps even their careers ruined, as the result of a false positive.

Say that the AP decides that they will discontinue their relationship with a photographer if they find two images in a row which demonstrate evidence of tampering. Sure, with a one percent false positive rate, the chances of any given pair of consecutive images both being falsely flagged are only one in ten thousand. However, if you handle 750,000 images, that means, by those numbers, that you’d be unfairly blacklisting 75 people a year. In those terms, that’s suddenly a completely unacceptable rate of failure.
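
The same arithmetic, treating each image as an independent one-in-a-hundred coin flip (which is the simplification this scenario makes):

    false_positive_rate = 0.01
    p_two_in_a_row = false_positive_rate ** 2   # one in ten thousand

    images_per_year = 750_000
    print(images_per_year * p_two_in_a_row)     # ~75 blacklisted per year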

I’m certain that the rate of false positives exhibited by the alteration-detection tools is the largest risk involved in these new plug-ins. However, I’m also certain that the ability to specifically identify the camera that took a picture poses a much more subtle threat. What effect will this have on people who are not professional photographers? If I were a photojournalist, I wouldn’t have any problem with a photo editor being able to confirm that I was the person who took a given picture. On the other hand, what if I’m just an average person?

Sometimes the only evidence we have of atrocities is provided by regular people who just happened to witness something. Would you be comfortable submitting to the media a photograph of a soldier shooting a civilian in anger, or the police beating a suspect, if you knew that the image could be traced back to an individual camera? I worry that this could have a profound chilling effect on accidental witnesses, preventing them from becoming whistleblowers. Perhaps worse, they might not even realize that the image can be connected to them, and suffer persecution as a result of exposing someone else’s misdeeds.

Overall, I’m interested to see how these tools work. I think that it will be a lot of fun to play around with them, and that in the right hands, they can be a great asset to those who seek to ferret out lies perpetrated on us, intentionally or otherwise, by government and the media. The internet, as I’ve stated previously, is a spectacularly sensitive BS detector. The ability to apply things other than the naked eyeball to suspicious images will make it even more so. At the same time, however, any powerful tool can be misused. These are no exception, and caution, sound judgement, and restraint will be required on the part of many of the people involved.

5 Responses to Building a better Commissar detector

  1. poetloverrebelspy says:

    This was the topic of my tutorial! Images, Reality, Illusions. Good times.

    This type of analysis was in the news over a year ago, when that South Korean scientist was shown to have faked his cloning results. Science set up new rules for the images allowed in their reports, allowing for absolutely NO corrections (even for lint). http://www.sciencemag.org/about/authors/prep/prep_revfigs.dtl

    Kind of extreme, but then again, the world is linty. Why do we expect perfection?

  2. laikal says:

    Maybe this is a task for the Amazon Mechanical Turk? Perhaps after the software/algorithmic weed-out stage, at any rate.
    http://www.mturk.com/mturk/welcome

    @poetloverrebelspy
    Perfection in all things! We must have it! Now!

    But seriously, I dunno. The world is indeed linty.

  3. Mark says:

    Matt,

    This is a task that I suppose could be well suited to artificial-artificial intelligence. However, I think that trained photo editors are the better choice here. They have a better idea of what to look for than some random cog (or whatever individuals working on a Turk project are called), and can be expected to catch more problems. I think that the tools I talked about can be an aid to these people. What I worry about is a tendency for editors armed with this tool to become overly cautious, ruining photographers or just preventing you and me from seeing some great images out of fear of losing their reputation or facing a lawsuit over publication.

    As for lint, I agree that the world is chock-full of it. That doesn’t mean I want to have to see it in your images, though. If you go to a warzone, I can reasonably expect a lot of dust to be flying around in the air, but my experience of your image is improved by removing all of the annoying specks, which otherwise distract the viewer from the content of the image. We have to be prepared to deal with a middle ground where reasonable cleanup is allowable, provided that the content doesn’t change. Kind of like quoting somebody. You might not include every tiny speck of what they said, but in removing the dross, you are required by standards of journalistic integrity to not alter the core of the content or meaning.

    Then again, I know that the revised Reuters guidelines suggest that images be sent in their raw, unaltered format to their photo desks, and that those desks and their supervising editors be left to do things like red-eye reduction, lint removal, and contrast/brightness tweaks. This lets Reuters be sure that photographers aren’t modifying their images in the field (particularly on laptops, whose LCD screens are well known for poor color reproduction).

  4. TheGnat says:

    I agree that the 3rd “detection” item is, well, completely bogus. That’s simply not right. On the other hand, are there any reports of false negatives? If not, think about it in this light: if I were a photo editor, I would use the first two tools to narrow down which photos I needed to check. It would save me quite a bit of time. And I would still eyeball them carefully; in the end, my eyes and intuition would be the final say. If I felt a photo fell into a gray zone, where there seemed to be manipulation, and the tools said so (how qualitative are these tools? do they highlight where the discrepancies appear to be? or just say “hey! manipulation happened!”?), I might email the photographer and ask for the original and/or a statement on what editing he did, before deciding to toss or keep a photo. The tools are just that, tools. When you accidentally knock a nail sideways, you don’t blame the hammer for hitting the nail wrong, do you?

    Oh yes, and as someone who’s done a lot of Photoshop work, that job on the smoke was *pathetic*, an insult to the capabilities of the software! I can even tell what size of brush the hack used!

  5. Mark says:

    Ms. Gnat,

    I entirely agree that it should be up to the judgement of the photo editor to make the final choice. I just worry that given the level of scandal that recent revelations of photographic manipulation have generated, photo editors are going to be inclined to be excessively cautious. You’re right that tools are just that, but given the implications of using these particular tools in a sloppy manner, I felt it worth pointing out the dangers.

    I have no hard evidence of any particular rate of false negatives. However, I think that it would be a much less meaningful statistic, since it would depend entirely on the quantity and variety of edits performed on the photo. A giant, brightly colored “I haxx0r ur photog!” slapped in the middle of the frame is much easier to detect than brushing out the face of someone standing almost entirely in shadow, far in the background.

    If you look at the sample pictures, located in a box high in the right-hand margin of the Wired article I link to, you can see examples of the tool’s output. Those particular images are demonstrating the first tool I talked about, trying to detect clone-stamping. It highlights the areas of the image that look cloned, as well as the spot in the image it thinks the clone sample was taken from.
