9

As far as I know, the only tool that can create a RAW file is a camera, and there are no standard tools for manipulating one. I've heard that is why RAW files are used in various photography competitions to prove that a photographer has not manipulated a submitted photograph.

However, RAW files are just data files and any bits can be manipulated.

What makes RAW files difficult to manipulate?

scottbb
dzieciou

6 Answers

15

Nothing makes raw files difficult to manipulate for someone with the right expertise and tools. It's just that there aren't many folks around who have those tools and expertise.

The tools needed to turn a raw file into a manipulated jpeg are much more widespread and well known than those needed to turn a raw file into a different raw file. That is probably where the perception that raw files are more difficult to manipulate comes from: most organizers of such contests know how to produce heavily manipulated jpegs from raw files, but are probably not aware that raw data itself can be manipulated at all, much less how one would go about doing it. I mean, they don't even understand what 300 dpi (doesn't) mean in a display-size-agnostic digital environment.

Ironically, the news organization Reuters has it the other way around: They will only accept images that (appear to) have been generated as jpegs in camera at the time the images were shot.
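
As a rough, hedged sketch of why that is weak evidence on its own: the fields that make a JPEG "appear" to have come straight from a camera are ordinary EXIF metadata, readable (and just as easily rewritable) with common libraries. The filename below is a placeholder.

```python
# Hedged sketch: inspect the EXIF tags one might use to judge whether a
# JPEG "appears" to be straight out of camera. Requires Pillow;
# "photo.jpg" is a placeholder filename.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as im:
    exif = im.getexif()

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)
    if name in ("Make", "Model", "Software", "DateTime"):
        print(f"{name}: {value}")

# None of this is tamper-proof: the same libraries that read these tags
# can rewrite them, so "in-camera JPEG" rules only raise the bar slightly.
```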

Michael C
  • 2
    Interesting. JPEGs can be manipulated so much more easily, so why does Reuters do it? – dzieciou Dec 01 '16 at 22:09
  • 1
    You'll have to ask that one to those who made the decision at Reuters. – Michael C Dec 01 '16 at 22:10
  • 3
    @dzieciou If memory serves, I remember something about a news agency (possibly Reuters) only accepting JPEGs generated by the camera at the time of the shooting to help identify photos that have been doctored. It's very hard to doctor a JPEG without leaving artifacts, so if the camera made the JPEG, they can reasonably assume it is not altered. – Cort Ammon Dec 02 '16 at 01:03
  • 12
    @CortAmmon But it is fairly easy to create a jpeg from raw data after the fact and then alter the metadata to make it appear to have been generated in camera. – Michael C Dec 02 '16 at 01:25
  • 8
    @MichaelClark If I were to take a guess at the delicate world of image forensics, it may be harder to make a JPEG that looks like it was generated by the camera's particular JPEG compression algorithm than meets the eye. I don't think it'd stop a determined forger (that'd call for cryptographic signing of images), but it'd cut down on a lot of it. – Cort Ammon Dec 02 '16 at 03:04
  • 2
    @CortAmmon Your guess is correct. But I bet my guess would be equally true that none of the people at Reuters who made the "jpeg only" decision have a very thorough understanding of the subtleties of image forensics. – Michael C Dec 02 '16 at 13:04
  • 1
    @MichaelClark You're right, that's a bet I would not take you up on =) – Cort Ammon Dec 02 '16 at 14:52
  • @CortAmmon: Cryptographic signing would not help either since the attacker has the key (it would necessarily be in the camera and only a minor nuisance to extract). – R.. GitHub STOP HELPING ICE Dec 02 '16 at 15:42
  • @R.. That is something which has been dealt with in the security world. There are plenty of examples where physical security is used to protect a key, and that key is then used to cryptographically sign a document. It's more than a minor nuisance when done right. See the Apple v. FBI battle over the FBI's difficulty in extracting a private key from an iPhone. – Cort Ammon Dec 02 '16 at 15:50
  • 4
    I think the Reuters rule was about forbidding photographers from going crazy with post processing that might change the impression of the image and less about outright manipulation. Limiting the photographer to the settings of a camera is a relatively simple rule compared to legislating which photoshop filters are acceptable. – CodesInChaos Dec 02 '16 at 17:07
  • It also limits options to accurately correct for color, contrast, etc. to make the image look more like what a person there would have seen. The Reuters rule appears to be primarily about limiting adding and removing elements of the scene within the field of view via cloning, stamping, or a healing tool rather than about legitimate color correction. The Reuters guide specifically allows for color correction to compensate for the differences between what our eyes see and what a camera sees in the same scene. Limiting to in-camera JPEG severely limits those corrective options that are allowed. – Michael C Dec 03 '16 at 11:03
10

A RAW file is little more than a container for the output of a camera sensor. It has to be processed into an image that has full color information at each pixel. As such, there are no programs intended to manipulate a RAW file, since it is meant as input to RAW conversion software.

Since it is made of bits like any other digital file, one can of course change any portion of it with, for example, a binary editor. What is harder is making coherent changes to the RAW file, such that the changes appear natural. You cannot crop a RAW file, because the image dimensions would no longer match what the camera produces. Adding or removing objects from the scene would require doing the inverse of the RAW conversion specific to a particular camera.
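
To make the "just bits" point concrete, here is a minimal sketch (the filename and byte offset are arbitrary placeholders): changing bytes in a raw file is trivial, but the altered region will almost certainly decode as an obvious glitch rather than a plausible edit.

```python
# Minimal sketch: a raw file is ordinary bytes, so "editing" it is easy;
# making the edit look like something the camera could have produced is
# the hard part. The filename and offset are arbitrary placeholders.
with open("IMG_0001.CR2", "r+b") as f:
    f.seek(0x10000)                 # jump somewhere into the sensor data
    chunk = f.read(16)
    f.seek(0x10000)
    f.write(bytes(b ^ 0xFF for b in chunk))   # invert 16 bytes in place
```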

Itai
  • There are a few tools that transform raw data and then output that data still in raw form (i.e. non-debayered/non-demosaiced). The camera firmware that creates M-RAW and S-RAW files in some Canon EOS cameras, for instance. The Digital Lens Optimizer tool in Canon's Digital Photo Professional, for another. For more about DLO, please see: http://photo.stackexchange.com/questions/35324/why-does-using-canons-digital-lens-optimizer-double-the-size-of-a-raw-file – Michael C Dec 02 '16 at 16:16
5

There are better formats for lossless, high bit-depth image storage and exchange; the main benefit of raw files is that they contain minimally processed sensor data. So there's no compelling reason for anyone to put in the (fairly significant) effort of writing code to produce raw files. This means that submitting the raw file is a practical way to demonstrate (i) possession of the original shot and (ii) what the original shot looked like. It's poor evidence of legal ownership of the original.

Chris H
  • Given how much a RAW image can be changed in post-processing, I'm not sure it is necessarily a good way of demonstrating what the original shot looked like. It's an okay way of demonstrating the original shot's crop and content, though. – user Dec 03 '16 at 17:33
  • @MichaelKjörling, that's fair. In things like wildlife photography, editing out bait (for example) may be contrary to the intent of the competition. In an extreme case it would be possible to remove evidence of captivity in post. The raw would show this up. – Chris H Dec 03 '16 at 18:40
4

RAW files are hard to manipulate because there are no tools for this.

There are no tools because manipulating them is pointless.

RAW files don't hold standard images. They hold the data read directly from the sensor of one specific camera model. They need to be processed in a way specific to each camera model to get a standard image.

In order to "display" a raw file, you must have specific details of the camera model it came from; this is why RAW converters like Lightroom need an update for each newly released camera. In contrast, JPEGs or PNGs are designed to be displayable on any device without needing to know where they came from. They are meant to be able to hold any image.

I hope this makes it clear that there is absolutely no point in producing RAW files in any other way than directly in the camera. (Unless you want to commit fraud, or unless you want to reverse engineer a specific RAW format to understand a camera better, or to produce your own RAW converter for it.)
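
To give a concrete sense of what that camera-specific processing involves, here is a hedged sketch using the rawpy bindings to LibRaw (the same kind of library that has to be updated for every new camera); the filename is a placeholder.

```python
# Sketch using rawpy (Python bindings for LibRaw). A converter relies on
# camera-specific knowledge: the mosaic layout, black/white levels and
# color matrices. "IMG_0001.CR2" is a placeholder filename.
import rawpy

with rawpy.imread("IMG_0001.CR2") as raw:
    print(raw.raw_image.shape)       # single-channel sensor mosaic
    print(raw.color_desc)            # e.g. b'RGBG' for a Bayer sensor
    print(raw.camera_whitebalance)   # multipliers recorded by the camera
    rgb = raw.postprocess(use_camera_wb=True)  # demosaic into a normal image
    print(rgb.shape)                 # now height x width x 3
```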

Szabolcs
  • So there's no point to the image improvements one can gain using the Digital Lens Optimizer tool of Canon's Digital Photo Professional that applies very detailed lens correction to the raw file and appends it with a second file of equal size containing the "corrected" raw file? https://www.martinbaileyphotography.com/2012/07/14/quick-look-canon-digital-photo-professional-digital-lens-optimizer/ – Michael C Dec 02 '16 at 16:26
  • 1
    @MichaelClark, judging by your own comment in the question you linked, there is no point to the DLO saving as RAW rather than as TIFF. – Peter Taylor Dec 02 '16 at 20:57
  • TIFF "bakes in" black point, white point, color balance, etc. Raw does not. – Michael C Dec 03 '16 at 12:07
  • @PeterTaylor There is if you then do raw conversion and global editing using DPP rather than Lr! – Michael C Dec 03 '16 at 12:09
  • Once you leave the ACR module from within Lr or PS you're no longer working on the raw data anyway, you're working on an internal equivalent of a 16-bit TIFF. – Michael C Dec 03 '16 at 12:13
2

A typical camera sensor does not capture RGB pixels; instead it captures input from distinct red-sensing, green-sensing, and blue-sensing pixels at slightly different locations, and a raw file reports the values of those individual pixels as captured.

When a raw file is converted to an RGB pixel format, each pixel in the output file will typically be a weighted and filtered average of a number of pixels on the original sensor. Once the data is converted, each pixel in the resulting file is capable of independently representing any color. If one wants, for example, to adjust the saturation in a file of RGB pixels, each individual pixel's red value can be adjusted based upon its blue and green values, its blue based upon its red and green, and its green based upon its red and blue.
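
As a minimal illustration of the RGB case (a sketch, assuming rgb is an 8-bit array already produced by a raw converter), a desaturation really can be computed per pixel, with no reference to neighbouring pixels:

```python
# Sketch: on demosaiced RGB data, saturation can be adjusted per pixel.
# "rgb" is assumed to be a height x width x 3 uint8 array from a converter.
import numpy as np

def desaturate(rgb, amount=0.5):
    rgb = rgb.astype(np.float32)
    # Per-pixel luminance (Rec. 601 weights); each pixel is self-contained.
    luma = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    out = luma[..., None] + amount * (rgb - luma[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)
```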

If one wanted to apply a white-balance adjustment to a raw file, however, one wouldn't be able to adjust the color of individual pixels, since each individual pixel only records a single brightness value. If one wants to reduce the saturation of a raw image of a red object, it isn't possible to increase the blue and green values of all the red-sensing pixels; instead, one would have to increase the reported values of blue-sensing and green-sensing pixels that are near brightly lit red-sensing pixels. Such operations are not difficult, but each time they are applied they degrade the image a little more. By contrast, the act of converting sensor data to an RGB picture is generally lossy, but that loss need only be incurred once.
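
As a minimal sketch of that constraint (assuming an RGGB Bayer layout; the actual pattern and gains vary by camera), an adjustment applied to the mosaic can only scale whole classes of photosites:

```python
# Sketch: on the raw mosaic, "white balance" means scaling entire classes
# of photosites rather than recolouring individual pixels. Assumes an RGGB
# Bayer layout; "mosaic" is a 2-D array of raw sensor values.
import numpy as np

def apply_wb_to_mosaic(mosaic, r_gain=1.8, b_gain=1.4):
    out = mosaic.astype(np.float32)
    out[0::2, 0::2] *= r_gain   # red-sensing sites
    out[1::2, 1::2] *= b_gain   # blue-sensing sites
    return out                  # green sites are left untouched
```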

supercat
  • 1
    That's a very naive view of the information contained by the data from each pixel in a raw file. There are no red, green, or blue pixels in a raw file. There are only monochromatic luminance values for each pixel. Yes, the pixels have been filtered by the Bayer mask that has three different filters centered on R, G, & B. But light from a much wider spectrum than the single wavelength on which each filter is centered is allowed to pass through the mask. It is much like using a R, G, or B filter in front of B&W film. You still have a B&W negative with tonal values shifted for certain colors. – Michael C Dec 01 '16 at 23:35
  • 2
    @MichaelClark: I clarified the pixels as red-sensing, green-sensing, and blue-sensing, to clarify that the color name describes what the pixel senses. I didn't want to get too deep into the details of how things work beyond noting that the pixels which are used for capturing the red, green, and blue parts of an image are in slightly different places, which means that anything which shifts information between colors will slightly distort its position. – supercat Dec 01 '16 at 23:46
  • But some green light will pass through the R and B filters. Some R and B light will pass through the G filters. Very little R light passes through the B filters and vice-versa. Each pixel well senses more than just red, green, or blue light. Those are just the colors to which they're most attenuated. – Michael C Dec 02 '16 at 00:06
  • 2
    @MichaelClark: While it's true that the pixels that are intended to sense red will also sense a certain amount of blue and green, and likewise for other colors, that doesn't affect the fact that adjusting things like saturation will distort the apparent position of light hitting the sensor, and that repeated adjustments would progressively degrade the image. – supercat Dec 02 '16 at 00:12
  • There is no saturation in a monochromatic luminance value. There are only brighter or darker shades of black/grey/white. Increasing the response from red-filtered pixels makes the luminance value brighter for red-filtered pixels. There is no such thing as saturation until debayering/demosaicing has occurred. – Michael C Dec 02 '16 at 00:17
  • But I think in the context of this question we're talking about wholesale replacement of massive numbers of pixels, not subtle adjustments to only the red or blue or green attenuated pixels. – Michael C Dec 02 '16 at 00:18
  • @MichaelClark: The point is that desaturating the color in a raw picture would require adjusting pixels of each kind based upon nearby pixels of other kinds. – supercat Dec 02 '16 at 00:27
  • The point is there is no real color in a raw file to saturate/desaturate. There are only brightness values for each pixel well. That's it. If the values of all of the red-attenuated pixels, for example, are boosted and recorded as another raw file, then when that second raw file is debayered/demosaiced the change can be entirely counteracted by using different response curves/gamma correction/etc., producing the exact same results as using other response curves/gamma correction/etc. when debayering/demosaicing the original file. – Michael C Dec 02 '16 at 00:33
  • When you "open" a raw file in an image viewer you aren't looking at the actual raw data. You are looking at a debayered/demosaiced version of that data with various gamma corrections/response curves applied and then compressed to 8-bit per channel to be viewed on your monitor. – Michael C Dec 02 '16 at 00:38
  • 2
    @MichaelClark This continues to seem like a semi-mystical view of how RAW files work. While it is true that the RAW filters aren't perfect primaries, RAW conversion treats them exactly as if they are (for the sensors' native color space). The same debayering / demosaicing algorithms would work with idealized perfect filters. – mattdm Dec 02 '16 at 08:40
  • 1
    It only seems mystical to those who don't understand what information a raw file contains and, more importantly, what it does not. http://freefall.purrsia.com/ff300/fv00255.htm – Michael C Dec 02 '16 at 13:11
  • @MichaelClark: Indeed so, but that wouldn't make it impossible for an editor to allow a user to select a region in the screen representation and an operation to perform upon it, and then perform that requested operation on the original raw data. It would make it awkward, which is why programs don't generally try to manipulate raw files directly. – supercat Dec 02 '16 at 15:39
  • @supercat Please see: http://photo.stackexchange.com/questions/35324/why-does-using-canons-digital-lens-optimizer-double-the-size-of-a-raw-file – Michael C Dec 02 '16 at 16:17
  • @MichaelClark: What contradicts what I've said? It doesn't sound like the utility described in that other post is really editing the raw image data. – supercat Dec 02 '16 at 17:45
  • DLO does in fact remap the raw image data from each pixel, interpolates monochromatic luminance values for each pixel in the revised data, and write all of that data back into the .cr2 file as a second image file within the .cr2 container. That's why the size of the image file doubles. – Michael C Dec 03 '16 at 12:43
  • The point is, there are applications, some of them even widely available, that do manipulate raw data directly without first debayering/demosaicing it. – Michael C Dec 03 '16 at 12:45
  • @mattdm: To be precise, what do you mean by primaries? (Note: sensor (and eye) sensitivities MUST be different from display (and printer) primaries.) In any case your idea is correct: one (a person or a camera) cannot do color correction if the filters are not similar to the eye's cone sensitivities. But the offset of colors is important (and part of raw conversion tools). I have already tried to read a raw file from an unreleased (to the public, but already announced) camera, and the colors are completely wrong. – Giacomo Catenazzi Dec 06 '16 at 08:25
  • @MichaelClark That comic is cute, but it doesn't add any actual information. Please see http://fastrawviewer.com/viewing-raw-is-not-impossible — even if you doubt me, you must agree that the authors of Libraw understand what information a raw file contains (and doesn't). Remember, the point of Clarke's third law is that it's just technology after all. – mattdm Dec 06 '16 at 11:36
  • @GiacomoCatenazzi Yes, please also see the link above and particularly the part about color spaces. Or look at the Libraw or dcraw source code. – mattdm Dec 06 '16 at 11:38
  • @mattdm The link is just semantics. One person says, "You can't view raw files without doing gamma correction, demosaicing, white balance, etc" and the other side says, "Oh yes you can, as long as you do gamma correction, demosaicing, white balance, etc." – Michael C Dec 06 '16 at 12:06
  • My basic argument at the beginning of this discussion is that all of the light collected by green filtered pixels is not even green, much less one specific wavelength that we call green, and to represent that light as one specific shade of green is not accurate. The same is true of the red and blue filtered pixels. They are luminance values that represent a wide range of wavelengths of light collected in each pixel well. – Michael C Dec 06 '16 at 12:09
  • @Michael The same can be true of any RGB channel — that's where I'm saying you're treating this mystically. Or to put it the other way around, RAW files would work the same way with perfect filters. – mattdm Dec 06 '16 at 12:26
  • Also, on the "semantics" argument — you don't need to do demosaicing, as I showed you earlier. And you don't need to do color correction — it'll work fine, just look color-shifted. That doesn't mean that it's not color. – mattdm Dec 06 '16 at 12:30
  • By that definition a sepia photograph is color. – Michael C Dec 06 '16 at 13:22
1

Because there is no need to do it.

Advanced image-manipulation programs are usually non-destructive: they work from the original image but save the manipulations in an additional file (or a database). For simple editing, a sidecar .xmp file is used.

A non-destructive workflow has huge advantages. To name just one: it is easier to back up, and you always keep the original information for further manipulations, without losing anything.
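
As a toy illustration of the sidecar idea (real tools use XMP and their own field names; everything below is made up): the raw file is never touched, and the edit recipe lives in a small file next to it.

```python
# Toy sketch of a non-destructive sidecar: the raw file is never modified;
# the edit "recipe" is stored alongside it. Real tools use XMP rather than
# JSON, and their field names differ; these are invented for illustration.
import json

def save_edits(raw_path, edits):
    with open(raw_path + ".edits.json", "w") as f:
        json.dump(edits, f, indent=2)

save_edits("IMG_0001.CR2", {"exposure": 0.7, "white_balance": "daylight"})
```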

I think UFRaw can also save raw files.

Technically it is not difficult to create them: common raw files are essentially compressed TIFF files, with a well-known interpretation of the color and geometry of the pixels (well known because the readers depend on it), and with some additional EXIF information.
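
To back that up with something concrete: many common raw formats (.CR2, .NEF, .DNG) begin with a standard TIFF header, which can be read with nothing more than Python's struct module (the filename is a placeholder).

```python
# Sketch: many raw formats start with a standard TIFF header: a 2-byte
# byte-order mark, the magic number 42, and the offset of the first image
# file directory (IFD). "IMG_0001.CR2" is a placeholder filename.
import struct

with open("IMG_0001.CR2", "rb") as f:
    order = f.read(2)                    # b"II" little-endian, b"MM" big-endian
    endian = "<" if order == b"II" else ">"
    magic, ifd_offset = struct.unpack(endian + "HI", f.read(6))
    print(order, magic, ifd_offset)      # magic is 42 for TIFF-based files

    f.seek(ifd_offset)
    (num_entries,) = struct.unpack(endian + "H", f.read(2))
    print(num_entries, "tags in the first IFD")
```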