DSLRs are quite popular now, but there is little domestic material, in print or online, about the RAW format. Some time ago I had the honor of reading Real World Camera Raw with Adobe Photoshop by the master Bruce Fraser,
and I verified its points many times in actual practice. I have organized my reading notes and experience into this article, hoping to offer something useful to fellow digital photographers. If you already have a solid understanding of Gamma, Levels, and Curves in Photoshop, this article will be easier to follow.
1. What is a RAW file?
A RAW file mainly records the original information from the digital camera's sensor, together with some metadata generated by the camera (such as the ISO setting, shutter speed, aperture value, white balance, and so on). Different camera manufacturers use different encodings for the raw data, compress it in different ways, and in some cases even encrypt it. As a result, manufacturers use different file extensions for their RAW files: Canon's CRW, Minolta's MRW, Nikon's NEF, Olympus's ORF, and so on. But the principles and purposes are similar.
2. Why choose the RAW format?
The answer is simple. Let's take a look at how most digital cameras generate a JPG.
After the raw data is read from the CCD/CMOS, the camera applies the parameters you set beforehand, such as the color space (sRGB or Adobe RGB), sharpening value, white balance, contrast, and noise reduction, and adds a strong S-shaped tone curve. (Why the curve? Because the CCD/CMOS captures photon energy with a linear gamma (gamma 1.0), while the human eye's perception of light is nonlinear. Without this step the image would be far too dark to see clearly; anyone who did not know the reason would never buy a digital camera after seeing such an image, and the manufacturer would probably go under.) The converted image is then compressed at the JPEG quality you chose (such as SHQ, HQ, M, S) to produce the JPG file.
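The "too dark" problem in the parenthesis is easy to demonstrate numerically. A minimal sketch, with a plain 1/2.2 power curve standing in for the camera's stronger S-curve (the exact curve each camera uses is proprietary):

```python
import numpy as np

# A few hypothetical linear (gamma 1.0) sensor readings, normalised to 0..1.
# Mid-grey reflects about 18% of the light, so in linear data it sits at
# only ~0.18 of full scale -- displayed as-is it would look very dark.
linear = np.array([0.0, 0.18, 0.5, 1.0])

# The camera (or RAW converter) applies a gamma encoding before output;
# a simple 1/2.2 power curve is used here for illustration.
encoded = linear ** (1 / 2.2)

# Mid-grey moves from 0.18 up to roughly 0.46, close to the middle of the
# displayable range, which matches how our eyes perceive the scene.
print(np.round(encoded, 2))
```

Note how the curve lifts the dark and middle tones while leaving black and white fixed, which is exactly the job the S-curve performs in the camera's JPG pipeline.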
When shooting in RAW format, none of the settings on the body affect the RAW file except ISO, shutter, aperture, and focal length, because all the operations mentioned above, color space, sharpening value, white balance, contrast, and noise reduction, must instead be specified when the RAW file is converted. Everything is under your own control.
To take the simplest analogy: shooting JPG is like taking the photo yourself and then handing the film to the camera manufacturer's lab to develop and print for you; shooting RAW is taking the photo, developing the negative, and making the print all by yourself. (This is why Olympus Studio renders the English "Raw Development" as "RAW Imaging" in its Chinese version.)
Many people may scoff at post-processing (or "PS") and insist that what matters most happens before the shutter, the head behind the lens. Yes, I quite agree. But since we bring a serious attitude to the shooting, why not bring the same attitude to post-production? In the film days we complained that our photos were ruined by the technicians at the print shop, so we turned to shooting slides and developing ourselves, which gave us more control. Now that you shoot digital, you can control the whole process from start to finish; why keep your distance from it? Moreover, with digital, if the post-production (here meaning the equivalent of film development, not retouching) is not handled carefully, then no matter how hard you work in the field, you may still not get a truly high-quality image.
3. About sensors
Mainstream digital camera sensors include CCD, CMOS, and Foveon X3. For how the Foveon X3 works, you can check Foveon's home page yourself. Here, a brief look at how CCD/CMOS sensors work is enough for our purposes with RAW.
A digital camera's sensor is a two-dimensional matrix of photosensitive elements (CCD or CMOS) densely arranged in rows and columns. The common Bayer pattern is arranged as shown below, with each photosite corresponding to one pixel. R sites sense red light, G sites sense green, and B sites sense blue. In the Bayer pattern there are twice as many G sites as R or B (because our eyes are more sensitive to green).
Figure 1
Each CCD or CMOS element in the matrix only senses the energy of photons: it generates a charge proportional to the intensity of the incident light, and that charge is then collected, amplified, and stored. Note that RAW records only the charge value at each pixel position; it records no color information at all. The CCD is "color blind", that is to say:
RAW files are just grayscale files!
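Both points, the 2:1 green ratio and the single-channel nature of the capture, can be sketched in a few lines. This assumes an RGGB layout purely for illustration; cameras differ in which corner carries which filter:

```python
import numpy as np

# A 2x2 RGGB tile repeated into a 6x6 patch of the colour-filter mosaic.
pattern = np.array([['R', 'G'],
                    ['G', 'B']])
cfa = np.tile(pattern, (3, 3))

# Green sites outnumber red and blue two to one.
print((cfa == 'G').sum(), (cfa == 'R').sum(), (cfa == 'B').sum())   # 18 9 9

# The raw capture itself is one brightness number per photosite:
# a plain 2-D (grayscale) array with no colour axis at all.
raw = np.random.randint(0, 4096, size=cfa.shape)   # 12-bit charge values
print(raw.shape, raw.ndim)
```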
We can picture a CCD/CMOS collecting charge in this way, as shown in the figure below:
Figure 2
Therefore, the job of any RAW converter (Photoshop's Camera Raw plug-in, Bibble, Phase One C1 Pro, RawShooter Essentials 2005, the RAW conversion software supplied by the various manufacturers, and so on) is to convert the brightness information recorded at these pixels into color information visible to the naked eye. How each manufacturer arranges RGB or CMY filters on the sensor matrix need not concern us: as long as the software supports your camera, it understands the layout and knows how to interpret and process each pixel's brightness value.
Because CCD/CMOS works on a different principle from Foveon X3, a CCD/CMOS converter must borrow information from neighboring pixels to compute the color value at each position, an operation called demosaicing (Foveon X3 does not need this). Beyond that, the following things are all controlled by the RAW converter, and they are the principles we must understand when working with RAW.
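A minimal sketch of the demosaicing step for one pixel, assuming the RGGB layout used above: at the blue site in the centre of this 3x3 window, the green value is unknown, and a simple bilinear converter estimates it as the average of the four green neighbours. Real converters use smarter, edge-aware mathematics; the numbers here are invented.

```python
import numpy as np

# A 3x3 window of raw brightness values from an RGGB mosaic.
window = np.array([[ 10, 200,  12],    # R  G  R
                   [210,  40, 190],    # G  B  G
                   [ 14, 220,  16]],   # R  G  R
                  dtype=float)

# Bilinear estimate of the missing green at the centre (the blue site):
# average of the greens above, below, left, and right.
green_at_centre = (window[0, 1] + window[1, 0] +
                   window[1, 2] + window[2, 1]) / 4
print(green_at_centre)   # 205.0
```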
● White balance. Our eyes adapt automatically to different ambient light: we interpret the brightest area as white and judge the other colors from there. The sensor has no such ability; it must be told how bright "white" is, which is what the white-balance setting does. When shooting RAW, the sensor records only each pixel's brightness, and the white balance is stored as metadata for the converter to use later. It is effectively a starting point, or an indispensable parameter of the conversion function: without it, the other colors cannot be interpreted. There was an article a while back (on a photography forum, if I recall) about whether white balance can be recovered in post regardless of how it was set. My view: in theory, absolutely. Why only "in theory"? From the discussion above, the white-balance setting only participates in the later conversion, so even if the correct white balance was not set at capture, the original colors can be recovered as long as the correct color temperature of the scene is entered during conversion. The question is, how many people can accurately remember the scene's color temperature at conversion time? Unless there is a pure white reference in the frame, which the white-balance eyedropper can sample to set it correctly. This suggests a trick worth trying: when shooting, place a white object in the frame (white paper, say, taking care that it neither disturbs the composition nor skews the exposure reading too much, and that it is fully lit by the ambient light). During conversion, sample the white object with the eyedropper, then crop it out.
JPG is different, however. Because the camera converts to JPG right after the shot, it must apply a color temperature value immediately, so if the white balance was set wrongly, the image will inevitably have a color cast.
● Color interpretation. Ask a thousand people which color is red and you may get a thousand different reds. Likewise, CCD/CMOS has no idea what red, green, or blue is. So when converting a RAW file you must specify the definition of red, green, and blue, that is, the color space. Different cameras offer different built-in color spaces, such as sRGB and Adobe RGB. My understanding is that, when shooting RAW, setting sRGB or Adobe RGB in the camera is pointless, because the definition of RGB (the target color space of the converted file) is given to the RAW converter at conversion time. That is why every RAW converter has a color-space option, and you must specify the target color space there. So if you shoot RAW, stop asking whether the camera should be set to sRGB or Adobe RGB; you can even convert to ProPhoto RGB if you like! If you shoot JPG, you are presumably not chasing the highest image quality anyway, so just use sRGB.
● Gamma correction. First, you need to know what gamma is; you can search for the relevant material online yourself. Digital RAW capture uses a linear gamma (gamma 1.0), but the human eye's response to light is a nonlinear curve. A RAW converter therefore always applies a gamma curve to the original data during conversion (roughly speaking, it maps the original data through a function f(x), where f is not a linear function) to produce tonality closer to human perception.
● Noise reduction, anti-aliasing, and sharpening. Problems arise when image detail falls on just one photosite of the matrix, or straddles an R-sensitive and a B-sensitive site. Demosaicing alone can hardly restore the true color of such detail; in other words, detail is lost. Most RAW converters therefore perform a series of operations during conversion, such as edge detection, anti-aliasing, noise reduction, and sharpening. Because different programs use different algorithms, different RAW converters render detail differently.
Whew, I'm really tired after typing all this. There is honestly too much to say about RAW. When taking notes I extracted the original text directly (the book is in English, 254 pages, with more than 100 pages about RAW), so writing this article while reading English was quite painful. Understanding the inner workings of a RAW camera would resolve the questions above, but for most readers of this article the key is knowing how to use (convert) RAW well, so I will skip the theory (it involves more mathematics). If I have the energy later, I may write an "Old Fox" software tutorial, "Playing with the RAW Format", specifically introducing Camera Raw 2.4. Finally, here are the key points everyone should know (using the Camera Raw 2.4 plug-in for Photoshop CS as the example):
Figure 4
1. Any operation that can be done in Camera Raw should not be left to Photoshop after conversion. The reason is simple: there is a fundamental difference between operating before and after the conversion. The various operations before conversion really just define a set of parameters (color space, sharpening value, white balance, contrast, noise reduction, and so on), which are then handed to the conversion function to generate the color information of the target pixels. (In many converters that function is dcraw, the open-source decoder by Dave Coffin. How widely is it used? The following RAW software is all based on dcraw: Adobe Photoshop, Bibble, BreezeBrowser, cPicture, dcRAW-X, Directory Opus Plugin, Thorsten Lemke's GraphicConverter, IrfanView, the IRIS image processor for astronomers, Lightbox, Photo Companion, Photo Jockey, PhotoReviewer, PolyView, PowerShovel-II, RawDrop, RawView, SharpRaw, SilverFast DCPro, ViewIt, Viewer n5, and Duane DeSieno's VueScan.) The conversion is equivalent to a function f: f(color space, sharpening value, white balance, contrast, noise reduction) = the color of the target pixel. As long as that color does not exceed the gamut of the target color space, it is valid color information; if you convert into a smaller gamut, some colors will be clipped, meaning the target pixel's color falls outside the color space (as when an image rich in color is converted into sRGB). Once the image has been converted ("developed"), however, operations in Photoshop such as Levels, Curves, and Hue/Saturation all work on the existing pixel values, and they are nonlinear operations that inevitably cause irreversible information loss. For example, under the nonlinear transform f(x) = x² (the square of x), both x = 3 and x = -3 yield 9; color information is bound to be compressed.
Another example: the Exposure and Shadows sliders in Camera Raw 2.4 are equivalent to the white point and black point in Photoshop's Levels. Suppose we set the point with brightness 245 as the white point (255). In Levels, the result is that everything from 245 to 255 turns white, which seems harmless; but fatally, the original range 0 to 245 is stretched out to 0 to 255. Where do the in-between values that did not exist before come from? They are "fabricated" colors, computed by an interpolation algorithm. That is why, if you look at the histogram after applying Levels, you see many discontinuous comb lines in it (I won't illustrate this with a picture here; if you know Photoshop's Levels well, you will understand). But what is the difference if you set Exposure in Camera Raw instead? In Camera Raw you merely supply one parameter value and the conversion function recomputes every pixel, so you get genuine pixel color information.
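The comb effect described above is easy to reproduce numerically. A sketch of the Levels white-point move, assuming plain rounding to 8-bit integers:

```python
import numpy as np

# Every input level from 0 to 245 (the new white point is set at 245).
values = np.arange(0, 246, dtype=np.float64)

# The Levels stretch: map 0..245 onto 0..255, then quantise to integers.
stretched = np.round(values * 255 / 245).astype(int)

used = np.unique(stretched)
print(len(values), "input levels ->", len(used), "distinct output levels")

# 246 inputs can cover at most 246 of the 256 output levels, so at least
# ten levels in 0..255 are never produced: the gaps in the histogram.
missing = set(range(256)) - set(used)
print(len(missing), "output levels left empty")
```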
2. Regarding sharpening: is it better to use the converter's own sharpening (Camera Raw 2.4) or Unsharp Mask in Photoshop? The answer has to be the converter's own sharpening. In an image, an edge consists of pixels whose gray levels differ from those of their neighbors, so to strengthen an edge we should emphasize the gray-level changes between adjacent points (Advanced Applications of Digital Image Processing in Delphi, Liu Jun); in other words, sharpening algorithms generally operate on gray values. Then, needless to say, you already know that Photoshop operates on the converted pixel values (based on existing pixels), and you can see how that differs from the converter's own sharpening.
The sharpening process, roughly: gray the pixels -> edge detection -> gray-level enhancement -> restore the R, G, and B components.
And several ways to gray a pixel:
1) Take the average of each pixel's R, G, and B, and assign that average to all three components.
2) Take the maximum of each pixel's R, G, and B, and assign that maximum to all three components.
3) Use the Y component of the YUV color space: physically, Y is a luminance measure and carries all the information of the grayscale image, with Y = 0.299R + 0.587G + 0.114B.
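The three graying methods can be compared on a single sample pixel (the RGB values are arbitrary; the Y weights are the standard BT.601 luma coefficients):

```python
import numpy as np

r, g, b = 200.0, 100.0, 50.0          # one sample RGB pixel

avg_gray = (r + g + b) / 3            # method 1: average of the components
max_gray = max(r, g, b)               # method 2: maximum component
# method 3: the Y (luma) component, weighted by the eye's sensitivity
# to each primary -- note how heavily green counts
luma = 0.299 * r + 0.587 * g + 0.114 * b

print(round(avg_gray, 1), max_gray, round(luma, 1))
```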
So which is better for a RAW converter (or any other software): detecting edges directly from the gray raw data, or graying the already-converted R, G, B pixels and then detecting edges?
Bruce Fraser's position in the book is to leave room for Unsharp Mask in Photoshop (surprisingly so, since Camera Raw 2.4 offers few options, only a Sharpness slider, while Unsharp Mask has Amount, Radius, and Threshold and much more room to work). I have experimented with this a great deal. Starting from an image full of detail, no matter how I adjusted Amount, Radius, and Threshold, Unsharp Mask could not reproduce that effect, or it over-sharpened (some details simply disappeared).
Here is another Photoshop sharpening trick: converting the image to Lab mode, applying Unsharp Mask to the L channel only, and then converting back to RGB gives better results than applying Unsharp Mask to the RGB image directly.
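A sketch of that "sharpen only the lightness" idea. A luma channel stands in for Lab's L here (an approximation: a true Lab conversion is colour-managed), the blur is a simple 3x3 box filter, and `amount` plays the role of the Unsharp Mask Amount slider; the function names are my own, not Photoshop's.

```python
import numpy as np

def box_blur(img):
    # average each pixel with its 8 neighbours (edges repeated by padding)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sharpen_luma(rgb, amount=1.0):
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    detail = luma - box_blur(luma)        # the "unsharp mask": edges only
    # add the boosted detail equally to all three channels, disturbing the
    # colour ratios far less than sharpening R, G, B independently would
    return np.clip(rgb + amount * detail[..., None], 0, 255)

# a hard vertical edge: black on the left, light grey on the right
rgb = np.zeros((4, 4, 3)); rgb[:, 2:] = 200.0
sharp = sharpen_luma(rgb)
print(sharp[0, 1, 0], sharp[0, 2, 0])   # darker, then brighter: the edge "pops"
```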
3. If you only share 800x600 pictures online, resizing 2240x1680 (or even larger) images down to 800x600, do you still need to sharpen the originals first? Won't they look the same after shrinking? No! Practice shows that details sharpened with the RAW converter remain visible after resizing; without sharpening, you can definitely see the difference.
4. If the image is for printing, choose Adobe RGB (in the RAW converter, not in the camera!); if it is for online sharing, choose sRGB. Do not convert to Adobe RGB first and then convert to sRGB in Photoshop! However, if you want further processing after conversion, such as adding a frame or a signature, first convert to Adobe RGB, do the work in Photoshop, and convert to sRGB at the end.
5. Bit depth follows the same logic as point 4. Choose 16 bits/channel if the image is for printing, 8 bits/channel for online sharing. Do not convert at 16 bits/channel and then switch to 8 bits/channel in Photoshop! But if further processing is needed after conversion, such as frames or signatures, convert at 16 bits/channel, do the work in Photoshop, then convert to 8 bits/channel. (If you will run many filters on the picture, use 8 bits/channel directly, because many Photoshop filters do not work at 16 bits/channel.)
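Why the extra bits matter for editing can be shown with a toy round trip: apply a strong darkening edit and then undo it, at both precisions, and count how many distinct tonal levels survive. (A simplified model: real Photoshop adjustments differ, but the quantisation effect is the same.)

```python
import numpy as np

levels = np.arange(256, dtype=np.float64)     # every 8-bit tone, 0..255

def roundtrip(values, scale):
    # work at the given precision: scale=1 keeps 8-bit steps, while
    # scale=257 spreads the same tones over a 16-bit 0..65535 range
    darkened = np.round(values * scale / 2)   # halve brightness, quantise
    restored = np.round(darkened * 2 / scale) # bring it back up
    return np.clip(restored, 0, 255)

out8 = roundtrip(levels, 1.0)
out16 = roundtrip(levels, 257.0)

print(len(np.unique(out8)), "of 256 levels survive at 8 bits")
print(len(np.unique(out16)), "of 256 levels survive at 16 bits")
```

At 8 bits roughly half the tonal levels are destroyed by a single edit-and-undo, while at 16 bits every level survives, which is exactly why heavy editing should happen at the higher depth.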
6. If you plan to make a small picture for online sharing, choose the smallest size in Camera Raw 2.4 (for square CCD/CMOS pixels) and do not resize in Photoshop! For Fuji SuperCCD (hexagonal pixels), do the opposite: choose a larger size and resize in Photoshop.
7. Exposure in Camera Raw 2.4 is better reduced than increased: reducing it can recover more detail in the highlights, while increasing it too much easily produces noise in the shadows.
8. Brightness in Camera Raw 2.4 is equivalent to the midtone (middle gray) slider in Photoshop's Levels; Contrast is equivalent to a curve; Saturation is somewhat like Hue/Saturation. Adjusting each item produces the results below:
Figure 3
9. Be sure to learn to read the RAW histogram. Whatever you adjust, be careful not to let colors overflow (get clipped).
10. Luminance smoothing in Camera Raw 2.4 is very effective at removing noise in large areas of uniform color, such as a blue sky. The Photoshop way to remove the same noise is to convert the image to Lab mode, blur the L channel, and convert back to RGB, but the result is not as smooth as luminance smoothing.
11. Color noise reduction in Camera Raw 2.4, needless to say, removes noise in the shadows.
12. Chromatic aberration R/C and chromatic aberration B/Y are used to remove purple fringing.
13. Vignetting is used to correct dark corners.