A CCD camera uses a small, rectangular piece of silicon rather than a piece of film to receive incoming light. This special piece of silicon, called a charge-coupled device (CCD), is a solid-state electronic component that has been micro-manufactured and segmented into an array of individual light-sensitive cells called "photosites." Each photosite is one element of the whole picture that is formed; it is therefore called a picture element, or "pixel." The more common CCDs found in camcorders and other retail devices have a pixel array a few hundred photosites high by a few hundred photosites wide (e.g., 500x300 or 320x200), yielding tens of thousands of pixels. Since most CCDs are only about 1/4" or 1/3" square, each of these many thousands of pixels is only about 10 millionths of a meter (about 4 ten-thousandths of an inch) wide!
The CCD photosites sense incoming light through the photoelectric effect, the property of certain materials of releasing an electron when struck by a photon of light. The electrons emitted within the CCD are fenced within nonconductive boundaries, so they remain within the area of the photon strike. As long as light is allowed to impinge on a photosite, electrons will accumulate in that pixel. When the source of light is extinguished (e.g., the shutter is closed), simple electronic circuitry and a microprocessor or computer unload the CCD array, count the electrons in each pixel, and process the resulting data into an image on a video monitor or other output medium.
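The accumulation of electrons described above can be sketched numerically. This is a minimal simulation, not real camera code; the photon flux, exposure time, and quantum-efficiency figure are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def expose(photon_flux, seconds, quantum_efficiency=0.6):
    """Simulate electron counts accumulating in a CCD array.

    photon_flux: photons per second arriving at each photosite (2-D array).
    quantum_efficiency: assumed fraction of photons that free an electron
    (real CCDs vary; 0.6 is an illustrative value).
    """
    expected = photon_flux * seconds * quantum_efficiency
    # Photon arrival is random, so per-pixel counts follow a Poisson law.
    return rng.poisson(expected)

flux = np.full((4, 4), 100.0)      # uniform scene: 100 photons/s per photosite
counts = expose(flux, seconds=2.0)  # longer exposures accumulate more electrons
```

Doubling `seconds` would roughly double the counts, which is why long exposures let astronomical cameras record very dim sources.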
The difference between a CCD camcorder and an astronomical CCD camera is that a camcorder must take and display 60 sequential images per second to replicate motion and color from daylight scenes, while an astronomical camera takes long-duration exposures (from many seconds up to a few hours) of very dim starlight from an apparently motionless object. Camcorders make color images by merging the data taken simultaneously by groups of adjacent pixels covered by red, green, and blue filters.
Astronomical CCD cameras also can make color images, but these are made by post-exposure processing and merging of three separate exposures of an object made through red, green, and blue filters.
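As a rough illustration of this tricolor merging, the sketch below stacks three invented monochrome arrays, standing in for separate red-, green-, and blue-filtered exposures, into a single color array. Real processing would also align and scale the frames first:

```python
import numpy as np

# Three assumed 8-bit monochrome exposures of the same object, one per filter.
red   = np.full((2, 2), 200, dtype=np.uint8)
green = np.full((2, 2), 120, dtype=np.uint8)
blue  = np.full((2, 2),  40, dtype=np.uint8)

# Stack along a third axis to form an (height, width, 3) RGB image.
color = np.dstack([red, green, blue])
```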
Finally, two characteristics of CCDs must be considered in making a final astronomical image: 1) since they are electronic components, CCDs are sensitive to heat within the camera as well as to light from the object of interest, and 2) the individual photosites in the CCD array may vary significantly in their sensitivity to both heat and light. First, this means that the electrons generated by heat rather than by light need to be subtracted from the final tally of electrons in each pixel so that a truer image can be rendered. This is called "dark subtraction." Second, the variance in electron depth across the CCD array due to inherent differences among the pixels needs to be leveled by dividing each pixel value by the array's average pixel value. This is called "flat fielding."
Dark subtraction is accomplished by subtracting a "dark frame" from the object image (called a "light frame"). The dark frame is created by taking an exposure while the CCD is kept in complete darkness. This exposure must be the same duration as the light frame and be made with the CCD at the same temperature as during the light frame, so that the electrons generated during the dark frame replicate the heat-generated electrons present in the light frame. Flat-field images are made by taking a picture of an evenly illuminated scene, such as the sky at dusk or the flat gray interior of an observatory dome. The resultant image shows the inherent variances in pixel value across the CCD array due to differences in photosite sensitivity or to dust specks or vignetting in the optical system. Image-processing software uses mathematical algorithms to divide all pixel values in the flat-field image by the array's average pixel value. The results are then correlated, pixel by pixel, against the array values in the light image to produce a better representation of the object of interest.
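The normalization step just described, dividing every pixel of the flat field by the array's average value, can be sketched in a few lines. The pixel values here are invented; a normalized flat centers on 1.0, with more-sensitive pixels above 1 and less-sensitive pixels below:

```python
import numpy as np

# Invented flat-field frame: a nominally uniform scene, with small
# pixel-to-pixel sensitivity differences.
flat = np.array([[ 98., 102.],
                 [100., 100.]])

# Divide by the array's average pixel value to get relative sensitivity.
normalized_flat = flat / flat.mean()
# Pixels more sensitive than average are > 1, less sensitive are < 1.
```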
In the final stages of image production, the light frame (object image) is adjusted by first having an appropriate dark frame subtracted and then having an appropriate flat field divided into the image. This process is called image calibration and results in a truer, less noisy image.
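The whole calibration sequence can be sketched as follows, assuming the dark frame already matches the light frame in exposure time and temperature. All pixel values are invented for illustration:

```python
import numpy as np

# Invented frames: a light frame of the object, a matched dark frame,
# and a flat field recording pixel-to-pixel sensitivity differences.
light = np.array([[520., 540.],
                  [500., 510.]])
dark  = np.array([[ 20.,  20.],
                  [ 20.,  20.]])
flat  = np.array([[100., 104.],
                  [ 96., 100.]])

# 1) Subtract heat-generated electrons, 2) divide out sensitivity variations.
normalized_flat = flat / flat.mean()
calibrated = (light - dark) / normalized_flat
```

In this toy example the heat signal and sensitivity variations cancel out, leaving a nearly uniform field whose remaining variation is real signal.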