CCD Glossary
From SkyInsight
The following is a reconstruction of the CCD Glossary originally compiled by Ron Wodaski and hosted on his New CCD Astronomy website. The original glossary was hacked and thus removed from the site.
A/D converter
A/D stands for Analog to Digital. The A/D converter is an electronic component in a CCD camera that converts the analog signal from the CCD chip into digital form. You might assume that the CCD chip outputs a digital signal, but that's not true. The CCD chip converts photons into electrons. The more electrons, the larger the net negative electrical charge in a given pixel. The A/D converter measures that charge, and encodes it into ones and zeros.
By: Ron Wodaski
aberration
This is a catch-all term that refers to all kinds of optical problems. The greater the aberration in an optic, the worse the sharpness and contrast will be. The higher the quality of the optic, the fewer and smaller the aberrations it has.
See also: collimation
By: Ron Wodaski
Link: A gentle introduction to optical design
ABG (antiblooming gate)
Refers to a CCD camera that has special circuitry to inhibit blooming. The circuits cover part of the active pixel area (about 30% is typical), so ABG cameras are less sensitive than non-antiblooming (NABG) cameras. In practice, however, the ability to take much longer exposures with ABG cameras means that your actual productivity is similar with both types of cameras.
See also: NABG (non-antiblooming gate)
By: Ron Wodaski
ADU
Stands for Analog to Digital Unit. Simply put, one ADU is one brightness level. For example, if a pixel in your image has a brightness level of 2,543, that is 2,543 ADU. Used in technical discussions, in phrases like: 2.3 electrons per ADU or The full well capacity is around 40,000 e-, or about 18,000 ADU.
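The relationship between ADU and electrons can be sketched in a few lines. The gain value here is hypothetical (the 2.3 e-/ADU figure is taken from the example above; check your own camera's specifications):

```python
# Convert between ADU and electrons using the camera gain (e-/ADU).
# The gain value below is a hypothetical example, not any specific camera's.
GAIN_E_PER_ADU = 2.3

def adu_to_electrons(adu, gain=GAIN_E_PER_ADU):
    """Number of electrons represented by a pixel value in ADU."""
    return adu * gain

def electrons_to_adu(electrons, gain=GAIN_E_PER_ADU):
    """Pixel value in ADU for a given electron count."""
    return electrons / gain

# A 40,000 e- full well at 2.3 e-/ADU works out to roughly 17,400 ADU,
# in the ballpark of the "about 18,000 ADU" figure quoted above.
full_well_adu = electrons_to_adu(40_000)
```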
By: Ron Wodaski
Aggressiveness
Pertains to guiding corrections. A high aggressiveness setting means that the guider will make guide corrections aggressively. A low setting means that the guider will make only part of a correction on each attempt. If your mount is over-responding to guide corrections, try lowering the aggressiveness setting.
By: Ron Wodaski
artifact
When you sharpen an image using whatever method you choose, there is a limit to how far you can sharpen that image. If you sharpen too aggressively, you run the risk of creating artifacts. These are false details that don't actually exist in the image. They are created by the overly aggressive sharpening process. This applies to any form of sharpening, including high pass filters, unsharp masking, deconvolution, and so on.
By: Ron Wodaski
Link: The philosophy of artifacts
Astrograph
An astrograph is a telescope that is designed with photography as its main purpose. The camera and the human eye see things a little differently, so there are differences between telescopes designed for visual use and those designed for photographic use. This doesn't mean you can't use most visual instruments for imaging, and it doesn't mean you can't use many astrographs for visual observation. It just means that the design leans one way or the other.
There are some telescopes which are pure astrographs and can only be used for imaging. The Schmidt camera is one such. Instead of a secondary mirror like a Schmidt-Cassegrain, it has a curved surface that holds a piece of film. There simply is no way to put your eye at the focal plane! The curved surface is another clue to what makes astrographs special: they often give some consideration to the shape of the focal plane, so that it will match the surface of the film or CCD detector. Many astrographs have a very precisely flat focal plane, for example, which is ideal for both film and CCD imaging.
A telescope designed for visual use can rely on the eye's ability to accommodate a range of focus to compromise the focal plane a bit, and still deliver good results in an economical fashion. It takes extra optics (often expensive optics) or special materials or designs to flatten out the field for imaging, and this is why astrographs often cost more than visual scopes of a similar aperture. In many cases, an astrograph is also a wide-field system. It is harder to achieve a flat, wide field because of the very fast focal ratio. It is the nature of telescopes with a slow focal ratio to have flatter fields, so astrographs are not as common among such scopes.
By: Ron Wodaski
Link: Astrograph magazine
Backlash
Backlash is the free play in a mount's RA and Dec axes. Most mounts used for imaging employ a worm and gear drive mechanism. The worm is like a screw, and it engages the worm gear and drives it very slowly around. All gears require a bit of looseness in order to turn; otherwise, they would lock up. This looseness creates backlash.
The name backlash is reasonably descriptive if you examine what happens where worm meets gear. As the worm turns, it presses against the gear teeth, causing the gear to turn. Now imagine reversing the worm so it turns the other way. It must turn a small amount before it engages the opposite tooth on the gear. During this time, the mount does not move. Once the worm touches that opposite gear tooth, the mount begins moving in the opposite direction. The delay between forward and backward movement is called backlash - a time when no movement occurs. If you wiggle the mount by hand, you can feel the backlash if it is large enough.
If the mount uses other gears besides the worm and worm gear to transmit power from the motor, any looseness in those gears also contributes to backlash. If there are 5-6 gears, the backlash adds up pretty fast. In most cases, if you have any ability to adjust the way any gears mesh, it is usually the worm and worm gear. In most such mounts, the worm is on a plate that can be slid toward and away from the worm gear in very small increments.
The RA gears are always moving the mount in order to track with the stars. If you make guiding corrections that are slower than this rate, the RA actually never stops moving forward, and backlash never comes into play. Guide rates of .25 to .75X sidereal rate, therefore, are ideal. You should slightly overweight your mount to the east side to take proper advantage of this, however. Otherwise the mount will have to partially unload and reload weight onto the gears when slowing down and speeding up.
A well-made mount can actually tolerate an amazingly large amount of backlash in RA without difficulty. Backlash is always a concern in Declination, however. You want to pay special attention to the quality of mesh in Dec. The mount frequently changes direction in Declination, and backlash can become a real hassle if it is not controlled. Some mounts, and many camera control programs, include backlash compensation for this reason. This is a setting that determines the amount of time that the motors move at a higher speed when the mount changes direction. You can determine by trial and error what backlash compensation setting to use. The compensation should be not quite enough to completely remove backlash's effects. NOTE: don't use software backlash compensation for RA unless you are using a guide rate greater than 1X sidereal rate. Otherwise, the mount may jump when the compensation is unnecessarily applied when changing directions!
By: Ron Wodaski
Link: Improving the CG-5 Equatorial Mount (dec axis)
bias frame
An exposure of the shortest possible duration, taken with the shutter closed (dark). Used when scaling dark frames (applying them to images with a different exposure time than the dark frame).
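Dark-frame scaling can be sketched with a few lines of arithmetic. This assumes the usual convention that thermal signal grows linearly with exposure time, so the constant bias level must be removed before scaling and added back afterward:

```python
import numpy as np

# Sketch of scaling a dark frame via a bias frame. Assumption: the
# thermal (dark current) component grows linearly with exposure time,
# while the bias level is constant for any exposure.
def scale_dark(dark, bias, dark_exposure_s, light_exposure_s):
    """Scale a dark frame taken at one exposure time to match another."""
    thermal = dark - bias                        # time-dependent part only
    factor = light_exposure_s / dark_exposure_s
    return bias + thermal * factor

# Toy frames: 100 ADU bias, 2 ADU/s of dark current over 60 seconds.
bias = np.full((4, 4), 100.0)
dark_60s = bias + 2.0 * 60                       # 220 ADU everywhere
scaled = scale_dark(dark_60s, bias, 60, 30)      # expect 100 + 60 = 160 ADU
```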
See also: dark frame, flat-field frame
By: Ron Wodaski
Link: An examination of bias frames
binning
The practice of grouping CCD pixels to create a larger virtual pixel from smaller real pixels. For example, if you bin 2x2, you get a four-pixel array that is treated as if it were a single pixel for output. When taking short exposures, such as for rough focusing, binning increases the sensitivity of the camera, and it also allows for much faster downloads. This allows you to visualize what is in your field of view in 5-10 second exposures and quickly assess your framing and field of view. Binning is also useful when you are using a camera/telescope pair that would otherwise have too much resolution. Seeing limits the resolution you can actually obtain. If the camera has higher resolution than this, it is wasted. Binning reduces the resolution, increases the sensitivity of the camera, and allows you to make productive use of your time.
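The arithmetic of 2x2 binning can be sketched in software. Note that real hardware binning sums the charge on-chip before readout (which is why it also reduces relative readout noise); this sketch only illustrates the grouping:

```python
import numpy as np

def bin_2x2(frame):
    """Sum 2x2 blocks of pixels into one virtual pixel (software binning).

    Hardware binning sums charge on-chip before readout; this version
    only illustrates the arithmetic of grouping four pixels into one.
    """
    h, w = frame.shape
    trimmed = frame[:h - h % 2, :w - w % 2]       # drop any odd edge row/column
    return trimmed.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Each virtual pixel collects 4x the signal of a single real pixel.
frame = np.ones((4, 6))
binned = bin_2x2(frame)
```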
By: Ron Wodaski
Link: The fundamentals of binning
black point
This is the lower limit of data displayed. All pixels dimmer than the black point will appear black on your display. The best place for the black point is on the left side of the main data peak in the image histogram. If the data shows more than one main peak, your image probably suffers from gradients and no single black point will work. Fix the gradients (see chapter 6 in the book) before you try to set an accurate, final black point.
See also: white point, Histogram
By: Ron Wodaski
blurring
A type of smoothing.
See also: smoothing
By: Ron Wodaski
calibration
MaxIm DL uses the word calibration to describe the process of applying bias, dark, and flat-field frames to a light frame. I tend to think of calibration as tuning something to a reference standard. A better term would be reduction, as used by many professional astronomers. This is derived from the phrase data reduction, which is the process of removing noise and errors from the data. Since application of bias, dark, and flat-field frames is largely responsible for removing system noise from the image, I prefer the term image reduction to image calibration.
See also: reduction, bias frame, dark frame, flat-field frame
By: Ron Wodaski
Link: Calibration via image reduction (PDF)
CCD
Stands for Charge-Coupled Device. Charge-Coupled refers to the fact that the pixels on a CCD chip are electrically connected to each other. This allows the chip to be read by shifting the data, one row at a time, toward a read register. The primary advantage of this design is that it avoids circuitry blocking the incoming light, allowing maximum sensitivity to incoming photons.
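The row-shift readout can be pictured with a toy simulation (this models the data flow only, not the actual electronics):

```python
import numpy as np

# Toy illustration of CCD readout: each row of accumulated charge is
# shifted into a read register and digitized before the next row shifts
# down. This models the data flow only, not the real analog electronics.
def read_out(chip):
    chip = chip.copy()
    rows_read = []
    while chip.shape[0] > 0:
        read_register = chip[-1].copy()   # bottom row enters the register
        rows_read.append(read_register)   # register is read pixel by pixel
        chip = chip[:-1]                  # remaining rows all shift down one
    return np.array(rows_read[::-1])      # reassemble in original order

chip = np.arange(12.0).reshape(3, 4)      # fake accumulated charge
image = read_out(chip)                    # identical to the charge map
```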
By: Ron Wodaski
Link: How to choose the right CCD chip
CMY
A type of color imaging, using Cyan, Magenta, and Yellow filters. The Cyan filter passes blue and green light. The Magenta filter passes red and blue light. The Yellow filter passes red and green light. It is commonly supposed that because the filters pass twice as much light, exposures can be shorter. However, the process of separating out the red, green, and blue colors adds noise, and so CMY imaging is about as noisy as RGB imaging. Since RGB is more convenient and easier to do, the large majority of imagers work with RGB instead of CMY. CMY imaging that also uses a luminance layer is called CMYK or WCMY, with W standing for White and K standing for blacK. CMY color is subtractive, whereas RGB color is additive. For example, to get red from a CMY set: magenta + yellow - cyan = (red+blue) + (red+green) - (blue+green) = 2 x red, which is red up to a constant scale factor.
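The separation step can be sketched directly from those relations, assuming idealized filters (cyan = G + B, magenta = R + B, yellow = R + G). Real filters are not this clean, which is part of why the separation adds noise:

```python
# Sketch: recovering R, G, B from idealized CMY exposures, where
# cyan = G + B, magenta = R + B, and yellow = R + G. The division by 2
# undoes the doubling noted above (e.g. magenta + yellow - cyan = 2R).
def cmy_to_rgb(cyan, magenta, yellow):
    red = (magenta + yellow - cyan) / 2
    green = (cyan + yellow - magenta) / 2
    blue = (cyan + magenta - yellow) / 2
    return red, green, blue

# Check against known channel values R=10, G=20, B=30.
r, g, b = cmy_to_rgb(cyan=20 + 30, magenta=10 + 30, yellow=10 + 20)
```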
See also: RGB
By: Ron Wodaski
Link: CMYK versus RGB
collimation
The state or act of aligning the elements of an optical system. To get the best images from your telescope, the mirrors and/or lenses should be perfectly aligned. Misalignment will create various types of optical aberrations, and reduce the sharpness and contrast of your images. Poor collimation can also result in elongated star images. How you collimate depends on what type of telescope you have. There are many sources of good collimation instruction for Cassegrain and Newtonian scopes. Refractors, and some other types of scopes, may require a trip back to the factory service center for collimation.
See also: aberration
By: Ron Wodaski
Link: Collimation of a Newtonian
dark frame
A dark frame is an exposure taken with the shutter closed. A dark is normally taken with the same exposure duration and cooling temperature as the light frame to which it will be applied. The purpose of a dark frame is to record the system noise of the camera. It is subtracted from a light frame to remove that system noise. For best results, take multiple dark frames and median combine them. This will reduce the amount of random noise added to the image by the dark frames. Although a dark frame is used to remove system noise, it can add a small amount of random noise at the same time.
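The median-combine-and-subtract workflow described above can be sketched in a few lines:

```python
import numpy as np

def master_dark(darks):
    """Median-combine a stack of dark frames into a master dark."""
    return np.median(np.stack(darks), axis=0)

def apply_dark(light, dark):
    """Subtract the (master) dark frame from a light frame."""
    return light - dark

# Toy example: three darks, one with an outlier value (e.g. a cosmic ray
# hit). The median rejects the outlier, which a mean combine would not.
darks = [np.full((2, 2), 50.0), np.full((2, 2), 50.0), np.full((2, 2), 500.0)]
md = master_dark(darks)                 # 50 everywhere; outlier rejected
reduced = apply_dark(np.full((2, 2), 300.0), md)
```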
See also: bias frame, flat-field frame, Noise
By: Ron Wodaski
Link: Web page on image reduction
extinction
Depending on the elevation of an object in the sky, the atmosphere absorbs different amounts of color in the light passing through. At a low elevation, blue extinction is pronounced. You see this when the sun sets -- instead of being a bright, yellow/white object, it becomes noticeably ruddy. The same thing is happening at night. The lower an object is, the greater the blue extinction. You can compensate for this by adjusting the duration of your blue exposure: increase it for objects low to the horizon. Chapter 7 contains suggestions for specific exposure adjustments for various elevations.
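A rough way to quantify this is through airmass, which is approximately sec(zenith angle) for moderate elevations. The extinction coefficient below is a hypothetical value for illustration; actual coefficients vary by site, color, and night:

```python
import math

# Rough extinction sketch. Airmass ~ sec(zenith angle), a common
# approximation that breaks down very near the horizon. The coefficient
# k (magnitudes per airmass) is a made-up illustrative value; measure
# or look up your own site's blue-light extinction.
def airmass(elevation_deg):
    zenith = math.radians(90 - elevation_deg)
    return 1 / math.cos(zenith)

def extinction_mag(elevation_deg, k=0.25):
    """Approximate magnitudes of extinction at a given elevation."""
    return k * airmass(elevation_deg)

# An object at 30 degrees elevation sits behind ~2 airmasses, so it
# loses roughly twice as much light as one at the zenith -- a reason to
# lengthen blue exposures for low targets.
loss_zenith = extinction_mag(90)   # ~0.25 mag
loss_low = extinction_mag(30)      # ~0.50 mag
```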
By: Ron Wodaski
Link: Solar analog stars and extinction factors
flat-field frame
Just as a dark frame removes the camera system noise from an image, a flat-field frame removes optical sources of noise. This includes dust that casts shadows on the CCD chip; vignetting in the optical system; uneven lighting from internal reflections; etc. When imaging under very dark skies with a non-vignetted optical system, flat-field frames are less necessary. The brighter the sky, the more critical it is that you take high-quality flat-field frames. Unlike darks, flat fields are not subtracted from an image. The flat is scaled to the background level of the image, and then divided into the image to remove the effects of optical noise.
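The scale-and-divide step can be sketched like this (here the flat is normalized to its own mean, one common convention):

```python
import numpy as np

# Flat-field correction as described above: the flat is scaled
# (normalized) and divided into the image, not subtracted. Normalizing
# to the flat's mean is one common convention.
def apply_flat(light, flat):
    """Divide a light frame by a flat normalized to its mean level."""
    norm_flat = flat / np.mean(flat)
    return light / norm_flat

# Toy case: one corner vignetted to 80% of full illumination. The light
# frame shows the same falloff, and the division removes it.
flat = np.array([[1.0, 1.0], [1.0, 0.8]])
light = np.array([[100.0, 100.0], [100.0, 80.0]])
corrected = apply_flat(light, flat)      # uniform after correction
```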
See also: bias frame, dark frame
By: Ron Wodaski
Gaussian blur
A type of blurring that looks more natural. Many other types of blurring, such as simple averaging, do not look as natural as a Gaussian blur. The Gaussian blur works by weighting the contribution of surrounding pixels to the blur. The weighting is based on a Gaussian distribution (bell curve). This adds low-frequency data to the blur and is very effective for blurring noisy or background areas in an astronomical image.
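The bell-curve weighting can be sketched directly. This naive implementation is for illustration; image editors use much faster separable or FFT-based versions:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian (bell curve) weighting kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()           # weights sum to 1

def gaussian_blur(image, size=5, sigma=1.0):
    """Blur by weighting each pixel's neighbors with the Gaussian kernel.

    A naive sliding-window implementation for illustration only; real
    image editors use separable or FFT-based convolution for speed.
    """
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Because the kernel weights sum to 1, a uniform region stays at the same brightness; only local variation (grain) is smoothed out.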
See also: smoothing, Noise, Signal-to-noise ratio
By: Ron Wodaski
Link: What's behind a Gaussian Blur?
Histogram
A histogram is a map of the brightness levels in an image. There is a long, detailed discussion of histograms, what they are, and how to use them in chapter 8 of the book. Chapter 9 contains many practical examples of using histogram adjustments to modify the appearance of an image. In general, a histogram is a graph that shows increasing numbers of pixels in the vertical direction, and increasing brightness in the horizontal direction. A high peak to the left means there are lots of dark pixels in the image. This situation is typical of an astronomical photo. There are two types of histogram changes you typically make when processing an image:
- Linear change - This refers to adjusting the black and white points.
- Non-linear change - This refers to distortions of the histogram that emphasize one part of the histogram while de-emphasizing some other part. For example, you can apply a curve to a histogram in Photoshop. You might choose to brighten only the dimmest portions of an image while sacrificing some detail in the brightest portions.
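The linear change above amounts to a simple remapping, which can be sketched like this:

```python
import numpy as np

# A linear histogram stretch: everything below the black point maps to
# 0, everything above the white point maps to 1, with a straight line
# in between. This is what adjusting the black and white points does.
def linear_stretch(image, black_point, white_point):
    out = (image - black_point) / (white_point - black_point)
    return np.clip(out, 0.0, 1.0)

pixels = np.array([50.0, 200.0, 1100.0, 5000.0])
stretched = linear_stretch(pixels, black_point=200, white_point=2000)
```

A non-linear change would replace the straight-line mapping with a curve (for example a power law or an arbitrary curve drawn in Photoshop).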
See also: white point, black point
By: Ron Wodaski
Link: A tutorial on histograms
Lab Color Model
A type of color model (like RGB, LRGB, CMY, WCMY, and CMYK). The L layer is a luminance channel (just like L in LRGB and K in CMYK). The a and b channels are chrominance (color) layers. The a channel contains the colors from green to red, and the b channel contains colors from blue to yellow. The a and b layers aren't very intuitive to work with because of the combined colors - if you examine them, you'll quickly realize this! The Lab model was originally developed to be a color space capable of representing every color perceived by the human eye. Its main use for astronomical imaging is that the L layer remains separate, allowing you to apply histogram changes, smoothing, and sharpening to the luminance layer alone. You can also apply a blur to the a and b layers to remove excessive color fringing around stars. You can realize the benefits of the Lab model (the ability to edit luminance data while leaving the color data alone) in an RGB image by using separate layers for the L and RGB data.
See also: CMY, RGB, LRGB
By: Ron Wodaski
Link: Lab color model in Photoshop
LRGB
An RGB image that also includes a luminance (white light) image. The primary advantage of LRGB over RGB is that it allows you to collect as much luminance data as you please, then combine that with RGB data. During processing, you apply all histogram adjustments and sharpening to the luminance image, and color balance is thereby untouched during processing. When the luminance image is as good as you can make it, you combine the luminance with the RGB data to make the final image. If there is noise in the RGB data, you can smooth the RGB image without much adverse impact on the final image.
See also: RGB, smoothing
By: Ron Wodaski
Link: RGB versus LRGB
NABG (non-antiblooming gate)
Refers to a CCD camera that has no special circuitry to inhibit blooming. NABG cameras have a linear response to light. They generate a charge that accurately reflects the amount of light striking each pixel. This makes NABG cameras ideal for situations involving measurement of the incoming light (photometry).
See also: ABG (antiblooming gate)
By: Ron Wodaski
Noise
Noise is the brightness in your image that is a direct result of anything other than the photons that have traveled from some object in space to your CCD detector. Noise can occur from photons during light collection, or it can occur from electrons after the photons are converted to electrons. It can also occur when you process your image. Noise creates uncertainty in the brightness level of the pixels in your image. If a picture has a lot of noise in it, you can see this visually. There will be variations in brightness in areas where you expect to see a more uniform brightness. Words commonly used to describe this: "grainy" and "gritty." Technically, noise is random and unpredictable. That's what makes it noise: there is no way to predict it or remove it from the image. We also use the word noise to refer to things that are repeatable and can therefore be removed, such as dark current (pixel values that accumulate over time, with or without photons striking the detector) and bias (pixel values that exist even in the shortest possible exposure). Measuring the amount of noise is a job for a statistician; the math will be daunting for all but the mathematically hardy. See the link to Mike Newberry's article on signal and noise below. Mike is the author of the image processing program Mira. Some common sources of noise in CCD imaging include:
- Readout noise - results from collecting, amplifying, and converting pixel data
- Dark count - results from electrons accumulating in pixels even in the absence of light
- Background noise - results from skyglow
- Processing noise - results from various image processing steps. For example, when you subtract a dark frame, you add the noise from the dark frame even as you subtract the effects of dark current.
See also: Signal, Signal-to-noise ratio
By: Ron Wodaski
Link: Mike Newberry's Sky & Telescope article on signal and noise in CCD imaging
reduction
The process of removing system noise and errors from your images. This is typically accomplished by taking and applying bias, dark, and flat-field frames. Also called image calibration, but I prefer image reduction as it is more accurate.
See also: calibration, bias frame, dark frame, flat-field frame
By: Ron Wodaski
Link: Image reduction, calibration, and analysis
Resolution
Resolution for CCD imaging is normally expressed in arc seconds per pixel. An arc second is 1/60th of an arc minute, which is 1/60th of a degree. As you probably remember from high school geometry, there are 360 degrees in a circle. So an arc second is a very tiny bit of sky.
Atmospheric turbulence limits the resolution you can achieve on any given night. Some locations have better seeing than others. The coast of Florida is noted for sub-arc second seeing conditions, which are excellent for fine detail on planets. The eastern slope of the Rockies, on the other hand, is known for frequent turbulence, and seeing is often limited to 4-5 arc seconds.
You should take your local seeing into consideration when choosing the resolution you want to work at with your camera and telescope. If you have frequent excellent seeing, and want to image planets, a resolution of .25 to .5 arc seconds per pixel makes sense. If you have poor seeing, then something around 3-4 arc seconds per pixel makes sense. As a general rule, the higher your resolution, the longer your exposure must be. The lower your resolution, the more light that falls on the CCD chip per area of sky, and the shorter your exposures can be.
For general purpose imaging, you should shoot for 2-3 arc seconds per pixel. For wide-field imaging, 3-4 arc seconds per pixel is a lot of fun. For galaxy close-ups, 1-2 arc seconds per pixel will yield the most detail, but the higher resolution demands a more competent mount and more attention and care in setting up and guiding. There is no one right answer to the question of what resolution to use. Below about 1.5 arc seconds per pixel, exposures are very long, and patience and good guiding are essential to success. Above 3 arc seconds per pixel, guiding is much simpler, exposures are comfortably short, and success is easier to achieve.
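The standard formula for computing your resolution from pixel size and focal length can be sketched as follows (the 9-micron/1000 mm example is hypothetical, not any particular setup):

```python
# Standard arc-seconds-per-pixel formula. There are about 206,265 arc
# seconds in a radian; with pixel size in microns and focal length in
# millimeters, the unit conversions reduce the constant to 206.265.
def arcsec_per_pixel(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical example: 9-micron pixels at 1000 mm focal length gives
# about 1.86 arc seconds per pixel -- a fairly high-resolution setup.
res = arcsec_per_pixel(9, 1000)
```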
By: Ron Wodaski
Link: Arc-seconds per pixel calculator (Internet Explorer only, sorry!)
RGB
Red, Green, and Blue colors combine to form the whole gamut of colors we humans can see. Although CCD chips are mostly monochrome, you can put colored filters in front of the chip, take multiple exposures, and then later combine them to form full-color images. Since the eye sees in these three colors (RGB), RGB filter sets are commonly available for color imaging with CCD cameras.
See also: CMY, LRGB
By: Ron Wodaski
Link: Assembling a CCD image
sharpening
Sharpening is a good technique to apply to the areas of an image that have very good signal to noise ratio (S/N). Camera control software (CCDSoft, Astroart, MaxIm DL, etc.) generally only allows you to sharpen an entire image. This is often not desirable, as some areas of the image will benefit from smoothing, others from sharpening, and other areas should be left as they are. Image editing software such as Photoshop, Picture Window Pro, Paint Shop Pro, etc. allows you to selectively smooth or sharpen portions of an image.
See also: smoothing, Signal-to-noise ratio
By: Ron Wodaski
Link: Sharpening your image
Signal
Signal is the brightness in your image that is a direct result of photons that have traveled from some object in space to your CCD detector. In an ideal world, one could count these photons one at a time and get an exact result, generating nothing but signal. But in the real world, there are a variety of noise sources to contend with. Noise creates uncertainty in the actual value of the image. To improve the signal level in your image, you can simply use longer exposures. Signal increases faster than noise, so longer exposures (within the limits of your equipment) are going to give you better images. For example, if the signal level is 100 units, and the noise level gives an uncertainty of +/- 5 units, then the signal is ten times larger than the noise level. If the signal level is increased to 1000 units by using an exposure that is ten times longer, and the noise level gives an uncertainty of +/- 20 units, then the signal level is now 25 times larger than the noise level. This is why most CCD imagers are always looking to refine their imaging, guiding, and mount tuning skills to get longer exposures. The length of your exposure is also limited by the saturation level of your chip, however.
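The "signal increases faster than noise" claim can be made concrete for photon (shot) noise, which grows as the square root of the signal:

```python
import math

# Why longer exposures help: for photon (shot) noise alone, the noise
# grows as the square root of the collected signal, so S/N also grows
# as the square root of exposure time. (Real images add other noise
# sources on top of this idealized case.)
def shot_noise_snr(photons):
    return photons / math.sqrt(photons)   # equals sqrt(photons)

snr_1x = shot_noise_snr(100)      # S/N of 10
snr_10x = shot_noise_snr(1000)    # 10x the exposure, about 3.2x the S/N
```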
See also: Noise
By: Ron Wodaski
Link: Mike Newberry's Sky & Telescope article on signal and noise in CCD imaging
Signal-to-noise ratio
This is simply a ratio of the signal in your image to the noise in your image. If the signal is five times larger than the noise, then you have a signal to noise ratio of 5. If the signal is 1000, and the noise is +/- 25 (total of 50), then the signal-to-noise ratio is 20. As you might expect, this ratio (often referred to simply as S/N) varies for every pixel in the image. Actually measuring the signal and noise in your image so that you can quantify the signal-to-noise ratio is a bit complex, however, both in practical terms and in mathematical terms. See the link to Mike Newberry's article below if you really want to get into the math. The simplest way to think of the noise in your image is that it is the uncertainty in the brightness level. If 100 photons arrive at a certain pixel, we might be uncertain whether the real value is 90 or 110. The uncertainty is the result of the various sources of noise. To find the level of noise, you must take many images of the same duration and compare the results statistically. Such a statistical analysis provides a measure of the noise, and allows you to quantify the quality of your images with a S/N ratio.
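The take-many-images-and-compare approach can be sketched statistically: the pixel-wise mean across a stack estimates the signal, and the pixel-wise standard deviation estimates the noise. (This uses standard deviation as the noise measure, one common convention; the synthetic frames below are simulated, not real data.)

```python
import numpy as np

# Estimating S/N from repeated images of the same duration: the mean
# across the stack measures the signal, the standard deviation measures
# the noise. Frames here are simulated with a known signal of 1000 and
# Gaussian noise of 50, so the per-pixel S/N should come out near 20.
def estimate_snr(frames):
    stack = np.stack(frames)
    signal = stack.mean(axis=0)
    noise = stack.std(axis=0)
    return signal / np.where(noise == 0, 1, noise)   # guard division by zero

rng = np.random.default_rng(0)
frames = [1000 + rng.normal(0, 50, size=(32, 32)) for _ in range(20)]
snr = estimate_snr(frames)        # per-pixel S/N map, roughly 20
```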
See also: Signal, Noise
By: Ron Wodaski
Link: Mike Newberry's Sky & Telescope article on signal and noise in CCD imaging
smoothing
Often called simply blurring, smoothing is a good technique to apply to the noisy/grainy areas of your images. Camera control software (CCDSoft, Astroart, MaxIm DL, etc.) generally only allows you to either smooth or sharpen an entire image. This is often not desirable, as some areas of the image will benefit from smoothing, others from sharpening, and other areas should be left as they are. Image editing software such as Photoshop, Picture Window Pro, Paint Shop Pro, etc. allows you to selectively smooth or sharpen portions of an image.
By: Ron Wodaski
Link: Blurring in Photoshop
UBVRI
A type of multi-spectral imaging used most commonly by professional astronomers. Instead of simply doing RGB imaging, the UBVRI filter set contains filters that pass light in five different wavelength ranges:
- U - Ultraviolet
- B - Blue
- V - Visual (around green wavelengths)
- R - Red
- I - Infrared
white point
This is the upper limit of data displayed. All pixels brighter than the white point will appear white on your display. The best place for the white point is on the right side of the main data peak in the image histogram. How far right? Generally, it should be at least 3 to 10 times the width of the main data peak. However, you may use different white points depending on the nature of your image, and what you are trying to do with it. A tip for working with color images: adjusting the white points of the color components (red, green, blue) changes the color balance of the image.
See also: black point, Histogram
By: Ron Wodaski