Planetary imaging with CCD cameras has come a very long way in the last few years - the images that are being produced by amateurs around the world of Jupiter and Saturn are simply amazing, in many cases exceeding what used to be "state of the art" earth-based imaging by the largest professional observatories from only 10 or 15 years ago.
For instance, try these amazing Jupiter images on for size.
This rapid change has been brought about mostly through the advent of low cost, high speed CCD cameras that capture many thousands of frames in the space of a few minutes, replacing the older (traditional) system of single frame images that came out of the deep-sky imaging world of the 1980's and earlier.
In the space of just over two years - from early 2003 until the present (May 2005) - this new imaging technique has gone from the preserve of a select few to a worldwide phenomenon among amateur astronomers. I'm sure you have seen some of the images that are coming out as a result; I have some examples elsewhere on this site, and there are many imagers around the world putting up even better ones.
The biggest downside is that this sudden interest in high speed CCD imaging has taken most of the commercial industry by surprise, meaning that there is almost nobody manufacturing the "ideal" camera for use in this environment. True, there are some cameras available from sources like Celestron (the NexImage) or Philips (the ToUCam), but none of these are flexible enough to fulfil both the traditional long-exposure role of deep sky imaging and the new high-speed role required for planetary imaging.
My own recent survey of cameras available on the market shows that they all fall into a few categories:
It seems clear to me that the way forward is to design a new breed of camera that is flexible enough to be used for both long exposure (deep sky) and short exposure (planetary) work. Normally these two disciplines require completely different cameras because of the different techniques used by astronomers for deep sky vs planetary work, but I see no reason why a single, modular camera can't fulfil both of these roles if it is designed that way from the ground up, and not as just a clone of an existing, unsuitable, design.
I talk about the specs of this mythical camera in another article, here.
Planets, on the other hand, are tiny things - around 20 to 40 arc seconds in diameter - covered in details that go well below 1 arc second in size. The key to capturing this detail is to crank up the magnification to the point where these details are large enough to be recorded by the CCD in a camera. It's normal to see "effective" magnifications between 500 and 1000 times used on planets, enlarging them so that they cover about 1/3 of the width of a typical 640x480 CCD.
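As a back-of-the-envelope sketch of how this works out, here's the standard plate-scale arithmetic in Python. The focal length, pixel size and planet diameter below are illustrative assumptions (they happen to match the numbers used elsewhere in this article), not recommendations:

```python
# Rough sketch: how big does a planet appear on the CCD?
# All input values are illustrative assumptions.

def pixel_scale_arcsec(pixel_um, focal_mm):
    """Arcseconds of sky covered by one CCD cell (206.265 = arcsec per radian / 1000)."""
    return 206.265 * pixel_um / focal_mm

def planet_width_pixels(diameter_arcsec, pixel_um, focal_mm):
    """How many pixels a planet of the given angular diameter spans."""
    return diameter_arcsec / pixel_scale_arcsec(pixel_um, focal_mm)

# Jupiter at ~40 arcsec, 5.6 micron cells, 6000 mm effective focal length:
width = planet_width_pixels(40.0, 5.6, 6000.0)
print(round(width))   # ~208 pixels, i.e. about 1/3 of a 640-wide frame
```

The same two functions let you plug in your own scope and camera to see where you land.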
Unfortunately this magnification also increases the effect of any thermal variations in the light path, both inside the scope and in the air overhead. These thermal eddies cause the image to break up at high magnifications, giving the impression that you're trying to photograph something that's at the bottom of a pool of water. The image will waver, shimmer, and sometimes break up completely depending on the severity of these thermal currents. They are the nemesis of any planetary imager.
The advent of cheap high speed CCD cameras provides a partial solution to this problem. If a large number of frames are captured over a short period (say 2000 frames in 90 seconds) then it becomes possible to selectively discard the frames that show the worst distortions and keep only those frames that were taken during moments of steady seeing. The faster the shutter speed on the camera, the more likelihood there is of capturing those fleeting moments when the air is steady.
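The frame-selection idea can be sketched in a few lines of Python. This is a minimal illustration, not any particular program's algorithm: it scores each frame with a simple sharpness metric (variance of the image gradient - blurred, badly-distorted frames have weaker gradients) and keeps the best fraction:

```python
# Minimal sketch of "lucky imaging" frame selection.
# The sharpness metric here is just one common choice.
import numpy as np

def sharpness(frame):
    # Bad-seeing (blurred) frames have weaker gradients, hence lower variance.
    gy, gx = np.gradient(frame.astype(float))
    return np.var(gx) + np.var(gy)

def select_best(frames, keep_fraction=0.1):
    """Keep only the sharpest fraction of the captured frames."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    order = np.argsort(scores)[::-1]          # best first
    return [frames[i] for i in order[:n_keep]]

# e.g. from 2000 captured frames, keep only the sharpest 200:
# best = select_best(all_frames, keep_fraction=0.1)
```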
This is only possible because planets are relatively bright - much brighter than deep-sky objects - so it's feasible to use short exposures and still collect enough light to form a usable image.
By averaging together a lot of these "good" frames (a process called "stacking"), the overall noise in the image can be reduced and subtle details become visible. Details that can't be seen directly through the eyepiece - whether due to low contrast or the overwhelming brightness of planets like Jupiter - stand out in the "processed" images that result from stacking together many sharp frames.
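The noise reduction from stacking is easy to demonstrate: averaging N frames of random noise cuts the noise by roughly the square root of N. A tiny synthetic demo (the flat "planet" patch and noise levels are made up for illustration):

```python
# Sketch: averaging N noisy frames reduces random noise by ~sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
signal = np.full((64, 64), 100.0)                     # a flat "planet" patch
frames = [signal + rng.normal(0, 10, signal.shape)    # 100 frames, noise sigma = 10
          for _ in range(100)]

stacked = np.mean(frames, axis=0)
print(np.std(frames[0] - signal))   # ~10 : single-frame noise
print(np.std(stacked - signal))     # ~1  : after stacking 100 frames
```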
This is where the compromises start... A short exposure time is essential to capture those moments of good seeing, coupled with a high enough frame rate to guarantee a sufficient number of good frames are captured, but this reduces the amount of light available for each frame. The higher the framerate, the dimmer each individual frame will be. The lower limit for image brightness is set by the quantum efficiency and other properties of the CCD sensor in the camera - a high sensitivity CCD will provide useful images in lower light (and hence higher framerates) than a low sensitivity CCD.
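To make the framerate/brightness link concrete, here's a trivial sketch (the frame rates are just illustrative values):

```python
# Sketch: at a given frame rate the exposure can be at most 1/fps,
# so doubling the frame rate halves the light each frame can collect.
for fps in (5, 10, 30, 60):
    exposure_ms = 1000.0 / fps
    print(f"{fps:>2} fps -> at most {exposure_ms:.1f} ms of light per frame")
```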
The light falling on the CCD has been collected by the telescope - refractor or reflector - and so the parameters of this instrument (focal length and focal ratio) have a part to play in this whole process as well. The focal length will determine the magnification of the image, so this has to be high enough for details on the planet to be visible. The focal ratio determines the brightness of the image - there must be enough light falling on the CCD for it to effectively image the planet at all. It is important to realise that I'm talking about the overall focal length and F/ ratio of the instrument, including any barlows etc that are in the optical path.
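The effect of a barlow on those overall numbers is simple multiplication. A short sketch (the aperture and barlow values are illustrative):

```python
# Sketch: a barlow multiplies the focal length, and the f/ ratio
# is effective focal length divided by aperture.

def effective_optics(aperture_mm, focal_mm, barlow=1.0):
    """Return (effective focal length, effective f/ ratio)."""
    f_eff = focal_mm * barlow
    return f_eff, f_eff / aperture_mm

# A 250 mm (10") f/6 Newtonian with a 4x barlow:
f_len, f_ratio = effective_optics(250.0, 1500.0, barlow=4.0)
print(f_len, f_ratio)   # 6000.0 mm at f/24.0
```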
Finally there is the physical cell size of the CCD itself, usually given as width & height in microns (u). CCDs come in all shapes and sizes, and the area of each cell determines how much light it collects and also how large the final image will appear. Generally speaking, a CCD with large cells (say 9u x 9u) will be more sensitive than a CCD with smaller cells (say 5.6u x 5.6u) because each cell has more area and so will collect more light, BUT the large-celled CCD will produce a smaller image because more detail will fall into each cell and become mingled into a single pixel in the final image.
This is worth repeating, because it's an important tradeoff... larger CCD cells are more sensitive (and hence you can use a higher framerate) but at the expense of producing a smaller final image, which may not show as much detail. Common cell sizes for current CCDs range from about 3.3u up to 15u. Cameras like the ToUCam and NexImage use a sensor with 5.6u square cells, whereas many deep-sky cameras use sensors at the large end of the scale - around 10u - because they are more interested in light than magnification.
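Here's the cell-size tradeoff in numbers, reusing the plate-scale formula from earlier. The 5.6u and 9u cells and the 6000 mm focal length are illustrative values taken from the discussion above:

```python
# Sketch of the cell-size tradeoff: bigger cells collect more light
# (sensitivity scales with cell area) but each covers more sky, so
# the planet spans fewer pixels.

def compare_cells(small_um, large_um, focal_mm, planet_arcsec=40.0):
    scale = lambda um: 206.265 * um / focal_mm    # arcsec per pixel
    span = lambda um: planet_arcsec / scale(um)   # pixels across the planet
    area_gain = (large_um / small_um) ** 2        # light gathered per cell
    return span(small_um), span(large_um), area_gain

small, large, gain = compare_cells(5.6, 9.0, 6000.0)
print(round(small), round(large), round(gain, 2))
# 5.6u cells: ~208 px across Jupiter; 9u cells: ~129 px,
# but each 9u cell collects ~2.58x the light.
```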
While we're on the topic of tradeoffs... you also have to choose between a monochrome CCD and a colour CCD. A monochrome CCD will be a lot more sensitive than the colour version because it doesn't have the colour filter mask over the cells, but it means that you will have to provide your own (external) filter wheel and colour filters if you want to take colour images.
A colour CCD provides less real resolution than the equivalent monochrome CCD because of this colour filter - a colour CCD has roughly half its cells masked green, a quarter masked red and a quarter masked blue, so you are really only imaging at about half the resolution of your camera. This is normally covered up by the circuitry in the camera, which sends full sized frames to the host PC even though roughly half the data in each frame has been created inside the camera by interpolation.
Many people use colour CCDs for two reasons: (1) it's much more convenient to take a single (colour) video than to fool around with a filter wheel and filters, which require three separate recordings, and (2) the human eye is not very sensitive in red and blue, so the loss of colour resolution there isn't that critical. (On the other hand, losing half the green pixels does matter, since the green pixels carry a lot of the brightness detail.)
We want to use a colour CCD because of its convenience, but we know that a monochrome CCD has better sensitivity and resolution.
Generally speaking, these parameters are all incompatible...high framerates imply dimmer images, and brighter images imply lower magnification. These are the critical tradeoffs, and they are influenced by the focal length and focal ratio of the telescope that you'll be using, since that provides the basic starting point for magnification and brightness.
CCDs don't come with removable masks, so you can't "convert" a colour CCD into a monochrome one or vice versa.
Perhaps some kind soul might provide me with a table to put here, showing the "ideal" CCD for different sized scopes, but in the meantime I can tell you the details of my current camera and scope, maybe that will be enough for you to make an educated guess for your own situation :-)
My current scope is a 10" f/6 newtonian, and I have used both a ToUCam and a fire-i (monochrome) camera. The ToUCam uses the very popular Sony ICX098BQ colour CCD, and the fire-i uses the monochrome equivalent, the ICX098BL.
I image through a 4x Powermate, so that makes the effective focal length 6000mm and the f/ ratio of the scope f/24. Both the CCDs listed above are the same size - 5.6u square cells - and this setup seems to be quite satisfactory.
Note: The higher sensitivity of the monochrome CCD is offset by the fact that I use a filter wheel with colour filters for imaging, so the final image brightness is similar to the colour CCD, but I get full resolution in each colour.
So, for me, the important numbers are:
- Effective focal length: 6000mm (10" f/6 plus a 4x Powermate)
- Effective focal ratio: f/24
- CCD cell size: 5.6u square
This represents just one point in a multi-dimensional space of "acceptable" telescope + camera combinations. E.g. if you have a longer focal length, then you might go for a CCD with larger (more sensitive) cells. Sure, that CCD produces a smaller image, but your longer focal length makes up for it. Whether it's an improvement comes down to the resulting image brightness...
I hope this gives you something to think about when contemplating purchasing a new camera for astronomy...