What makes it possible to generate a photo of the Milky Way from what appears to be just a faint trace in the original shot?
It all comes down to signal versus noise. Whenever we record something (sound, motion, photons, etc…) there is always the information we WANT to record (the signal) and various sources of noise.
Noise can have many sources:
- background noise (light pollution, a bright moon, sky glow, etc…)
- electronic noise (sensor readout, amp glow, hot pixels)
- sampling noise (quantization, randomized errors)
This noise can be random or steady/periodic in nature. Steady or periodic noise is easy to filter out because it can be identified and isolated: it will be the same in all the photos. Random noise, however, is more difficult to eliminate precisely because of its random nature. This is where the signal-to-noise ratio becomes important.
In astrophotography we take not only the photos of the sky, but also bias, dark and flat frames: this is to isolate the various sources of noise. A bias frame is a short exposure that captures the electronic read-out noise of the sensor and electronics. A dark frame is a long exposure at the same settings as the astronomy photo to capture noise that appears during long exposures due to sensor characteristics such as hot pixels and amplifier glow. Cooling the sensor is one way to reduce this noise, but that is not always possible. Finally, a flat frame is taken to identify the optical artifacts caused by the characteristics of the lens or mirror, as well as any dust that happens to be in the way.
But what can be done about random noise? That is where increasing the number of samples has a large impact. For random noise, increasing the number of sample points improves the signal-to-noise ratio by the square root of the number of samples. Hence averaging 4 images is a 2-times improvement over a single photo. Going to 9 is 3 times better. Etc…
You might be thinking: “Yeah but you are averaging, so the signal is still the same strength.” That is correct, however because my signal-to-noise ratio is improved, I can be much more aggressive about how the image is processed. I can boost the levels that much more before the noise becomes a distraction.
But can’t I just simply duplicate my image and add the copies together? No, that won’t work: averaging only helps because the noise differs from frame to frame, and if you duplicate your image, the noise is identical in both copies.
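To see the square-root law, and why duplicates don’t help, in action, here is a small NumPy simulation (the signal and noise values are arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0   # the true pixel value we want to recover
sigma = 20.0     # standard deviation of the random noise

def noisy_frame(n_pixels=100_000):
    # One simulated exposure: signal plus independent Gaussian noise.
    return signal + rng.normal(0.0, sigma, n_pixels)

single = noisy_frame()
stack9 = np.mean([noisy_frame() for _ in range(9)], axis=0)  # 9 independent frames
dupes9 = np.mean([single] * 9, axis=0)                       # 9 copies of the SAME frame

print(f"single frame noise: {single.std():.1f}")   # ~20
print(f"9-frame stack noise: {stack9.std():.1f}")  # ~20/3, the sqrt(9) improvement
print(f"9 duplicates noise:  {dupes9.std():.1f}")  # still ~20; duplicating doesn't help
```

The averaged signal stays at 100 in every case; only the stack of independent frames shrinks the noise around it.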
So even if you are limited to taking 30-second, or even 5-second, shots of the night sky and can barely make out what you want to photograph, don’t despair: just take LOTS of them and you’ll be surprised what can come out of your photos.
The simplest form of astrophotography is nothing more than a camera on a tripod shooting long exposures. However, by the time you get around to stacking and stretching the levels of your photos to accentuate various elements, such as the Milky Way, the sky gradient will become more apparent. That gradient can come from city lights, the Moon up above, and the thicker atmosphere dispersing light at low angles to the horizon. Generally, the wider the field of view, the greater the gradient.
Below is a RAW 20-second exposure of the Milky Way near the horizon taken with a Canon 80D equipped with a 17mm F4.0 lens. The background has a slight gradient, brighter at the bottom. Not all that bad.
But once you stack multiple exposures and stretch the levels to get the Milky Way to pop out, the gradient only gets worse.
There are various astrophotography programs that can remove the sky gradient. The one that I’m familiar with and have been using is IRIS. I know the software is old, but it does a great job. So after I’ve completed my registration and stacking of images with DeepSkyStacker (see my Astrophotography in the City article), the next step is to open the resulting image with IRIS.
Once the stacked image is loaded in IRIS, head over to the Processing menu and select Remove gradient (polynomial fit)… Actually, to get the best results you should first have the background and color corrected, and the edges of your photo trimmed. Got that covered here.
The following menu will appear.
Normally the default settings (as above) will work well. But this image has some foreground content (trees) and that will cause the result to look a little odd. The algorithm is designed to avoid sampling stars, but it does not handle foreground content like the trees at the bottom of the image.
To correct this you must use the gradient removal function with a mask. The quickest way to create a mask is the bin_down <value> command. This changes to white all pixels with intensities below <value>, and to black all pixels above it. Black areas will not be used for sampling, while white areas will. A little trial and error is sometimes necessary to select the right value.
In this case, even with the right bin_down value, the trees that I want to mask are not black, so I use the fill2 0 command to create black boxes and roughly block out the trees.
Below is the result after using multiple fill rectangles to mask the trees. This does not need to be precise as the mask is simply used to exclude areas from sampling. It is not like a photo-editing mask.
The resulting mask is saved (I called it mask), and I load back the original image, this time using the gradient removal with the mask option selected.
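For readers who prefer to see the logic, the bin_down threshold and fill2 boxes amount to something like the following, sketched here in NumPy rather than IRIS (the toy image, threshold, and rectangle coordinates are made up for illustration):

```python
import numpy as np

def bin_down_mask(img, value):
    # Rough equivalent of IRIS's bin_down: pixels below `value` become
    # white (usable for gradient sampling), pixels above become black.
    return np.where(img < value, 1.0, 0.0)

def fill_rect(mask, x1, y1, x2, y2):
    # Rough equivalent of IRIS's fill2 0: black out a rectangle,
    # e.g. over trees the threshold alone did not catch.
    mask[y1:y2, x1:x2] = 0.0
    return mask

# Toy frame: dim sky background (~50) with bright foreground at the bottom.
img = np.full((100, 100), 50.0)
img[80:, :] = 200.0

mask = bin_down_mask(img, 120)           # sky is white, bright areas black
mask = fill_rect(mask, 0, 75, 100, 100)  # manually extend the mask over the "trees"
```

As in IRIS, the mask only needs to be roughly right: black pixels are simply excluded from sampling.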
The program generates a synthetic background sky gradient, based on thousands of sample points and an order 3 polynomial. The image below lets you see the synthetic sky gradient the algorithm generated. This is what will be subtracted from the image.
The final image below is much better and more uniform. There are no strange dark and bright zones like the attempt without the mask.
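The polynomial-fit step itself can also be sketched outside IRIS. Below is a minimal NumPy version, assuming a straightforward least-squares fit of an order-3 polynomial surface over the unmasked pixels (IRIS’s actual sampling strategy differs; the toy gradient is for illustration):

```python
import numpy as np

def fit_gradient(img, mask, order=3):
    # Fit a 2-D polynomial of total degree `order` to the pixels where
    # mask is nonzero; returns the synthetic background to subtract.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx / w, yy / h  # normalized coordinates for a well-behaved fit
    terms = [x**i * y**j for i in range(order + 1)
                         for j in range(order + 1 - i)]
    A = np.stack([t[mask > 0] for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, img[mask > 0], rcond=None)
    return sum(c * t for c, t in zip(coef, terms))

# Toy frame: flat sky at 100 plus a linear gradient, brighter at the bottom.
h, w = 64, 64
yy = np.mgrid[0:h, 0:w][0]
img = 100.0 + 0.5 * yy
mask = np.ones_like(img)

background = fit_gradient(img, mask)
flattened = img - background + background.mean()  # subtract, keep the mean level
```

After subtraction the toy frame is uniform, which is exactly the effect of IRIS’s synthetic sky removal on the real image.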
If we compare the original raw images with the new stacked, stretched and sky gradient removed photo the results are pretty impressive.
I have two telescopes, a Skywatcher 80ED (identical to the Orion 80ED – 600mm focal length at F7.5) and a Williams Optics Gran Turismo 71 APO with 420mm focal length at F5.9. Just looking at the numbers it’s easy to see that the GT71 is a smaller and faster telescope, and because of the shorter focal length it should have a larger field of view.
Now I’ve photographed the same part of the sky with both telescopes, so I can overlap the images to see exactly what the difference in field of view is between these two telescopes.
First I need to say that the GT71 NEEDS a field flattener when imaging with a DSLR. The distortion off-center is terrible. Don’t get me wrong: as a triplet telescope (including one fluorite element for color correction), it has provided me with my best lunar photos; however, it has issues covering the large DSLR sensor. The SW80ED provides a much flatter field for photography out of the box.
A flattener for the GT71 is in the plans…
So how do the two telescopes compare? Below is a photo of the open star cluster Messier 38 taken with my GT71, with an image taken with the SW80 overlapped as a brighter box. For those wondering, I used IRIS to register and align both photos using the coregister command.
Both telescopes deliver just about the same field of view, with the GT71 providing about 1 degree more of horizontal field. The difference is smaller in the vertical direction.
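These fields of view can be checked from the focal lengths and the sensor size. A quick sketch, assuming the Canon 80D’s nominal APS-C sensor dimensions of about 22.3 × 14.9 mm:

```python
import math

def fov_deg(sensor_mm, focal_mm):
    # Angular field of view for one sensor dimension at a given focal length.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

W, H = 22.3, 14.9  # approximate Canon 80D sensor size in mm

for name, f in [("GT71 (420mm)", 420), ("SW80ED (600mm)", 600)]:
    print(f"{name}: {fov_deg(W, f):.2f} x {fov_deg(H, f):.2f} degrees")
# GT71 (420mm): 3.04 x 2.03 degrees
# SW80ED (600mm): 2.13 x 1.42 degrees
```

That works out to roughly 0.9 degrees more horizontal field for the GT71 and only about 0.6 degrees more vertical, matching what the overlapped photos show.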
What did surprise me is how much light the GT71 gathers. Inspecting the photos showed me that even with the smaller setup, the GT71 has great light-gathering capability. I got down to magnitude 12 with only 15 seconds of exposure, which is comparable to the SW80ED at 30 seconds.
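The numbers roughly back this up: for extended targets, the light reaching each unit of sensor area scales with the inverse square of the focal ratio. A back-of-the-envelope check (this ignores transmission differences and other real-world factors):

```python
# Exposure equivalence between the F/5.9 GT71 and the F/7.5 SW80ED:
# per unit of sensor area, speed scales as (f-ratio ratio) squared.
ratio = (7.5 / 5.9) ** 2
print(f"speed advantage: {ratio:.2f}x")                        # ~1.62x
print(f"15 s on the GT71 ~ {15 * ratio:.0f} s on the SW80ED")  # ~24 s
```

So a 15-second GT71 exposure sits in the same ballpark as a 30-second SW80ED exposure, consistent with what the photos showed.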
In conclusion I would say the GT71 has good photographic potential, but requires a field flattener if it will be used with a DSLR. Stay tuned…
Shooting wide-angle long exposures of the sky is always fun, because you never quite know what you will get. On an August night I decided to take a few 20-second exposures of the constellation Perseus, hoping to catch a few open clusters. However, I was surprised by the faint glow of Messier 33 (the Triangulum Galaxy) in the photos. It is the farthest object that can be observed with the naked eye, located 2.7 million light-years away, and part of the Local Group, which includes Andromeda and our Milky Way.
4 x 20 seconds
August 30, 2019
Most people don’t plan to take photos of the Moon, they just happen. You are outside doing something else and then you spot it over the horizon or high in the sky: “Hey, that’s a pretty Moon tonight. Maybe I should take a photo!”
I find that normal camera lenses, even telephotos, don’t do it justice. The settings and focus can be very tricky. The multi-element design of a telephoto can also cause internal reflections or chromatic aberrations, making the resulting photo less appealing.
So just grab the telescope tube and leave the tripod behind. If you have a small APO refractor you can simply hold the tube, but for anything heavier you’ll need to prop yourself up on something like a railing or a car roof.
The photo below is a single shot at 1/250sec and ISO400 with a Canon 80D and William Optics Gran Turismo 71 held at arm’s length. The setup takes only a few minutes and the results are always worth it.
Nothing like leaving the city lights behind and heading to a rural campground to check up on our galaxy.
Every summer the galaxy presents itself across the sky in the northern hemisphere, an ideal time to enjoy the view and spot a few open clusters along the way.
Canon 80D 17mm F/4 ISO6400
Stack of 10 x 10 seconds
A few weeks ago after taking some photos of Jupiter, I changed my setup to do some long exposures on an easy target: a globular cluster. Unfortunately I forgot to note down the name of what I had photographed! So a few weeks later when I found the time to process the images, I was at a loss to identify which Messier object it was. However, after an evening of matching up the stars surrounding the cluster, I was able to correctly identify it as Messier 3.
The above was taken with my Skywatcher 80ED and Canon 80D. It is a stack of 27 x 10sec exposures at ISO3200 on an unguided and roughly aligned mount.
Looking at my archives I found that I had imaged M3 about 10 years ago with the same telescope, so I decided to align both old and new images and see if anything would stand out. To my surprise, I spotted one star that appeared to have shifted. To help identify the star I colorized one of the photos and subtracted it from the other (done in GIMP). All the stars within the field of view lined up except this one: the two colored spots are not aligned!
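The subtraction trick is easy to reproduce: stationary stars cancel, and only the moved star leaves a residual. A toy NumPy version (star positions are made up for illustration; the real workflow used GIMP on registered images):

```python
import numpy as np

def star_field(positions, shape=(64, 64)):
    # Render point-like "stars" at integer pixel positions.
    img = np.zeros(shape)
    for y, x in positions:
        img[y, x] = 1.0
    return img

# The same field a decade apart: one star has shifted by a couple of pixels.
stars_old = [(10, 10), (30, 40), (50, 20)]
stars_new = [(10, 10), (30, 40), (52, 22)]  # last star moved

diff = star_field(stars_old) - star_field(stars_new)

# Stationary stars cancel exactly; only the moved star leaves a +/- pair,
# just like the misaligned colored spots in the GIMP subtraction.
moved = np.argwhere(diff != 0)
print(moved)  # [[50 20] [52 22]]
```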
To be sure this wasn’t an error on my part, I did a bit of research and found it to be a known high-proper-motion star, BD+29 34256.
It’s not every day someone with amateur backyard astronomy gear can show how a star has moved in 10 years.
The last time Jupiter was in a favorable position for good photos was 2010, and while I have photographed the planet a few times since, the results weren’t really satisfactory. So on July 7th, I finally took the equipment out and set my mind to imaging some planets (Venus was also in a good position).
As luck would have it, the Great Red Spot was pointing our way, and I landed my best shot of it yet. We may be past the May 2018 sweet spot for opposition, but that doesn’t mean you should not attempt to observe or photograph Jupiter. Still plenty of good days ahead.
I took about 11 video sequences of the planet, and sure enough the last one yielded the best result. I guess as the evening progressed, the air cooled and the seeing improved.
Televue 3X barlow
Vesta Webcam with IR/UV filter
Processing with Registax and GIMP.
After weeks of clouds, rain and even snow, I finally got a sunny weekend without a cloud in the sky. With the warmer temperatures, it was time to take the telescope out. Unfortunately there was no significant sunspot activity on April 21, just a small active region (AR2706) on the western part of the Sun.
Canon 80D (ISO 100, 1/400s)
Skywatcher 80ED (80mm F/7.5)