I started out with a 5.1in Newtonian on an EQ3 mount, and developed a passion for astrophotography when I figured out how to connect a webcam to the telescope. I've been hooked ever since. My technology, software and engineering background is a great match for understanding signal-to-noise ratio and image processing. Living near a big city, I have little choice but to leave it to the electronic eye and do my "observations" from a computer screen.
This blog is about sharing this passion and knowledge with the community.
Ever since Photoshop (and other editing software) allowed users to manually manipulate pixels, there have been edited pictures. And with the computing power available at our fingertips and some built-in tools, it's surprisingly simple to "stitch" together two photos. So full disclosure: the image below is "Photoshopped".
As an exercise, I decided to see how to insert a photo of the Moon taken with my telescope into a nighttime skyline.
The New York City skyline was taken by me during a visit to the Empire State Building in October last year (pre-pandemic) with a Canon 80D and a 17mm F4.0 lens at 1/50s, ISO 6400. The Moon was shot with the same camera body, but paired to a Skywatcher 80ED, with settings at ISO 200 and 1/20s. There is no software scaling of either photo; they are stitched "as is".
This image was done with GIMP. I also inserted two "blurred" layers to create a small amount of haze around the Moon to make it look a little more natural. The Moon was purposely placed "behind" a skyscraper to give the image an element of depth, and I lowered the color temperature.
So dig through some of your old photos and start experimenting…
What makes it possible to generate a photo of the Milky Way from what appears to be just a faint trace in the original shot?
It all comes down to signal versus noise. Whenever we record something (sound, motion, photons, etc.) there is always the information you WANT to record (the signal) and various sources of noise.
Noise can have many sources:
background noise (light pollution, a bright moon, sky glow, etc.)
electronic noise (sensor readout, amp glow, hot pixels)
sampling noise (quantization, randomized errors)
This noise can be random or steady/periodic in nature. Steady or periodic noise is easy to filter out: it can be identified and isolated because it is the same in every photo. Random noise, however, is more difficult to eliminate precisely because of its random nature. This is where the signal-to-noise ratio becomes important.
In astrophotography we take not only photos of the sky, but also bias, dark and flat frames, to isolate the various sources of noise. A bias frame is a short exposure that captures the electronic read-out noise of the sensor and electronics. A dark frame is a long exposure at the same settings as the astronomy photo, capturing the noise that appears during long exposures due to sensor characteristics such as hot pixels and amplifier glow. Cooling the sensor is one way to reduce this noise, but that is not always possible. Finally, a flat frame is taken to identify the optical "noise" caused by the characteristics of the lens or mirror, as well as any dust that happens to be in the way.
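In practice, the calibration boils down to simple frame arithmetic. Here is a minimal NumPy sketch of how the bias, dark and flat frames combine; the "frames" are made-up synthetic arrays standing in for real RAW files:

```python
import numpy as np

# Hypothetical stacks of calibration frames (real ones would be loaded
# from RAW files); shapes are (n_frames, height, width).
rng = np.random.default_rng(0)
bias_frames = rng.normal(100, 5, (8, 4, 4))    # read-out offset + noise
dark_frames = rng.normal(120, 5, (8, 4, 4))    # bias + thermal signal
flat_frames = rng.normal(5000, 50, (8, 4, 4))  # evenly illuminated field
light = rng.normal(150, 5, (4, 4))             # a single sky exposure

# Master frames: average each stack to beat down its own random noise.
master_bias = bias_frames.mean(axis=0)
master_dark = dark_frames.mean(axis=0)
master_flat = flat_frames.mean(axis=0) - master_bias

# Calibrate: remove the thermal/read-out signal, then divide out the
# optical response (vignetting, dust) using the normalized flat.
calibrated = (light - master_dark) / (master_flat / master_flat.mean())
```

Tools like DeepSkyStacker perform this arithmetic for you, typically combining many calibration frames per master, but the idea is the same.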
But what can be done about random noise? That is where increasing the number of samples has a large impact. For random noise, increasing the number of sample points improves the signal-to-noise ratio by the square root of the number of samples. Hence averaging 4 images gives a 2-times improvement over a single photo; going to 9 makes it 3 times better, and so on.
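You can verify the square-root rule with a quick simulation. This NumPy sketch (synthetic data, made-up signal and noise levels) measures the signal-to-noise ratio after averaging stacks of 1, 4 and 9 frames:

```python
import numpy as np

rng = np.random.default_rng(42)
signal, sigma = 10.0, 5.0   # true pixel value and per-frame noise level

snr = {}
for n in (1, 4, 9):
    # n independent frames of the same scene, each with fresh random noise
    frames = signal + rng.normal(0, sigma, (n, 100_000))
    stacked = frames.mean(axis=0)     # average the stack
    snr[n] = signal / stacked.std()   # measured signal-to-noise ratio

# snr[4] is ~2x snr[1], and snr[9] is ~3x: SNR grows as sqrt(n)
```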
You might be thinking: “Yeah but you are averaging, so the signal is still the same strength.” That is correct, however because my signal to noise ratio is improved I can be much more aggressive on how the image is processed. I can boost the levels that much more before the noise becomes a distraction.
But can't I simply duplicate my image and add the copies together? No, that won't work: stacking relies on the noise being random and different in each frame, and if you duplicate your image, the noise is identical in both copies.
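A quick simulation makes the difference obvious: averaging two identical copies leaves the noise untouched, while averaging two independent frames cuts it by the square root of 2 (synthetic data again):

```python
import numpy as np

rng = np.random.default_rng(7)
frame = 10.0 + rng.normal(0, 5.0, 100_000)   # one noisy exposure
fresh = 10.0 + rng.normal(0, 5.0, 100_000)   # an independent exposure

duplicated = (frame + frame) / 2    # averaging identical copies
independent = (frame + fresh) / 2   # averaging independent frames

# duplicated.std() equals frame.std() -- no gain at all;
# independent.std() is about frame.std() / sqrt(2)
```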
So even if you are limited to 30-second, or even 5-second, shots of the night sky and can barely make out what you want to photograph, don't despair: just take LOTS of them and you'll be surprised at what can come out of your photos.
The simplest form of astrophotography is nothing more than a camera on a tripod shooting long exposures. However, by the time you get around to stacking and stretching the levels of your photos to accentuate various elements, such as the Milky Way, the sky gradient will become more apparent. That gradient can come from city lights, the Moon up above, and the thicker atmosphere dispersing light at low angles to the horizon. Generally, the wider the field of view, the greater the gradient.
Below is a RAW 20-second exposure of the Milky Way near the horizon taken with a Canon 80D equipped with a 17mm F4.0 lens. The background has a slight gradient, brighter at the bottom. Not all that bad.
But once you stack multiple exposures and stretch the levels to get the Milky Way to pop out, the gradient only gets worse.
There are various astrophoto software that can remove the sky gradient. The one that I’m familiar with and have been using is IRIS. I know the software is old, but it does a great job. So after I’ve completed my registration and stacking of images with DeepSkyStacker (see my Astrophotography in the City article), the next step is to open the resulting image with IRIS.
Once the stacked image is loaded in IRIS, head over to the Processing menu and select Remove gradient (polynomial fit)… Actually, to get the best results you should first correct the background and color, as well as trim the edges of your photo. Got that covered here.
The following menu will appear.
Normally the default settings (as above) work well. But this image has some foreground content (trees), and that causes the result to look a little odd. The algorithm is designed to avoid sampling stars, but it does not cope well with foreground content like the trees at the bottom of the image.
To correct this you must use the gradient removal function with a mask. The quickest way to create a mask is with the bin_down <value> command. This turns white all pixels with intensities below <value>, and black all pixels above it. Black areas will not be used for sampling; white areas will. A little trial and error is sometimes necessary to select the right value.
In this case, even with the right bin_down value, the trees that I want to mask are not black, so I use the fill2 0 command to create black boxes and roughly block out the trees.
Below is the result after using multiple fill rectangles to mask the trees. This does not need to be precise as the mask is simply used to exclude areas from sampling. It is not like a photo-editing mask.
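For those curious, the logic of bin_down and the fill rectangles can be sketched in a few lines of NumPy. This is a hypothetical Python equivalent, not the actual IRIS implementation, and the toy pixel values are made up:

```python
import numpy as np

def bin_down(image, value):
    """Sketch of IRIS bin_down: pixels at or below the threshold turn
    white (usable for sampling); brighter pixels turn black (excluded)."""
    return np.where(image <= value, 255, 0).astype(np.uint8)

def fill_black(mask, x1, y1, x2, y2):
    """Sketch of a fill2 0 style rectangle: force a region to black."""
    mask[y1:y2, x1:x2] = 0
    return mask

# Toy frame: dim sky at 50, a bright "star" at 4000, and foreground
# "trees" at 100 -- close enough to the sky that thresholding misses them.
image = np.full((100, 100), 50, dtype=np.uint16)
image[10, 10] = 4000          # a star, well above the threshold
image[80:, :] = 100           # trees, below the threshold

mask = bin_down(image, 200)               # star excluded, trees are not...
mask = fill_black(mask, 0, 80, 100, 100)  # ...so box them out manually
```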
The resulting mask is saved (I called it mask), and I load back the original image, this time using the gradient removal with the mask option selected.
The program generates a synthetic background sky gradient based on thousands of sample points and an order-3 polynomial. The image below shows the synthetic sky gradient the algorithm generated; this is what will be subtracted from the image.
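The idea behind the synthetic background is an ordinary least-squares fit of an order-3 polynomial to the masked sample points. Here is a rough NumPy sketch of the technique (not IRIS's actual code; the gradient, noise and mask are synthetic):

```python
import numpy as np

# Synthetic 200x200 "sky" with a smooth gradient plus random noise.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:200, 0:200] / 200.0
image = 50 + 30 * y + 10 * x * y + rng.normal(0, 2, (200, 200))

# Mask: True where pixels may be sampled (no stars, no foreground).
mask = np.ones(image.shape, dtype=bool)
mask[150:, :] = False  # pretend the bottom rows hold trees

def terms(xv, yv):
    """All polynomial terms x^i * y^j with total order <= 3."""
    return np.stack([xv**i * yv**j
                     for i in range(4) for j in range(4) if i + j <= 3],
                    axis=-1)

# Fit the order-3 polynomial to a few thousand masked sample points.
ys, xs = np.nonzero(mask)
idx = rng.choice(len(ys), 3000, replace=False)
ys, xs = ys[idx], xs[idx]
coeffs, *_ = np.linalg.lstsq(terms(xs / 200, ys / 200),
                             image[ys, xs], rcond=None)

# Synthetic background over the whole frame, then subtract it.
background = terms(x, y) @ coeffs
flattened = image - background
```

After the subtraction, only the noise (and in a real photo, the stars and nebulosity) remains; the smooth gradient is gone.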
The final image below is much better and more uniform. There are no strange dark and bright zones like the attempt without the mask.
If we compare the original raw images with the new stacked, stretched and gradient-removed photo, the results are pretty impressive.
At one point or another we've all heard the saying that we are made of star dust. Therefore our home, the Milky Way, filled with 250 billion stars, should be rather dusty, right? Well it is, and one famous dust lane that we often see even has a name: the Great Rift.
Say you are out camping this summer and you spot the Milky Way, amazed at how many stars you can see away from the city. You remember you have your camera and decide to set up for some long-exposure shots to capture all this beauty (let's go for 20 seconds at ISO 3200, 17mm F4.0), pointing at the constellation Cygnus. A bit of processing and you should get something like this.
Not bad! Lots of stars… a brighter band where an arm of the Milky Way is located, and some darker spots in various places. Those darker areas are gigantic dust clouds between Earth and the arms of our spiral galaxy that obscure the background stars. If only there were a way to remove all those stars, you could better see these dark areas.
And there is a way to remove stars! It's called StarNet++; it takes a load of CPU power and works like magic to remove stars from photos. Abracadabra!
Behold! The Great Rift! Well, actually just a portion of it. With this camera setup I get at most a 70-degree field of view of the sky. Nevertheless, the finer details of these "dark nebulae" can be appreciated.
Stripping the stars from a photo does have advantages: it allows manipulating the background "glow" and dust lanes without concern for what happens to the foreground stars. The resulting image (a blend of the starless and original images) has improved definition of the Milky Way, higher contrast and softer stars that improve the visual appeal.
While there are plenty of stars above us, what defines a nice Milky Way shot is the delicate dance of light and darkness between the billions of stars and the obscuring dust clouds.
Photo Info:
Canon 80D, 13 x 20 sec (4 min 20 sec integration time), 17mm F4.0, ISO 3200
DeepSkyStacker for registration and stacking
IRIS for background gradient removal and color adjustment
StarNet++ for star removal
GIMP for final processing
When observing a comet, what we see is the outer coma: the dust and vapor outgassing from the nucleus as it is heated by the Sun.
So I decided to take one of the photos from my Skywatcher 80ED telescope (600mm focal length) and see if I could process the image to spot where the nucleus is located.
This can be achieved by using the MODULO command in IRIS and viewing the result in false color. The results are better if you apply a logarithmic stretch to the image before the MODULO command. It took some trial and error to get the right parameters, but the end result isn't so bad.
For the fun of it, I tried to calculate the size of the comet nucleus from the image. At its narrowest, the nucleus on the photo spans 5 pixels. Based on a previous plate-solve result I know that my setup (Canon 80D and Skywatcher 80ED telescope) gives a scale of 1.278 pixels per arc-second. I then used Stellarium to get the Earth-comet distance on July 23rd (103.278 million km).
When I plugged in all the numbers I got a comet nucleus size of approximately 2000 km, which to me seemed a little on the BIG side.
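The arithmetic is a simple small-angle calculation; a Python sketch using the numbers quoted above:

```python
import math

pixels = 5                # apparent span of the nucleus on the photo
scale = 1.278             # image scale quoted in the post (pixels/arcsec)
distance_km = 103.278e6   # Earth-comet distance from Stellarium, in km

angle_arcsec = pixels / scale                  # pixels -> arc-seconds
angle_rad = math.radians(angle_arcsec / 3600)  # arc-seconds -> radians
size_km = distance_km * angle_rad              # small-angle approximation
# size_km lands near 2000 km -- suspiciously large for a comet nucleus
```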
I live in a heavily light-polluted city, so unless it's bright, I won't see it. But boy, was I ever happy with the outcome of this comet! In my books C/2020 F3 (NEOWISE) falls in the "Great Comet" category, and it's by far the most photographed comet in history because it was visible for so long to folks on both sides of the globe.
My last encounter with a bright comet was in 2007 with periodic comet 17P/Holmes, when it brightened by a factor of half a million in 42 hours in a spectacular outburst to become visible to the naked eye. It was the largest outburst ever observed, with the coma temporarily becoming the biggest visible object in the solar system. Even bigger than the Sun!
So when the community was feverishly sharing pictures of the “NEOWISE” I had to try my luck; I wasn’t about to miss out on this chance of a lifetime.
I have to say that my first attempt was a complete failure. Reading up on the best time to photograph this comet, most sources indicated one hour before sunrise. So I checked on Google Maps where I could set up for an unobstructed view of the eastern horizon (my house was no good), and in the early morning, gear ready at 4am, I set off. To my disappointment, and to the "get-back-to-bed-you-idiot" voice in my head, it didn't work out. By the time I got to the spot and had the camera ready, the sky was already too bright. No comet in sight, and try as I might with the DSLR, nothing.
Two evenings later, with another cloudless overnight sky, I decided to try again, but this time I would make it happen by setting the alarm one hour earlier: 3am. That's all it took! I was able to set up before the sky brightened, and then CLICK! I had this great comet recorded on my Canon's SD memory card.
I didn’t need any specialized gear. All it took was a DSLR, a lens set to manual focus, a tripod and 5 seconds of exposure and there was the comet. I snapped a bunch of frames at different settings and then headed back home to catch the last hour of sleep before starting another day of work. Lying in bed I felt like I had accomplished something important.
As the comet swung around our Sun and flipped from a dawn to a dusk object, I decided I should try to photograph it once again, this time with the Skywatcher 80ED telescope. At that point the comet was dimming, so every day that passed would make it more difficult. It was only visible on the north-west horizon at sunset, which meant setting up in front of the house, fully exposed to street lights. Not ideal, but I had nothing to lose in trying.
I used the tree in our front yard as a screen and was able to locate and photograph this great comet. Polar alignment wasn't easy, and once I finally had the comet centered and focused with the camera, overhead power lines were in the field of view. I decided to wait 30 minutes and let the sky rotate the lines out of the view. Besides, it would get darker anyway, which should help with the photo. But I also realized that my window of opportunity was small before houses would start obscuring the view as the comet dipped to a lower angle above the horizon.
I'm sure in the years to come people will debate whether this was a "Great Comet", but in my books it's definitely one to remember. It cemented for me the concept that comets are chunks of "dirty ice" that swing around the Sun. Flipping from a dawn to a dusk object after a pass around the Sun is a great demonstration of the elliptical orbits of objects moving through our solar system.
When I first started astrophotography, you had people like me who were just starting out and did it on the cheap with a webcam, a small Newtonian telescope and a basic mount, or you could fork out an astronomical amount of cash for really specialized gear.
Below is a photo of Messier 101, the Pinwheel Galaxy, taken last week with a $500 Skywatcher 80ED telescope and a Canon 80D DSLR on an unguided mount.
I agree that it's not as fancy as some research-grade setups or what some other hobbyists produce, but it's many times better than my first try in 2008 (below).
What has changed? Well, for starters, the optical quality of beginner and intermediate telescopes has dramatically improved, largely thanks to automated, computerized lens and mirror shaping and polishing. Yes, they are made in China, but so are most carbon-fiber bikes and the latest smartphones. Because the process is automated, quality can be tightly controlled and the results are hard to beat. A quality image starts with being able to collect and focus light properly, and for $500 you can get some really decent optics.
Another great boost comes from improvements in camera sensors. DSLRs became a go-to solution because they were a cheap way of getting a large sensor with low read noise and good sensitivity. Of course, specialized monochrome astro-gear is still available for backyard astronomers, but the one-shot color results of a DSLR are hard to match. DSLRs offer ease of use, compatibility with most software, and the biggest bang for your dollar compared to specialized astro-cameras.
And the third major improvement in 10 years is computing power. A night's imaging session can easily generate 1GB of RAW images that need to be processed. Transferring and storing data is now cheap, and software has followed in lock-step to handle the increase in image size and quantity. Registration and stacking software can easily handle, at the pixel level, hundreds of images each with millions of pixels. Sure, it might take 20 minutes to process 120 photos from the DSLR, but that is a far cry from the hours of computer crunching it used to take, where if your parameters were wrong, you had just wasted an hour…
So while light pollution is choking the stars out of the night sky, one easy way to gain access to the universe is through astro-photography. It’s now easier and cheaper than ever to get good results with a simple setup.
Back in March, the astronomy crowd was buzzing about a possible "naked-eye" comet expected in late May 2020. Comet C/2019 Y4 (ATLAS) was first detected at the tail end of December as a very dim magnitude 19.6 object, and by mid-March it had brightened to magnitude 8, an easy telescope target. For those not familiar with the magnitude scale, going from 19.6 to 8 is not a doubling in brightness, but roughly a 40,000-times increase!
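The magnitude scale is logarithmic: by definition, a difference of 5 magnitudes corresponds to a factor of 100 in brightness, so the jump from 19.6 to 8 can be checked in two lines:

```python
# Each 5-magnitude step is a factor of 100 in brightness, so each
# single magnitude is a factor of 100**(1/5), about 2.512.
m_faint, m_bright = 19.6, 8.0
ratio = 100 ** ((m_faint - m_bright) / 5)
# ratio comes out around 44,000 -- an enormous jump in brightness
```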
That dramatic increase in brightness helped fuel the hype for the Great Comet of 2020, and there were two other factors that got people excited:
It would be visible at dusk from the Northern Hemisphere, hence within easy viewing for much of the world's population.
It was following a similar orbital path to the "Great Comet of 1843", suggesting that it came from the same original body and could potentially provide the same viewing spectacle. The 1843 comet was visible in daytime!
Well, all that went south when the comet's breakup was observed in late March, after it peaked momentarily at magnitude 7. It began to dim, along with any hopes of a Great Comet repeat. Below is a graph showing the original (grey line) and revised (red) comet brightness forecasts (dots being observed measurements) on a chart created by Seiichi Yoshida.
Comet C/2019 Y4 (ATLAS) Brightness – Copyright(C) Seiichi Yoshida
Comet C/2019 Y4 is expected to make its closest approach to the Sun on May 31st; however, most experts believe it will disintegrate before that date. Seeing that I had a small window of opportunity to capture the comet, I decided to try my luck last Saturday evening.
Below is a heavily processed (and ugly) image that I got by combining 25 photos (15 seconds each at ISO 3200) using my Skywatcher 80ED scope. The photo just barely brings out the distinctive blue-green hue and elongated shape of a comet. It was around magnitude 10, very diffuse, and about 147 million km away from us the day this photo was taken.
Comet C/2019 Y4 (ATLAS) on April 18, 2020 – Very faint at about magnitude 10. Imaged with 80ED telescope 25 x 15sec
I pushed the image processing so hard that I was able to pick up faint magnitude 13 galaxies!
On to the next comet!
Telescope: Skywatcher 80ED
Camera: Canon 80D
Image: 25 x 15sec at ISO3200 (6 min 15 sec integration time)
I have two telescopes: a Skywatcher 80ED (identical to the Orion 80ED; 600mm focal length at F7.5) and a William Optics Gran Turismo 71 APO with 420mm focal length at F5.9. Just looking at the numbers, it's easy to see that the GT71 is the smaller and faster telescope, and because of its shorter focal length it should have a larger field of view.
Comparing size with Skywatcher 80ED
Now that I've photographed the same part of the sky with both telescopes, I can overlap the images to see exactly what the difference in field of view is between these two telescopes.
First I need to say that the GT71 NEEDS a field flattener when imaging with a DSLR. The distortions off-center are terrible. Don't get me wrong: as a triplet telescope (including one fluorite element for color correction), it has provided me with my best lunar photos. However, it has issues covering the large DSLR sensor. The SW80ED provides a much flatter field of view for photography out of the box.
The flattener for GT71 is in the plans…
So how do the two telescopes compare? Below is a photo of the open star cluster Messier 38 taken with my GT71, over which I've overlaid, as a brighter box, an image taken with the SW80ED. For those wondering, I used IRIS to register and align both photos using the coregister command.
Messier 38 – Field of view with William Optics GT71 and Skywatcher 80ED (brighter box)
Both telescopes deliver just about the same field of view, with the GT71 providing about 1 degree more horizontally; the difference is much smaller vertically.
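Those numbers line up with simple field-of-view geometry. A quick Python check, assuming the Canon 80D's APS-C sensor dimensions (22.3 x 14.9 mm):

```python
import math

SENSOR_W, SENSOR_H = 22.3, 14.9   # Canon 80D APS-C sensor size, mm

def fov_deg(focal_mm, dim_mm):
    """Angular field of view covered by one sensor dimension."""
    return math.degrees(2 * math.atan(dim_mm / (2 * focal_mm)))

gt71_h = fov_deg(420, SENSOR_W)   # about 3.0 deg
sw80_h = fov_deg(600, SENSOR_W)   # about 2.1 deg
gt71_v = fov_deg(420, SENSOR_H)   # about 2.0 deg
sw80_v = fov_deg(600, SENSOR_H)   # about 1.4 deg
# Horizontal difference ~0.9 deg; vertical difference only ~0.6 deg
```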
What did surprise me is how much light the GT71 gathers. Inspecting the photos showed that even with the smaller aperture, the GT71 has great light-gathering capability: I got down to magnitude 12 with only 15 seconds of exposure, which is comparable to the SW80ED at 30 seconds.
WO GT71 vs SW80ED Optics
In conclusion, I would say the GT71 has good photographic potential, but it requires a field flattener if it will be used with a DSLR. Stay tuned…
10-day-old Moon (April 04, 2020) – Benoit Guertin
The photo above is of a 10-day-old Moon taken a few days ago. After the darker "seas" of old lava flow, one particularly bright crater in the southern hemisphere stands out, especially with the rays that appear to emanate from it. That is Tycho, an 85km-wide and 5km-deep crater, one of the more "recent" ones if you consider 109 million years ago the not-too-distant past. The Moon is 4.5 billion years old, after all, having formed just 60 million years after the solar system. On the Moon, "fresh" material has a higher albedo and hence appears brighter, whiter.
The bright rays surrounding Tycho are made of material ejected (up to 1500km away) by the impact of an 8-to-10km-wide body. In time these rays will disappear as the Moon continues to be bombarded by micrometeorites, which stir the material on the surface. The rays are more prominent on the eastern side, as would be expected from an oblique impact.
Tycho is named after the Danish astronomer Tycho Brahe.
The Surveyor 7 spacecraft landed about 25km north of the crater on January 10, 1968.
Ever wondered how mosaic space photos were assembled before the invention of powerful software algorithms to stitch them together? Take a look at the series of Surveyor 7 mosaic photos. Someone had to painstakingly print each photo and lay them on a grid in a specific pattern matching the optical field and geometry.