There’s been lots of attention on Mars this past week. I can’t really blame the media for all the coverage; the Mars 2020 Perseverance EDL to the Martian surface was really cool and a great feat for NASA. I enjoyed watching it live on the NASA YouTube feed. But this weekend let’s turn our attention to the Snow Moon, the only full moon in February.
The full moon will occur at 3:17am Saturday, so tomorrow evening will be the best time to catch it. There’s nothing particularly special about this full moon: it’s neither a Blue Moon (the second full moon in a month) nor a “Super Moon”. The name Snow Moon comes from the Farmer’s Almanac, as February is normally the month that receives the most snow in North America.
The great thing about full moons is that you don’t need to stay up all night and wait outside in the frigid cold to see one. At this time of year, in the Northern hemisphere, the Moon is visible for more than 12 hours a day.
If you’re tempted to photograph the Snow Moon, leave the mobile phone behind; it’ll just give poor results and you’ll end up frustrated with frozen fingers. Instead, just enjoy the view, paying close attention to the various dark “seas” spanning the lunar surface.
If you do try taking a picture, grab a DSLR or compact camera with manual mode. Set the ISO around 200 and the focus to manual. Your shutter speed should be fast, around 1/800s; a full moon is surprisingly bright. You’ll get better results by slightly under-exposing your shot. If you have a tripod, use it; otherwise try to steady yourself on something (railing, chair, car roof, etc.). Subtle movement can easily ruin the details in your photos.
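If your camera can’t match those exact settings, the standard reciprocity trade-off lets you adapt them. Here’s a tiny sketch starting from the ISO 200 / 1/800 s suggestion above (the other ISO values are just examples):

```python
# Equivalent moon exposures: halving the ISO needs twice the shutter time.
# Starting point is the ISO 200 at 1/800 s suggested above.

BASE_ISO = 200
BASE_SHUTTER = 1 / 800  # seconds

for iso in (100, 200, 400, 800):
    shutter = BASE_SHUTTER * BASE_ISO / iso
    print(f"ISO {iso:4d} -> 1/{round(1 / shutter)} s")
```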
Looking back, the “Great Comet of 2020”, C/2020 F3 NEOWISE, was a fantastic sight and well worth the 3am alarm to snap some photos back in July. But comet images are notoriously difficult to work with. Should I also add that in older times, comets were often seen as a bad omen, the bearer of bad news? Cough, cough COVID-19 cough…
Anyway, back to astronomy… There are essentially two types of photo registration (alignment) software out there: 1) Deep Sky, which uses pin-point stars to perform the alignment; 2) Lunar/Planetary, which aligns on surface details across the large “disk” of a planet or the Moon.
So when you capture long wispy comets like the one in the RAW image below, software like DSS or Registax just can’t cope.
RAW image: Canon 80D, 300mm, f/5.6, 5-second exposure at ISO 3200 (09-Jul-2020)
I turned to standard photo-editing software for manual alignment and stacking. This essentially means opening one “base” image and then adding a 2nd image as a new layer. I change that 2nd layer’s mode to “Difference” and manually align it to match the base layer. Once that is done I change the layer mode to Addition and hide the 2nd layer. Repeat the steps for the 3rd, 4th, 5th, etc. layers until you’ve added all your images, always aligning against the “base” image to ensure no drift.
If you simply add all those layers up, you will get one very bright image, because you are adding pixel intensities. You can do that and then work with the Levels and Curves to bring it back down, or if, like me, you’re working with GIMP, use the Py-Astro plug-ins to do the merging and intensity scaling in a single step with Merge all layers. Py-Astro can be downloaded here. I haven’t explored all that the plugins have to offer; that will hopefully be another blog.
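For those who prefer scripting to clicking through layers, here’s a minimal numpy/Pillow sketch of the same align-and-stack idea. The filenames, the frame count and the ±20-pixel search window are placeholder assumptions, the brute-force search is slow, and 8-bit images are assumed; it’s the principle that matters:

```python
# Manual align-and-stack: for each frame, find the integer (dx, dy) shift
# that minimizes the "Difference" against the base frame, then average.
import numpy as np
from PIL import Image

frames = [np.asarray(Image.open(f"img_{i:02d}.tif"), dtype=np.float64)
          for i in range(11)]                    # placeholder filenames
base = frames[0]

def best_shift(img, ref, search=20):
    """Brute-force search for the shift with the smallest mean difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - ref).mean()   # the "Difference" overlay
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

stack = base.copy()
for img in frames[1:]:
    dy, dx = best_shift(img, base)               # always align to the base
    stack += np.roll(np.roll(img, dy, axis=0), dx, axis=1)

stack /= len(frames)                             # average instead of raw sum
Image.fromarray(np.clip(stack, 0, 255).astype(np.uint8)).save("stacked.tif")
```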
Stacking 11 individual frames results in an improvement over a single RAW image (image below). With the stacked image, I’m able to work with the intensities to darken the sky while keeping the comet tail bright.
After merging 11 images manually aligned in GIMP
However the sky gradient is pretty bad, due to the camera lens and because at 4am the sun is starting to brighten the horizon. So off to IRIS to correct the background gradient. From GIMP I save the file as a 16-bit FITS that I can open in IRIS. For steps on how to do this, see my blog about how to remove the sky gradient.
After a quick spin in IRIS, I’m back in GIMP for final color and intensity adjustments: I boosted the blue channel and adjusted the dark levels for a darker sky.
Final Processed image of C/2020 F3 NEOWISE from 09-JUL-2020
The folks at JPL created a short film showcasing Perseverance’s critical descent phase for the Mars landing. If everything goes according to plan, we shall have a new rover on Mars at 3:40pm EST on February 18, 2021.
Perseverance is currently “cruising” through space toward Mars at 84,600 km/h. To give you an idea of what kind of speed that is, here are a few benchmarks (a quick script comparing them follows the list):
The fastest commercial jet: the Concorde, flying at Mach 2.04, just under 2,200 km/h
Space Shuttle re-entry speed: 28,100 km/h
Voyager 1, leaving our solar system: 61,500 km/h
Parker Solar Probe (fastest man-made object): over 250,000 km/h
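And the promised side-by-side, using only the figures above:

```python
# Compare the benchmark speeds against Perseverance's cruise speed (km/h).
speeds = {
    "Concorde (Mach 2.04)":    2_200,
    "Space Shuttle re-entry": 28_100,
    "Voyager 1":              61_500,
    "Perseverance (cruise)":  84_600,
    "Parker Solar Probe":    250_000,
}
cruise = speeds["Perseverance (cruise)"]
for name, v in speeds.items():
    print(f"{name:24} {v:>8,} km/h  ({v / cruise:5.2f}x Perseverance)")
```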
Perseverance was launched on July 30th, 2020 from Cape Canaveral Air Force Station, Florida, on top of an Atlas V-541 rocket.
Animation of Mars 2020’s trajectory around the Sun. Data source: HORIZONS System, JPL, NASA
The only way the rover can shed its cruising speed is by plunging into the Martian atmosphere at the right angle and using atmospheric friction to slow down. The “7 minutes of terror” is the time the rover will spend on entry, descent and landing: from approaching Mars at the right angle to touching down at the desired spot on the Martian surface.
Lots of steps need to go right, and be timed correctly, to have a successful landing. Only 22 of the 45 landers sent to Mars have survived the landing. The US is by far the country with the most success (sorry Russia, your space program is awesome, but you suck at landing on Mars).
Glance up at the night sky on the evening of February 18, 2021 and it will be very easy to spot Mars, but also the Pleiades star cluster (Messier 45). Mars will be about 5 degrees north of an almost half-illuminated moon. And if you keep looking about 10 degrees higher up, you’ll see the famous open star cluster nicknamed the Seven Sisters, also used as the Subaru emblem.
Ever since Photoshop (and other editing software) allowed users to manually manipulate pixels, there have been edited pictures. And with the computing power available at our fingertips and some built-in tools, it’s surprisingly simple to “stitch” together two photos. So full disclosure: the image below is “Photoshopped”.
I decided, as an exercise, to see how to insert a photo of the Moon taken with my telescope into a nighttime skyline.
The New York City skyline was taken by me during a visit to the Empire State Building in October last year (pre-pandemic) with a Canon 80D, 17mm f/4.0 lens at 1/50s, ISO 6400. The Moon was shot with the same camera body, but paired to a Skywatcher 80ED, with the settings at ISO 200 and 1/20s. There is no software scaling of either photo; they are stitched “as is”.
This image was done with GIMP. I also inserted 2 “blurred” layers to create a small amount of haze around the Moon to make it look a little more natural. The Moon was purposely placed “behind” a skyscraper to give the image an element of depth, and I lowered the color temperature.
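If you’d rather script the compositing, here’s a rough Pillow sketch of the same idea. Filenames, position and blur radii are made-up placeholders, and the “behind the skyscraper” masking step is omitted:

```python
# Composite a telescope Moon shot onto a skyline, with blurred copies of the
# Moon acting as the soft "haze" layers described above.
from PIL import Image, ImageFilter

skyline = Image.open("skyline.jpg").convert("RGB")   # placeholder files
moon = Image.open("moon.jpg").convert("L")           # grayscale moon shot

pos = (1200, 300)                                    # placeholder position
# Two blurred copies of the Moon create the halo, widest first
for radius in (25, 8):
    halo = moon.filter(ImageFilter.GaussianBlur(radius))
    skyline.paste(Image.merge("RGB", (halo,) * 3), pos, mask=halo)
# Sharp Moon on top; masking by its own brightness keeps the black sky out
skyline.paste(Image.merge("RGB", (moon,) * 3), pos, mask=moon)
skyline.save("composite.jpg")
```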
So dig through some of your old photos and start experimenting…
What makes it possible to generate a photo of the Milky Way from what appears to be just a faint trace in the original shot?
The final image (left) and a single frame as obtained from the camera (right)
It all comes down to signal versus noise. Whenever we record something (sound, motion, photons, etc.) there is always the information you WANT to record (the signal) and various sources of noise.
Noise can have many sources:
background noise (light pollution, a bright moon, sky glow, etc.)
electronic noise (sensor readout, amp glow, hot pixels)
sampling noise (quantization, randomized errors)
This noise can be random or steady/periodic in nature. Steady or periodic noise is easy to filter out: it can be identified and isolated because it will be the same in all the photos. Random noise, however, is more difficult to eliminate, precisely because of its random nature. This is where the signal-to-noise ratio becomes important.
In astrophotography we take not only the photos of the sky, but also bias, dark and flat frames: this is to isolate the various sources of noise. A bias shot is a short exposure that captures the electronic read-out noise of the sensor and electronics. The darks are long exposures at the same settings as the astronomy photo, to capture noise that appears during long exposures due to sensor characteristics such as hot pixels and amplifier glow. Cooling the sensor is one way to reduce this noise, but that is not always possible. Finally, the flat photo is taken to identify the optical noise caused by the characteristics of the lens or mirror, as well as any dust that happens to be in the way.
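In code terms, a typical way the master calibration frames get applied looks something like this. It's a generic sketch of the usual recipe, not the exact math of any particular stacking tool:

```python
# Apply master bias/dark/flat frames to a light frame (arrays assumed to be
# already-averaged "master" frames of the same shape as the light frame).
import numpy as np

def calibrate(light, master_bias, master_dark, master_flat):
    """Return a light frame corrected with the calibration masters."""
    dark_corrected = light - master_dark   # removes hot pixels and amp glow
                                           # (the dark already contains bias)
    flat = master_flat - master_bias       # the flat still contains bias
    flat_norm = flat / flat.mean()         # scale the flat around 1.0
    return dark_corrected / flat_norm      # evens out vignetting and dust
```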
But what can be done about random noise? That is where increasing the number of samples has a large impact. For random noise, increasing the number of sample points improves the signal-to-noise ratio by the square root of the number of samples. Hence averaging 4 images gives a 2-times improvement over a single photo; going to 9 is 3 times better. Etc…
You might be thinking: “Yeah, but you are averaging, so the signal is still the same strength.” That is correct; however, because my signal-to-noise ratio is improved, I can be much more aggressive in how the image is processed. I can boost the levels that much more before the noise becomes a distraction.
But can’t I just duplicate my image and add the copies together? No, that won’t work, because we need the noise to be random, and if you duplicate your image the noise is identical in every copy.
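A tiny simulation makes both points concrete: stacks of independent frames follow the square-root rule, while duplicated frames don't improve at all.

```python
# Simulate a constant "signal" buried in random noise, stacked N times.
import numpy as np

rng = np.random.default_rng(42)
signal = 10.0
noise_sigma = 5.0          # single-frame SNR = 2

for n in (1, 4, 9, 16):
    frames = signal + rng.normal(0, noise_sigma, size=(n, 100_000))
    stacked = frames.mean(axis=0)
    snr = signal / stacked.std()
    print(f"N={n:2d}: SNR = {snr:5.2f}  "
          f"(expected {signal / noise_sigma * np.sqrt(n):5.2f})")

# Duplicating one frame: identical noise in every "copy", so no improvement.
one = signal + rng.normal(0, noise_sigma, size=100_000)
dup_stack = np.stack([one] * 9).mean(axis=0)
print(f"9 duplicates: SNR = {signal / dup_stack.std():5.2f}  (same as N=1)")
```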
So even if you are limited to taking 30-second, or even 5-second, shots of the night sky and can barely make out what you want to photograph, don’t despair: just take LOTS of them and you’ll be surprised what can come out of your photos.
The simplest form of astrophotography is nothing more than a camera on a tripod shooting long exposures. However, by the time you get around to stacking and stretching the levels of your photos to accentuate various elements, such as the Milky Way, the sky gradient will become more apparent. That gradient can come from city lights, the Moon up above, and the thicker atmosphere dispersing light at low angles to the horizon. Normally, the wider the field of view, the greater the gradient.
Below is a RAW 20-second exposure of the Milky Way near the horizon, taken with a Canon 80D equipped with a 17mm f/4.0 lens. The background has a slight gradient, brighter at the bottom. Not all that bad.
But once you stack multiple exposures and stretch the levels to get the Milky Way to pop out, the gradient only gets worse.
There are various astrophoto software packages that can remove the sky gradient. The one that I’m familiar with and have been using is IRIS. I know the software is old, but it does a great job. So after I’ve completed my registration and stacking of images with DeepSkyStacker (see my Astrophotography in the City article), the next step is to open the resulting image in IRIS.
Once the stacked image is loaded in IRIS, head over to the Processing menu and select Remove gradient (polynomial fit)… Actually, to get the best results you should first have the background and color corrected, as well as the edges of your photo trimmed. Got that covered here.
The following menu will appear.
Normally the default settings (as above) work well. But this image has some foreground content (trees), and that will cause the result to look a little odd. The algorithm is designed to avoid sampling stars, but it does not cope as well with foreground content like the trees at the bottom of the image.
To correct this you must use the gradient removal function with a mask. The quickest way to create a mask is with the bin_down <value> command. This changes to white all pixels with intensities below <value>, and makes black all pixels above it. Areas in black will not be used for sampling, while those in white will. A little trial-and-error is sometimes necessary to select the right value.
In this case, even with the right bin_down value, the trees that I want to mask are not black, so I use the fill2 0 command to create black boxes and roughly block out the trees.
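In numpy terms, the whole mask-building step boils down to something like this. The threshold semantics follow the bin_down description above, and the box coordinates are made-up placeholders:

```python
# Build a sampling mask: white (1) where the sky may be sampled, black (0)
# where it may not -- thresholding first, then blacking out foreground boxes.
import numpy as np

def make_mask(img, value, boxes):
    mask = np.where(img < value, 1.0, 0.0)   # white below threshold (bin_down)
    for (y0, y1, x0, x1) in boxes:           # rough rectangles over the trees
        mask[y0:y1, x0:x1] = 0.0             # exclude them from sampling
    return mask

# Example with placeholder coordinates for the trees at the bottom:
# mask = make_mask(img, value=1200, boxes=[(900, 1080, 0, 1920)])
```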
Below is the result after using multiple fill rectangles to mask the trees. This does not need to be precise as the mask is simply used to exclude areas from sampling. It is not like a photo-editing mask.
The resulting mask is saved (I called it mask), and I load the original image back, this time using the gradient removal with the mask option selected.
The program generates a synthetic background sky gradient based on thousands of sample points and an order-3 polynomial. The image below shows the synthetic sky gradient the algorithm generated; this is what will be subtracted from the image.
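For the curious, the principle behind that synthetic gradient can be sketched in a few lines of numpy: sample the unmasked background, least-squares fit an order-3 two-dimensional polynomial, and subtract it. This is just the idea, not IRIS's actual implementation:

```python
# Fit and subtract a 2-D polynomial sky background using only masked pixels.
import numpy as np

def fit_background(img, mask, order=3, n_samples=5000):
    h, w = img.shape
    ys, xs = np.nonzero(mask > 0)                  # usable sky pixels only
    rng = np.random.default_rng(0)
    pick = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
    # Normalize coordinates to [0, 1] to keep the fit well conditioned
    y, x = ys[pick] / h, xs[pick] / w
    z = img[ys[pick], xs[pick]]

    # Design matrix with all terms x^i * y^j for i + j <= order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for (i, j) in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

    # Evaluate the synthetic sky over the full frame and subtract it
    Y, X = np.indices(img.shape)
    Y, X = Y / h, X / w
    sky = sum(c * X**i * Y**j for c, (i, j) in zip(coeffs, terms))
    return img - sky + sky.mean()                  # keep the overall level
```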
Image and the synthetic sky gradient that will be subtracted
The final image below is much better and more uniform. There are no strange dark and bright zones like the attempt without the mask.
If we compare the original raw image with the new stacked, stretched, and gradient-removed photo, the results are pretty impressive.
At one point in time we’ve all heard the saying that we are made of star dust. Therefore our home, the Milky Way, filled with 250 billion stars, should be rather dusty. Right? Well it is, and one famous dust lane that we often see even has a name: the Great Rift.
Say you are out camping this summer, and you spot the Milky Way, amazed at how many stars you can see away from the city. You remember you have your camera and decide to set up for some long-exposure shots to capture all this beauty (let’s go for 20 seconds at ISO 3200, 17mm f/4.0) pointing at the constellation Cygnus. A bit of processing and you should get something like this.
The Milky Way centered on the constellation Cygnus.
Not bad! Lots of stars… a brighter band where the arm of the Milky Way is located, and some darker spots in various places. Those darker areas are gigantic dust clouds between Earth and the arms of our spiral galaxy that obscure the background stars. If only there was a way to remove all those stars, you could better see these dark areas.
And there is a way to remove stars! It’s called StarNet++; it takes a load of CPU power and works like magic to remove stars from photos. Abracadabra!
Above image after processing with the StarNet++ algorithm
Behold! The Great Rift! Well, actually just a portion of it; with this camera setup I get at most a 70-degree field of view of the sky. Nevertheless, the finer details of these “dark nebulae” can be appreciated.
Stripping the stars from a photo does have advantages: it allows manipulation of the background “glow” and dust lanes without concern for what happens to the foreground stars. The resulting image (a blend of both the starless and original images) has improved definition of the Milky Way, higher contrast, and softer stars that improve the visual appeal.
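One common way to do that blend (a sketch of the general technique, not any specific tool's algorithm) is to extract the stars as a difference layer and screen them back over the processed starless background:

```python
# Recombine a processed starless image with the original's stars.
# All arrays are assumed to be floats scaled to [0, 1].
import numpy as np

def recombine(original, starless, processed_starless):
    stars = np.clip(original - starless, 0.0, 1.0)   # stars-only layer
    base = np.clip(processed_starless, 0.0, 1.0)     # stretched background
    return 1.0 - (1.0 - base) * (1.0 - stars)        # "screen" blend
```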
While there are plenty of stars above us, what defines a nice Milky Way shot is the delicate dance of light and darkness between the billions of stars and the obscuring dust clouds.
Photo Info: Canon 80D, 13 × 20 sec (4 min 20 sec integration time), 17mm f/4.0, ISO 3200. Processing: DeepSkyStacker; IRIS for background gradient removal and color adjustment; StarNet++; GIMP for final processing.
When observing a comet, what we see is the outer coma: the dust and vapor outgassing from the nucleus as it gets heated by the Sun.
So I decided to take one of my photos taken with my Skywatcher 80ED telescope (600mm focal length) and see if I could process the image to spot where the nucleus is located.
This can be achieved with the MODULO command in IRIS, viewing the result in false color. The results are better if you apply a logarithmic stretch to the image before the MODULO command. It took some trial-and-error to get the right parameters, but the end result isn’t so bad.
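The gist of the trick, sketched in numpy/matplotlib terms rather than IRIS's actual implementation (the file name and band spacing are placeholders): the log stretch turns the coma's steep brightness falloff into a gentle ramp, and the modulo wraps that ramp into repeating bands, effectively isophote contours around the nucleus.

```python
# Log stretch, then modulo, then a false-color display.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = np.asarray(Image.open("comet.tif").convert("L"), dtype=np.float64)
stretched = np.log1p(img)                    # logarithmic stretch first
banded = stretched % (stretched.max() / 8)   # band spacing: trial and error
plt.imshow(banded, cmap="jet")               # view the bands in false color
plt.show()
```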
Studying the internal structure of comet C/2020 F3 NEOWISE (Benoit Guertin)
For the fun of it, I tried to see if I could calculate the size of the comet nucleus using the image. At its narrowest, the nucleus on the photo spans 5 pixels. Based on a previous plate-solve result I know that my setup (Canon 80D and Skywatcher 80ED telescope) gives a scale of 1.278 pixels per arc-second. I then used Stellarium to get the Earth-comet distance on July 23rd (103.278 million km).
When I plugged in all the numbers I got a comet nucleus size of approximately 2000 km, which to me seemed a little on the BIG side.
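Here's the back-of-the-envelope calculation, using only the numbers above:

```python
# Small-angle size estimate: pixels -> arc-seconds -> radians -> kilometers.
import math

span_px = 5                      # narrowest extent measured on the photo
scale_px_per_arcsec = 1.278      # from the earlier plate-solve
distance_km = 103.278e6          # Earth-comet distance on July 23rd

angle_arcsec = span_px / scale_px_per_arcsec
angle_rad = math.radians(angle_arcsec / 3600)
size_km = distance_km * angle_rad
print(f"{size_km:.0f} km")       # ~1960 km, i.e. roughly 2000 km
```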
I live in a heavily light-polluted city, so unless it’s bright, I won’t see it. But boy was I ever happy with the outcome of this comet! In my books, C/2020 F3 (NEOWISE) falls in the “Great Comet” category, and it’s by far the most photographed comet in history because it was visible for so long to folks on both sides of the globe.
My last encounter with a bright comet was in 2007 with periodic 17P/Holmes, when it brightened by a factor of half a million in 42 hours in a spectacular outburst to become visible to the naked eye. It was the largest outburst ever observed, with the coma temporarily becoming the biggest visible object in the solar system. Even bigger than the Sun!
Comet 17P/Holmes November 2, 2007 (Benoit Guertin)
So when the community was feverishly sharing pictures of NEOWISE, I had to try my luck; I wasn’t about to miss out on this chance of a lifetime.
I have to say that my first attempt was a complete failure. Reading up on the best time to photograph this comet, most sources indicated one hour before sunrise. So I checked Google Maps for a spot with an unobstructed view of the eastern horizon (my house was no good), and in the early morning, gear ready at 4am, I set off. To my disappointment, and vindicating the “get-back-to-bed-you-idiot” voice in me, it didn’t work out. By the time I got to the spot and had the camera ready, the sky was already too bright. No comet in sight, and try as I might with the DSLR, nothing.
Two evenings later, with another cloudless overnight sky, I decided to try again, but this time I would make it happen by setting the alarm one hour earlier: 3am. That is all it took! I was able to set up before the sky brightened, and then CLICK! I had this great comet recorded on my Canon’s SD memory card.
Comet C/2020 F3 (NEOWISE) in the dawn sky on July 9th. (Benoit Guertin)
I didn’t need any specialized gear. All it took was a DSLR, a lens set to manual focus, a tripod, and 5 seconds of exposure, and there was the comet. I snapped a bunch of frames at different settings and then headed back home to catch the last hour of sleep before starting another day of work. Lying in bed, I felt like I had accomplished something important.
As the comet swung around our Sun and flipped from a dawn to a dusk object, I decided to try photographing it once again, this time with the Skywatcher 80ED telescope. At that point the comet was dimming, so every day that passed would make it more difficult. It was only visible above the north-west horizon at sunset, which meant setting up in front of the house, fully exposed to street lights. Not ideal, but I had nothing to lose by trying.
Setup in front of the house, fully exposed to street lights to catch the comet.
I used the tree in our front yard as a screen and was able to locate and photograph this great comet. Polar alignment wasn’t easy, and when I finally had the comet centered and focused with the camera, overhead power lines were in the field of view. I decided to wait 30 minutes and let the sky rotate the lines out of the view. Besides, it would get darker anyway, which should help with the photo. But I also realized that my “window” of opportunity was small before houses would start obscuring the view as the comet dipped lower toward the horizon.
I’m sure in the years to come people will debate whether this was a “Great Comet”, but in my books it’s definitely one to remember. It cemented for me the concept that comets are chunks of “dirty ice” that swing around the Sun. Flipping from a dawn to a dusk object after a pass around the Sun is a great demonstration of the elliptical nature of objects moving in our solar system.