August 20, 2021 Not Quite Full

May Lunar Eclipse (Yes, a Super Moon)

I hope that some of you will take a few minutes this evening to head outside and glance up at the Moon. Not only is tonight a “Super Moon” but, depending on where you are, you may find the Moon taking on a red hue due to a lunar eclipse.

September 27, 2015 Lunar Eclipse

For tonight’s event, those around the Pacific Rim are best placed to see the lunar eclipse. On the east coast of North America you might spot the start of the eclipse as the Moon sets in the early morning.

Location of best viewing. Leah Tiscione / S&T; Source: USNO

Even if you are not in a favorable spot, take the time to look at the Moon. There’s this timeless element to it, knowing that it’s been there for billions of years and will continue to be there for many more.

It is also accessible to everyone, no matter how light-polluted your sky happens to be.

The best way to see the Moon is with nothing but your own two eyes. Resist the urge to attempt a photo with your phone; that will only end in frustration. All photographs of the Moon are heavily processed, because it’s very hard for a camera to handle both the brightness of a full Moon and the black of the night sky, or the glowing halo shining through thin clouds. And when you do get the brightness under control, all the subtle details of the Moon’s surface are lost. Your eyes have both the dynamic range to handle that contrast and the resolution to really enjoy the sight.

Two separate shots and 15 minutes of processing were required for this, yet your eyes can easily see these details in real time.

2021 Snow Moon

February’s Snow Moon

There’s been lots of attention on Mars this past week. I can’t really blame the media for all the coverage; the Mars 2020 Perseverance EDL (entry, descent and landing) to the Martian surface was really cool and a great feat for NASA. I enjoyed watching it live on the NASA YouTube feed. But this weekend let’s turn our attention to the Snow Moon, the only full moon in February.

The full moon will occur at 3:17am Saturday, so tomorrow evening will be the best time to catch it. There’s nothing particularly special about this full moon; it’s neither a Blue Moon (the second full moon in a month) nor a “Super Moon”. The name Snow Moon comes from the Farmer’s Almanac, as February is normally the month that receives the most snow in North America.

The great thing about full moons is that you don’t need to stay up all night and wait outside in the frigid cold to see one. At this time of year, in the Northern hemisphere, the Moon is visible for more than 12 hours a day.

If you’re tempted to photograph the Snow Moon, leave the mobile phone behind; it’ll just give poor results and you’ll end up frustrated with frozen fingers. Instead just enjoy the view, paying close attention to the various dark “seas” spanning the lunar surface.

If you do try taking a picture, grab a DSLR or compact camera with a manual mode. Set the ISO around 200 and the focus to manual. Your shutter speed should be fast, around 1/800s; a full moon is surprisingly bright. You’ll get better results by slightly under-exposing your shot. If you have a tripod, use it; otherwise try to steady yourself on something (railing, chair, car roof, etc.). Subtle movement can easily ruin the details in your photos.
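Incidentally, those settings line up with the classic “Looney 11” rule of thumb for lunar exposure (shutter ≈ 1/ISO at f/11). A quick Python sanity check, as a sketch, assuming a lens around f/5.6:

```python
def looney_11_shutter(iso: int, f_number: float) -> float:
    """Shutter time in seconds for a full moon per the Looney 11 rule of thumb:
    at f/11 use roughly 1/ISO seconds; scale by (N/11)^2 for other apertures."""
    return (1.0 / iso) * (f_number / 11.0) ** 2

t = looney_11_shutter(iso=200, f_number=5.6)
print(f"Suggested shutter: 1/{round(1 / t)} s")  # ~1/772 s, close to the 1/800s above
```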

Clear skies!

Gallery of photos

Manually Processing Comet Images

Looking back, the “Great Comet of 2020”, C/2020 F3 NEOWISE, was a fantastic sight and well worth the 3am alarm to snap some photos back in July. But comet images are notoriously difficult to work with. Should I also add that in older times comets were often seen as a bad omen, the bearer of bad news? Cough, cough, COVID-19, cough…

Anyway, back to astronomy… There are essentially two types of photo registration (alignment) software out there: 1) deep-sky, which uses pin-point stars to perform the alignment; 2) lunar/planetary, which uses the large “disk” of a planet or the Moon to align on surface details.

So when you capture long wispy comets like the RAW image below, software like DeepSkyStacker (DSS) or RegiStax just can’t cope.

RAW image: Canon 80D, 300mm f/5.6, 5-second exposure at ISO 3200 (09-Jul-2020)

I turned to standard photo-editing software for manual alignment and stacking. Essentially, you open one “base” image and then add a 2nd image as a new layer. I set that 2nd layer’s blend mode to “Difference” and manually align it to match the base layer. Once that is done I change the layer mode to Addition and hide the layer. Repeat the steps for the 3rd, 4th, 5th, etc. layers until you’ve added all your images, always aligning against the “base” image to ensure no drift.
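If you’d rather script that alignment step, phase cross-correlation can estimate the x/y offsets for you. This is a minimal sketch assuming monochrome 2-D float arrays, not the exact GIMP workflow; for a moving comet you may want to crop to the region around the coma first, so the alignment follows the comet rather than the stars:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_to_base(base: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Return `frame` translated so it lines up with `base` (translation only)."""
    offset, _, _ = phase_cross_correlation(base, frame)
    return nd_shift(frame, offset)

# frames[0] acts as the base; every other frame is aligned against it:
# aligned = [frames[0]] + [align_to_base(frames[0], f) for f in frames[1:]]
```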

If you simply add all those layers up, you will get one very bright image, because you are adding pixel intensities. You can do that and then work with the Levels and Curves to bring it back down, or, if like me you’re working with GIMP, use the Py-Astro plug-ins to do the merging and intensity scaling in a single step with Merge all layers. Py-Astro can be downloaded here. I haven’t explored everything the plug-ins have to offer; that will hopefully be another blog post.
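For the curious, here’s roughly what that add-then-rescale amounts to in numpy terms. This is a sketch of the concept only, not the Py-Astro plugin’s actual code, and it assumes the frames are already aligned:

```python
import numpy as np

def stack_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Sum pre-aligned frames, then scale back into display range."""
    stacked = np.sum([f.astype(np.float64) for f in frames], axis=0)
    # Dividing by the frame count gives a plain average; any linear
    # rescale works, since levels/curves get stretched afterwards anyway.
    return stacked / len(frames)
```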

Stacking 11 individual frames results in an improvement over a single RAW image (image below). With the stacked image, I’m able to work with the intensities to darken the sky while keeping the comet tail bright.

After merging 11 images manually aligned in GIMP

However, the sky gradient is pretty bad, due to the camera lens and because at 4am the sun is starting to brighten the horizon. So off to IRIS to correct the background gradient. From GIMP I save the file as a 16-bit FITS that I can use in IRIS. For the steps on how to do this, see my blog post about how to remove the sky gradient.

After a quick spin in IRIS, I’m back in GIMP for final color and intensity adjustments: I boosted the blue channel and adjusted the dark levels for a darker sky.

Final processed image of C/2020 F3 NEOWISE from 09-Jul-2020, by Benoit Guertin

“7 Minutes of Terror”

The folks at JPL created a short film showcasing Perseverance’s critical descent phase for the Mars landing. If everything goes according to plan, we will have a new rover on Mars at 3:40pm EST on February 18, 2021.

Perseverance is currently “cruising” at 84,600 km/h through space with Mars as its target. To give you an idea of what kind of speed that is, here are a few benchmarks:

  • The fastest commercial jet: the Concorde, flying at Mach 2.04, just under 2,200 km/h
  • Space Shuttle re-entry speed: 28,100 km/h
  • Voyager 1, leaving our solar system: 61,500 km/h
  • Parker Solar Probe (fastest man-made object): over 250,000 km/h

Perseverance was launched on July 30th, 2020 from Cape Canaveral Air Force Station, Florida, on top of an Atlas V-541 rocket.

Animation of Mars 2020’s trajectory around the Sun. Data source: HORIZONS System, JPL, NASA

The only way the rover can decelerate from its cruising speed is by plunging into the Martian atmosphere at the right angle and using atmospheric friction to slow down. The “7 minutes of terror” is the time the rover will spend between hitting the atmosphere at the right angle and landing at the desired spot on the Martian surface.

Lots of steps need to go right, and be timed correctly, for a successful landing. Only 22 of the 45 landers sent to Mars have survived the landing. The US is by far the country with the most successes (sorry Russia, your space program is awesome, but you suck at landing on Mars).

Glance up at the night sky on the evening of February 18, 2021 and it will be very easy to spot Mars, but also the Pleiades star cluster (Messier 45). Mars will be about 5 degrees north of an almost half-illuminated Moon. And if you keep looking about 10 degrees higher, you’ll see that famous open star cluster, nicknamed the Seven Sisters and also used as the Subaru emblem.

My First Photoshopped Moon

Ever since Photoshop (and other editing software) allowed users to manually manipulate pixels, there have been edited pictures. And with the computing power available at our fingertips and some built-in tools, it’s surprisingly simple to “stitch” together two photos. So, full disclosure: the image below is “Photoshopped”.

As an exercise, I decided to see how to insert a photo of the Moon taken with my telescope into a nighttime skyline.

The New York City skyline was taken by me during a visit to the Empire State Building in October last year (pre-pandemic) with a Canon 80D and a 17mm f/4.0 lens at 1/50s, ISO 6400. The Moon was shot with the same camera body, but paired to a Skywatcher 80ED, with the settings at ISO 200 and 1/20s. There is no software scaling of either photo; they are stitched “as is”.

This image was done with GIMP. I also inserted 2 “blurred” layers to create a small amount of haze around the Moon to make it look a little more natural, and I lowered the color temperature. The Moon was purposely placed “behind” a skyscraper to give the scene an element of depth.

So dig through some of your old photos and start experimenting…

Signal and Noise

What makes it possible to generate a photo of the Milky Way from what appears to be just a faint trace in the original shot?

The final image (left) and a single frame as obtained from the camera (right)

It all comes down to signal vs. noise. Whenever we record something (sound, motion, photons, etc.) there is always the information you WANT to record (the signal) and various sources of noise.

Noise can have many sources:

  • background noise (light pollution, a bright moon, sky glow, etc.)
  • electronic noise (sensor readout, amp glow, hot pixels)
  • sampling noise (quantization, randomized errors)

This noise can be random or steady/periodic in nature. Steady or periodic noise is easy to filter out: it can be identified and isolated because it is the same in every photo. Random noise, however, is more difficult to eliminate, precisely because it changes from frame to frame. This is where the signal-to-noise ratio becomes important.

In astrophotography we take not only the photos of the sky, but also bias, dark and flat frames; this is to isolate the various sources of noise. A bias shot is a short exposure to capture the electronic read-out noise of the sensor and electronics. The darks are long exposures at the same settings as the astronomy photo, to capture the noise that appears during long exposures due to sensor characteristics such as hot pixels and amplifier glow. Cooling the sensor is one way to reduce this noise, but that is not always possible. Finally, the flat photo is taken to identify the optical “noise” caused by the characteristics of the lens or mirror, as well as any dust that happens to be in the way.
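As a rough illustration of how those calibration frames get used, here is a minimal numpy sketch of the standard approach; exact normalization details vary between programs, so treat it as the idea, not a drop-in pipeline:

```python
import numpy as np

def master(frames: list[np.ndarray]) -> np.ndarray:
    """Median-combine calibration shots to suppress their own random noise."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, master_bias, master_dark, master_flat):
    thermal = master_dark - master_bias   # hot pixels, amp glow
    flat = master_flat - master_bias      # vignetting, dust shadows
    flat = flat / flat.mean()             # normalize to ~1.0
    return (light - master_bias - thermal) / flat
```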

But what can be done about random noise? That is where increasing the number of samples has a large impact. For random noise, increasing the number of sample points improves the signal-to-noise ratio by the square root of the number of samples. Hence averaging 4 images gives a 2× improvement over a single photo; going to 9 makes it 3 times better, and so on.
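If you want to convince yourself of that square-root law, a quick simulation (with purely illustrative numbers) shows the noise shrinking as expected:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0                       # "true" pixel value
noise_sigma = 10.0                   # per-frame random noise

for n in (1, 4, 9, 16):
    frames = signal + rng.normal(0, noise_sigma, size=(n, 100_000))
    residual = frames.mean(axis=0) - signal
    print(f"N={n:2d}  measured noise ≈ {residual.std():.2f}  "
          f"(expected {noise_sigma / np.sqrt(n):.2f})")
```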

You might be thinking: “Yeah, but you are averaging, so the signal is still the same strength.” That is correct; however, because the signal-to-noise ratio is improved, I can be much more aggressive in how the image is processed. I can boost the levels that much more before the noise becomes a distraction.

But can’t I simply duplicate my image and add the copies together? No, that won’t work, because we need the noise to be random between frames; if you duplicate your image, the noise is identical in both copies and averaging changes nothing.

So even if you are limited to 30-second, or even 5-second, shots of the night sky and can barely make out what you want to photograph, don’t despair: just take LOTS of them and you’ll be surprised what can come out of your photos.

Milky Way from a stack of 8 × 20-second photos.

Removing Sky Gradient in Astrophoto

The simplest form of astrophotography is nothing more than a camera on a tripod shooting long exposures. However, by the time you get around to stacking and stretching the levels of your photos to accentuate various elements, such as the Milky Way, the sky gradient will become more apparent. That gradient can come from city lights, the Moon up above, and the thicker atmosphere dispersing light at low angles to the horizon. Normally, the wider the field of view, the greater the gradient.

Below is a RAW 20-second exposure of the Milky Way near the horizon, taken with a Canon 80D equipped with a 17mm f/4.0 lens. The background has a slight gradient, brighter at the bottom. Not all that bad.

But once you stack multiple exposures and stretch the levels to get the Milky Way to pop out, the gradient only gets worse.

There are various astrophoto programs that can remove the sky gradient. The one that I’m familiar with and have been using is IRIS. I know the software is old, but it does a great job. So after I’ve completed my registration and stacking of images with DeepSkyStacker (see my Astrophotography in the City article), the next step is to open the resulting image in IRIS.

Once the stacked image is loaded in IRIS, head over to the Processing menu and select Remove gradient (polynomial fit)… Actually, to get the best results you should first have the background and color corrected, as well as the edges of your photo trimmed. Got that covered here.

The following menu will appear.

Normally the default settings (as above) work well. But this image has some foreground content (trees), and that will cause the result to look a little odd. The algorithm is designed to avoid sampling stars, but it doesn’t cope as well with foreground content like the trees at the bottom of the image.

To correct this you must use the gradient removal function with a mask. The quickest way to create a mask is with the bin_down <value> command. This turns white all pixels with intensities below <value>, and black all pixels above it. Areas in black will not be used for sampling, while areas in white will. A little trial and error is sometimes necessary to select the right value.

In this case, even with the right bin_down value, the trees that I want to mask are not black, so I use the fill2 0 command to create black boxes and roughly block out the trees.
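Conceptually, bin_down and fill2 together build a sampling mask like this numpy sketch; the threshold and box coordinates below are made-up values for illustration:

```python
import numpy as np

img = np.random.rand(1200, 1800) * 0.1      # stand-in for the stacked image

# bin_down <value>: pixels below the threshold become white (usable for
# sampling), pixels above it become black (excluded).
threshold = 0.08                             # found by trial and error
mask = np.where(img < threshold, 1.0, 0.0)

# fill2 0: black out a rectangle over foreground that survived the threshold,
# e.g. the trees at the bottom of the frame (coordinates are hypothetical).
mask[950:1200, 0:600] = 0.0
```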

Below is the result after using multiple fill rectangles to mask out the trees. This does not need to be precise, as the mask is simply used to exclude areas from sampling; it is not like a photo-editing mask.

The resulting mask is saved (I called it mask), and I load the original image back, this time running the gradient removal with the mask option selected.

The program generates a synthetic background sky gradient based on thousands of sample points and an order-3 polynomial. The image below shows the synthetic sky gradient the algorithm generated; this is what will be subtracted from the image.

Image and the synthetic sky gradient that will be subtracted
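To give a feel for what the program is doing, here is a numpy sketch of the concept: fit a low-order 2-D polynomial to the masked sky samples, evaluate it over the whole frame, and subtract it. IRIS’s actual sampling and fitting details differ:

```python
import numpy as np

def poly_terms(x, y, order=3):
    """All monomials x^i * y^j with i + j <= order (coords scaled to ~[0,1])."""
    return np.stack([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=-1)

def remove_gradient(img, mask, order=3, n_samples=5000):
    h, w = img.shape
    ys, xs = np.nonzero(mask)                        # white = sky sample pixels
    pick = np.random.default_rng(0).choice(
        len(xs), size=min(n_samples, len(xs)), replace=False)
    A = poly_terms(xs[pick] / w, ys[pick] / h, order)
    coef, *_ = np.linalg.lstsq(A, img[ys[pick], xs[pick]], rcond=None)
    yy, xx = np.mgrid[0:h, 0:w]
    sky = poly_terms(xx / w, yy / h, order) @ coef   # synthetic background
    return img - sky
```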

The final image below is much better and more uniform. There are no strange dark and bright zones like in the attempt without the mask.

If we compare the original raw image with the new stacked, stretched and gradient-removed photo, the results are pretty impressive.