My First Photoshopped Moon

Ever since Photoshop (and other editing software) allowed users to manually manipulate pixels, there have been edited pictures. And with the computing power available at our fingertips and some built-in tools, it’s surprisingly simple to “stitch” together two photos. So, full disclosure, the image below is “Photoshopped”.

As an exercise, I decided to see how to insert a photo of the Moon taken with my telescope into a nighttime skyline.

The New York City skyline was taken by me during a visit to the Empire State Building in October last year (pre-pandemic) with a Canon 80D and a 17mm F4.0 lens at 1/50s, ISO 6400. The Moon was shot with the same camera body, but paired to a Skywatcher 80ED, with the settings at ISO 200 and 1/20s. There is no software scaling of either photo; they are stitched “as is”.

This image was assembled with GIMP. I also inserted two “blurred” layers to create a small amount of haze around the Moon and make it look a little more natural. The Moon was purposely placed “behind” a skyscraper to give the scene an element of depth, and I lowered the color temperature.

So dig through some of your old photos and start experimenting…

Signal and Noise

What makes it possible to generate a photo of the Milky Way from what appears to be just a faint trace in the original shot?

The final image (left) and a single frame as obtained from the camera (right)

It all comes down to signal versus noise. Whenever we record something (sound, motion, photons, etc…) there is always the information you WANT to record (the signal) and various sources of noise.

Noise can have many sources:

  • background noise (light pollution, a bright moon, sky glow, etc…)
  • electronic noise (sensor readout, amp glow, hot pixels)
  • sampling noise (quantization, randomized errors)

This noise can be random or steady/periodic in nature. Steady or periodic noise is easy to filter out because it can be identified and isolated: it is the same in every photo. Random noise, however, is more difficult to eliminate precisely because of its random nature. This is where the signal-to-noise ratio becomes important.

In astrophotography we take not only the photos of the sky, but also bias, dark and flat frames: this is to isolate the various sources of noise. A bias shot is a short exposure that captures the electronic read-out noise of the sensor and electronics. A dark frame is a long exposure at the same settings as the astronomy photo that captures the noise appearing during long exposures due to sensor characteristics such as hot pixels and amplifier glow. Cooling the sensor is one way to reduce this noise, but that is not always possible. Finally, a flat frame is taken to identify the optical noise caused by the characteristics of the lens or mirror, as well as any dust that happens to be in the way.
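To make that concrete, here is a minimal numpy sketch of how these calibration frames are typically combined and applied; the function and variable names are my own, not what any particular stacking program uses.

    import numpy as np

    def make_masters(bias_frames, dark_frames, flat_frames):
        """Combine stacks of calibration shots (arrays of shape (n, H, W)) into master frames."""
        master_bias = np.median(bias_frames, axis=0)
        master_dark = np.median(dark_frames, axis=0) - master_bias   # thermal signal only
        master_flat = np.median(flat_frames, axis=0) - master_bias
        master_flat /= master_flat.mean()        # keep only the relative vignetting/dust pattern
        return master_bias, master_dark, master_flat

    def calibrate(light, master_bias, master_dark, master_flat):
        """Remove the fixed-pattern noise from a single light frame."""
        return (light - master_bias - master_dark) / master_flat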

But what can be done about random noise? That is where increasing the number of samples has a large impact. For random noise, increasing the number of sample points improves the signal-to-noise ratio by the square root of the number of samples. Hence averaging 4 images gives a 2× improvement over a single photo, averaging 9 gives a 3× improvement, and so on.
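You can convince yourself of that square-root rule with a quick toy simulation (pure numpy, no real images involved):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 100.0                                    # the "true" pixel value we want
    for n in (1, 4, 9, 100):
        frames = signal + rng.normal(0.0, 10.0, size=(n, 100_000))
        stacked = frames.mean(axis=0)                 # average the n frames pixel by pixel
        print(n, round(stacked.std(), 2))             # residual noise: ~10, ~5, ~3.3, ~1

The printed noise level drops from about 10 for a single frame to about 1 for an average of 100 frames, exactly the square-root behaviour described above.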

You might be thinking: “Yeah, but you are averaging, so the signal is still the same strength.” That is correct; however, because my signal-to-noise ratio is improved I can be much more aggressive in how the image is processed. I can boost the levels that much more before the noise becomes a distraction.

But can’t I simply duplicate my image and add the copies together? No, that won’t work: the trick relies on the noise being random, and if you duplicate your image the noise is identical in both copies.

So even if you are limited to 30-second, or even 5-second, shots of the night sky and can barely make out what you want to photograph, don’t despair: just take LOTS of them and you’ll be surprised what can come out of your photos.

Milky Way from a stack of 8 x 20-second photos.

Removing the Sky Gradient in an Astrophoto

The simplest form of astrophotography is nothing more than a camera on a tripod shooting long exposures. However, by the time you get around to stacking and stretching the levels of your photos to accentuate various elements, such as the Milky Way, the sky gradient will become more apparent. That gradient can come from city lights, the Moon up above, and the thicker atmosphere dispersing light at low angles to the horizon. Normally, the wider the field of view, the greater the gradient.

Below is a RAW 20-second exposure of the Milky Way near the horizon taken with a Canon 80D equipped with a 17mm F4.0 lens. The background has a slight gradient, brighter at the bottom. Not all that bad.

But once you stack multiple exposures and stretch the levels to get the Milky Way to pop out, the gradient only gets worse.

There are various astrophoto programs that can remove the sky gradient. The one I’m familiar with and have been using is IRIS. I know the software is old, but it does a great job. So after I’ve completed my registration and stacking of images with DeepSkyStacker (see my Astrophotography in the City article), the next step is to open the resulting image in IRIS.

Once the stacked image is loaded in IRIS, head over to the Processing menu and select Remove gradient (polynomial fit)… Actually, to get the best results you need to have the background and color corrected, as well as the edge of your photo trimmed. Got that covered here.

The following menu will appear.

Normally the default settings (as above) work well. But this image has some foreground content (trees), and that will cause the result to look a little odd. The algorithm is designed to avoid sampling stars, but it does not cope as well with foreground content like the trees at the bottom of the image.

To correct this you must use the gradient removal function with a mask. The quickest way to create a mask is with the bin_down <value> command. This turns white all pixels with intensities below <value> and black all pixels above it. Black areas will not be used for sampling, while white areas will. A little trial-and-error is sometimes necessary to select the right value.
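Conceptually the command is just a threshold; here is a hypothetical numpy equivalent of the idea (not IRIS’s actual code):

    import numpy as np

    def bin_down_mask(image, value):
        """Pixels below `value` become white (used for sampling), the rest black (ignored)."""
        return np.where(image < value, 255, 0).astype(np.uint8)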

In this case, even with the right bin_down value, the trees that I want to mask are not black, so I use the fill2 0 command to create black boxes and roughly block out the trees.

Below is the result after using multiple fill rectangles to mask the trees. This does not need to be precise as the mask is simply used to exclude areas from sampling. It is not like a photo-editing mask.

The resulting mask is saved (I called it mask), and I load back the original image, this time using the gradient removal with the mask option selected.

The program generates a synthetic background sky gradient, based on thousands of sample points and an order 3 polynomial. The image below lets you see the synthetic sky gradient the algorithm generated. This is what will be subtracted from the image.

Image and the synthetic sky gradient that will be subtracted
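Under the hood, that synthetic gradient is essentially a least-squares fit of a low-order 2-D polynomial to the masked-in sky samples. Here is a rough numpy sketch of the idea (a simplification, not IRIS’s actual implementation):

    import numpy as np

    def fit_sky_background(image, mask, order=3):
        """Fit a 2-D polynomial to the masked-in (sky) pixels and return the synthetic gradient."""
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        x = xx / w - 0.5
        y = yy / h - 0.5
        # every polynomial term x^i * y^j with i + j <= order
        terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.stack([t[mask] for t in terms], axis=1)        # design matrix from sampled pixels
        coeffs, *_ = np.linalg.lstsq(A, image[mask], rcond=None)
        return sum(c * t for c, t in zip(coeffs, terms))      # subtract this from `image`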

The final image below is much better and more uniform. There are no strange dark and bright zones like in the attempt without the mask.

If we compare the original raw images with the new stacked, stretched, and gradient-removed photo, the results are pretty impressive.

The Great Rift

At one point or another we’ve all heard the saying that we are made of star dust. Therefore our home, the Milky Way, filled with 250 billion stars, should be rather dusty. Right? Well it is, and one famous dust lane that we can often see even has a name: the Great Rift.

Say that you are out camping this summer, you spot the Milky Way, and you are amazed at how many stars you can see away from the city. You remember you have your camera and decide to set up for some long-exposure shots to capture all this beauty (let’s go for 20 seconds at ISO 3200, 17mm F4.0), pointing at the constellation Cygnus. A bit of processing and you should get something like this.

The Milky Way centered on the constellation Cygnus.

Not bad! Lots of stars… a brighter band where an arm of the Milky Way is located, and some darker patches in various places. Those darker areas are gigantic dust clouds between Earth and the arms of our spiral galaxy that obscure the background stars. If only there were a way to remove all those stars, you could better see these dark areas.

And there is a way to remove stars! It’s called StarNet++; it takes a load of CPU power and works like magic to remove stars from photos. Abracadabra!

Above image after processing with the StarNet++ algorithm

Behold! The Great Rift! Well, actually just a portion of it. With this camera setup I get at most a 70-degree field of view of the sky. Nevertheless, the finer details of these “dark nebulae” can be appreciated.

Stripping the stars from a photo does have some advantages: it allows the manipulation of the background “glow” and dust lanes without concern for what happens to the foreground stars. The resulting image (a blend of the starless and original images) has improved definition of the Milky Way, higher contrast and softer stars, which improve the visual appeal.

While there are plenty of stars above us, what defines a nice Milky Way shot is the delicate dance of light and darkness between the billions of stars and the obscuring dust clouds.

Photo Info:
Canon 80D
13 x 20 sec (4min 20sec integration time)
17mm F4.0 ISO3200
Deep Sky Stacker
IRIS for background gradient removal and color adjustment
StarNet++
GIMP for final processing

Nucleus of Comet C/2020 F3 NEOWISE

When observing a comet, what we see is the outer coma: the dust and vapor outgassing from the nucleus as it gets heated by the Sun.

So I decided to take one of my photos, taken with my Skywatcher 80ED telescope (600mm focal length), and see if I could process the image to spot where the nucleus is located.

This can be achieved by using the MODULO command in IRIS and viewing the result in false color. The results are better if you do a logarithmic stretch of the image before the MODULO command. It took some trial-and-error to get the right parameters, but the end result isn’t so bad.

Studying the internal structure of comet C/2020 F3 NEOWISE (Benoit Guertin)
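If you want to experiment with the same trick outside IRIS, the idea (logarithmic stretch, then wrap the intensities with a modulo so they form repeating bands, displayed in false color) can be sketched in Python; this is my loose interpretation, not the IRIS implementation:

    import numpy as np

    def modulo_view(image, period=64):
        """Log-stretch the image, then wrap the intensities so they form repeating bands."""
        stretched = np.log1p(image.astype(float))         # logarithmic stretch first
        scaled = 255.0 * stretched / stretched.max()
        return np.mod(scaled, period)                     # wrap into bands of width `period`

    # e.g. matplotlib.pyplot.imshow(modulo_view(comet_image), cmap="nipy_spectral")
    # where `comet_image` is a 2-D array loaded from your stacked photo (hypothetical name)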

For the fun of it I tried to see if I could calculate the size of the comet nucleus using the image. At its narrowest, the nucleus on the photo spans 5 pixels. Based on a previous plate-solve result I know that my setup (Canon 80D and Skywatcher 80ED telescope) gives a scale of 1.278 pixels per arc-second. Then I used Stellarium to get the Earth-comet distance on July 23rd (103.278 million km).

When I plugged in all the numbers I got a comet nucleus size of approximately 2000 km, which to me seemed a little on the BIG side.
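For reference, the back-of-the-envelope arithmetic behind that figure:

    import math

    pixels = 5                        # apparent width of the nucleus on the photo
    scale = 1.278                     # pixels per arc-second, from plate solving
    distance_km = 103.278e6           # Earth-comet distance on July 23rd (Stellarium)

    angle_arcsec = pixels / scale                     # ~3.9 arc-seconds
    angle_rad = math.radians(angle_arcsec / 3600.0)
    print(round(distance_km * angle_rad))             # ~1960 km, i.e. roughly 2000 km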

Sure enough, a little research revealed that measurements made by Hubble point to a 4.8 km ball of ice. So yeah, I’m quite far from that… but it was fun to give it a try.

Backyard Astrophoto – Improvements in the Last 10 Years

When I first started astrophotography, you had people like me who were just starting off and did it on the cheap with a webcam, a small Newtonian telescope and a basic mount, or you could fork out an astronomical amount of cash for really specialized gear.

Below is a photo of Messier 101, the Pinwheel Galaxy, taken last week with a $500 Skywatcher 80ED telescope and a Canon 80D DSLR on an unguided mount.

Messier 101 – Pinwheel Galaxy (Skywatcher 80ED and Canon 80D)

I agree that it’s not as fancy as some of the research-grade setups or what some other hobbyists produce, but it’s many times better than my first try in 2008 (below).

My results of Messier 101 in 2008

What has changed? Well for starters, the optical quality of beginner and intermediate telescopes has dramatically improved, largely thanks to automated and computerized lens and mirror shaping and polishing. Yes, they are made in China, but so are most carbon-fiber bikes and the latest smartphones. Because the process is automated, quality can be tightly controlled and the results are hard to beat. A quality image starts with being able to collect and focus light properly, and for $500 you can get some really decent optics.

Another great boost has come from improvements in camera sensors. The DSLR became a go-to solution because it was a cheap way of getting a large sensor with low read noise and good sensitivity. Of course, specialized monochrome astro gear is still available for backyard astronomers, but the one-shot color results of a DSLR are hard to match. DSLRs offer ease of use, compatibility with most software, and the biggest bang for your dollar compared to specialized astro-cameras.

And the third major improvement in 10 years is computing power. A night’s imaging session can easily generate 1 GB of RAW images that need to be processed. Transferring and storing data is now cheap, and software has followed in lock-step to handle the increase in image size and quantity. Registration and stacking software can easily handle, at the pixel level, hundreds of images each with millions of pixels. Sure, it might take 20 minutes to process 120 photos from the DSLR, but that is a far cry from the hours of computer crunching it used to take, where a wrong parameter meant you had just wasted an hour…

So while light pollution is choking the stars out of the night sky, one easy way to gain access to the universe is through astro-photography. It’s now easier and cheaper than ever to get good results with a simple setup.

Updated: How to process RAW Moon photos

When I initially wrote the article on dealing with Canon RAW files in Registax, I mentioned resizing the image when converting to 16-bit .TIF format. However, that is not ideal if you want to keep your target object the same size. Playing around with the Canon Digital Photo Professional 4 software, I found out that it’s possible to apply the same cropping parameters to each photo, so that when batch processed they all get cropped. I’ve therefore updated the article to include the steps to crop instead of resize, keeping the images small enough for Registax to process while retaining the original photo resolution.

So you have a bunch of Moon shots in RAW. Now what?

UPDATED 07-Apr-2020: Cropping instead of reducing image size


The Moon should be your first target when you start off in astro-photography.  It’s easy to find, does not require dark skies and you don’t need specialized gear.

So now that you’ve found yourself with a bunch of RAW photos of the Moon, you’re wondering what to do next. You did take them with the RAW setting, right? All astrophotos need to be taken in RAW because all the processing is done at the pixel level and you want to retain as much information and detail as possible.

Registax is a great piece of software for Moon and planetary stacking. Unfortunately, I find it has two drawbacks:

  1. It cannot deal with .CR2 Canon RAW files.
  2. It crashes or gives a memory fault when dealing with large images from a DSLR.

Luckily there are ways around this… You must be wondering: why use Registax if it can’t deal with large Canon RAW files? Because it can align and stack by sub-dividing your image to address atmospheric turbulence, and it has one of the best wavelet analysis tools for sharpening images.

Here is what you must do: convert your RAW files to 16-bit .TIF and reduce the image size (not just the file size, but the number of pixels in the image). I use Digital Photo Professional 4, which came with my Canon camera and can also be downloaded. For other camera brands, the bundled photo software should likewise allow you to convert RAW into TIF format.

There are two possible ways to reduce the image size:

1. Resize the image – this is the fastest and simplest

Highlight the desired RAW files and select File – Batch Process


DPP4 – Main window. Selecting the desired files

In the Batch process window, select to save the files as 16-bit .TIF and make sure you resize the images. Normally a 50% reduction will do the trick; in my case a reduction to 3000 x 2000 was sufficient.


DPP4 – Batch process window : Saving as .TIF and Resizing the images

Resizing will reduce the size of the Moon, and Registax will have a better chance of handling the alignment. It’s also a simple way to reduce noise and improve a less-than-perfectly focused image.
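If you’d rather script this convert-and-shrink step than click through DPP4, something along these lines with rawpy and imageio should do the job (an untested sketch; the file pattern and the naive every-other-pixel downsize are just placeholders):

    import glob
    import rawpy
    import imageio

    for path in glob.glob("moon_*.CR2"):              # hypothetical file name pattern
        with rawpy.imread(path) as raw:
            rgb = raw.postprocess(output_bps=16,      # 16-bit output
                                  use_camera_wb=True,
                                  no_auto_bright=True)
        # crude 50% downsize: keep every second pixel in each direction
        imageio.imwrite(path.replace(".CR2", ".tif"), rgb[::2, ::2])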

2. Alternatively: Crop the image – more time consuming

If you don’t want to shrink the image, an alternative is to crop it. With DPP4 it’s possible to apply the same crop setting to all the images; however, it must be done one image at a time.

First select one of the images and open the Tool palette.  Select the cropping tool and the area you wish to crop.  Once that is done, use the Copy button in the Tool palette to record your crop setting.


DPP4 – Cropping with the Tool palette

You then need to open each file individually and Paste the crop setting using the Tool palette. Once you’ve done all of that, you can select all your images and run the Batch process to save them to 16-bit .TIF as explained above.  No need to resize if you’ve cropped.


DPP4 – Image selection pane shows the crop box around each image.

Now on to alignment and stacking with Registax

Then it’s simply a matter of opening the resulting .TIF images in Registax as you would normally.


Once the alignment is completed and the images are stacked, your photo can be saved.


But before you close the program, head over to the Wavelet panel and tweak the image to pull as much detail as possible out of the Moon’s cratered surface.
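The wavelet sliders essentially let you boost detail at different spatial scales; as a rough illustration of the concept only (a crude difference-of-Gaussians approximation, not Registax’s actual algorithm):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def boost_scales(image, sigmas=(1, 2, 4), gains=(1.5, 1.2, 1.1)):
        """Split the image into detail layers of increasing scale, then re-assemble with boosted detail."""
        residual = image.astype(float)
        result = np.zeros_like(residual)
        for sigma, gain in zip(sigmas, gains):
            blurred = gaussian_filter(residual, sigma)
            result += gain * (residual - blurred)     # amplified detail layer at this scale
            residual = blurred
        return result + residual                      # add back the remaining low-frequency layer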


If you compare the two images, it is clear that the second one has sharper details.

As always, the best is to try different things and experiment with your setup to see what works best.

Equipment used for the above photos:
Canon 80D
Skywatcher 80ED (600mm F7.5)
1/250sec ISO 200

Moon and Jupiter Through the Clouds

After yesterday’s photo with the smartphone, I decided to go for a more professional shot, so I grabbed the Canon 80D and captured the Moon and Jupiter through the clouds once again. However, this time around I took two exposures and stitched them together.


Moon and Jupiter Through the Cloud – May 27, 2018

The wide-angle shot was 24mm, F4.0, 1/10s, ISO 1600; this was to pick up the clouds against the night sky as well as Jupiter. Then came a close-up of the Moon with a shorter exposure and lower ISO to pick up details of the lunar surface (85mm, F5.6, 1/250s, ISO 200).

I opened them both in GIMP and played with layers, masks and curves to get the desired image. The close-up Moon photo was scaled down to match the 24mm wide-angle photo to avoid having a gigantic Moon.
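Since the image scale is proportional to focal length (same camera body for both shots), the scale factor needed to keep the Moon at its true apparent size is simply the ratio of the two focal lengths:

    wide_focal = 24.0       # mm, wide-angle shot
    close_focal = 85.0      # mm, close-up of the Moon
    scale = wide_focal / close_focal
    print(round(scale, 2))  # ~0.28: the Moon layer gets shrunk to about 28% of its original size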


Astrophotography in the City – Part 3

In Part 2, I explained the steps involved in improving the signal-to-noise ratio (SNR) by stacking multiple images and removing camera sensor noise (DARK and OFFSET frames). In this third article I will deal with sky gradient removal and white balance.

IRIS is a powerful astrophotography tool, and learning how to use the numerous commands can lead to fantastic photos. You can find good documentation and procedures on the IRIS website, so I won’t go into too much detail here.

While IRIS can process images in 32-bit, it cannot open the 32-bit FIT files generated by DSS. With my image still open in DSS from the previous step (or by opening the Autosave.fit created by DSS), I choose to save the image as a 16-bit FIT so that it can be opened in IRIS.

Below is the result in IRIS, and two things become apparent: 1) the sky has a gradient due to the light pollution from city lights; 2) the sky has a pink hue. These two elements will be corrected in this article.

iris sky gradient

Note: when I opened the image in IRIS it was inverted, so I had to flip it horizontally (menu bar – Geometry/Flip/Horizontal).

The sky gradient removal tool works best when two elements are addressed: 1) a nice clean image edge; 2) a black background sky.

Trim the Edge

The image needs to have a nice edge around the border (i.e. be smooth all the way to the edge). Hence any dark bands or fuzzy, sloping edges need to be trimmed. Zooming in on the left part of the image, I will trim at the yellow line, keeping the right-hand part.
photo edge trimming

Typing win at the command prompt within IRIS will give you a cursor to select the two corners to crop your image.

A Black Background

The background needs to be black and have an RGB value near 0. To do that, select a small area in a dark portion of your image, with no stars, and use the black command. This will offset the RGB values to 0 based on the average within the square you selected. Essentially, what you are telling the program is that the darkest portion of your image should be black.
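In effect, the command measures the average level inside the box you drew and subtracts it channel by channel; a small numpy sketch of that idea (my own interpretation, not IRIS’s code):

    import numpy as np

    def set_black_point(image, box):
        """Offset each channel so the selected star-free region averages to zero."""
        x0, y0, x1, y1 = box                               # corners of the dark, star-free area
        offsets = image[y0:y1, x0:x1].mean(axis=(0, 1))    # per-channel average of that patch
        return image.astype(float) - offsets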

White Balance

The sky gradient removal tool can also correct the background sky color, but before doing so we need to adjust the white balance so that white stars appear white. To do this correctly you will need a star map (Cartes du Ciel, C2A, Stellarium) to locate a star in your image that is as close as possible to our own star’s color type: G2V. This is not exactly for beginners; if you don’t know how, skip it and do the white balance later in a photo editor. Once the star is located, simply select it with a small box and use the white command in IRIS.

We perceive a white piece of paper in sunlight as white, hence light coming from a star with the same spectrum as our Sun should also look white in photos. It’s essentially a white balance exercise, but calibrated on a star you select in your image rather than on the whole-image average that most programs use.
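The white command then boils down to scaling the red and blue channels so that the selected G2V star comes out neutral; again a hedged numpy sketch of the concept, not IRIS’s actual code:

    import numpy as np

    def white_balance_on_star(image, box):
        """Scale R and B so the G2V star inside `box` has equal mean R, G and B."""
        x0, y0, x1, y1 = box
        r, g, b = image[y0:y1, x0:x1].mean(axis=(0, 1))    # per-channel mean over the star
        balanced = image.astype(float)
        balanced[..., 0] *= g / r                          # bring red in line with green
        balanced[..., 2] *= g / b                          # bring blue in line with green
        return balanced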

Sky Gradient Removal

With that done, you can now select from the menu Processing / Remove gradient (polynomial fit) to get the following pop-up

remove gradient

If you have just stars in the image, a Low background detection and Low fit precision will work. However, if you have intricate details from the Milky Way, with dust lanes and all, then a High setting will better preserve the subtle changes. Try various combinations to see what works best for your image. You can also do one pass at Low and then follow it with a second pass at High.

The result of all this is presented below: the sky gradient is gone, and the sky background is now a nicer black instead of a pink hue. And if you did the white balance, then the stars are also of the right color.

iris-completed

ADDED September 11, 2020: More information on Sky Gradient Removal

I should mention that the two most important dialog boxes in IRIS are the Command prompt and Threshold. When viewing and performing the various operations, the threshold values (essentially the min/max for brightness and darkness) often need to be adjusted to get a good image and see the required detail.

iris-command-threshold

The next step will be importing the file into a photo editor for final adjustments. Color saturation, levels and intensity can be adjusted in IRIS, but I find a photo editor offers better control. And because I will continue my editing in a photo editor, I do not set the Threshold values too narrow. I prefer a grey sky and then a non-linear adjustment in the photo editor to get a darker sky.

More to come in another article