Star Trails – Quick and Easy

Creating night-time images with star trails is one of the easiest projects in astrophotography, and it should be your first when starting out.

Back in the days of film, you had to stop down the diaphragm and use a locking shutter release to take one VERY long exposure. And if something happened during that long exposure (a bird, a plane, clouds, etc…) your photo was ruined. With digital, you can instead take LOTS of short exposures and digitally stitch them together, leaving out the ones that got ruined.

Set up your camera to take a series of short-exposure photos; 10 seconds is a good length. For some tips on how to configure your camera, head over to my Astrophotography Cookbook page.

If you are just starting out, or simply want quick results, skip taking Bias, Dark and Flat frames. These are used to improve the final image processing and make more sense when you move on to deep-sky stacking.

For this exercise I configured the camera’s intervalometer to take 10-second exposures with a 1-second pause between them (i.e. the shutter fires every 11 seconds and each click is a 10-second exposure). I left the camera running for a little more than an hour, capturing over 400 photos.
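The session math can be sanity-checked with a few lines of arithmetic (a quick sketch using the numbers from this post):

```python
# Back-of-envelope intervalometer math.
# 10 s exposure + 1 s gap = one frame every 11 s.
exposure_s = 10
gap_s = 1
cadence_s = exposure_s + gap_s          # shutter fires every 11 s

frames = 438                            # frames captured in this session
session_s = frames * cadence_s
print(f"{frames} frames = about {session_s / 60:.0f} minutes of shooting")

total_exposure_s = frames * exposure_s
print(f"Total integrated exposure = about {total_exposure_s / 60:.0f} minutes")
```

Which confirms that 438 frames at an 11-second cadence is indeed “a little more than one hour” of shooting.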

It’s important to review all the photos and note down which ones to exclude from the final image: camera movement because you knocked the tripod, a plane, clouds, etc. From the 438 photos taken, it may turn out that only the range 5 to 352 is usable for the star trail image, because clouds decided to roll into view on the 353rd frame.

The next step is to import the photos into Deep Sky Stacker using the Open picture files… command. As I mentioned earlier, the dark, flat and bias frames can be skipped; they are not required, but if you have them they will improve the quality of the final image. Don’t forget to select Check all before moving to the next step, and to uncheck any photos you want to exclude if you did a bulk import.

Once the photos are selected, go straight to Stack checked pictures… In the window that pops-up, hit the Stacking parameters… button and select the following:

  • Result – Standard Mode
  • Light – Maximum
  • Alignment – No Alignment

The remaining tabs can keep their default settings. Hit OK and the program will start processing all the photos. Note that DSS will still register each image even if you selected No Alignment. If you know how to prevent this waste of time, please tell me in the comments below.

The end result is something like the image below. Depending on the quality of your sky, the camera settings, color balance, etc., various levels of work will be required to make it look nice, but you now have something to import into your photo editor and correct all that.

In my Post-processing section of the Astrophotography Cookbook, I provide some tips on how to correct for things like sky gradient.

Clear dark skies!

December 13 – Trying to catch the Geminids

About a week ago I came across a note in my news feed that the Geminid meteor shower was peaking on the 13th and 14th, and that it should be a good year. At the same time I saw some pretty impressive photos from photographers catching spectacular fireballs as these tiny grains of dust and rock plunge into the atmosphere.

Braving the below-freezing weather, I set up the Canon 80D on a tripod in the back yard to see what I could catch. I read that the best time for the Geminids is 2am, but I wasn’t going to stay up that late on a weeknight, so 10pm would have to do.

Wanting to capture as much of the sky as possible, I set the zoom lens to 17mm and wide open at F4. Note that I live in the city with considerable light pollution (I guess that’s what happens when electricity is cheap), which meant only the brightest meteors would be visible. Playing around with the settings, I quickly concluded that at ISO1600, 10 seconds was the longest exposure I should use to avoid an overexposed sky. Normally it’s best to have the image intensity peak in the left half of the histogram. This can be quickly checked by viewing a captured image and selecting the Info option.

The camera operated for over an hour and managed to take 304 images before the memory card was full. It could have kept going much longer had I wiped the card clean before setting up, as the battery still had over 25% charge.

Once the photos were transferred to the computer, I reviewed all the images and identified those that contained what appeared to be a meteor, a plane or clouds, so that I could do the necessary processing later on.

I know the chance of catching a spectacular fireball is slim, but it’s still interesting to review the images for any surprises and to explore the various types of processing that can be done.

The easiest and quickest thing to do with all these images is a time-lapse movie; it’s essentially a no-brainer. I used Canon Digital Photo Professional 4 to perform some color and brightness corrections on the photos prior to creating the movie. The benefit of this software is that you can save the “recipe” you used on one photo and apply it to all of them. I also did a batch conversion to individual 1080p JPEGs to limit the gigabytes of intermediate files required for the time-lapse movie.

The clouds that showed halfway through the sequence limited what I could do next with regards to “processing”. My next plan was for star trails!

I selected the longest stretch of images without clouds and then stacked them without alignment, using the ADD MAX operation in DSS. The result is star trails, as well as light trails from any passing plane. The image below combines 122 individual 10-second exposures, for just over 20 minutes of total exposure time.
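The ADD MAX idea is simple enough to sketch in a few lines of Python (a toy illustration with made-up pixel values, not DSS’s actual implementation): every output pixel is just the brightest value that pixel ever saw across the sequence, so a moving star leaves its full track behind.

```python
# Per-pixel maximum stack (the "Maximum" / ADD MAX mode) on tiny toy frames.
# Each frame is a 2-D list of pixel intensities.
frames = [
    [[10, 200], [12, 11]],   # star at top-right
    [[11, 12], [198, 10]],   # star has drifted to bottom-left
    [[12, 11], [10, 12]],    # star gone; sky background only
]

rows, cols = len(frames[0]), len(frames[0][0])
trail = [[max(f[r][c] for f in frames) for c in range(cols)]
         for r in range(rows)]

print(trail)  # [[12, 200], [198, 12]] -- both star positions survive
```

This is also why planes show up: their lights are the brightest thing those pixels ever recorded, so ADD MAX keeps them too.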

Tracks from two planes are clearly visible over the arcing motion of the stars. If you pay close attention, a third plane, much higher and on a different flight path, also crossed the image.

The timelapse and the star trails are two quick and interesting results from the photo session, but they were not my initial plan. Next I created a “starless” version of my night sky to serve as a background. This was achieved by selecting 8 images taken 1 minute apart and stacking them using the SIGMA MEDIAN operation. DSS compares the pixels of all 8 images, and any pixel that falls outside a defined sigma distribution is replaced by the median value. As the images are once again not registered (star-aligned), the foreground remains fixed while the stars move between frames; the star pixels therefore fall outside the sigma distribution and get replaced by the median sky value.
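For the curious, here is a toy Python sketch of the sigma-median idea applied to a single pixel (my own simplified version; DSS’s exact internals may differ):

```python
import statistics

# Sigma-clipped median for one pixel across a stack of frames.
# A star crosses the pixel in only one frame, so it is an outlier
# against the other frames and gets rejected.
def sigma_median(values, kappa=2.0):
    med = statistics.median(values)
    sigma = statistics.pstdev(values)
    # Keep only values within kappa*sigma of the median, then take their median.
    kept = [v for v in values if abs(v - med) <= kappa * sigma] or [med]
    return statistics.median(kept)

# One pixel across 8 frames: a star crosses it in frame 4 only.
pixel_stack = [21, 20, 22, 240, 19, 21, 20, 22]
print(sigma_median(pixel_stack))  # 21: the star is rejected, sky remains
```

Run over every pixel of the 8 frames, this is exactly what produces the starless background: the transient star values lose the vote, and the steady sky wins.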

With my starless image completed, the next step was to use GIMP to blend the individual meteor trails with the starless night-sky image. I used a MASK to select just the meteor trail in each photo I had previously identified as containing a meteor. Each photo was manually added as a layer over the starless background.


There are a total of four faint meteor trails as well as one very bright but short-lived meteor in the middle. That short bright one turned out to be special. Most meteor trails appear in only one frame, but this one left a smoke/dust trail that lingered for a few frames (40 seconds) and can be seen drifting in the high-altitude winds. To best see this, I selected some photos, cropped and enhanced the individual frames, and generated an animated GIF.

The last processing step was to select a long sequence of photos with no clouds or planes, but this time register them so that the stars were aligned between frames. I then did a simple ADD AVERAGE to stack the 62 individual photos, creating the equivalent of a 10-minute exposure of the night sky.

Because the field of view is wide, and I wasn’t in a particularly dark-sky area, the resulting photo isn’t that interesting, unlike some of the Milky Way shots taken while camping away from cities. However, I was able to crop the image down to an area containing multiple Open Star Clusters. The photo below shows the Open Star Clusters identified by their Messier Catalog numbers.


There you have it, a camera outside on a tripod for 1 hour and plenty of interesting results.

2021 Snow Moon


Manually Processing Comet Images

Looking back, the “Great Comet of 2020”, C/2020 F3 NEOWISE, was a fantastic sight and well worth the 3am alarm to snap some photos back in July. But comet images are notoriously difficult to work with. Should I also add that in older times comets were often seen as a bad omen, the bearer of bad news? Cough, cough COVID-19 cough…

Anyways, back to astronomy… There are essentially two types of photo registration (alignment) software out there: 1) deep-sky software, which uses pin-point stars to perform the alignment; 2) lunar/planetary software, which aligns on surface details of the large “disk” of a planet or the Moon.

So when you capture a long wispy comet like the RAW image below, software like DSS or Registax just can’t cope.

RAW image : Canon 80D 300mm f/5.6 5 seconds exposure at ISO3200 (09-jul-2020)

I turned to standard photo-editing software for manual alignment and stacking. This essentially means opening one “base” image and then adding a 2nd image as a new layer. I change that 2nd layer’s mode to “Difference” and manually align it to match the base layer. Once that is done, I change the layer mode to Addition and hide the layer. Repeat the steps for the 3rd, 4th, 5th, etc. layers until you’ve added all your images, always aligning against the “base” image to ensure no drift.
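The Difference-mode trick can be illustrated as a tiny brute-force search in Python (a toy sketch with a hypothetical 3x3 “star” frame; in GIMP you do this search by eye, nudging the layer until the difference goes dark):

```python
# "Align in Difference mode" as a brute-force search: try small x/y shifts
# of the new frame and keep the one that minimizes the total absolute
# difference against the base frame. Toy-sized; real frames are large
# and a proper registration tool would be used instead.
def shift(img, dx, dy, fill=0):
    rows, cols = len(img), len(img[0])
    return [[img[r - dy][c - dx] if 0 <= r - dy < rows and 0 <= c - dx < cols
             else fill for c in range(cols)] for r in range(rows)]

def diff_score(a, b):
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

base  = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]        # star at centre
frame = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]        # same star, drifted right

best = min(((dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)),
           key=lambda s: diff_score(base, shift(frame, *s)))
print(best)  # (-1, 0): shift the frame one pixel left to line up
```

When the layers are aligned, the Difference view goes black (score 0); that is exactly the visual cue you use when doing it manually.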

If you simply add all those layers up, you will get one very bright image because you are adding pixel intensities. You can do that and then work with the Levels and Curves to bring it back down, or, if like me you are working with GIMP, use the Py-Astro plug-ins to do the merging and intensity scaling in a single step with Merge all layers. Py-Astro can be downloaded here. I haven’t explored all that the plug-ins have to offer; that will hopefully be another blog post.

Stacking 11 individual frames results in an improvement over a single RAW image (image below). With the stacked image, I’m able to work with the intensities to darken the sky while keeping the comet tail bright.

After merging 11 images manually aligned in GIMP

However, the sky gradient is pretty bad, due to the camera lens and because at 4am the sun is starting to brighten the horizon. So off to IRIS to correct the background gradient. From GIMP I save the file as a 16-bit FIT that I can open in IRIS. For the steps on how to do this, see my blog post about removing the sky gradient.

After a quick spin in IRIS, I’m back in GIMP for final color and intensity adjustments: I boosted the BLUE channel and adjusted the dark levels for a darker sky.

Final processed image of C/2020 F3 NEOWISE from 09-JUL-2020 (Benoit Guertin)

My First Photoshopped Moon

Ever since Photoshop (and other editing software) allowed users to manually manipulate pixels, there have been edited pictures. And with the computing power available at our fingertips and some built-in tools, it’s surprisingly simple to “stitch” together two photos. So full disclosure: the image below is “Photoshopped”.

As an exercise, I decided to see how to insert a photo of the Moon taken with my telescope into a nighttime skyline.

The New York City skyline was taken by me during a visit to the Empire State Building in October last year (pre-pandemic) with a Canon 80D and a 17mm F4.0 lens at 1/50s, ISO 6400. The Moon was shot with the same camera body, but paired to a Skywatcher 80ED, at ISO 200 and 1/20s. There is no software scaling of either photo; they are stitched “as is”.

This image was done with GIMP. I also inserted 2 “blurred” layers to create a small amount of haze around the Moon to make it look a little more natural, and lowered the color temperature. The Moon was purposely placed “behind” a skyscraper to give the image an element of depth.

So dig through some of your old photos and start experimenting…

Signal and Noise

What makes it possible to generate a photo of the Milky Way from what appears to be just a faint trace in the original shot?

The final image (left) and a single frame as obtained from the camera (right)

It all comes down to the signal vs noise. Whenever we record something, sound, motion, photons, etc… there is always the information you WANT to record (the signal) and various sources of noise.

Noise can have many sources:

  • background noise (light pollution, a bright moon, sky glow, etc…)
  • electronic noise (sensor readout, amp glow, hot pixels)
  • sampling noise (quantization, randomized errors)

This noise can be random or steady/periodic in nature. Steady or periodic noise is easy to filter out: because it is the same in every photo, it can be identified and isolated. Random noise, however, is more difficult to eliminate precisely because of its random nature. This is where the signal-to-noise ratio becomes important.

In astrophotography we take not only the photos of the sky, but also bias, dark and flat frames: these isolate the various sources of noise. A bias shot is a short exposure that captures the electronic read-out noise of the sensor and electronics. The darks are long exposures at the same settings as the astronomy photo, capturing noise that appears during long exposures due to sensor characteristics such as hot pixels and amplifier glow. Cooling the sensor is one way to reduce this noise, but that is not always possible. Finally, the flats are taken to identify the optical “noise” caused by the characteristics of the lens or mirror, as well as any dust that happens to be in the way.
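How those calibration frames combine is worth sketching. Below is a toy per-pixel version of the standard calibration formula, calibrated = (light − dark) / normalized (flat − bias); the pixel values are made up for illustration, and real software works on averaged “master” frames rather than single ones:

```python
# Toy per-pixel calibration: subtract the dark, then divide by the
# bias-corrected flat (normalized to its mean) to undo vignetting.
def calibrate(light, dark, flat, bias):
    flat_corr = [f - b for f, b in zip(flat, bias)]
    mean_flat = sum(flat_corr) / len(flat_corr)
    return [(l - d) / (fc / mean_flat) for l, d, fc in zip(light, dark, flat_corr)]

light = [110, 95, 130]   # raw sky pixels
dark  = [10, 10, 10]     # thermal / hot-pixel signal
flat  = [210, 190, 220]  # vignetting: some pixels receive less light
bias  = [10, 10, 10]     # electronic read-out offset

print([round(v) for v in calibrate(light, dark, flat, bias)])  # [98, 93, 112]
```

Notice how the dimmer flat pixel (190) boosts its light pixel relative to the others: that is the flat frame compensating for uneven illumination.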

But what can be done about random noise? That is where increasing the number of samples has a large impact. For random noise, increasing the number of sample points improves the signal-to-noise ratio by the square root of the number of samples. Hence averaging 4 images gives a 2x improvement over a single photo; going to 9 gives a 3x improvement, and so on.
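The square-root rule is easy to verify with a quick simulation (a sketch using Python’s random module; the exact numbers vary slightly from run to run):

```python
import random
import statistics

# Averaging N frames of pure random noise shrinks its spread by sqrt(N).
# Simulated here with one noisy "pixel" measured over many trials.
random.seed(42)

def noise_after_stacking(n_frames, trials=4000, sigma=10.0):
    stacked = [statistics.fmean(random.gauss(0, sigma) for _ in range(n_frames))
               for _ in range(trials)]
    return statistics.pstdev(stacked)

single = noise_after_stacking(1)
four   = noise_after_stacking(4)
nine   = noise_after_stacking(9)
print(round(single / four, 1), round(single / nine, 1))  # close to 2.0 and 3.0
```

So stacking 4 frames roughly halves the noise, and 9 frames cuts it by about a third, just as the square-root rule predicts.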

You might be thinking: “Yeah but you are averaging, so the signal is still the same strength.” That is correct, however because my signal to noise ratio is improved I can be much more aggressive on how the image is processed. I can boost the levels that much more before the noise becomes a distraction.

But can’t I just duplicate my image and add the copies together? No, that won’t work: the trick relies on the noise being random, and if you duplicate your image the noise is identical in both copies.

So even if you are limited to 30-second or even 5-second shots of the night sky and can barely make out what you want to photograph, don’t despair: just take LOTS of them and you’ll be surprised what can come out of your photos.

Milky Way from a stack of 8 x 20-second photos.

Removing Sky Gradient in Astrophoto

The simplest form of astrophotography is nothing more than a camera on a tripod shooting long exposures. However, by the time you get around to stacking and stretching the levels of your photos to accentuate various elements, such as the Milky Way, the sky gradient will become more apparent. That gradient can come from city lights, the Moon up above, and the thicker atmosphere dispersing light at low angles to the horizon. Normally, the wider the field of view, the greater the gradient.

Below is a RAW 20-second exposure of the Milky Way near the horizon, taken with a Canon 80D equipped with a 17mm F4.0 lens. The background has a slight gradient, brighter at the bottom. Not all that bad.

But once you stack multiple exposures and stretch the levels to get the Milky Way to pop out, the gradient only gets worse.

There are various astrophoto software packages that can remove the sky gradient. The one I’m familiar with and have been using is IRIS. I know the software is old, but it does a great job. So after I’ve completed my registration and stacking of images with DeepSkyStacker (see my Astrophotography in the City article), the next step is to open the resulting image in IRIS.

Once the stacked image is loaded in IRIS, head over to the Processing menu and select Remove gradient (polynomial fit)… Actually, to get the best results you should first have the background and color corrected, and trim the edges of your photo. Got that covered here.

The following menu will appear.

Normally the default settings (as above) work well. But this image has some foreground content (trees), and that causes the result to look a little odd. The algorithm is designed to avoid sampling stars, but it doesn’t do so well with foreground content like the trees at the bottom of the image.

To correct this you must use the gradient-removal function with a mask. The quickest way to create a mask is the bin_down <value> command. This turns white all pixels with intensities below <value>, and black all pixels above it. Areas in black will not be used for sampling, while the white areas will. A little trial-and-error is sometimes necessary to select the right value.
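The thresholding itself is simple. Here is a toy Python equivalent of what bin_down produces (made-up pixel values; IRIS of course works on the real 16-bit image):

```python
# Toy version of IRIS's bin_down thresholding: pixels below the cutoff
# become white (sampled for the gradient fit), pixels at or above it
# become black (excluded from sampling).
def bin_down(img, value, white=255, black=0):
    return [[white if px < value else black for px in row] for row in img]

sky_with_trees = [
    [30, 32, 31],     # smooth sky: kept for gradient sampling
    [33, 180, 35],    # a star: excluded
    [90, 95, 92],     # bright foreground trees: excluded
]
print(bin_down(sky_with_trees, 60))
```

The trial-and-error part is picking the cutoff: too high and bright foreground sneaks into the white (sampled) area, too low and you lose sky pixels to sample from.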

In this case, even with the right bin_down value, the trees that I want masked are not black, so I used the fill2 0 command to create black boxes and roughly block out the trees.

Below is the result after using multiple fill rectangles to mask the trees. This does not need to be precise, as the mask is simply used to exclude areas from sampling; it is not like a photo-editing mask.

I saved the resulting mask (I called it mask), loaded the original image back, and ran the gradient removal again, this time with the mask option selected.

The program generates a synthetic background sky gradient based on thousands of sample points and an order-3 polynomial. The image below shows the synthetic sky gradient the algorithm generated; this is what will be subtracted from the image.

Image and the synthetic sky gradient that will be subtracted

The final image below is much better and more uniform. There are no strange dark and bright zones like the attempt without the mask.

If we compare the original raw image with the new stacked, stretched, gradient-removed photo, the results are pretty impressive.

The Great Rift

At one point in time we’ve all heard the saying that we are made of star dust. Therefore our home, the Milky Way, filled with 250 billion stars, should be rather dusty. Right? Well it is, and one famous dust lane that we often see even has a name: The Great Rift.

Say you are out camping this summer, and you spot the Milky Way, amazed at how many stars you can see away from the city. You remember you have your camera and decide to set up for some long-exposure shots to capture all this beauty (let’s go for 20 seconds at ISO 3200, 17mm, F4.0) pointing at the constellation Cygnus. A bit of processing and you should get something like this.

The Milky Way centered on the constellation Cygnus.

Not bad! Lots of stars… a brighter band where the arm of the galaxy is located, and some darker patches in various places. Those darker areas are gigantic dust clouds between Earth and the arms of our spiral galaxy that obscure the background stars. If only there were a way to remove all those stars, you could better see these dark areas.

And there is a way to remove stars! It’s called StarNet++. It takes a load of CPU power and works like magic to remove stars from photos. Abracadabra!

Above image after processing with the StarNet++ algorithm

Behold! The Great Rift! Well, actually just a portion of it; with this camera setup I get at most a 70° field of view of the sky. Nevertheless, the finer details of these “dark nebulae” can be appreciated.

Stripping the stars from a photo does have advantages: it allows manipulation of the background “glow” and dust lanes without concern for what happens to the foreground stars. The resulting image (a blend of the starless and original images) has improved definition of the Milky Way, higher contrast, and softer stars that improve the visual appeal.

While there are plenty of stars above us, what defines a nice Milky Way shot is the delicate dance of light and darkness between the billions of stars and the obscuring dust clouds.

Photo Info:
Canon 80D
13 x 20 sec (4min 20sec integration time)
17mm F4.0 ISO3200
Deep Sky Stacker
IRIS for background gradient removal and color adjustment
StarNet++
GIMP for final processing

C/2020 F3 (NEOWISE) Thanks for Swinging By

I live in a heavily light-polluted city, so unless an object is bright, I won’t see it. But boy, was I ever happy with the outcome of this comet! In my books C/2020 F3 (NEOWISE) falls in the “Great Comet” category, and it may well be the most photographed comet in history, because it was visible for so long to folks in both hemispheres.

My last encounter with a bright comet was in 2007, with periodic comet 17P/Holmes, when it brightened by a factor of half a million in 42 hours in a spectacular outburst, becoming visible to the naked eye. It was the largest outburst ever observed, with the coma temporarily becoming the biggest visible object in the solar system. Even bigger than the Sun!

Comet 17P/Holmes November 2, 2007 (Benoit Guertin)

So when the community was feverishly sharing pictures of the “NEOWISE” I had to try my luck; I wasn’t about to miss out on this chance of a lifetime.

I have to say that my first attempt was a complete failure. Reading up on the best time to photograph this comet, most sources indicated one hour before sunrise. So I checked Google Maps for a spot with an unobstructed view of the eastern horizon (my house was no good), and in the early morning, with my gear ready at 4am, I set off. To my disappointment, and to the “get-back-to-bed-you-idiot” voice in me, it didn’t work out. By the time I got to the spot and had the camera ready, the sky was already too bright. No comet in sight, and try as I might with the DSLR, nothing.

Two evenings later, with another cloudless overnight sky, I decided to try again, but this time I would make it happen by setting the alarm one hour earlier: 3am. That was all it took! I was able to set up before the sky brightened, and then CLICK! I had this great comet recorded on my Canon’s SD memory card.

Comet C/2020 F3 (NEOWISE) in the dawn sky on July 9th. (Benoit Guertin)

I didn’t need any specialized gear. All it took was a DSLR, a lens set to manual focus, a tripod and 5 seconds of exposure, and there was the comet. I snapped a bunch of frames at different settings and then headed back home to catch a last hour of sleep before starting another day of work. Lying in bed, I felt like I had accomplished something important.

As the comet swung around our Sun and flipped from a dawn to a dusk object, I decided to try photographing it once again, but this time with the Skywatcher 80ED telescope. At that point the comet was dimming, so every day that passed would make it more difficult. It was only visible above the north-west horizon at sunset, which meant setting up in front of the house, fully exposed to street lights. Not ideal, but I had nothing to lose by trying.

Setup in front of the house, fully exposed to street lights to catch the comet.

I used the tree in our front yard as a screen and was able to locate and photograph this great comet. Polar alignment wasn’t easy, and when I finally had the comet centered and focused in the camera, overhead power lines were in the field of view. I decided to wait 30 minutes and let the sky rotate the lines out of the view; besides, it would get darker anyway, which should help with the photo. But I also realized that my “window” of opportunity was small before houses would start obscuring the view as the comet dipped to a lower angle above the horizon.

C/2020 F3 (NEOWISE) July 23, 2020 – Skywatcher 80ED (Benoit Guertin)

I’m sure in the years to come people will debate whether this was a “Great Comet”, but in my books it’s definitely one to remember. It cemented for me the concept that comets are chunks of “dirty ice” that swing around the Sun. Flipping from a dawn to a dusk object after a pass around the Sun is a great demonstration of the elliptical nature of orbits in our solar system.

Now waiting for the next one…