Processing in PixInsight and Photoshop – M 101

January 24, 2018

This post discusses the steps taken in processing M 101, the Pinwheel Galaxy.

It gives an overview of preprocessing, and digs deeper into postprocessing.

For the postprocessing description we do not include screenshots of our various PixInsight settings. The reality is that settings are different for each image. We will, however, provide links to online resources which give descriptions of the techniques we use. These resources often provide settings to get you started.

While the individual subexposures were saved in the .fit file format (a long-time standard for astrophoto files), once in PixInsight we use its native .xisf format in 32-bit. When moving files back and forth between PixInsight and Photoshop, we use 16-bit TIFF images.

Preprocessing

When preprocessing images, we follow a subset of these steps (depending on the requirements of the data):

  1. Image Calibration (use PixInsight’s ImageCalibration process to apply the superbias, master dark and master flat; perform RBI mitigation if necessary)
  2. Blink Selection (use PixInsight’s Blink process to quickly identify and eliminate out of focus, low contrast, or otherwise poor frames; this is also a good time to review the effectiveness of calibration on each sub)
  3. Cosmetic Correction (use PixInsight’s CosmeticCorrection process to remove hot pixels, cold pixels and defective sensor columns)
  4. Subframe Selection (use PixInsight’s SubframeSelector script to measure quality of each sub, eliminate outliers, and apply a quality weighting value to each sub based on FWHM, Eccentricity and SNRWeight factors; also to note the highest quality sub for each filter for use during local normalization)
  5. Image Registration (use PixInsight’s StarAlignment process, including drizzle data if desired)
  6. Local Normalization (use PixInsight’s LocalNormalization process for including localized contrast data; improves signal to noise ratio and results in cleaner background transitions after integration)
  7. Image Integration (use PixInsight’s ImageIntegration process, applying quality weighting and local normalization data from earlier steps, and updating drizzle data if desired)
  8. Drizzle Integration (use PixInsight’s DrizzleIntegration process if desired, applying local normalization data from earlier steps)
  9. Dynamic Crop (remove any black edges due to tracking movement during imaging; apply an exact duplicate crop to each of our stacks so that our registration remains intact)

For this M 101 image, we followed steps 1, 2, 5, 6, 7, 8 and 9. The quality and uniformity of this dark-site data was such that quality weighting wasn’t necessary, and there were no hot or cold pixel issues that couldn’t be handled with rejection algorithms during integration.
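For readers curious about what rejection does during integration, here is a minimal numpy sketch of kappa-sigma clipped averaging. It illustrates the idea only; PixInsight’s ImageIntegration also applies per-frame weighting, normalization and more sophisticated rejection schemes, and the function name and parameters below are our own.

```python
import numpy as np

def sigma_clip_integrate(stack, kappa=3.0, iters=2):
    # stack: (N, H, W) array of registered, calibrated subframes
    data = np.ma.masked_invalid(stack.astype(np.float64))
    for _ in range(iters):
        med = np.ma.median(data, axis=0)   # per-pixel median across the subs
        sigma = data.std(axis=0)           # per-pixel spread
        # Reject pixels more than kappa sigma from the median
        data = np.ma.masked_where(np.abs(data - med) > kappa * sigma, data)
    return data.mean(axis=0).filled(np.nan)  # average the surviving pixels
```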

Most of our preprocessing techniques are derived from this excellent tutorial from Light Vortex Astronomy.

The final image comprises 24 hours 20 minutes accumulated over 73 LRGBHa subs. Processing of this image took 25 hours.

Here are the final stacked preprocessed images (Top: Red, Green, Blue; Bottom: Lum, Ha)

(NOTE: Images may be clicked on for larger size)

Postprocessing – Luminance Channel

Great care was taken during the processing of this image to eliminate as much noise as possible while the image was in a linear state (before permanent stretching). Stretching can exaggerate noise, and make it much more difficult to denoise later. Additionally, various masks were used throughout the process to protect and enhance data.

Dynamic Background Extraction (DBE)

DBE is a process in PixInsight for subtracting light pollution gradients from an image. It is not a substitute for proper flat-field calibration. Care must be taken when applying DBE not to eliminate faint background detail. Our M 101 data from a dark sky site in Texas doesn’t show strong light gradients; however, even this data benefits from DBE. The background is clearly neutralized, allowing faint detail in the outer galaxy to emerge.
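Conceptually, DBE builds a smooth model of the sky background from user-placed sample points and removes it. Here is a simplified numpy sketch of that idea using a low-order polynomial surface; the real process uses spline-based models and many more samples, and every name below is illustrative.

```python
import numpy as np

def subtract_background(img, samples, order=2):
    # samples: (x, y) pixel coordinates chosen on empty sky
    xs = np.array([s[0] for s in samples], float)
    ys = np.array([s[1] for s in samples], float)
    vals = np.array([img[int(y), int(x)] for x, y in samples])
    # Fit a polynomial surface with terms x^i * y^j, i + j <= order
    powers = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([xs**i * ys**j for i, j in powers])
    coeff, *_ = np.linalg.lstsq(A, vals, rcond=None)
    gy, gx = np.mgrid[:img.shape[0], :img.shape[1]]
    model = sum(c * gx**i * gy**j for c, (i, j) in zip(coeff, powers))
    return img - model + np.median(vals)  # keep a small background pedestal
```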

Here is a comparison showing DBE results on the Lum stack (Left: Before DBE; Right: After DBE)

An instance of the DBE process was saved, and later applied (with some settings modifications) to the R, G, B and Ha stacks. Details of the DBE process and workflow can be found in Light Vortex Astronomy’s post on Preparing Monochrome Images for Color-Combination and Further Post-Processing.

Linear Denoising

Attacking noise while the data is in its linear (unstretched) state is a critical step in processing. Once noise is minimized, the data can be stretched much further than if the noise were stretched along with the signal. We’ve recently begun using techniques described in this excellent post by Jon Rista. We won’t go into specific details here, as his article provides all the detail needed.

Removing noise while protecting signal requires masks. Fortunately, PixInsight has some great tools for creating masks. Here are the three masks required during the linear denoising processes (Left: Luminance mask; Center: Total Generalized Variation (TGV) mask; Right: MultiscaleMedianTransform (MMT) mask):

PixInsight’s TGVDenoise process is used to remove high frequency noise. The MMT process is used to remove medium to low frequency noise. The goal is not to remove every bit of noise, as that would certainly damage the signal as well as give the image a fake plastic-smooth look. Here is a 1:2 zoom showing Lum stack noise reduction (Left: Before; Right: After):
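As a rough illustration of masked denoising, here is a scikit-image sketch. Note the hedges: scikit-image ships plain Total Variation denoising (denoise_tv_chambolle) rather than the Total Generalized Variation used by TGVDenoise, and the mask construction and weight below are illustrative assumptions, not our actual settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# lum: linear luminance image as a float array scaled to [0, 1]
# Simple luminance mask: bright signal near 1 (protected), background near 0
lum_mask = np.clip(lum / np.percentile(lum, 99.5), 0.0, 1.0)

# Plain TV denoising standing in for TGV (weight chosen for illustration)
denoised = denoise_tv_chambolle(lum, weight=0.05)

# Blend through the inverted mask so denoising acts mostly on dim, noisy areas
result = lum_mask * lum + (1.0 - lum_mask) * denoised
```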

Linear Sharpening – Deconvolution

Deconvolution is a process for characterizing the distortions present in your image and reversing them. Sources of distortion can be atmospheric (before the light enters your imaging system) or mechanical (after the light enters your imaging system). Removing these distortions essentially sharpens your image, revealing detail otherwise lost to diffraction or other distortions. This post on sharpening techniques from Light Vortex Astronomy describes the deconvolution workflow in detail. Procedures for making masks used in this and other processes can be found in Light Vortex Astronomy’s post on selection and masking.

PixInsight’s Deconvolution process should only be used on linear (unstretched) data, and after noise reduction. Deconvolution derives its distortion model from a Point Spread Function (PSF) based on point light sources (stars) in your image. The first step in this procedure is to create a synthetic PSF using PixInsight’s DynamicPSF process. Here is a 2:1 zoom showing the PSF for our M 101 Lum data:
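Stellar profiles are commonly modeled with Moffat functions, one of the families DynamicPSF can fit. As a point of reference, here is a small numpy sketch that generates a synthetic Moffat PSF; the size, FWHM and beta values are placeholders, not our fitted parameters.

```python
import numpy as np

def moffat_psf(size=25, fwhm=4.0, beta=3.0):
    # Convert FWHM to the Moffat alpha parameter
    alpha = fwhm / (2.0 * np.sqrt(2.0**(1.0 / beta) - 1.0))
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = (1.0 + (x**2 + y**2) / alpha**2) ** (-beta)
    return psf / psf.sum()  # normalize so total flux is preserved
```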

This PSF is then applied using PixInsight’s Deconvolution process. Masks are needed to limit deconvolution to only those areas we wish to target, for example detail areas within the galaxy itself. We want to exclude the image’s background, where deconvolution would only amplify any remaining noise. We also need to protect the stars from this process; otherwise we will get dark rings around every star. To provide protection, we create and modify masks.

First we create a range mask to isolate the galaxy using PixInsight’s RangeSelection process. Note that RangeSelection will likely leave numerous brighter stars as white in the background. We send a TIFF version of the RangeSelection mask to Photoshop, where we use the paintbrush at a large size to paint black over these stars (this is quicker and more efficient than the CloneStamp tool in PixInsight), and we often blur the mask using Gaussian Blur (Left) before sending the mask back to PixInsight. We then create a star mask to protect the stars using the StarMask process (Right):

To properly deconvolve the galaxy without affecting the stars within it, we must subtract the star mask from the range mask. We do this using PixInsight’s PixelMath process (Left). To better protect the halo areas around stars during deconvolution, we need to make each of the black dots larger. While this is possible in PixInsight with the CloneStamp process, it is not fun to do. Instead, the combined mask image is saved as a TIFF and opened in Photoshop, where a round, soft-edged paintbrush is used to enlarge each star (Center). Use the [ and ] shortcut keys in Photoshop to quickly decrease or increase the brush size so that it’s 10-20% larger than the black dot, then just click on the dot to enlarge it. Finally, the TIFF is reopened in PixInsight, where the ATrousWaveletTransform process is used to softly blur the mask (Right). When a mask is applied to an image, any part of the mask that is white allows you to act on the underlying data, while anything black protects the data underneath. Gray areas (for example the blurred edges in our final mask) allow a soft transition when acting on the underlying data.
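In numpy terms, the whole mask-preparation sequence amounts to a subtraction, a grow and a blur. A sketch using scipy (the kernel sizes are illustrative):

```python
import numpy as np
from scipy.ndimage import grey_erosion, gaussian_filter

# range_mask, star_mask: 2D float arrays in [0, 1]
# PixelMath equivalent of "range_mask - star_mask", clipped to valid range
combined = np.clip(range_mask - star_mask, 0.0, 1.0)

# Grow the black star dots (stand-in for enlarging them by hand in
# Photoshop): erosion expands dark regions
combined = grey_erosion(combined, size=(5, 5))

# Soft blur for smooth transitions (stand-in for the wavelet blur)
combined = gaussian_filter(combined, sigma=2.0)
```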

Now that we have our PSF and our masks, we use PixInsight’s Deconvolution process to sharpen targeted areas of the image. This 1:2 zoom shows the result of deconvolution. Notice that the galaxy details have been enhanced, but the stars remain soft with no dark rings. (Left: Before; Right: After):

(Note: Recently we have been using a different deconvolution workflow that relies only on PixInsight processes. As with almost every processing technique, there are multiple ways to accomplish the same result).
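As one generic example of the underlying technique, scikit-image provides Richardson-Lucy deconvolution, which can be blended through a mask exactly as described above. This is not PixInsight’s algorithm; the iteration count and variable names are illustrative.

```python
import numpy as np
from skimage.restoration import richardson_lucy

# lum: linear luminance in [0, 1]; psf: a fitted or synthetic PSF;
# decon_mask: the range-minus-star mask, also in [0, 1]
deconvolved = richardson_lucy(lum, psf, 30)  # 30 iterations, illustrative

# White mask areas take the deconvolved data; black areas (stars and
# background) keep the original, avoiding dark rings
result = decon_mask * deconvolved + (1.0 - decon_mask) * lum
```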

Another Round of Noise Reduction

Sometimes, no matter how well your masks protect your image, deconvolution exaggerates some of the noise that remained after earlier denoising. While this step isn’t always required, in this case we ran a toned-down version of the denoising techniques described above. We used the same masks as earlier, and carefully tested settings in PixInsight’s TGVDenoise and MMT processes using preview boxes. The result is subtle, but important to the overall quality of our luminance image (Left: Before; Right: After):

Stretching the Image

Up to this point, we have been working with our luminance data in its linear (unstretched) state. The ScreenTransferFunction process has allowed us to see what the image would theoretically look like in its nonlinear state, but we need to stretch the image using PixInsight’s HistogramTransformation process to permanently apply our desired stretch. Light Vortex Astronomy’s post on Stretching Linear Image to Non-Linear details several methods for stretching images, including HistogramTransformation.
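At the heart of HistogramTransformation is the midtones transfer function (MTF). For the curious, here it is in numpy form; mtf(m, m) = 0.5, so dragging the midtones slider to level m maps that level to mid-gray. The shadows and midtones defaults below are placeholders, not recommended settings.

```python
import numpy as np

def mtf(m, x):
    # PixInsight's midtones transfer function: mtf(m, m) = 0.5
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(img, shadows=0.0, midtones=0.25, highlights=1.0):
    # Rescale to [0, 1] between the shadow and highlight clip points,
    # then remap the midtones
    x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, x)
```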

Stretching the image takes time and care. You want to bring out as much faint detail as possible without blowing out highlights. You also don’t want to clip the darkest data in the image; otherwise you’ll be left with an ink-black background which destroys the faintest signal. An initial stretch of our M 101 data reveals an inherent problem with objects that have a very high dynamic range. In order to bring out the faint details in the outer arms, we blow out the detail in the galaxy’s core:

To deal with this, we performed three stretches, allowing us to target the core (Left), the stronger middle arms (Center) and the fainter outer arms (Right):

These three images were saved as TIFF, and in Photoshop we used layers and masks to combine them into a single image showing off detail across the entire galaxy:

Alternatively, shorter exposures could be taken of the galaxy to properly expose the core, and these exposures could be processed separately then masked into the long-exposure image. However this requires a lot of additional image acquisition and processing. We find that as long as the core data is not blown out, we are able to achieve our goal with multiple stretches.
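The Photoshop layer blending amounts to a per-pixel weighted combination of the three stretches. Expressed in numpy (the mask names are our own; in practice the masks are painted and feathered by hand):

```python
import numpy as np

# core, mid, outer: the three stretched versions as float arrays in [0, 1]
# m_core, m_mid: masks that are white where that layer should show through
blended = m_core * core + (1.0 - m_core) * (m_mid * mid + (1.0 - m_mid) * outer)
```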

Depending on the image, it may be desirable to apply some nonlinear sharpening. We didn’t feel that this image needed additional sharpening, and were concerned that any more might once again enhance remaining noise. It also may be desirable to run a final HistogramTransformation or CurvesTransformation process, but we felt that the contrast and background in this image didn’t need to be enhanced. This processed luminance image is now ready to be combined with color.

Postprocessing – RGB and Ha Channels

Many steps in postprocessing our color and narrowband data are the same as for the luminance channel, so they won’t be covered in detail here.

Linear Fit

After running each of the R, G, B and Ha channels through DBE, PixInsight’s LinearFit process was used to align the histograms of each channel, using the Blue stack as a reference since its histogram was furthest to the right (it was the brightest stack). While the visual changes in the stacks are not noticeable, aligning the histograms is critical to proper color balance when the channels are merged.
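In essence, LinearFit finds per-channel scale and offset terms that bring each channel onto the reference. A simplified numpy sketch (PixInsight’s implementation also iteratively rejects outlier pixels during the fit; variable names are illustrative):

```python
import numpy as np

def linear_fit(channel, reference):
    # Least-squares fit: reference ≈ a * channel + b, then rescale
    a, b = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return a * channel + b

# Align the Red, Green and Ha stacks to the brightest (Blue) stack
red_fit = linear_fit(red, blue)
```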

RGB Combination

PixInsight’s ChannelCombination process was used to combine the individual color channels into a single RGB image. Details of the Color Combination process can be found in Light Vortex Astronomy’s post on Color Combination and Applying Luminance.

At this stage we are not concerned about color saturation; we just want to examine overall colors to see whether we have any type of mismatch in color levels. Our stars as well as the galaxy should begin to show natural colors. Here is our combined RGB image, still in its linear state:

Before any further processing, we wanted to take advantage of the Hydrogen Alpha data by adding it to our Red data. To do this, the ChannelExtraction process was used to split the RGB channels into individual component channel images. A mask, created by severely clipping a duplicate of the Red channel using PixInsight’s HistogramTransformation process, was applied to the Red channel, and PixelMath was used to add the Ha data into the Red channel (Left) through this mask (Right).
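A common PixelMath-style formulation adds only the Ha signal above its own background, scaled by a blend factor and gated by the mask. A numpy sketch (the blend factor and variable names are illustrative assumptions, not our actual settings):

```python
import numpy as np

# red, ha: linear channel images in [0, 1]; mask: the clipped Red-channel mask
amount = 0.3  # illustrative blend factor
ha_signal = np.clip(ha - np.median(ha), 0.0, 1.0)  # Ha above its own background
red_ha = np.clip(red + amount * mask * ha_signal, 0.0, 1.0)
```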

The enhanced Red + Ha (RHa) channel was then recombined with the G and B channels using ChannelCombination. The difference is subtle, but will help bring out the star forming Ha regions in our final image. (Left: RGB; Right: RHaGB):

Note: Ha data is often added to the Luminance channel as well for additional detail, however in this case we saw no visible difference with or without the Ha in the L channel, so we left it out (no matter how good the mask was, some undesirable noise was coming along with the Ha data).

Color Calibration

Color calibration is critical to the final color balance of an image. Details on these processes can be found in Light Vortex Astronomy’s post on Color Calibration.

There are normally three steps in color calibration. First, PixInsight’s BackgroundNeutralization process equalizes the average red, green and blue components of the background, resulting in a neutral gray background. While the theoretical result is gray, any remaining gradients or variations in chromatic noise in the image may result in minor background color gradients. These will be dealt with later. The second step is the ColorCalibration process, where you identify a black (background) area of the image and a white area of the image, and these are used to adjust the overall white balance of the image. A starless section of the background is used as the black reference, and the entire galaxy is used as the white reference (while each star in a galaxy has its own color temperature, the average color of a galaxy’s stars approximates white). The third step is the SCNR process, which is used to remove any green cast which results from the color calibration process. Here is the result of Color Calibration of our M 101 RHaGB image (Left: Before; Right: After):
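The effect of the first step is easy to express: shift each channel so that its background median matches a common level. A minimal sketch of the idea (BackgroundNeutralization offers several working modes; this shows only the basic additive case, and all names are illustrative):

```python
import numpy as np

def neutralize_background(rgb, bg_mask):
    # bg_mask: boolean mask selecting a starless patch of sky
    out = rgb.astype(np.float64).copy()
    medians = [np.median(out[..., c][bg_mask]) for c in range(3)]
    target = np.mean(medians)
    for c in range(3):
        out[..., c] += target - medians[c]  # equalize background levels
    return out
```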

(Note: We have recently been using PixInsight’s PhotometricColorCalibration (PCC) process instead of the color calibration process described here. PCC plate-solves your image and white-balances it using the measured star colors, with a true white based on the average galaxy as the reference. The results have been fantastic).

Linear Denoising

Using new Lum, TGVDenoise and MMT masks, we repeat the denoising process on the RHaGB image. We can afford to be more aggressive with noise removal on the color image, since all of the final image detail will come from the sharpened Lum image. Here is the result (Left: Before; Right: After):

Stretching the Image and Enhancing Color Saturation

Using HistogramTransformation to permanently stretch the color information did not require combining multiple stretches. The dynamic range of the color image was not as great as the Lum image, so the outer arms came out nicely without blowing out the core. After stretching the image, the CurvesTransformation process was used multiple times, first to push the overall color saturation (Left), then to pump up the reddish-pink color of the Ha regions and the blue color of the outer arms using targeted masks. Finally, a round of nonlinear noise reduction was done using PixInsight’s ACDNR process (Right). Some people blur the color image at this point before combining it with luminance detail; however, in this case we did not.

With our Lum and RHaGB images both finalized (Left, Center), we use PixInsight’s LRGBCombination process to add luminance detail to our color (Right).
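One common way to think about luminance replacement is through the CIE L*a*b* color space: convert the color image to L*a*b*, swap in the processed luminance, and convert back. A scikit-image sketch of that idea (LRGBCombination itself offers additional saturation and noise controls):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

# rgb: the stretched RHaGB image; lum: the stretched luminance, both in [0, 1]
lab = rgb2lab(rgb)
lab[..., 0] = lum * 100.0  # the L* channel runs 0-100 in CIE L*a*b*
lrgb = np.clip(lab2rgb(lab), 0.0, 1.0)
```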

Postprocessing – Final Contrast, Color and Detail

We use Photoshop for final color correction/enhancement and for extracting additional detail out of our images. We bring in our range, star and combination masks from PixInsight as TIFF images to provide any necessary masking in Photoshop.

The following photo sequence shows our workflow in Photoshop. The LRHaGB image from PixInsight is here for reference (Top Left). In Photoshop, we first use Tonal Contrast in Google’s Nik Collection Color Efex Pro plugin. Approximately 40 custom control points were used to target critical areas within the galaxy. The result is the addition of depth to the image (Top Right). Second, we use Detail Extractor from Color Efex Pro, using control points to target only the fine filamentary and edge detail in the galaxy (Bottom Left). Care was taken during detail and contrast enhancement to properly mask the stars and background of the image by limiting the range of the control points to just the areas where they are located. Finally, we make any final color adjustments, using Nik’s Viveza to carefully desaturate any background color gradients, or to enhance color in specific locations within the galaxy (Bottom Right).

Google’s Nik Collection Photoshop plugins have very powerful masking capabilities. Using control points (see the small hollow dots in the image on the Right) we target specific detail areas in the galaxy (such as dust lanes) or other areas of interest (such as small background galaxies). Live masking shows you how the mask will look as you move the control point and change its size and settings. Global settings within the plugin are then tailored for each control point based on that point’s settings. Areas without control points (or areas with negative control points) are masked out, creating areas in the image where no changes occur. As of January 2018, the Nik Collection is owned by DxO and is available as a free download.

Our image processing is complete. Full image details can be found in the Images section of our website.

A Note on the Data

M 101 data was collected at the Dark Sky Observatory Collaborative (DSOC) in Ft. Davis, TX using SC Observatory’s remote Planewave CDK 17″ f/6.8 scope, in collaboration with:

John Kasianowicz, Josh Balsam, Mike Selby, Dhaval Brahmbhatt, Scott Johnson, Mike Bushell, Rich Johnson

Image processing: Andy Chatman and Stefan Schmidt
