This post discusses the steps we took in processing NGC 4567 and NGC 4568, the Siamese Twins.
Subexposures for this image were taken on 15 different nights from our remote observatory in Fort Davis, Texas, using a Planewave CDK 17″ f/6.8 telescope on an Astrophysics 1100GTO mount and an FLI PL16803 CCD camera. ACP was used to take 120 20-minute subexposures in LRGB, totaling 40 hours of integration.
PixInsight and Photoshop were used to process this image. PixInsight excels at preprocessing and integration, noise reduction, deconvolution, color calibration and color combination. Photoshop excels at targeted detail extraction, sharpening, noise reduction and final color balancing.
Individual subs taken with our imaging systems are saved as FITS files. Once we begin processing these subs in PixInsight, we use their .xisf file format. .XISF files are saved with the following settings.
Finally, we use 16-bit TIFF images to move files back and forth between PixInsight and Photoshop.
Our preprocessing of this image began with PixInsight’s Batch Preprocessing (BPP) script. For the first time, we did not create master bias and master dark frames beforehand; instead, we added all of the individual bias and dark subs into the BPP script. This avoids clipping when calibrating the dark frames or master dark frame (see section 4 of this post for details: https://pixinsight.com/forum/index.php?topic=11968.msg73522#msg73522). We use the BPP script only for image calibration. Although the script can also perform image registration and integration, we include several other manual steps in our preprocessing before registration.
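To see why clipping matters here, consider a rough numerical sketch (not PixInsight code, and with hypothetical camera values): if a master bias is subtracted from dark frames and the result is stored in a format that cannot hold negative values, pixels below the bias level clip to zero and the calibrated dark is biased upward.

```python
import numpy as np

# Illustrative sketch of bias-subtraction clipping. The bias level and
# noise figures are made up for demonstration.
rng = np.random.default_rng(0)

bias_level = 1000.0
dark = bias_level + rng.normal(0.0, 10.0, size=10000)  # dark frame: bias pedestal + read noise
master_bias = np.full_like(dark, bias_level)

# Naive calibration with clipping: negative results are forced to zero,
# which skews the mean of the calibrated dark upward.
clipped = np.clip(dark - master_bias, 0, None)
unclipped = dark - master_bias

print(unclipped.mean())  # near 0, as expected
print(clipped.mean())    # noticeably above 0 due to clipping
```

Keeping all bias and dark subs inside the BPP script lets PixInsight handle the calibration arithmetic without intermediate clipped masters.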
Once we have our images calibrated, we review them using PixInsight’s Blink process. This process reveals any subs that should be thrown away because of apparent issues such as clouds, poor tracking, and other visual anomalies. We are careful to only blink subs from a single filter so that there is a valid visual comparison across subs. Rejected subs are moved into their own sub-folder (the Blink process handles this) so they won’t be used in successive steps.
PixInsight’s CosmeticCorrection process is optional, and is primarily used to remove hot/cold pixels. We don’t have serious hot/cold pixel issues with our cameras, and hot/cold pixels are usually effectively rejected during image integration. However, we do use this process to remove defect columns. This hasn’t been an issue with our PL16803 CCDs, so we did not perform CosmeticCorrection on the Siamese Twin subs. We usually require column defect correction for our ML16200 and ML/PL29050 CCDs.
Next we grade our subexposures using the SubframeSelector (SS) process. This shouldn’t be confused with the SubframeSelector script, which has been a part of PixInsight for years; the SS PCL process is relatively new. If you do not see the SS process listed in your Process menu, you will have to download and manually install the PCL process. You can download the SubframeSelector PCL process for both Mac and PC from this post: https://pixinsight.com/forum/index.php?topic=11780.0. Make sure to read through the entire discussion so that you download the latest version. The SS process runs much more quickly than the script, saving a great deal of time during preprocessing. It also caches your image data, so returning to a group of subexposures doesn’t require remeasuring the set.

To use the SS process, enter the parameters for your system, camera and bin level, and run one filter/bin combination at a time. After measuring the subexposures, we review the graphs and charts to exclude any subs that don’t meet our requirements. This can be done automatically with an Approval formula; however, we prefer reviewing the data ourselves. For example, we look at the Star Support graph or table. If most of the subs have 4,000 stars but a few have only 1,000, it’s pretty clear the subs with fewer stars suffered from a focus issue, clouds, or humidity, and we can eliminate them without fear of throwing away good data. In short, we look for outliers that don’t fit the bulk of the data. We hate throwing away data, but adding poor data is worse than integrating less data.
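The outlier check we do by eye can be sketched numerically. This is only an illustration with made-up star counts, not part of the SS process, using a robust median/MAD cut instead of visual inspection:

```python
import numpy as np

# Hypothetical star counts per sub, mimicking the 4,000-vs-1,000 example above
star_counts = np.array([4100, 3950, 4020, 4230, 980, 4080, 1150, 3990])

median = np.median(star_counts)
mad = np.median(np.abs(star_counts - median))  # robust estimate of spread

# Flag subs far below the bulk of the data (the 5-MAD threshold is a judgment call)
rejected = star_counts < median - 5 * mad
print(star_counts[rejected])  # the 980- and 1150-star subs are flagged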
Once we have excluded any undesirable subs, we use the following Weighting formula to output the approved subs, with the weight captured in a new FITS keyword “SSW”:
We do one additional critical step during the SS process: we note the best subexposure for each filter. We usually select the subexposure with the most stars, but we also make sure that the FWHM for the selected sub is good. The selected sub for each filter will be used in a later step when we apply the LocalNormalization process, and the best L sub will be used as the reference image during StarAlignment. The subs we selected for the Siamese Twins image were:
Note: for those still using the SubframeSelector script, the above formula will not work. Instead, we use a spreadsheet posted in various locations online by David Ault to create our formula based on SubframeSelector script measurements.
We use the StarAlignment process to align all of our subexposures. We use the best L frame as our reference image, and keep the rest of PixInsight’s defaults. For this image we did not Drizzle the data.
After registration, we run the LocalNormalization process on our subs, one filter at a time. For each filter group, we use the best subexposure of that group (as noted during the SubframeSelector process) as the reference image. Running this process now will provide better rejection and noise reduction when the subexposures are integrated. This process takes a bit of time to run, but is fully automated. The only change we make to the process settings is to set the scale to 256 (from the default 128).
We are finally ready to integrate our individual filter subexposures using the ImageIntegration process.
This screenshot shows our typical settings for ImageIntegration:
Notes on ImageIntegration settings:
Following are steps we took preparing the LRGB image in PixInsight.
PixInsight’s DynamicCrop process allows you to precisely crop all of your master images to the same crop region. To do this, open all of the master images (L, R, G, B). Using the DynamicCrop process, draw your desired crop region on any one of the images (here we’ve drawn it on L). Take care to observe how this crop would look on all four images, making sure that all dark edges will be cropped out of every image. Once you are satisfied with the crop region, DO NOT apply the crop. Instead, drag the Instance triangle from the bottom left of the DynamicCrop process window onto the PixInsight workspace. This saves the precise location/dimension settings of the crop area. You can rename this saved instance by right-clicking it, and even save it to your hard drive (useful if you intend to add more data in the future and want to match the crop of this old data). Once you have the instance on your workspace, cancel the DynamicCrop process without applying it to the L image. Now all you have to do is drag the DynamicCrop instance onto each of the four images, and the precise crop will be applied to each of them.
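Conceptually, the saved instance is just a crop region defined once and applied identically everywhere. A minimal numpy sketch (with hypothetical dimensions; real PL16803 masters are 4096×4096):

```python
import numpy as np

# One crop region, defined once (rows, columns), applied to every channel
crop = (slice(50, 950), slice(80, 1000))

# Stand-ins for the four master images
channels = {name: np.random.rand(1024, 1024) for name in ("L", "R", "G", "B")}
cropped = {name: img[crop] for name, img in channels.items()}

# Every channel now shares exactly the same geometry
assert all(img.shape == (900, 920) for img in cropped.values())
```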
You’ll want to save the newly cropped images with a new name (we typically name it “L_crop”). Here is the L channel before and after applying Dynamic Crop:
Processing of the cropped L (detail) channel typically follows these steps:
The DBE process is unique for each image, and for each channel. We didn’t save an interim image showing our DBE for the Siamese Twins. There are many good tutorials about DBE on the web, but this is one area where we could use some improvement. In any case, DBE should be used to remove as much of the uneven background as possible (from external light sources, imperfect flat field removal, etc.).
Noise reduction while the image is in its linear state is critical, as it will allow you to apply deconvolution more aggressively and stretch the image without exaggerating noise. We use the noise reduction techniques described in this excellent tutorial for linear noise reduction: https://jonrista.com/the-astrophotographers-guide/pixinsights/effective-noise-reduction-part-1/. Here is a comparison of the L image before and after noise reduction (note: this is PixInsight’s default stretch):
As you can see, there is a definite improvement in the overall noise profile. Not all of the noise is removed. We will take additional steps for noise reduction later.
In March 2018 Adam Block contacted us and asked if we would test his new deconvolution video. The video is part of his suite of PixInsight processing tutorials, so we cannot share it here. His detailed demonstration and explanations have greatly improved the results we get from deconvolution. In short, the process requires creating a Point Spread Function (PSF) from stars in the image, an overall image luminance mask for targeting areas for sharpening, and a local support mask to protect larger stars from the deconvolution process. Each image requires different tweaked settings. Adam’s video explains how the settings work, and shows how to build each of these three components required for the process. Here is the L channel before and after deconvolution:
Detail has clearly been enhanced. Just as importantly, larger stars remain untouched, and stars in front of the galaxy display no dark ringing (this is due to a well-made local support star mask and proper local deringing settings within the Deconvolution process).
At this stage in processing, we sometimes do another round of noise reduction. However, in this case we did not.
The HistogramTransformation process is used to stretch the image. With some galaxy images which have very high dynamic range we perform three stretches (one for the core, one for the main portion of the galaxy and one for the faint outer areas), save each of the stretches as a TIFF file, and blend them together using masks in Photoshop before bringing them back into PixInsight for further processing (see our M 101 processing walkthrough for an example). For the Siamese Twins this wasn’t necessary; the initial stretch brought out detail everywhere without blowing out the core. Here is the image after stretching:
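At the heart of a HistogramTransformation stretch is PixInsight’s midtones transfer function (MTF), which lifts the midtones while pinning the black and white points. A minimal sketch of the function itself:

```python
import numpy as np

# PixInsight's midtones transfer function. m is the midtones balance in (0, 1);
# m < 0.5 brightens midtones, and the endpoints 0 and 1 are left fixed.
def mtf(x, m):
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

x = np.linspace(0.0, 1.0, 5)
print(mtf(x, 0.25))  # endpoints stay at 0 and 1; midtones are lifted
```

A useful property: the MTF maps the midtones balance value itself to exactly 0.5, which is why dragging the midtones slider to a pixel level brings that level to mid-gray.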
At this point, we carefully applied a round of nonlinear noise reduction in two steps. To protect areas of detail in the image, we created a clipped luminance mask. We first duplicated the original image, then used the auto clipping shadows and highlights buttons in the HistogramTransformation process to exaggerate the contrast.
This high contrast image was then applied to the original image and inverted as a mask. The red areas will be protected from noise reduction processes:
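The clipped, inverted luminance mask can be sketched as a simple rescale-and-invert operation. This is a conceptual numpy illustration with hypothetical clip points, not the actual HistogramTransformation implementation:

```python
import numpy as np

# Exaggerate contrast by clipping shadows and highlights, then invert so
# bright (detailed) areas are protected and the background is exposed
# to noise reduction.
def clipped_inverted_mask(lum, shadow, highlight):
    # Values <= shadow map to 0; values >= highlight map to 1
    stretched = np.clip((lum - shadow) / (highlight - shadow), 0.0, 1.0)
    return 1.0 - stretched  # inverted: background ~1 (processed), detail ~0 (protected)

lum = np.array([0.02, 0.05, 0.30, 0.80])  # hypothetical pixel values
mask = clipped_inverted_mask(lum, shadow=0.05, highlight=0.50)
```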
We applied two processes to the masked image: AtrousWaveletTransformation and ACDNR. Our settings are shown here:
Here is a before/after of our stretched L image showing noise reduction results:
The L image is now set aside while we process the color information.
Processing of the cropped R, G and B (color) channels typically follows these steps:
However, in reviewing the processing of this image, we found that we changed the order of the first three steps: we began with linear noise reduction, then performed a linear fit, and finally ran DBE. This is not our standard order, but we’re pleased with the results.
Linear noise reduction for each of the color channels follows exactly the same process as described above for the L channel. We can afford to be more aggressive with our noise reduction, as our image detail will come primarily from the L image. Here is the Red channel before and after linear noise reduction:
PixInsight’s LinearFit process is used to ensure that the relative strength of signal in each of the color channels is similar. This helps to ensure a proper color balance when combining the channels into an RGB image. Using the HistogramTransformation process, we examine the histogram of each of our R, G and B images. The image with the signal furthest to the right in its histogram becomes our reference, and the other two images are LinearFit to match it. In this case, the R image had the strongest signal, so using that as the reference image we applied LinearFit to G and B, and saved these. Visually we don’t see any difference in the images, but the histograms will now match better when we combine the channels.
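Under the hood, this kind of fit amounts to solving for a gain and offset that map one channel onto the reference. A simplified numpy sketch (PixInsight’s LinearFit also includes outlier rejection, which is omitted here):

```python
import numpy as np

# Synthetic stand-ins: the target channel is a dimmer copy of the
# reference with an offset pedestal.
rng = np.random.default_rng(1)
reference = rng.random(5000)          # stand-in for the R channel
target = 0.5 * reference + 0.1        # stand-in for a weaker G or B channel

# Least-squares solve for a, b such that a*target + b ~ reference
A = np.column_stack([target, np.ones_like(target)])
(a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)

fitted = a * target + b
print(a, b)  # the fit undoes the gain and pedestal
```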
As mentioned earlier, normally we would not apply DBE after linear fit, but for whatever reason we did. We usually first set up DBE on the R channel, but before applying it we save an instance of the DBE process to the workspace. Later when applying DBE to the G and B channels, we can double click this saved instance, and it will provide a very good starting point for DBE’s point placement on these other channels. Make sure when applying DBE to select Subtraction as the Target Image Correction. Applying DBE can make it appear that the image is gaining noise, but in reality removing gradients is merely revealing noise which is already present. Here is the Red channel before and after DBE (note: these are default stretches):
The individual color channels are now ready to be combined. The ChannelCombination process is used with default settings to combine the R, G, and B channels (right) into a single RGB channel (left). You can begin to see a hint of color in the combined image but it’s clear that much more color work is required:
Color calibrating the image is a critical step in attaining proper color balance. We used to do this with two processes: BackgroundNeutralization and ColorCalibration. However, PixInsight has added a PhotometricColorCalibration process, which uses photometric data for catalog stars in your image’s region of the sky to compute a proper white balance. To use this process, enter your system’s focal length and your camera’s pixel size, and look up your object’s coordinates using the search tool. Applying the process will plate solve your image, calculate the proper white balance, and then apply the color calibration to your image:
The RGB image is now ready for the HistogramTransformation (stretching) process. As you can see, after stretching there is definitely color present (especially the yellowish star near the top), but at this stage we pay more attention to exposure than color saturation when stretching. If we push color too much in the HistogramTransformation processing we risk oversaturating the image, resulting in an unnatural appearance. We want to maintain translucency in the fainter portions of the galaxy.
We use the CurvesTransformation process to push the color hidden within the image. We sometimes look at various images online to get a feeling of how blue or dusty brown a galaxy should be, and tweak individual color channels and overall saturation using the CurvesTransformation process. Here is our image before and after CurvesTransformation. First, the entire image, then a closeup of the Siamese Twins:
We use the stars as our main indicator of how far to push the curves, and while we know there is more color to be had, we will attack this in a more controlled manner using Photoshop and Nik Plugins.
Sometimes during stretching and pushing the color with curves we end up with color areas in the background that are clearly not correct: where they should be neutral gray, they take on a red, green or blue cast. This often results when DBE hasn’t done a perfect job of removing background gradients. When the gradients don’t match from R to G to B, you end up with patches of background that take on the color of the channel with excess background data when highly stretched. There are different ways to deal with this. You could apply a luminance mask before pushing the color, only allowing the color to be pushed on galaxies and stars. Or you could just let it happen.

We find that later, in Photoshop, we can use the Nik Viveza plugin to neutralize or desaturate the background by targeting only those areas which have taken on such colors, and even out background brightness differences using Nik ColorEfex Pro, targeting the brighter areas with control points and bringing them down with levels or curves to match the rest of the image background. Of course, if you have nebulosity or IFN in your image background you must be very careful when using these methods. True signal should never be destroyed.
Processing of the final LRGB image in PixInsight typically follows these steps:
The LRGBCombination process is used to blend our Luminance detail image into our RGB color image. To use this process, we open the final L and RGB images, and we make certain that the RGB image is the active one by clicking it. This will ensure the L gets added into the color, rather than the other way around. In the LRGBCombination process window we select only our L image, then we apply the process to the color image. We play with the Lightness and Saturation sliders (lower is more), and we leave the Channel Weights untouched. When playing with this tool, we uncheck Chrominance Noise Reduction to speed up testing, but once we settle on our final slider numbers we turn this noise reduction back on. This image shows our LRGBCombination settings, with the L and RGB images at bottom, and the resulting LRGB image behind them:
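Conceptually, LRGBCombination replaces the lightness of the color image with the processed L while preserving chrominance (PixInsight works in CIE L*a*b* for this). A much simpler ratio-based sketch illustrates the idea of luminance transfer, using made-up pixel values:

```python
import numpy as np

# Rough luminance-transfer illustration (NOT PixInsight's L*a*b* method):
# rescale RGB so its luminance matches the processed L, keeping color ratios.
def lrgb_combine(rgb, new_l, eps=1e-6):
    # Approximate current luminance with Rec. 709 weights
    y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    scale = new_l / np.maximum(y, eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

rgb = np.full((2, 2, 3), [0.2, 0.1, 0.3])  # flat purple test patch
new_l = np.full((2, 2), 0.4)               # stand-in for the processed L channel
out = lrgb_combine(rgb, new_l)
```

Because the scaling is applied uniformly per pixel, the R:G:B ratios (the color) are unchanged while the brightness now comes from the detail image.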
We apply one more round of CurvesTransformation to enhance overall color. In some cases, we also apply an additional round of HistogramTransformation to lighten or darken the background, but in this case we didn’t need to. Here is the before/after of our final CurvesTransformation (note: we are very careful not to lose fine detail when pushing color):
The LRGB image is saved as a 16-bit TIFF for use in Photoshop. Make sure to uncheck the “Associated Alpha Channel” box.
Following are steps we took finalizing the LRGB image in Photoshop. We used several Nik Collection plugins and tools, as well as some Photoshop tools. The changes we made were mostly subtle, with the most noticeable being drawing out the detail and signal toward the outer edges of the Siamese Twins.
Here is the image before and after Photoshop processing:
After these adjustments, we made a final crop for a tighter presentation of the Siamese Twins, and then fractally upsized the image. Here is the final cropped image (full image details can be found here):