So I’ve had PixInsight for a while now, but I still haven’t used it enough to get a really good workflow for my images – I’ve done 2 images that I’m kind of happy with, but most of the time when I start using it, I get fixated on some particular flaw in the image and go down the rabbit hole of trying to fix it perfectly. This usually results in 2 things: an over-processed image, and an image that never gets “done.” It’s easy to look at all the great images on AstroBin or anywhere else and start comparing a half-done image to those, which really doesn’t make a whole lot of sense… It also starts to take away the enjoyment of the actual process – neither science nor art gets very far if you’re only working in terms of how other people are going to see it.

Rather than get stuck in an eternal imaging black hole I’m going to share my process for how I’m working through these images and try to add on bit by bit, rather than try to learn everything 100% before sharing a final product. Learning one thing at a time will also save time – PixInsight isn’t terribly intuitive, and I’d like to start learning what each process actually does as opposed to using someone else’s suggested presets/values (without knowing why they would work for me).

The first image I want to work on is T Tauri – the star for which a category of pre-main sequence variable stars is named. T Tauri, like many of the other stars in its class, is found in a molecular cloud, which makes these stars really cool nebulosity targets. My other scope spends almost all of its time right now (or will once the CCD gets back from repair…) imaging variable stars, so I thought it would be cool to get a much longer exposure image of one of the more “photogenic” ones. Sometimes it’s easier to convey how fascinating variable stars are with an image like this. 😀

This is what T Tauri is doing when it’s not sitting around looking pretty:

light curve

AAVSO data (V band) for T Tauri from 9/25/09 to 12/11/17

It has a light curve period of 2.81 days, but you can definitely see that it’s really dynamic over longer timescales too – this data covers a little over 8 years, and the average magnitude has been decreasing over that span.
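If you want to play with the AAVSO data yourself, folding the observations on the 2.81-day period is a quick way to separate the short-term variation from the long-term trend. A minimal sketch – the epoch below is an arbitrary zero point I picked for illustration, not a published one:

```python
# Phase-folding a light curve on T Tauri's 2.81-day period.
# The epoch (reference Julian date) is arbitrary here -- any zero point
# works for looking at the shape of the folded curve.

PERIOD_DAYS = 2.81

def fold(jd, epoch=2455000.0, period=PERIOD_DAYS):
    """Return the phase (0 to 1) of a Julian date for the given period."""
    return ((jd - epoch) / period) % 1.0

# Two observations exactly one period apart land on the same phase:
p1 = fold(2455100.0)
p2 = fold(2455100.0 + PERIOD_DAYS)
print(round(p1, 6) == round(p2, 6))  # True
```

Plotting magnitude against `fold(jd)` stacks all the 2.81-day cycles on top of each other, which makes the periodic part of the curve stand out even in noisy data.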

I’m starting this one off after I’ve already:

  • Calibrated with bias/dark/flat frames
  • Applied CosmeticCorrection
  • Registered the subs with StarAlignment
  • Stacked with DrizzleIntegration
  • Cropped with DynamicCrop
  • Ran DynamicBackgroundExtraction
  • Used LinearFit to balance the histograms
Initial channels

LRGB with initial processing
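The LinearFit step in the list above is conceptually simple, even if the dialog isn’t: fit a straight line between a reference channel and a target channel, then rescale the target so the two histograms line up. A rough sketch of the idea (not PixInsight’s actual implementation):

```python
# Conceptual sketch of LinearFit: fit target ~= a + b*reference by least
# squares, then invert that relation to map the target channel onto the
# reference channel's scale.

def linear_fit(reference, target):
    """Return (a, b) such that target ~= a + b*reference."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(target) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, target))
    sxx = sum((x - mx) ** 2 for x in reference)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def apply_fit(target, a, b):
    """Rescale the target channel to match the reference's levels."""
    return [(y - a) / b for y in target]

ref = [0.10, 0.20, 0.30, 0.40]
tgt = [0.25, 0.45, 0.65, 0.85]   # same shape, different offset and gain
a, b = linear_fit(ref, tgt)
matched = apply_fit(tgt, a, b)   # lines up with ref after matching
```

In PixInsight you pick one channel as the reference and apply LinearFit to the other two, which is what evens out the per-channel histograms before combining.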

I’m starting this post off at this point for a couple of reasons. One is that while messing around with this image I went through the calibration process at least 5 times to try to fix some of the weirdness (it turned out to be bad flats). My calibration frames were old, which was causing vignetting and also had the effect of making all the dust donuts look raised/embossed… Next time I’ll be using BatchPreprocessing, so I figured I would save those steps for next time, otherwise this post will get way too long.

I have a few initial concerns about the frames so far, but I’m not sure yet whether they’re an acquisition issue or a processing issue. The green frame has 2 leftover dust donuts, even after throwing out a bunch of junk subframes that were skewing the final stacked image. These did go away after I combined the RGB image with Luminance (not entirely sure why), so I’ll add that to my future problems list. The blue frame is also a little hazy looking, but I’ve already deleted any of the blue subframes that I’m pretty sure had some high-level clouds, so at this point I think that’s just the nebulosity.

One of the tricky things about this target is that there isn’t a whole lot of plain background, but I think there’s enough to work with. Since I’ve already done all the initial processing and stacking, I’m going to jump to combining the RGB images. I just use Channel Combination with the color space set to RGB and click Apply Global to combine the images.

Channel Combination settings


RGB frame

Initial RGB image

Color Calibration

The initial RGB result is kind of washed out, and the background is super noisy. I also think it’s a bit too red. For color balancing I really like the new PhotometricColorCalibration (PCC) process in PixInsight, which uses a variety of star types and galaxies as standardized white references – this is intended to help you get closer to the “real” color without needing a white reference taken from your own image, which is nice if your image is nebula-heavy and doesn’t have a great section to use as a white reference.

To use PCC, the first thing you need to pick is the white reference. Average spiral galaxy works most of the time, although I wanted to use G2V star instead since this isn’t a galaxy. It’s good to try both to see which looks better. Next you need the image coordinates. At this point the image is in PixInsight’s XISF format, so the acquisition metadata isn’t available to grab the coordinates from the file, but the Search Coordinates option makes it pretty easy to look up T Tauri. Finally, you have to enter the focal length and pixel size for the plate solving – for the pixel size I used 9 microns since the image is binned 2×2 (5.4 micron pixels), and I also increased the focal length since DrizzleIntegration upsampled the image.
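The binning/drizzle bookkeeping for the plate solver is easy to get backwards, so here’s the arithmetic I mean, with a made-up focal length standing in for real optics: binning multiplies the effective pixel size, drizzle divides it, and scaling the focal length up instead is mathematically equivalent.

```python
# Image scale sanity check for plate solving. The 1000 mm focal length is
# a placeholder -- plug in your own optics. Rule of thumb: binning
# multiplies the effective pixel size, drizzle divides it.

ARCSEC_PER_UM_PER_MM = 206.265  # arcsec per (micron of pixel / mm of focal length)

def image_scale(pixel_um, focal_mm, binning=1, drizzle=1):
    """Arcseconds per pixel of the final integrated image."""
    effective_pixel = pixel_um * binning / drizzle
    return ARCSEC_PER_UM_PER_MM * effective_pixel / focal_mm

native = image_scale(5.4, 1000.0)                          # unbinned
binned = image_scale(5.4, 1000.0, binning=2)               # 2x2 binned
drizzled = image_scale(5.4, 1000.0, binning=2, drizzle=2)  # after 2x drizzle
# drizzled == native: a 2x drizzle undoes the sampling change from 2x2 binning
```

Whether you halve the pixel size or double the focal length for a 2× drizzle, the solver sees the same arcsec-per-pixel scale, which is all it actually cares about.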

I left the plate solving parameters at their defaults, and the photometry parameters only needed the saturation threshold adjusted. To set the saturation threshold, remove the screen stretch so you’re looking at the full linear image.

Image with screen stretch removed before PhotometricColorCalibration

Checking the lowest saturation point of a star in the image

Move the cursor over one of the saturated stars, which still show up bright in the unstretched image. The lowest saturated peak I found was this star (0.6821, in the green channel), which happens to be T Tauri itself. I set the saturation threshold a bit lower than this value (0.65) so none of the saturated data gets used.

PCC also performs background neutralization, so it’s a nice 2-in-1 tool. PixInsight has pretty good documentation on the usage for this, so these steps are from their info here. The first thing is to disable the RGB link on the ScreenTransferFunction and reapply the stretch.

Next, create a preview over an area with only background. There isn’t a lot of background in my image so my preview is pretty small. Transfer the coordinates to the Region of Interest by dragging the title bar for the preview over to the tool. Apply the process to the RGB image and voila – color calibration done. After the process is complete PCC shows you a graph of the photometric colors of the stars in the image and the linear fit to the points that are used to calibrate the color. Re-link the RGB in the screen stretch and apply the auto stretch.
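Conceptually, the background neutralization half of PCC measures each channel’s median inside that background preview and shifts the channels so the medians agree. A toy sketch of the idea (not PixInsight’s actual code):

```python
# Conceptual background neutralization: measure each channel's median over
# a background-only region, then offset the channels so those medians all
# sit at a common neutral level.

def neutralize(channels, preview_indices):
    """channels: dict of name -> flat list of pixel values.
    preview_indices: indices of pixels covering background only."""
    def median(vals):
        s = sorted(vals)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else 0.5 * (s[mid - 1] + s[mid])

    bg = {name: median([px[i] for i in preview_indices])
          for name, px in channels.items()}
    target = sum(bg.values()) / len(bg)  # common neutral background level
    return {name: [v - bg[name] + target for v in px]
            for name, px in channels.items()}

rgb = {"R": [0.05, 0.05, 0.30], "G": [0.02, 0.02, 0.25], "B": [0.08, 0.08, 0.40]}
out = neutralize(rgb, [0, 1])  # first two pixels are the "background preview"
# after neutralizing, the background pixels agree across R, G, and B
```

This is why the preview placement matters so much: any nebulosity inside it gets treated as a color cast and subtracted out.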

After doing Photometric Color Calibration

Before and after PCC

The last thing I want to do right now with the color is to use the SCNR process to reduce the green – all default settings except the amount, which I changed to 0.5, since the full amount was a bit aggressive for this image. I’ve used the full amount on other images with success though, so it just depends.
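For reference, my reading of SCNR’s default “average neutral” protection is that green gets clamped toward the average of red and blue, with the amount parameter blending between the original and fully corrected pixel – treat this as a sketch of the idea and check the PixInsight documentation for the exact transfer functions:

```python
# Rough take on SCNR "average neutral" with an amount parameter. This is
# an interpretation of the documented behavior, not PixInsight's code.

def scnr_average_neutral(r, g, b, amount=0.5):
    """Clamp green toward the red/blue average; amount blends the effect."""
    corrected = min(g, 0.5 * (r + b))
    return g * (1.0 - amount) + amount * corrected

# A green-dominant pixel gets pulled down...
toned = scnr_average_neutral(0.20, 0.60, 0.30, amount=0.5)  # 0.425
# ...while a pixel with no green excess is left alone
same = scnr_average_neutral(0.40, 0.30, 0.40, amount=0.5)   # 0.30
```

That clamp-only behavior is why SCNR never adds green, which makes it safer than a plain color-balance shift for killing a green cast.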

Original, after PCC, and after SCNR

I think color is, up to a point, kind of subjective in these images – not that the object doesn’t have an absolute color, but there’s an infinite amount of fine tuning you can do. At the end of the day it comes down to your data, personal preference/processing technique, and your computer monitor. Especially when you start adding in narrowband data… It sometimes helps to look at the AstroBin search results for your object to get a general idea of what the color should roughly be, but if I start looking at all the other images in detail I end up trying to get mine to match those, which doesn’t always work with my data.

For the future – I’m still trying to figure out the “best” black for my background. At this stage it’s still pretty noisy so I cringe zooming in too much on it, but I still need to make sure it’s heading in the right direction. The final image in my set seems to have a blacker background, but I can’t tell if that’s true color or just the noise trending blue instead of red like the original image.

One of my favorite things about the color calibration steps is getting to see which stars pop out bluer or redder. It’s pretty cool to see that visual representation of how their temperature/sizes/ages can differ without having to know anything else about those particular stars. It’s a great representation of a basic astronomy concept – one of the only other times you can see a difference like that without a telescope is when Orion is up for the winter and you can compare Betelgeuse and Rigel, given that your eyesight is great, the seeing conditions are good, and you can pick out the color. Otherwise everything just looks like a bunch of white pinpoints…the reality is much more interesting.