What am i doing wrong?? cant get decent image

Bobby_1970
Posts: 9
Joined: Fri May 15, 2020 11:44 am

What am i doing wrong?? cant get decent image

Post by Bobby_1970 »

This is my first post, please bear with me :-)

I keep having trouble with grainy/noisy/blotchy images, regardless of how I use StarTools, or so it seems.

I have recently been using a ZWO 178MC with a 72mm semi-apo scope, live stacking within SharpCap (including dark-frame subtraction).

The following image is the result of my attempt at "processing" 30 x 60s exposures at gain 200. I'm not sure how to do flats with my 178, so I haven't bothered yet.
M51 processed
M51 140520 JPG.jpg
I follow what I suspect is a fairly regular workflow.

AutoDev, Crop, Wipe, Develop, Contrast, Colour, HDR, Life, etc., then Tracking off and noise reduction.

I can just never seem to get a nice, even, dark (but not too dark) background.

Almost at the end of my tether tbh lol.

Am I OK to attach the original FITS output from SharpCap somehow?

Perhaps someone can have a go with it and point out where I have gone wrong all this time.


Help me Startools gurus, you're my only hope.
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: What am i doing wrong?? cant get decent image

Post by admin »

Hi,

Firstly, if you haven't done so, have a look at this section of the website about starting with a good dataset.

Issues with background unevenness almost always find their origin in issues with acquisition or pre-processing, the most common being neglecting to calibrate with flats or not dithering between frames.
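If it helps to see why flats matter, the calibration itself is just simple per-pixel arithmetic. A minimal NumPy sketch (the frame names are placeholders, and your stacker does this for you when you feed it calibration frames):

import numpy as np

def calibrate(light, master_dark, master_flat, master_flat_dark):
    # All inputs are float arrays of identical shape.
    flat = master_flat - master_flat_dark      # remove the flat's own dark/bias signal
    flat_norm = flat / np.mean(flat)           # normalise so overall brightness is preserved
    return (light - master_dark) / flat_norm   # division evens out vignetting and dust shadows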

Please feel free to share a link to a stacked dataset (Google Drive, Dropbox, Microsoft One Drive, etc.) so we can have a look!

Thank you in advance,
Ivo Jager
StarTools creator and astronomy enthusiast
Bobby_1970
Posts: 9
Joined: Fri May 15, 2020 11:44 am

Re: What am i doing wrong?? cant get decent image

Post by Bobby_1970 »

admin wrote:Hi,

Firstly, if you haven't done so, have a look at this section of the website about starting with a good dataset.

Issues with background unevenness almost always find their origin in issues with acquisition or pre-processing, the most common being neglecting to calibrate with flats or not dithering between frames.

Please feel free to share a link to a stacked dataset (Google Drive, Dropbox, Microsoft One Drive, etc.) so we can have a look!

Thank you in advance,
Thanks Ivo, that's really kind of you. I have looked at some of the tutorials previously; perhaps it is just the quality of my data :-(

Hopefully this will work:-

https://drive.google.com/file/d/1k8tmT- ... sp=sharing


Thanks again
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: What am i doing wrong?? cant get decent image

Post by admin »

Many thanks for uploading.
As surmised, in this case too, the thing holding you back most is failing to dither between frames - clear, correlated pattern noise and hot-pixel streaks can be seen in the vertical direction. If you go for any fainter objects, dithering is not really optional with most instruments. For DSOs, flats are certainly not optional.
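If you want to convince yourself of the effect, here is a toy NumPy simulation (nothing to do with any particular capture software) of how an artefact that sits on the same photosite in every frame survives a straight average, while random offsets between frames dilute it:

import numpy as np

rng = np.random.default_rng(0)
n_frames, size = 30, 64
artefact = np.zeros((size, size))
artefact[10, 10] = 50.0                     # a fixed hot pixel / pattern-noise term

# No dithering: the artefact hits the same sky position in every frame,
# so averaging leaves it completely intact.
static_stack = np.mean([rng.normal(100, 5, (size, size)) + artefact
                        for _ in range(n_frames)], axis=0)

# Dithering: each frame is offset by a few pixels and shifted back
# (re-registered) before averaging, so the artefact is spread thinly
# over many different sky positions.
dithered = []
for _ in range(n_frames):
    dx, dy = rng.integers(-5, 6, size=2)
    frame = rng.normal(100, 5, (size, size)) + artefact
    dithered.append(np.roll(frame, (-dy, -dx), axis=(0, 1)))   # crude re-registration
dithered_stack = np.mean(dithered, axis=0)

print(static_stack[10, 10] - 100)   # ~50: the artefact survives untouched
print(dithered_stack.max() - 100)   # a few counts at most: the artefact is diluted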

Your dataset will also improve by choosing a different (outlier) rejection algorithm when stacking. At the very least, it will get rid of the satellite trail. Using a stacker that allows you to turn white balancing off (e.g. the latest versions of DSS) may also give you a slight boost in signal fidelity.

The image you posted is not too bad considering the dataset. It is, however, oversampled, so you stand to gain by binning it. This yields a better signal at a lower resolution, but without loss of detail. The better signal, in turn, can be used to process the image better (for example with deconvolution).
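To make the trade-off concrete, a quick sketch of plain 2x2 software binning (illustrative only; StarTools' Bin module is its own implementation):

import numpy as np

def bin2x2(img):
    # Average every 2x2 block of pixels; resolution halves, noise drops by sqrt(4).
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
frame = 100.0 + rng.normal(0, 10, (1000, 1000))   # flat signal with noise of std 10

print(round(frame.std(), 1))          # ~10.0
print(round(bin2x2(frame).std(), 1))  # ~5.0: half the noise at half the resolution

When the data is oversampled, those finest pixels carry no real detail to begin with, so the halved resolution costs you nothing real.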

This sort of advice (and even examples that show issues similar to yours) can be found in the section I linked to earlier, so definitely have a read.

Finally, see if you can improve your tracking - your stars are elongated, and any detail is similarly smeared out in the vertical direction, making your image a lot softer and blurrier than it needs to be. Deconvolution can also restore some of that detail, but only if the signal is clean and good.
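For the curious, the general idea behind deconvolution can be sketched with a textbook Richardson-Lucy routine and a Moffat star profile (a rough sketch only - StarTools' Decon module is its own, more involved implementation, and all the numbers below are made up):

import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def moffat_psf(size=15, alpha=3.0, beta=4.765, stretch=1.0):
    # Moffat star profile; stretch > 1 elongates it vertically, mimicking tracking drift.
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    r2 = x ** 2 + (y / stretch) ** 2
    psf = (1.0 + r2 / alpha ** 2) ** (-beta)
    return psf / psf.sum()

rng = np.random.default_rng(2)
truth = np.zeros((128, 128))
truth[64, 64] = 1.0                                  # an ideal point source (a star)

psf = moffat_psf(stretch=2.0)                        # elongated PSF: vertical smearing
blurred = fftconvolve(truth, psf, mode="same")       # what the camera records
blurred = np.clip(blurred + rng.normal(0, 1e-4, blurred.shape), 0, None)

# Richardson-Lucy iteratively undoes the blur, given an estimate of the PSF.
# The cleaner the signal and the better the PSF estimate, the further you can
# push it before noise is amplified instead of detail.
restored = richardson_lucy(blurred, psf, 20)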

In general, you will find that you need to spend a lot less time in post-processing or working around easily avoidable issues, if those issues don't exist in the first place!

I tend to avoid giving workflows that focus mostly on working around dataset issues; you don't really learn anything from them that you can use going forward. So below is a workflow that is as simple/standard as possible given the limits of the dataset:

--- Auto Develop
To see what we got. Some issues noted above.
--- Crop
Better framing of the pair.
Parameter [X1] set to [775 pixels]
Parameter [Y1] set to [251 pixels]
Parameter [X2] set to [2010 pixels (-1086)]
Parameter [Y2] set to [1179 pixels (-901)]
--- Bin
To convert oversampling into better signal.
Parameter [Scale] set to [(scale/noise reduction 50.00%)/(400.00%)/(+2.00 bits)]
Image size is 617 x 464
--- Wipe
Parameter [Dark Anomaly Filter] set to [5 pixels]
--- Auto Develop
RoI over slice of the pair.
Parameter [Ignore Fine Detail <] set to [6.2 pixels]
Parameter [RoI X1] set to [192 pixels]
Parameter [RoI Y1] set to [222 pixels]
Parameter [RoI X2] set to [397 pixels (-220)]
Parameter [RoI Y2] set to [291 pixels (-173)]
--- HDR
Default (Reveal All)
--- Deconvolution
Auto-generate mask.
Attempt to counter some of the smearing out of detail due to bad tracking:
Selected (click) a star that is not masked out as the secondary PSF. Now Decon has an idea of how a point light (a star) is smeared out.
Parameter [Secondary PSF] set to [Dynamic Star Sample Small x Primary]
Parameter [Primary PSF] set to [Moffat Beta=4.765 (Trujillo)]
Parameter [Tracking Propagation] set to [During Regularization (Quality)]
Parameter [Primary Radius] set to [4.5 pixels]
Parameter [Iterations] set to [21]
--- Color
Default color balance is too green; this is a common problem when a stacker does not align channels properly (either due to the stacker not aligning them effectively or some sort of chromatic aberration in the optical train). Indeed, you can see stars that have red edges at the top, but blue edges at the bottom.
Flip the Color module into MaxRGB mode, so you can see the aberrant green dominance (green dominance is rare - see here for more information about how to color balance in MaxRGB mode).
Parameter [Green Bias Reduce] set to [1.47]
Parameter [Cap Green] set to [100 %]
--- Wavelet De-Noise (turn Tracking off, choose grain removal)
Parameter [Grain Size] set to [8.6 pixels]
--- Wavelet De-Noise
Default.
Stack_32bits_33frames_1980s.jpg
Hope this helps!
Ivo Jager
StarTools creator and astronomy enthusiast
Bobby_1970
Posts: 9
Joined: Fri May 15, 2020 11:44 am

Re: What am i doing wrong?? cant get decent image

Post by Bobby_1970 »

admin wrote:Many thanks for uploading.
[...]
Hope this helps!

Hi Ivo

I have to say a huge thanks to you, for having a look and also taking the time to explain a few things regarding my data.

Currently my mount is a Skywatcher AZ-GTi in EQ mode. I realise this mode isn't officially supported by Skywatcher, and they say it isn't really intended for astrophotography. However, I have noted others using this mount to great effect, so I thought I would give it a go.

My polar alignment (using SharpCap) was less than 2 arcminutes, which puts it in the "GOOD" zone according to the SharpCap polar alignment routine.

Recently, and for this image, I have just been using the live stacking feature within SharpCap. I suspect I will almost certainly be better off stacking the individual frames in DSS rather than live stacking; this is something I will try in the future for sure.

One thing I will say is this: when I capture a single 60s frame in SharpCap, the stars within that frame look fine, with no elongated shapes at all. I initially assumed this meant my tracking could not be too bad? Could it be that the live stacking is causing them to be elongated? Perhaps DSS would do a better job?

I also do not dither at all at the moment, as I don't have a guide scope/camera. I assume this would help improve my tracking if I go down this route?

Many thanks for all the input. I bought a license for StarTools a couple of years back and have struggled with it ever since lol. I certainly feel more informed now that I have had your input, and tbh the workflow you shared has made an amazing difference even with my dodgy data lol. I would be more than happy if all of my images were as good as the one you produced from my data :-)

Many thanks again

Bobby
happy-kat
Posts: 372
Joined: Sun Feb 01, 2015 11:31 am

Re: What am i doing wrong?? cant get decent image

Post by happy-kat »

Kappa-Sigma Clipping in DSS is good on light frames when you have more than 30-odd, for removing stray artefacts and rounding stars. DSS 4.2.3 is the latest version.
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: What am i doing wrong?? cant get decent image

Post by admin »

Bobby_1970 wrote: Recently, and for this image, I have just been using the live stacking feature within SharpCap. I suspect I will almost certainly be better off stacking the individual frames in DSS rather than live stacking; this is something I will try in the future for sure.
It is much better to just have SharpCap store the frames and do the stacking later. That's because you can use better stacking methods when the full stack is known/available, rather than arriving in real time. As happy-kat suggests, Kappa-Sigma Clipping as found in DSS typically makes a significant difference.
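In case it helps to see what kappa-sigma clipping actually does per pixel, here is a toy NumPy version of the idea (not DSS's actual code):

import numpy as np

def kappa_sigma_stack(frames, kappa=2.5, iterations=3):
    # frames: (n_frames, H, W) array of registered light frames.
    # Per pixel, repeatedly reject values more than kappa standard deviations
    # from the mean, then average what is left. Satellite trails, plane lights
    # and cosmic-ray hits only appear in a few frames, so they get rejected.
    data = np.ma.masked_invalid(np.asarray(frames, dtype=float))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0).filled(np.nan)

With 30-odd registered frames, as happy-kat says, the per-pixel statistics are solid enough for this kind of rejection to work well.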
One thing I will say is this: when I capture a single 60s frame in SharpCap, the stars within that frame look fine, with no elongated shapes at all. I initially assumed this meant my tracking could not be too bad? Could it be that the live stacking is causing them to be elongated? Perhaps DSS would do a better job?
It's possible. Stacking in real time (as alluded to above), though pretty neat, is fundamentally sub-optimal.
I also do not dither at all at the moment, as I don't have a guide scope/camera. I assume this would help improve my tracking if I go down this route?
You actually don't really need a guide scope/camera to do dithering. The idea is just to move the FOV slightly between frames (a spiralling-out pattern is recommended) so that different (not co-located!) photosites get a chance to measure the signal. The errors inherent to each photosite then get averaged out by the stacking algorithm, yielding a very clean dataset without correlated noise or streaks. Craig Stark's PHD is a popular solution.
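To be concrete about the spiralling-out pattern: dithering-capable capture/guiding software generates the offsets for you, but conceptually they look something like this (purely illustrative):

import math

def spiral_dither_offsets(n_frames, step_px=3.0):
    # One (dx, dy) pixel offset per frame, spiralling outwards so that no two
    # frames put the same photosite on the same patch of sky.
    offsets = []
    for i in range(n_frames):
        angle = i * 2.39996            # golden-angle spacing keeps the points well spread
        radius = step_px * math.sqrt(i)
        offsets.append((round(radius * math.cos(angle), 1),
                        round(radius * math.sin(angle), 1)))
    return offsets

print(spiral_dither_offsets(6))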
Many thanks for all the input. I bought a license for StarTools a couple of years back and have struggled with it ever since lol. I certainly feel more informed now that I have had your input, and tbh the workflow you shared has made an amazing difference even with my dodgy data lol. I would be more than happy if all of my images were as good as the one you produced from my data :-)
A good dataset is key. But as you can see, the workflow is very, very simple with many defaults. This lets you build on that to the benefit of your own artistic vision.

Clear skies!
Ivo Jager
StarTools creator and astronomy enthusiast
Bobby_1970
Posts: 9
Joined: Fri May 15, 2020 11:44 am

Re: What am i doing wrong?? cant get decent image

Post by Bobby_1970 »

Just wanted to say another big thanks Ivo.

Followed your advice and also stacked in DSS. Using your workflow as a guide I managed to get this, which, despite still not dithering, I am sort of happy with; it's certainly better than my other efforts, I think.
M106 190520 v2 JPG.jpg
Thanks again
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: What am i doing wrong?? cant get decent image

Post by admin »

"Now we're cooking with gas!" (as my South African friend would say) :thumbsup:

That shows really nice detail. How did you do the colouring, though? Typically spiral galaxies have a yellowish core, but a bluer outer rim. Purple/pink HII areas are dotted around in the arms. Here your galaxy is mostly yellowish white, with stars either white or blue...
Ivo Jager
StarTools creator and astronomy enthusiast
Bobby_1970
Posts: 9
Joined: Fri May 15, 2020 11:44 am

Re: What am i doing wrong?? cant get decent image

Post by Bobby_1970 »

admin wrote:"Now we're cooking with gas!" (as my South African friend would say) :thumbsup:

That shows really nice detail. How did you do the colouring, though? Typically spiral galaxies have a yellowish core, but a bluer outer rim. Purple/pink HII areas are dotted around in the arms. Here your galaxy is mostly yellowish white, with stars either white or blue...
Hi Ivo

I'm not sure at all why the colour isn't looking as you describe.

I use DSS to stack and I do not have "align RGB" checked anywhere. No matter which mode I use in the Colour module, it doesn't seem to look right at all. It is usually very green initially, so I use the MaxRGB option and use the bias sliders to get rid of most of the green. I also use Cap Green.
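For anyone reading along, the MaxRGB view boils down to a per-pixel "which channel wins" display, and the bias slider just scales a channel down. A rough NumPy sketch of the idea (not StarTools' actual implementation):

import numpy as np

def max_rgb_view(img):
    # img: (H, W, 3) float RGB array. Keep only the dominant channel per pixel.
    # In a well-balanced deep-sky image green should rarely win, so large green
    # patches in this view point at a green bias that needs reducing.
    winners = img.argmax(axis=2)                       # 0 = R, 1 = G, 2 = B
    view = np.zeros_like(img)
    values = np.take_along_axis(img, winners[..., None], axis=2)
    np.put_along_axis(view, winners[..., None], values, axis=2)
    return view

def reduce_green(img, green_bias_reduce=1.47):
    # img: (H, W, 3) float RGB array.
    # Loosely mimics turning down the green bias: scale the green channel only.
    out = img.copy()
    out[..., 1] /= green_bias_reduce
    return out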

Link to FITS of the above image.

https://drive.google.com/file/d/1w_TldH ... sp=sharing




One thing I noticed: in SharpCap, the Bayer matrix would appear to need to be set to RGGB; live stacks in SharpCap appear to be the correct colour to my eyes.

However, I loaded one of the individual FITS files into a viewer (AvisFV) and it reports the Bayer matrix as GRBG?

So what should I set in DSS?
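For background, the pattern name only tells the software which colour filter sits over each photosite, starting from the top-left 2x2 block; assume the wrong pattern and colour gets pulled from the wrong photosites, which shows up as odd casts and fringed stars. A tiny sketch of what the setting changes (general idea only, not specific to DSS or SharpCap):

import numpy as np

# RGGB:  R G        GRBG:  G R
#        G B               B G
raw = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for a raw Bayer mosaic

def extract(raw, pattern):
    # Pull out the photosites that the assumed pattern labels R, G and B.
    offsets = {"RGGB": {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)},
               "GRBG": {"G1": (0, 0), "R": (0, 1), "B": (1, 0), "G2": (1, 1)}}
    return {c: raw[dy::2, dx::2] for c, (dy, dx) in offsets[pattern].items()}

print(extract(raw, "RGGB")["R"])   # different photosites end up labelled "red"...
print(extract(raw, "GRBG")["R"])   # ...depending on which pattern is assumed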

I changed the setting in DSS to GRGB, and also ticked "align RGB in final image".

This was the resulting image after my attempt with StarTools:
M106 DSS GBRG RGB align JPG.jpg


I also had another go at the RGGB version I initially posted, with less binning:
M106 v2 DSS RGGB JPG.jpg