
Re: Eliminating white balancing when using DSS

Posted: Sat Nov 12, 2016 3:27 am
by admin
ecuador wrote:Interesting thread. I was under the impression that when using RAW files DSS does calibration before debayering, which, as I understand the process, is the right way to go and should make a difference. So, using DCRAW first would debayer the images, so they would not be whitebalanced, but their calibration would not be as good. Am I correct in that?
It is true that calibration is performed before debayering, and that this can in theory have an effect on the quality of the calibration of the light frames.
In order to quantify this effect (and to weigh up the pros and cons of this DSS-specific workaround), we'd have to look at how the debayering is performed (there are various kinds) and whether any data/measurements that were present in the calibration frames (such as hot pixels and gradients) can reasonably be assumed to survive the debayering stage.

When it comes to dark and bias frames, there may indeed be theoretical consequences; instead of, for example, subtracting a warm pixel's bias as a single pixel, we're now subtracting its debayered equivalent. What that equivalent looks like depends on the chosen debayering algorithm. For DSS's bilinear interpolation algorithm, it will now be "smeared out" over the neighbouring pixels that partially used it for interpolation.

However, in the case of the light frame, the warm pixel's unwanted signal is "smeared out" in a similar fashion and can therefore still be subtracted.
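To illustrate why the subtraction still works in the bilinear case: bilinear interpolation is a linear operation, so debayering and dark subtraction commute. Below is a minimal toy sketch of that argument (Python/numpy; the kernel, frame sizes and pixel values are made up for illustration, and this is of course not DSS's actual code):

Code:

# Toy illustration: with a linear (bilinear-style) debayer, subtracting a
# debayered dark from a debayered light removes a "smeared" hot pixel just as
# well as subtracting before debayering.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

# Toy single-channel planes standing in for one colour of a Bayer mosaic.
dark = rng.normal(100, 2, (8, 8))
dark[4, 4] += 500.0                            # a hot pixel in the dark...
light = rng.normal(800, 10, (8, 8)) + dark     # ...plus sky/object signal in the light

# Bilinear interpolation of missing Bayer samples behaves like a convolution.
bilinear = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])

def debayer(plane):
    return convolve(plane, bilinear, mode='mirror')

before = debayer(light - dark)                 # calibrate, then debayer
after = debayer(light) - debayer(dark)         # debayer, then calibrate

print(np.allclose(before, after))              # True: the hot pixel cancels either way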

This, however, becomes a lot more complicated if we start using an algorithm like AHD or VNG, which tries to "intelligently" infer detail by looking at the gradients of neighbouring pixels. Obviously these gradients are different between the dark/bias and light frames, as the light frame has actual celestial detail mixed in. This is where things would start going wrong. However, given that DSS only supports AHD as an alternative to bilinear (and Luc does not recommend its use), this scenario does not come into play; bilinear interpolation does not take gradients into account.

In the case of flat frames, we're dealing with low frequency (i.e. large scale) gradients or out-of-focus dust specks and donuts. Given that debayering artifacts only occur on very small scales, the difference is negligible. Case in point: often you can even significantly blur a master flat without consequences.
Also, does the problem with DSS white-balancing only have to do with the colors of our image when processing in StarTools? I.e. if I can get the colors I want out of an image even though DSS has whitebalanced it, would there be a benefit from trying to get non-whitebalanced data?
Now for the benefits of not color balancing (yet):
  • 1. Luminance noise levels are still "virgin" and not impacted by the (arbitrary) scaling of red, green and blue balancing.
  • 2. Given that gradient subtraction still needs to take place, more detail can potentially be extracted from the highlights by not yet scaling some channels up beyond the clipping point.

Explanation for 1. Before color balancing, StarTools can know that noise levels are 1:1:1 in the red, green and blue channels. If you give StarTools color balanced data (for example 2.4:1:1.4), all bets are off, since the color balancing will have multiplied the signal AND the noise. Given that StarTools processes luminance and color data separately where applicable to keep noise propagation down (both in real terms and psychovisually) as much as possible, you are giving yourself an unnecessary disadvantage by providing StarTools with color balanced data; there is no way StarTools can know how to create a 1:1:1 weighting for luminance purposes from the data if it has been color balanced. It will assign too much weight to (in most cases) the blue and red channels, adopting their noise in the process. It's actually worse than that (given that there are 2 samples for the green channel, making it much more accurate than red and blue, we're missing out on even more!), but that's the gist of it.
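As a rough numerical illustration of point 1 (a sketch with assumed numbers, not StarTools' internals): if the three channels start out with equal, unit noise, an equal-weight luminance built from white-balanced channels (here 2.4:1:1.4, as in the example above) carries noticeably more noise than one built from the unbalanced data.

Code:

# Sketch: white balance multiplies signal AND noise, so the balanced channels
# no longer contribute 1:1:1 noise to a luminance combination.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
noise = {c: rng.normal(0.0, 1.0, n) for c in "rgb"}   # 1:1:1 noise before balancing

gains = {"r": 2.4, "g": 1.0, "b": 1.4}                # example white balance multipliers

lum_unbalanced = sum(noise.values()) / 3
lum_balanced = sum(gains[c] * noise[c] for c in "rgb") / 3

print(lum_unbalanced.std())   # ~0.58
print(lum_balanced.std())     # ~0.98 -- the red and blue noise is now over-weighted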

Explanation for 2. As outlined above, color balancing scales up some channels, (necessarily) clipping their data. However, in almost all cases (sky glow, light pollution, etc.) we need to subtract a bias from the data. We can subtract this bias from non-clipped (not color balanced) data, but not from the parts of the data that were clipped by the scaling. Of course, subtracting the bias from parts of the data that are clipped in both cases will yield the same (undefined) result.
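And a minimal sketch of point 2, with made-up values: once white balancing has pushed a highlight past full scale, subtracting the (scaled) sky-glow bias can no longer recover the true value there, whereas subtracting before scaling works fine.

Code:

# Sketch with assumed numbers: clipping caused by channel scaling destroys
# highlight detail that bias subtraction could otherwise have preserved.
full_scale = 65535.0
sky_bias = 5000.0
pixel = 30000.0                        # a bright but unclipped highlight sample
gain = 2.4                             # example white balance multiplier

# Subtract the sky-glow bias first, then balance: the highlight survives.
unclipped = (pixel - sky_bias) * gain

# Balance first (clipping at full scale), then subtract the scaled bias.
clipped = min(pixel * gain, full_scale) - sky_bias * gain

print(unclipped, clipped)              # 60000.0 vs 53535.0: the clipped path loses detail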

Obviously, not having DSS color balance the data would be highly preferable, but for some weird reason that's not an option. :(
This whole convoluted dcraw workflow is the next-best thing.

Re: Eliminating white balancing when using DSS

Posted: Thu Nov 17, 2016 12:36 pm
by ecuador
I did the test. I had DSS process the RAW files and used the result as "already whitebalanced" in ST, then stacked the exact same set with the same settings from the DCRAW-converted TIFFs and processed that as "not whitebalanced" in ST. There is no way I can get as good a result from the TIFF set as I get from the RAW. I don't know whether I am doing something wrong, since I'm used to processing stacks from RAW and my workflow may not be that great for the TIFF, but if anyone feels adventurous the stacks are here: https://drive.google.com/open?id=0B1c3j ... mR6STVyd1E

Interesting fact I found out: processing the same set (from RAW, identical settings) with DSS 3.3.4 and 3.3.6 does not give identical results. In fact, the 3.3.6 result smooths out more in ST's final denoise when applying the same settings, which might be a good thing? In any case, in that archive you can find the RAW stack from 3.3.4, the RAW stack from 3.3.6 and the TIFF stack (also from 3.3.6) in case anyone wants to play around.

Re: Eliminating white balancing when using DSS

Posted: Thu Nov 17, 2016 3:08 pm
by Guy
Ivo, thanks for the detailed explanation.

Ecuador, thanks for posting the data. I'm interested in seeing the differences.
Could you share the dcraw command line you used to convert from RAW to TIFFs?

Also, the latest version of DSS I can find is 3.3.4 (which, as I understand it, is just 3.3.2 with updated DSLR support). Do you know where I can get 3.3.6?

Guy

Re: Eliminating white balancing when using DSS

Posted: Thu Nov 17, 2016 4:39 pm
by ecuador
Guy wrote:Ivo, thanks for the detailed explanation.

Ecuador, thanks for posting the data. I'm interested in seeing the differences.
Could you share the dcraw command line you used to convert from RAW to TIFFs?

Also, the latest version of DSS I can find is 3.3.4 (which, as I understand it, is just 3.3.2 with updated DSLR support). Do you know where I can get 3.3.6?

Guy
The DCRAW command was dcraw -v -r 1 1 1 1 -4 -T -o 0 -q 0 -t 0 *.CR2, which works fine for my Canon 600D. I also tried dcraw -v -r 1 1 1 1 -4 -T -S 32767 -k 0 -o 0 -q 0 -t 0 *.CR2, but it doesn't seem to be any better. BTW, it is the Heart Nebula with an Equinox 80, TeleVue TRF-2008, full-spectrum Canon 600D and Optolong CLS-CCD filter: 17x5min lights, 17 flats, 46 darks, 33 bias frames. And I must have moved the scope before or after the flats, so one dust bunny is left which you have to mask out in Wipe ;)
DSS 3.3.6 was posted briefly on the website and then pulled, which is why I assumed it was buggy and kept using 3.3.4, until I thought of testing it out of curiosity. I could give it to you if you can't find it, although I'd ask Ivo first whether it's OK to put up a link here. I mean, technically it was released freely for a while, but then again PixInsight LE was also free and now they don't allow you to distribute it...

Re: Eliminating white balancing when using DSS

Posted: Fri Nov 18, 2016 1:04 am
by admin
Interesting indeed. The AutosaveTIFF stack indeed shows less faint detail; however, it shows somewhat improved stellar profiles.

What stacking algorithm was used for this? It is as though the AutosaveTIFF stack had more samples rejected than the AutosaveRAW stack... :think:

Re: Eliminating white balancing when using DSS

Posted: Fri Nov 18, 2016 1:20 am
by ecuador
admin wrote:Interesting indeed. The AutosaveTIFF stack indeed shows less faint detail; however, it shows somewhat improved stellar profiles.

What stacking algorithm was used for this? It is as though the AutosaveTIFF stack had more samples rejected than the AutosaveRAW stack... :think:
Kappa-sigma was used for all. Do you make anything of the 3.3.4 vs 3.3.6 difference?

Re: Eliminating white balancing when using DSS

Posted: Fri Nov 18, 2016 1:29 am
by ecuador
I just did the same with Regim https://drive.google.com/open?id=0B1c3j ... U0tZWdGMzQ but I haven't checked it out yet... Going to bed...

Re: Eliminating white balancing when using DSS

Posted: Fri Nov 18, 2016 3:27 am
by admin
ecuador wrote:Kappa-sigma was used for all.
Ah - there's your problem :)

Code:

Kappa-Sigma Clipping
This method is used to reject deviant pixels iteratively.
Two parameters are used: the number of iterations and the standard deviation multiplier used (Kappa).
For each iteration, the mean and standard deviation (Sigma) of the pixels in the stack are computed.
Each pixel whose value is further from the mean than Kappa * Sigma is rejected.
The mean of the remaining pixels in the stack is computed for each pixel.
The standard deviation will be different between the two stacks as well as between the three channels.

Probably a better comparison would be to stack both sets with median stacking or plain averaging.
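For reference, here is a minimal numpy sketch of the kappa-sigma rejection described above (one channel, simplified; not DSS's actual implementation), which makes it easy to see why the rejection depends on each pixel stack's standard deviation:

Code:

# Sketch of iterative kappa-sigma rejection followed by averaging the survivors.
import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=5):
    """frames: array of shape (n_frames, height, width)."""
    stack = np.asarray(frames, dtype=float)
    mask = np.ones(stack.shape, dtype=bool)          # True = sample still accepted
    for _ in range(iterations):
        masked = np.where(mask, stack, np.nan)
        mean = np.nanmean(masked, axis=0)
        sigma = np.nanstd(masked, axis=0)
        mask &= np.abs(stack - mean) <= kappa * sigma
    return np.nanmean(np.where(mask, stack, np.nan), axis=0)

# Example: an outlier sample (e.g. a satellite trail) is rejected; a plain mean would keep it.
frames = np.full((10, 4, 4), 100.0) + np.random.default_rng(2).normal(0, 1, (10, 4, 4))
frames[3, 2, 2] = 5000.0
print(kappa_sigma_stack(frames)[2, 2])               # close to 100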

Re: Eliminating white balancing when using DSS

Posted: Fri Nov 18, 2016 5:04 pm
by ecuador
My bad: DSS does not work with TIFFs off a network drive (one of those weird DSS glitches; it creates zero-byte master calibration files), so I had to use another machine and replicated all the settings... except the "stack 100%" setting, so it used a smaller sample. I am redoing it and will upload it, as well as "median" versions, although the kappa-sigma one should be a good comparison, as in practice I use it more than median.
Sorry about that!

Re: Eliminating white balancing when using DSS

Posted: Fri Nov 18, 2016 8:31 pm
by ecuador
Aaannnd, the correct TIFF-processed image with κ-σ is here: https://drive.google.com/open?id=0B1c3j ... FkxX3E5SjQ. I also re-did the RAW and ticked "Set black point to 0" in the RAW settings, which has given me trouble in the past, but here it seems to be, if anything, a little beneficial: https://drive.google.com/open?id=0B1c3j ... 0hrV2JCWjQ, so these two make a good comparison.

While the median mode TIFF and RAW are here: https://drive.google.com/open?id=0B1c3j ... 1hqQklIUDA

I still can't see an advantage in going the longer TIFF route to be able to use the "not whitebalanced" option in ST; in fact, I can always get more faint nebulosity to come out of the RAW-based image. It may depend a lot on workflow and personal preference, so I'd be interested to hear other people's conclusions, but to test side by side I tried workflows going through mostly ST tool defaults, and the RAW seems to have the advantage for minimum effort. Perhaps I'll try to do a comparison with a galaxy as well.

Oh, and for the Regim stack I added above, I followed Guy's instructions from another thread (I did not use DCRAW), but the result seems very weird, so something went wrong there...