I'm definitely not proposing we should be doing 16-bit stacking; the assumption, however, is that the source material is *multiple subs* of 10, 12, 14 or 16-bit quantized data, where values fall within the range of 0 to 1023, 4095, 16383 or 65535 respectively. Stacking N subs will - in essence - add finer precision in steps of 1/N (e.g. 0.5, 0.25, 0.125, 0.0625 for 2, 4, 8, 16 subs, etc.), which is roughly log2(N) extra bits.
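To make that concrete, here is a quick numpy sketch with made-up numbers (not anyone's actual data) of how averaging N integer subs yields values quantized in steps of 1/N ADU:

```python
# Hypothetical sketch: averaging N integer subs produces values in steps of
# 1/N ADU, i.e. roughly log2(N) extra bits of precision over a single sub.
import numpy as np

rng = np.random.default_rng(0)
n_subs = 16
# 16 made-up 16-bit readings of the same pixel, differing only by noise.
subs = rng.integers(30000, 30010, size=n_subs).astype(np.float64)

mean = subs.mean()
step = 1.0 / n_subs            # smallest increment the mean can change by
extra_bits = np.log2(n_subs)   # 4 extra bits for 16 subs

print(f"mean = {mean:.4f} ADU, step = {step} ADU, ~{extra_bits:.0f} extra bits")
```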
While very useful for intermediary calculations and representation, floating point operations and encoding more readily allow values outside the expected range of (0..unity), letting software (or humans) erroneously interpret out-of-bounds data or encode potentially destabilizing rounding errors. In the real world, we only captured integers, and we know there is a range beyond which values should not occur (or are not trustworthy).
For example, I believe Ron's dataset encoded pixel values past 1.0, which, if 1.0 is taken as unity, should not really occur. In an attempt to establish what "pure white" (unity) is, StarTools assumes the highest value in the dataset is unity (unless it encounters a FITS keyword that says otherwise) - it goes to show how this introduces unnecessary ambiguity. This ambiguity can have real consequences.
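To illustrate with purely hypothetical values (not Ron's actual data), the same stacked pixels land in different places depending on which value is treated as unity:

```python
# Hypothetical values only: the choice of "unity" changes how the data is read.
import numpy as np

stack = np.array([1200.0, 30000.0, 65535.0, 66100.0])  # suppose stacking overshot 65535

by_dataset_max = stack / stack.max()   # highest value in the dataset taken as unity
by_bit_depth   = stack / 65535.0       # the camera's 16-bit saturation point taken as unity

print(by_dataset_max)  # everything <= 1.0 by construction
print(by_bit_depth)    # last pixel > 1.0 - "out of bounds" data
```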
Mike's graph is super helpful in demonstrating the issue;
The 2600MM 20s L sub is clearly over-exposing and correctly encodes the over-exposed pixels as 65535. Those pixels are no longer reliable information - the sensor well was full. Yet somehow they have been given values that are no longer 65535 in the final PI MasterLight. This is because the stacking algorithm has decided to - in effect - average 65535 with some subs where the same pixels read something lower than 65535. The reasons why some pixels in some subs may have read lower than 65535 are numerous, but even a slight misalignment of a sub will do it. Nevertheless, the end result is a "spike" that does not really exist.
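A quick numeric sketch of that (with assumed values): four subs where the pixel is saturated, plus one misaligned sub where a darker neighbouring pixel landed on the same coordinate:

```python
# Assumed values for illustration only.
import numpy as np

saturated_subs = np.array([65535, 65535, 65535, 65535], dtype=np.float64)
misaligned_sub = np.array([52000], dtype=np.float64)  # darker neighbour shifted onto this pixel

stacked = np.concatenate([saturated_subs, misaligned_sub]).mean()
print(stacked)  # 62828.0 - looks like a real measurement, but the well was actually full
```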
OptiDev's algorithm ferrets out the spike and sees the enormous difference between where the spike pixels start and where the real stellar profile ends. Because it detects such an enormous difference, it allocates more dynamic range just for that spike to make the difference visible.
The way OptiDev works is - roughly - as follows (there is a small code sketch after the list):
- For each pixel, establish a measure of local entropy (how "busy" the local area is). This is our proxy for "detail"; it gives us a number. If not much happens (for example in the gradual stellar profile or in an over-exposing all-65535 core), that number is low. If a lot happens, for example in the transition from stellar profile to artificial spike, that number is high.
- Divide up the full dynamic range into brightness "tranches" (4096 before, 65536 now). For each tranche, tally up all the "busyness" numbers for pixels that fall into that tranche.
- Expand (or contract) each tranche's start and end points (in terms of the dynamic range it occupies) from being evenly distributed, to being non-evenly distributed, based on how "busy" the tranche is. Busy tranches get more dynamic range, tranquil tranches get less dynamic range.
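Here is a rough Python sketch of those three steps. To be clear, this is not StarTools' actual implementation; the local-entropy proxy (a simple local standard deviation here), the 7x7 window, the tranche count and the weighting are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def optidev_like_curve(img, n_tranches=4096, floor=1e-6):
    """Map brightness tranches to output widths: busy tranches get more dynamic range."""
    # 1. Proxy for local entropy/"busyness": local standard deviation in a 7x7 window.
    local_mean    = uniform_filter(img, size=7)
    local_sq_mean = uniform_filter(img * img, size=7)
    busyness      = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))

    # 2. Tally the busyness of all pixels falling into each brightness tranche.
    idx   = np.clip((img * (n_tranches - 1)).astype(int), 0, n_tranches - 1)
    tally = np.bincount(idx.ravel(), weights=busyness.ravel(), minlength=n_tranches)

    # 3. Re-divide the output range in proportion to each tranche's tally
    #    (the floor keeps empty tranches from collapsing to zero width).
    widths = (tally + floor) / (tally + floor).sum()
    return np.cumsum(widths)   # monotonic transfer curve: tranche index -> output level

# Usage: push a normalized (0..1) stack through the curve.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
curve = optidev_like_curve(img)
stretched = curve[np.clip((img * 4095).astype(int), 0, 4095)]
```

In the spike case, the transition from stellar profile to spike tallies a lot of "busyness" into a few bright tranches, so those tranches get stretched out - which is exactly why the artificial spike ends up so visible.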
Does that help/make sense?