Donut stars

Questions and answers about processing in StarTools and how to accomplish certain tasks.
dx_ron
Posts: 253
Joined: Fri Apr 16, 2021 3:55 pm

Re: Donut stars

Post by dx_ron »

What Mike said - thank you, Ivo!

Haven't had time to do much beyond downloading the new beta yet, but a quick first look leaves me with much the same reaction as Mike. Different, but it definitely improves (cures, even?) the dark-ring effect.

As for 16/32, it's a bit awkward in Siril to keep swapping back and forth (really, to remember to go a couple of menu levels deep to make the switch) to calibrate and register in 16-bit then stack in 32-bit, so I just had it default to 32-bit. I will have a go at changing my habits.


Mike: is it just me reading too much into it, or is it weird that PI returns 55k, 63k, 62k, 57k for four consecutive pixels that each have the same raw reading (65k)?
decay
Posts: 443
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Donut stars

Post by decay »

Mike in Rancho wrote: Sat Mar 23, 2024 8:25 pm But really this pops up across a large cohort, with many different optics, sensors, and stackers.
I would assume the same. But I cannot prove it, of course. And Ivo's test sets don't seem to be affected that much, as he wrote. I wonder if it would be helpful to reread some older threads to identify some candidates. :think:
Mike in Rancho wrote: Sat Mar 23, 2024 8:25 pm and then how to achieve it. Like a donut or PSF-protection slider.
Yeah, I thought about the same. Maybe that would be better than something built-in, which might have downsides for unaffected datasets.
Mike in Rancho wrote: Sat Mar 23, 2024 8:25 pm Want to include more praise for the Profile Viewer v3.
Thanks. Glad if I can help :mrgreen:
admin wrote: Sun Mar 24, 2024 12:39 am I was also able to "hide" it by increasing Optidev's curve resolution (the amount of points it calculates to interpolate the constructed curve between) 16-fold (from 4096/12-bit points to 65536/16-bit points), which should not be necessary unless something has stuffed lots of data/"detail" in a tiny (1/4096th) slice of the - in this case - upper dynamic range (giving it very high importance, and thus making it "deserving" of more dynamic range).
Thanks for chiming in and for taking some time, Ivo. And thanks for the detailed explanation. But I'm sorry, I don't think I understand :( At least not everything. My assumption was that this shoulder results from having been assigned too little dynamic range? So this happens because there's another (higher) range which 'eats up' (has been assigned) most of the available dynamic range? So nothing is left over for the range of the shoulder? :confusion-shrug:

The curve has nearly no slope in the range of the plateau? Is this right?

I would have thought to mitigate this by decreasing the resolution of the curve? :confusion-shrug:
admin wrote: Sun Mar 24, 2024 12:39 am Will do some more tests, but I may be able to roll out 16-bit OptiDev ADC support without affecting "regular" operation much.
Are you sure this is because of 16-bit ADCs? :think: My datasets are affected as well, and I'm using a Canon DSLR (EOS 2000D) with a 14-bit ADC.

Stack
https://c.web.de/@334960167135216273/WN ... BTbxf2gZrA

Single frame
https://c.web.de/@334960167135216273/b1 ... 7Tv8BfnNLQ
2024-03-24 16_10_18-StarTools.jpg
admin wrote: Sun Mar 24, 2024 12:39 am There is still the question of whether "hiding" issues like this is desirable or not...
admin wrote: Sun Mar 24, 2024 12:39 am I'd still like to figure out what's going on with these particular datasets though...
I too assume it would be better to understand what's happening. And then decide how to deal with it. Let us/me know if we/I can be of any help.

Best regards, Dietmar.
dx_ron
Posts: 253
Joined: Fri Apr 16, 2021 3:55 pm

Re: Donut stars

Post by dx_ron »

decay wrote: Sun Mar 24, 2024 3:10 pm
admin wrote: Sun Mar 24, 2024 12:39 am I was also able to "hide" it by increasing Optidev's curve resolution (the amount of points it calculates to interpolate the constructed curve between) 16-fold (from 4096/12-bit points to 65536/16-bit points), which should not be necessary unless something has stuffed lots of data/"detail" in a tiny (1/4096th) slice of the - in this case - upper dynamic range (giving it very high importance, and thus making it "deserving" of more dynamic range).
Thanks for chiming in and for taking some time, Ivo. And thanks for the detailed explanation. But I'm sorry, I don't think I understand :( At least not everything. My assumption was that this shoulder results from having been assigned too little dynamic range? So this happens because there's another (higher) range which 'eats up' (has been assigned) most of the available dynamic range? So nothing is left over for the range of the shoulder? :confusion-shrug:

The curve has nearly no slope in the range of the plateau? Is this right?

I would have thought to mitigate this by decreasing the resolution of the curve? :confusion-shrug:

Take the notion of decreasing the resolution of the curve to the absurd extreme and give it just one bit. Now every pixel in the 'stretched' image will either be On or Off and (obviously) all the detail in the original data is lost.
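Ron's reductio can be made concrete with a tiny numpy sketch (illustrative values only, not anyone's actual data):

```python
import numpy as np

# A transfer "curve" with only two output levels (1 bit): every pixel of
# a smooth brightness ramp collapses to either Off (0.0) or On (1.0),
# and all intermediate detail is lost.
img = np.linspace(0.0, 1.0, 8)          # a smooth ramp of 8 pixels
one_bit = (img >= 0.5).astype(float)    # the 1-bit "stretch"
print(one_bit)  # [0. 0. 0. 0. 1. 1. 1. 1.]
```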

Is increasing resolution 'necessary' (at the top end; though I can see how it might only make sense to implement it globally)? I think that, in the overall scheme of trying to display the actual target, the answer should be no. But if the bright stars have obvious flaws, they can draw all the attention away from the nebula/galaxy.
decay
Posts: 443
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Donut stars

Post by decay »

dx_ron wrote: Sun Mar 24, 2024 5:01 pm Take the notion of decreasing the resolution of the curve to the absurd extreme and give it just one bit. Now every pixel in the 'stretched' image will either be On or Off and (obviously) all the detail in the original data is lost.
Thanks, Ron. This is probably not my day. :(

Ivo wrote "the amount of points it calculates to interpolate the constructed curve between". My assumption was that we have N points and interpolate the curve between them using a spline function or similar. So two points would let us construct a line, which would be the identity transformation with regard to the stretching curve.

:?: :confusion-shrug:
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

dx_ron wrote: Sun Mar 24, 2024 2:30 pm As for 16/32, it's a bit awkward in Siril to keep swapping back and forth (really, to remember to go a couple of menu levels deep to make the switch) to calibrate and register in 16-bit then stack in 32-bit, so I just had it default to 32-bit. I will have a go at changing my habits.


Mike: is it just me reading too much into it, or is it weird that PI returns 55k, 63k, 62k, 57k for four consecutive pixels that each have the same raw reading (65k)?
I could be wrong, but I thought Ivo was just looking to see the actual raw sub from the camera? I guess I better go re-read that post. I'm not sure going all retro and stacking at 16-bit is warranted? Not that 16-bit is anything to sneeze at, but calibration subs get combined-averaged too, so why not use the greater precision, and then the same when all the calibrated lights are stacked?

And yeah I saw that line too in my raw sub - all middle pixels blown up. Hey, point source! :lol:

I pulled that sub because the PI log said it was the chosen reference, so I assume a reasonably better one. But I am unsure how that all pans out after many subs are calibrated, normalized, star-aligned (which will include some form of sub-pixel splining or interpolation), and averaged. I guess I should look at a few other subs.

I suppose I can also make a graph, I mean it is a spreadsheet. I had been looking for some kind of obvious sensor non-linearity?

decay wrote: Sun Mar 24, 2024 3:10 pm
Thanks for chiming in and for taking some time, Ivo. And thanks for the detailed explanation. But I'm sorry, I don't think I understand :( At least not everything. My assumption was that this shoulder results from having been assigned too little dynamic range? So this happens because there's another (higher) range which 'eats up' (has been assigned) most of the available dynamic range? So nothing is left over for the range of the shoulder? :confusion-shrug:

The curve has nearly no slope in the range of the plateau? Is this right?

I would have thought to mitigate this by decreasing the resolution of the curve? :confusion-shrug:
I certainly don't have a grasp of all this myself. Might have to mull it all over while pushing the mower around later today. :D

I was thinking more in terms of histogram curves and dynamic range allocation. I'm sure you guys have seen me say a half dozen times that I'd like to figure out what OptiDev is doing to curve the highlights. But I for sure hadn't realized the plateau/illusion until now, and hadn't thought of it in terms of quantization. Although I suppose I should have; that's what histograms show, right? What a dummy.

So I must ponder this all, probably right from the get go. If we use gain on the sensor, does that result in bigger ADU buckets on the top end (setting aside non-linearity)? Then what happens with the 32-bit transformation of stacking? Then we start working on that, and at some point, not sure where, ST knocks it back down to 16-bit...but how? Then for display sRGB is some kind of 8-bit per channel color right, and also undoubtedly a gamma curve is thrown at things depending on screen/monitor?

Piece of cake. ;)
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

Here's a quick chart with colored buckets for each pixel across the middle of the star. Interesting graphic representation of stretching, huh?

When I have time I'll try to add more distance on either side to get the full star profile down to the background level. And who knows maybe add some other stretching styles and programs.

Stretching buckets.jpg

EDIT: Note that the "trouble" areas weren't starting out way at the top end, but down pretty low and only up to 20K or so ADU, or maybe even 10K. :think:
dx_ron
Posts: 253
Joined: Fri Apr 16, 2021 3:55 pm

Re: Donut stars

Post by dx_ron »

Link to the raw single sub - https://www.dropbox.com/scl/fi/gdnc1l46 ... 1tc0t&dl=0 although it probably does not matter any more.

I think 16-bit ADCs are now fairly common, with the IMX571 having become so popular, though maybe that's just the people who post a lot on CN.

Dietmar, I started trying to write out how our different analogies could point in the same direction, but I got stuck because I don't think I understand how it works :think: :?:
Last edited by dx_ron on Sun Mar 24, 2024 10:28 pm, edited 1 time in total.
dx_ron
Posts: 253
Joined: Fri Apr 16, 2021 3:55 pm

Re: Donut stars

Post by dx_ron »

Mike in Rancho wrote: Sun Mar 24, 2024 8:40 pm Here's a quick chart with colored buckets for each pixel across the middle of the star. Interesting graphic representation of stretching, huh?
Colors! Now we're talking! Can you do one with 8x10 color glossy photographs, with circles and arrows and a paragraph on the back of each one?
decay
Posts: 443
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Donut stars

Post by decay »

dx_ron wrote: Sun Mar 24, 2024 9:27 pm Dietmar, I started trying to write out how our different analogies could point in the same direction, but I got stuck because I don't think I understand how it works
Hi Ron, that's my problem as well. There's something Ivo wrote that I don't understand (as I said in my reply to Ivo). Maybe Ivo will give us some hints and we can try to figure it out, step by step. Sorry if I caused even more confusion.

(And I still have to work through your discussion with Mike. Tomorrow ;-) )

Best regards!
admin
Site Admin
Posts: 3369
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne
Contact:

Re: Donut stars

Post by admin »

I'm definitely not proposing we should be doing 16-bit stacking, however the assumption is that the source material is *multiple subs* of 10, 12, 14 or 16-bit quantized data, where values fell within the range of 0 to 1023, 4095, 16383 or 65535. Stacking multiple subs will - in essence - refine this quantization; averaging N subs makes increments of 1/N ADU possible (e.g. 0.5, 0.25, 0.125, 0.0625 for 2, 4, 8, 16 subs, etc.).
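As a quick illustration of that precision gain (a hypothetical example with invented numbers, not StarTools code):

```python
import numpy as np

# Hypothetical reads around a "true" level of 100.7 ADU: each sub is an
# integer (quantized) sample, but the average of N subs can land on
# multiples of 1/N ADU, recovering sub-ADU precision.
rng = np.random.default_rng(1)
true_level = 100.7
for n in (2, 4, 8, 16):
    subs = np.round(true_level + rng.normal(0, 2, size=n))  # integer ADU reads
    print(n, subs.mean())  # the mean is a multiple of 1/n (0.5, 0.25, ...)
```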

While very useful for intermediary calculations and representation, floating point operations and encoding will more readily allow ranges outside of the expected range (0..unity) and let software (or humans) erroneously interpret out-of-bounds data, or encode potentially destabilizing rounding errors. In the real world, we only captured integers, and we know that there is a range beyond which data numbers should not occur (or are not trustworthy).

For example, I believe Ron's dataset encoded numbers/pixels past 1.0, which, if 1.0 was taken as unity, should not really occur. In an attempt to establish what "pure white" (unity) is, StarTools assumes the highest number in the dataset is unity (unless it encounters a FITS keyword that says otherwise). It goes to show how this introduces unnecessary ambiguity, and this ambiguity can have real consequences.
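A minimal sketch of that ambiguity, with invented pixel values: if "white" is inferred from the dataset maximum, a single out-of-bounds pixel re-scales every other pixel.

```python
import numpy as np

# Hypothetical pixel values: the same scene, once well-behaved and once
# with a single pixel encoded past 1.0.
clean = np.array([0.10, 0.50, 1.00])
rogue = np.array([0.10, 0.50, 1.00, 1.37])  # one out-of-bounds pixel

print(clean / clean.max())  # [0.1 0.5 1. ]  -- what was intended
print(rogue / rogue.max())  # everything else dimmed by a factor of 1/1.37
```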

Mike's graph is super helpful :obscene-drinkingcheers: in trying to demonstrate the issue;
The 2600MM 20s L sub is clearly over-exposing and correctly encodes the over-exposing pixels as 65535. Those pixels that are over-exposing are not reliable information anymore - the sensor well was full. Yet somehow they have been given values that are no longer 65535 in the final PI MasterLight. This is because the stacking algorithm has decided to - in effect - average out 65535 with some subs where the same pixels read something lower than 65535. The reasons why some pixels in some subs may have read lower than 65535 are numerous, but even a slight mis-alignment of the sub will do it. Nevertheless the end result is a "spike" that does not really exist.
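A toy numpy illustration of the averaging effect described above (values invented, not taken from the actual subs):

```python
import numpy as np

# Four subs of the same pixel: saturated (65535) in three frames, but
# reading lower in one (e.g. due to slight mis-alignment).
subs = np.array([65535, 65535, 63000, 65535], dtype=np.float64)
print(subs.mean())  # 64901.25 -- no longer recognizable as "saturated"

# A neighbouring pixel clipped in *every* sub stays at exactly 65535, so
# the stack now contains an artificial step ("spike") between the two.
```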

OptiDev's algorithm can ferret out the spike and sees the enormous difference between where the spike pixels start and where the real stellar profile ends. The enormous difference it detects means that it will allocate more dynamic range just for that spike, to make that difference visible.

The way OptiDev works is - roughly - as follows;
  • For each pixel, establish a measure of local entropy (how "busy" the local area is). This is our proxy for "detail"; it gives us a number. If not much happens (for example in the gradual stellar profile, or in an over-exposing all-65535 core), that number is low. If a lot happens, for example in the transition from stellar profile to artificial spike, that number is high.
  • Divide up the full dynamic range into brightness "tranches" (4096 before, 65536 now). For each tranche, tally up all the "busyness" numbers for pixels that fall into that tranche.
  • Expand (or contract) each tranche's start and end points (in terms of the dynamic range it occupies) from being evenly distributed, to being non-evenly distributed, based on how "busy" the tranche is. Busy tranches get more dynamic range, tranquil tranches get less dynamic range.
Does that help/make sense?
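The three steps above can be sketched roughly like this; a toy illustration under my own assumptions (gradient magnitude standing in for local entropy), not StarTools' actual code:

```python
import numpy as np

def optidev_sketch(img, tranches=4096):
    """Toy sketch of the three steps above.

    img: 2-D float array scaled 0..1. Returns a monotone transfer curve
    (length `tranches`) mapping input brightness to output brightness.
    """
    # Step 1: local "busyness" proxy -- gradient magnitude stands in for
    # local entropy here.
    gy, gx = np.gradient(img)
    busyness = np.hypot(gx, gy)

    # Step 2: tally busyness per brightness tranche.
    idx = np.clip((img * tranches).astype(int), 0, tranches - 1)
    weight = np.bincount(idx.ravel(), weights=busyness.ravel(),
                         minlength=tranches)

    # Step 3: give each tranche output range in proportion to its tallied
    # busyness; the running total forms the stretch curve.
    weight = weight + 1e-12  # keep empty tranches from collapsing to zero
    return np.cumsum(weight) / weight.sum()

# Usage: stretch an image by looking its pixels up in the curve, e.g.
#   curve = optidev_sketch(img, tranches=4096)
#   out = curve[np.clip((img * 4096).astype(int), 0, 4095)]
```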
Ivo Jager
StarTools creator and astronomy enthusiast