Donut stars

Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

Cool, Dietmar!

Indeed, forget all this stuff and enjoy a nice Easter weekend. Everyone should. It'll still be here later. :D

I took a quick peek at the project -- all Greek to me. :lol:
Guess I need to go back to school. And look at the specs for that dll, since I'm guessing that sets up the TIFF capabilities.

On the new version I'm also not getting 5 columns, still the same 3. Maybe I did something wrong with my unquarantines. I'll have to re-read your update to see what the new version was supposed to do differently anyway.

I don't think I ran across the mapping issue you mentioned; I'll have to look at that again also. I started with a mono L file of M33 (not that I think it matters, since a tracking save will be a "mono" TIFF, right? All three channels identical). I just saved it out after cropping to 2000x2000 to make things smaller. I skipped Wipe for now since I want to hold off on figuring out what that might change. Then I just did a new OptiDev, no IFD or ROI or anything, and saved that. Pre and post. Then ran the tool.
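
For anyone who wants to reproduce this comparison outside Dietmar's .NET tool, here is a rough Python sketch of the same idea (numpy and tifffile assumed; the filenames are just placeholders for my pre/post save-outs):

Code: Select all

import tifffile

# Load the linear ("pre") and stretched ("post") ST save-outs.
# ST writes RGB TIFFs with all three channels identical, so channel 0 suffices.
pre = tifffile.imread("m33_crop_linear.tif")[..., 0].ravel()
post = tifffile.imread("m33_crop_optidev.tif")[..., 0].ravel()

# For each 16-bit input level, collect the set of output levels it maps to.
# (A plain Python loop is slow over 4 million pixels, but simple.)
mapping = {}
for p, q in zip(pre, post):
    mapping.setdefault(int(p), set()).add(int(q))

# A pure global stretch should yield exactly one output level per input level.
ambiguous = {k: v for k, v in mapping.items() if len(v) > 1}
print(f"{len(ambiguous)} of {len(mapping)} input levels map to multiple outputs")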

Graphs (LibreCalc) seem to be potentially problematic, as there are so many entries on the X-axis that I'm sure it is skipping, interpolating, or something. Still interesting. I zoomed in on the linear column and it looked like a normal linear skyfog histogram (line chart). I also graphed the stretched column as vertical bars. That also looked (more or less) as expected.

The actual data in the stretched column, though, is pretty interesting, with how many values are skipped past (meaning zero pixels mapped to them); there are only entries every few levels, until you get to the meat of where the detail is. Maybe that's the expansion/compression going on? Must ponder.
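
Those skipped output levels are easy to count, by the way. A quick sketch (tifffile again; filename is a placeholder):

Code: Select all

import numpy as np
import tifffile

post = tifffile.imread("m33_crop_optidev.tif")[..., 0]

# Histogram over all 65536 possible 16-bit output levels.
hist = np.bincount(post.ravel(), minlength=65536)

used = np.flatnonzero(hist)   # output levels with at least one pixel
gaps = np.diff(used)          # spacing between consecutive used levels
print("levels used:", used.size, "of 65536; largest gap:", int(gaps.max()))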

I'll throw a simple screen stretch at my same cropped linear ST save-out, in PI or Siril or both, and save to 16-bit TIFF to see how they write things out.

:think:
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

decay wrote: Fri Mar 29, 2024 2:03 pm Hi Mike @Mike in Rancho please use with caution - I found a problem. As I wrote, when applying a global stretch, each brightness mapping should be unique. But it isn't. I just implemented a check. I should have done that earlier. :( :oops:

The differences are quite large, for example:

13412 - 12870
13412 - 13296
13412 - 12582
13412 - 12716
13412 - 13440
13412 - 12827
13412 - 12672
13412 - 12968
13412 - 13002
13412 - 13147
13412 - 13260
13412 - 13090
13412 - 13326
13412 - 12640
13412 - 12556
13412 - 13547
13412 - 13326
13412 - 12699
13412 - 13498

This is way too much to be explained by rounding errors?! This would mean that OptiDev does not only do a global stretch, but something more. I will have to think it over.

But even worse, I found mappings where the output value is 0. So a high brightness of, for example, 6417 is mapped onto 0 - which is black in our post-stretch image. I will have to check the concrete pixels. Something is wrong :(
Hi Dietmar (for after Easter :D ),

I am wondering what your columns above mean: which value is mapping to which, and what might be wrong?

But I don't think I ran into any problems, even with the 3-column version 1 of the tool. I ran several tests and put a pixel pointer on things in PI and everything appeared to check out.

The "bucketing" going on is rather interesting. New OptiDev is improved over that, especially once you get into the meat of the detail. I saw similar (to Old OptiDev) even doing other stretches like STF and GHS. But that was all starting with that initial crop and linear save in ST. :think:

So then I did the exact same crop in PI to see if I could dig into things. A few items of note: First, gotta make sure you are saving as an RGB TIFF, regardless of how you started (mine was mono L), or the tool throws an exception. No problem with ST as it always saves this way, but for Siril and PI you'd likely have to convert to color.

Also, I think I am seeing that the mere fact of using ST (might be the Open, might be the save) seems to lop off data below a black point. Unknown if it does this on its own or is reading a header. Once past that black-pointing, the data seems mostly the same but is not identical. Slight differences in the 32-to-16-bit conversion might explain that.

Finally, I think there might be a rather large difference in the granularity of the resulting stretch (and thus the "post" file in the spreadsheet) depending on whether one started out in 16 or 32 bit, and whether that was maintained in the program despite save-outs. I have to test that some more, but I think caution is in order with these working bit depths, even if everything ultimately ends up at 16-bit for the comparisons.
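
For the black-point question, here is the kind of check I have in mind (a sketch only; the filenames are placeholders, comparing the same crop saved out of PI and after an ST open/save round trip):

Code: Select all

import tifffile

pi_img = tifffile.imread("m33_crop_pi.tif")[..., 0]
st_img = tifffile.imread("m33_crop_st.tif")[..., 0]

for name, img in (("PI", pi_img), ("ST", st_img)):
    print(name,
          "min:", int(img.min()),
          "pixels at 0:", int((img == 0).sum()),
          "lowest nonzero level:", int(img[img > 0].min()))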
dx_ron
Posts: 254
Joined: Fri Apr 16, 2021 3:55 pm

Re: Donut stars

Post by dx_ron »

At the risk of derailing this thread about star cores with a post about star cores :lol:

565 is not, it turns out, a super-magic bullet against flat-disk star cores. But I think you have to work at it. How do I know? Last month I started on an M97/M108 project. I took some test subs and pushed the exposure until I had almost 500 saturated pixels. I thought, at the time, that this was a good idea and that it would maximize dynamic range for the non-stellar parts of the frame. I ended up at 120s HCG (gain 100) - which is quite a bit more exposed than most people say is useful for broadband.

The flaw in my line of thought is that there are only a few stars in the field - and only 3 are brighter than mag 8 (one mag 6, two mag 8). What happened was that those star cores were quite overexposed (rather than a lot of stars each having just the very central couple of pixels saturated).
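
(For anyone wanting to replicate the saturation count outside Ekos, a minimal sketch with astropy - the filename is a placeholder, and the 65535 ceiling is an assumption that depends on camera and gain mode:)

Code: Select all

from astropy.io import fits

sub = fits.getdata("m97_hcg_120s_sub.fits")   # one raw test sub

SAT_LEVEL = 65535   # assumed 16-bit ceiling; check your camera/gain mode
n_sat = int((sub >= SAT_LEVEL).sum())
print(f"{n_sat} saturated pixels in this sub")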

Still, much, much less of the optical illusion peak+flat disk effect. But I did have to add in some deringing fuzz.
Owl_HCG_160x120s_rgb_v1_bp.jpg
In the end, I doubt I succeeded at increasing snr for the galaxy (or all those background galaxies), because I'm sure the B7 skyglow was increasing at the same rate. Not done with the image yet (hopefully). ~ 5 hours, but both M108 and the background galaxies deserve more snr. Especially the cool face-on spiral upper-center, UGC 6211. It's hard to see its structure in the 500k jpg, but it is cool.
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

Oh, just visual star core stuff rather than deep star core stretching theory? ;)

500 may be a little high if there are few bright stars to be had, though broadband on my Horsehead region was probably pushing 1,000 max pixels per sub.

I guess sacrificing stars with gain (or longer exposures) could spread the non-stellar detail across more buckets? But I haven't really thought about that, or whether there would be a noticeable improvement once you get into 32-bit stacks anyway.

This was a 32-bit stack, right? I can't quite recall what you were doing there with the stacking bits.

Anyway, flat-disking may be a bit different from the spike-and-plateau and donut illusion being looked at earlier. I don't see the spiking here, though you said you applied some deringing, so maybe it was there.

ol owl flat top.jpg

I probably should have used the blueish star but went to the rust-colored one instead (or so I thought; it looks pretty purple zoomed in). Flat, but not really plateaued with a central spike. There are some color issues that could be causing a similar perceptual illusion, though? The blue is relatively bloated, and the red seems slightly displaced. The green a slight bit also, but that appears to have the narrowest profile.
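
Here is roughly how I would pull a per-channel cut across a star to eyeball that displacement (a sketch only - the filename and star coordinates are made up; tifffile and matplotlib assumed):

Code: Select all

import numpy as np
import tifffile
import matplotlib.pyplot as plt

img = tifffile.imread("owl_rgb_stretched.tif")   # H x W x 3, 16-bit
y, x = 512, 740                                  # hypothetical star centre

cut = img[y, x - 20:x + 21, :].astype(float)     # 41-pixel horizontal cut
offsets = np.arange(-20, 21)
for ch, colour in enumerate(("red", "green", "blue")):
    plt.plot(offsets, cut[:, ch], color=colour, label=colour)
plt.xlabel("pixels from centre")
plt.ylabel("ADU")
plt.legend()
plt.show()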

:think:
dx_ron
Posts: 254
Joined: Fri Apr 16, 2021 3:55 pm

Re: Donut stars

Post by dx_ron »

Despite the fact that Ekos added the saturation count quite a while ago (a year-ish, I think?), I have not done many broadband projects in the time since and I'm still playing around with combinations. With the M97 I intentionally pushed exposure longer than is (obviously) prudent for the stars. I wasn't thinking about 'buckets', but the basic idea that longer exposure and the low-read-noise mode might lead to better snr / dynamic range for the galaxy is similar to thinking about how many buckets, I guess.

I don't think it actually works that way, unfortunately. I think I am limited in the obtainable per-sub weak-signal snr by the light pollution floor. And there seems to be a real cost to overexposing too many star pixels. It's not just the flat, maxed out cores, either. Comparing the stars in the M97 stack vs the M3 stack - the M97 image has the color artifacts you point out, but the M3 stack seemingly does not (or at least they're way more subtle).

The best test would be to re-shoot M97 with the same settings as M3 and the same total integration (5.3 hours) as the M97 image above and see if the star colors behave better. But we all know I won't want to do that (as opposed to continuing to collect more data with the same settings), because ST would not be happy about combining data taken under such different exposures and I don't want to toss 5 hours of data.

The next best thing will be to shoot some other galaxy field with more M3-like settings: low-conversion gain and <100 saturated pixels. Which I will do if I ever get another clear moonless night.

There is a related possibility rattling around in my head. Both the M3 and M97 stacks were RGB aligned in Siril. I wonder if overexposed star cores could give Siril problems with the quite small shifts in registration during that realignment?

The somewhat bloated blue channel is no doubt real. Could that be solved by a mono camera? Only spending money can answer that question for certain...
decay
Posts: 443
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Donut stars

Post by decay »

Hi Mike @Mike in Rancho ! Back again ;-)

Yes, the DLL is used to read the TIFF files.
https://bitmiracle.com/libtiff/

There are a lot of examples of how to use it. It's a .NET port of the famous libtiff library. TIFF sounds simple, but in fact it's a complex format with lots of incarnations. It is possible to read and write TIFF files using .NET built-in capabilities, but processing is limited to 8-bit only.
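
(If anyone wants to cross-check a file outside .NET: in Python, for example, the tifffile package reads 16-bit TIFFs natively. A minimal sanity check - not my tool, just an analogous sketch with a placeholder filename:)

Code: Select all

import tifffile

img = tifffile.imread("pre.tif")
print(img.shape, img.dtype)   # expect (H, W, 3) uint16 for an ST save-out
assert img.dtype.name == "uint16", "image was not read at 16-bit depth"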

Please be sure to get and use the second version of the tool. As I tried to describe (pls. see above), a single level of the pre-image can be mapped onto a range of levels in the post-image.
For example, an input level of 72 may be mapped onto anything from 23425 up to 23486 on output, for different pixel locations.
The first version of the tool just took the first encountered occurrence (pixel location). The second version calculates an average value over all occurrences (pixels), plus min and max values. This can be used to get some kind of error estimation. Please give me a hint if this is still unclear, and I will try to explain in more detail, as it is important.
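
In pseudo-ish Python (not my actual code, and the column names are just my labels for the CSV), the v2 calculation amounts to something like this:

Code: Select all

import pandas as pd
import tifffile

pre = tifffile.imread("pre.tif")[..., 0].ravel()
post = tifffile.imread("post.tif")[..., 0].ravel()

df = pd.DataFrame({"pre": pre, "post": post})
stats = (df.groupby("pre")["post"]
           .agg(avg="mean", min="min", max="max", count="size")
           .reset_index())
stats.to_csv("stretch.csv", index=False)   # one row per occurring pre level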

Correct; before the Color module is used, ST saves out RGB TIFFs with all three colours set to the same value. Actually, my tool only considers the R channel right now. And it only reads RGB images for now.

And yes, LibreCalc charts may be problematic, or at least inconvenient to use. I wonder if we could use gnuplot (http://www.gnuplot.info/), but of course one needs a bit of time to get comfortable with it. But it is very powerful. I also saw there's a wrapper to include gnuplot in .NET apps. But for now, I think LibreCalc will do the trick.

My _first_ intention in writing this tool was to compare the new and the old versions of OptiDev - first, because I still do not fully understand what Ivo wrote. And there _is_ a dramatic difference between old and new, and I would like to understand why this is the case. I wonder if this "bucketing" we see in the old version is the assignment of more or less dynamic range. And I wonder why we do not see it any more in the new version.

Are you interested in this too? I’ll have to ponder a bit more. Haven’t had the time yet.

Dietmar.
jlh
Posts: 20
Joined: Sat Nov 04, 2023 1:06 pm

Re: Donut stars

Post by jlh »

I do not understand the details of these discussions despite my efforts in rereading them several times. I am, however, very interested in the improved OptiDev results with the changes in last month's 1.9.565 Beta and would love to read more about that.

Thanks, Jeff
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

jlh wrote: Wed Apr 03, 2024 2:38 pm I do not understand the details of these discussions despite my efforts in rereading them several times. I am, however, very interested in the improved OptiDev results with the changes in last month's 1.9.565 Beta and would love to read more about that.

Thanks, Jeff
Hi Jeff,

Well, yes, that's what we are all trying to dig into, I think, and from two perspectives. One is the visual effect on data, and the other is the literal changes to pixel values, as sort of the "why" behind it all.

Several of us may all be a little bit lost though. :think:

Experiment with the two versions, especially if you have stars that ended up plateaued-and-spiked, and let us know what you think!

I believe OptiDev was working as designed, but certain dataset features were perhaps causing unintended consequences. Everything matters with what is being sampled: your ROI (if any) coupled with outside influence, along with the IFD, which could possibly cause those problem-inducing details to "fall through."

If you look at the colored graph at, say, the top of page 4, you can see the way it was maintaining a saturated spike, but enhancing it by lowering the outer edge just a bit, while also greatly raising some of the faint outer star diffraction region to be nearly level, causing a plateau (and the donut optical illusion). And because that extra stretch given to the faint outer diffraction would in many cases also apply to DSO detail with similar initial pixel levels, I believe it was also causing a "boost" to certain DSO highlights - brightening them even though they were not being allocated a lot of dynamic range (i.e. the plateau).
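
To make that concrete with a toy example (completely made-up numbers and a made-up curve - not ST's actual algorithm): if a stretch compresses a wide range of faint halo levels into nearly one output level while still stretching the sky hard, a smooth star profile comes out flat-topped with a spike:

Code: Select all

import numpy as np

# Synthetic linear star profile: smooth Gaussian falloff in 16-bit levels.
r = np.arange(0, 20)
profile = (60000 * np.exp(-(r / 6.0) ** 2)).astype(int)

def stretch(v):
    # Hypothetical monotone curve, continuous at the breakpoints.
    if v < 5000:                          # sky and faint DSO detail: strong stretch
        return v * 9.6
    if v < 40000:                         # outer star halo: compressed -> plateau
        return 48000 + (v - 5000) * 0.05
    return 49750 + (v - 40000) * 0.6      # saturated core: keeps its spike

print([int(stretch(v)) for v in profile])   # note the long flat run near 48000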

New OptiDev, with additional points to try not to be faked out by such stars, may seem to have a "weaker" DSO stretch in the highlights, although more dynamic range should be allocated for changing features within those pixel levels than before. :confusion-shrug:

Of course, even with the additional sampling tranche (though it does help the bad star core problem that has plagued many of us forever), one can query whether those saturated stars are still having an undue effect on DSO detail in the same level ranges. :think:
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Donut stars

Post by Mike in Rancho »

decay wrote: Tue Apr 02, 2024 7:33 pm Please be sure to get and use the second version of the tool. As I tried to describe (pls. see above), a single level of the pre-image can be mapped onto a range of levels in the post-image.
For example, an input level of 72 may be mapped onto anything from 23425 up to 23486 on output, for different pixel locations.
The first version of the tool just took the first encountered occurrence (pixel location). The second version calculates an average value over all occurrences (pixels), plus min and max values. This can be used to get some kind of error estimation. Please give me a hint if this is still unclear, and I will try to explain in more detail, as it is important.
Thanks Dietmar, but yeah I'm still confused. :?

Especially as to this:
decay wrote: Sat Mar 30, 2024 4:04 pm I enhanced the tool to calculate average values. The output CSV now contains 5 columns: input brightness level, average output level, min level value, max level value and average sample count.
I'm pretty sure that despite thinking I downloaded version 2, I am still ending up with version 1, because there are only the three columns: the 16-bit ADU level from 0 to 65xxx (the column axis), and then ADU counts (histogram) of those levels in pre and post.

That seems like normal histogram data. I also took my last csv and, in LibreCalc, summed columns B and C. I correctly got a total of 4 million pixels for each column (I had cropped to 2000 x 2000). That makes me think v1 counting is being done correctly?
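
(That sanity check is trivially scriptable too; a sketch, with column names assumed since the csv may not carry headers:)

Code: Select all

import pandas as pd

# Assumed layout: level, pre count, post count; no header row in the file.
hist = pd.read_csv("hist.csv", names=["level", "pre", "post"])
print("pre total:", hist["pre"].sum())    # expect 4,000,000 for a 2000x2000 crop
print("post total:", hist["post"].sum())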

Unless v2 is a different tool altogether, we still want columns B and C to be the total ADU level counts for pre and post, don't we? I don't see that in your new description of 5 columns.

Now, as to direct pixel mapping, meaning whether linear level Z1 always exactly maps to stretched level Z2 at all pixel xy locations, that's for sure another matter for exploration and understanding.
decay
Posts: 443
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Donut stars

Post by decay »

Mike in Rancho wrote: Wed Apr 03, 2024 5:39 pm I'm pretty sure that despite thinking I downloaded version 2, I am still ending up with version 1
I'm sure I'm messing up the explanations, sorry. :oops: So step by step, we will get that right.
File size should be 6,656 bytes, date 2024-03-30 20:41

After running there should be two CSV files:
- hist.csv -> exactly what you described: ADU values, pre and post histogram data
- stretch.csv -> this is the interesting part, 5 columns: pre ADU values (intermittent; only some 1,000 of them), the average corresponding post ADU value, the min post ADU value, the max post ADU value, and the pre ADU value (pixel) count

Could you please check that? :)
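
And if LibreCalc keeps fighting you, something like this could plot stretch.csv instead (Python rather than gnuplot, just as a sketch; the column names are only my assumption about the file layout):

Code: Select all

import pandas as pd
import matplotlib.pyplot as plt

s = pd.read_csv("stretch.csv", names=["pre", "avg", "min", "max", "count"])
plt.plot(s["pre"], s["avg"], lw=1, label="average mapping")
plt.fill_between(s["pre"], s["min"], s["max"], alpha=0.3, label="min-max spread")
plt.xlabel("pre ADU")
plt.ylabel("post ADU")
plt.legend()
plt.show()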