Modules, tracking, and linearity

General discussion about StarTools.
Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Modules, tracking, and linearity

Post by Mike in Rancho »

The last page or two of our CN beginner challenge for February, the Rosette, started getting a little more philosophical regarding data fidelity, what is science vs art, and so on. Always fun stuff. :lol: Though as beginners, I'm sure we are flailing around quite a bit.

There is also of course the usual lost-in-translation effect of people using different processing platforms, and so those "speaking PI" and those "speaking ST" end up a bit disconnected.

Some of the discussion of course gets a bit unusual (do scientists even ever stretch data?, etc.), but others were a bit more pragmatic as to what we are doing with our workflows. I know there's an (incomplete, I think) table that bounces around here at times divvying up what techniques or modules fall into resolving the data, middle-of-the-road enhancing the data, and outright artistic manipulation. I need to search for that again. :D

But I was also curious what parts of ST act upon the "linear" data. I have reread the tracking page describing time travel and ST abstracting us from having to worry about linear vs non-linear. Are all the basic, tracked, ST modules working on the linear portion of the formula, and the stretch (which we kind of start off with) is the very "last" transformation to be applied? Or do some modules actually work in the stretched domain? Other than the obvious post-tracking ones, I mean. I do remember the recent discussion regarding the color bias controls being applied at the linear level.

Just curiosity, nothing terribly important. :?: :D
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Modules, tracking, and linearity

Post by admin »

Mike in Rancho wrote: Wed Mar 02, 2022 4:13 am what parts of ST act upon the "linear" data. I have reread the tracking page describing time travel and ST abstracting us from having to worry about linear vs non-linear. Are all the basic, tracked, ST modules working on the linear portion of the formula, and the stretch (which we kind of start off with) is the very "last" transformation to be applied? Or do some modules actually work in the stretched domain? Other than the obvious post-tracking ones, I mean. I do remember the recent discussion regarding the color bias controls being applied at the linear level.
The distinction between linear and non-linear stages of the data is indeed completely gone in StarTools. There really are no "stages" or screen stretches and there really is no concept of "last" or "first". Every pixel you see is, as you indeed allude to, the result of one long, ever-changing equation with many variables. Its results are constantly re-computed for output to the screen. The results of the equation (or - crucially - parts thereof!) are used as an input to other modules. Or, looking at it another way, all modules are networked and in "contact" with each other. You fill in blanks, allowing every module's contribution to be refined.
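A toy sketch of that idea (purely illustrative Python - the class name, structure and operations are my own invention, not StarTools internals): the visible image is always re-evaluated from the untouched source through every contributed term, and a module revisiting its step replaces its own term in the equation rather than stacking a new one on top.

```python
import numpy as np

class Pipeline:
    """Toy 'equation builder': the on-screen image is re-computed from
    the untouched source through every contributed term on each change."""

    def __init__(self, source):
        self.source = source
        self.ops = {}  # name -> function; dicts preserve insertion order

    def set(self, name, fn):
        # Re-setting an existing name replaces that term in place,
        # so everything "downstream" of it re-evaluates automatically.
        self.ops[name] = fn
        return self.render()

    def render(self):
        img = self.source
        for fn in self.ops.values():
            img = fn(img)
        return img

src = np.linspace(0.0, 1e-3, 5)              # faint "linear" data
p = Pipeline(src)
p.set("stretch", lambda x: x ** 0.25)        # initial stretch
p.set("gamma", lambda x: x ** 0.9)           # a later operation
out = p.set("stretch", lambda x: x ** 0.2)   # "redoing" the stretch term
```

Note that after the redo, the new stretch still sits in its original position in the equation; the gamma term downstream is simply re-applied to the new result.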

The reason for all this? As opposed to the basic engines found in PI or PS, StarTools freely lets you choose the most opportune time to fill in the variables, or build on the equation. The most opportune time for an operation (in terms of signal fidelity and convenience) is usually very different from what basic signal processing engines like PI allow for.

E.g. it is most opportune to fill in the deconvolution variables after filling in the variables for a non-linear stretch, because StarTools can then take into account how the image was/is/will be (no difference to StarTools) stretched and transformed per-pixel when performing deconvolution (this reduces artefacts and thus allows for stronger/better decon). Most modules work like that; they build on the equation by - where possible - consulting the equation itself (or parts thereof), and not just a single input image and/or mask.

In StarTools you are truly deconvolving a stretched image (meaning the way the image was stretched has a real world effect on the result in a beneficial and, of course, mathematically correct way). In PixInsight, a screen stretch has no effect on the way deconvolution is performed (let alone local detail enhancement, which is entirely impossible in PI).

The ability to consult or modify that equation at any time in any state, rather than having a linear, inflexible processing history, is what allows StarTools to yield improved results versus more basic software like PI. When comparing to older processing engines, the benefits of ST's approach are probably most visible (in terms of superior results and signal fidelity) in modules like deconvolution, denoise and color. These visible workings are, of course, only possible if all other modules in-between similarly actively consult, update and secure/respect the integrity of the equation (which may or may not be visible much).

Unfortunately, this stuff - incredibly important as it is - goes over the heads of most people, save for the more hardcore signal/image processing aficionados (if you suspect there's not many of those around, you'd be correct :lol:). It's something I have had to come to terms with.

It's the equation building that requires that I keep nonsensical user decisions at bay. I used to get rather upset seeing people describe StarTools as a "collection of macros", just "for beginners" or "giving less control". The opposite is true on all counts!
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Modules, tracking, and linearity

Post by Mike in Rancho »

Thanks Ivo. :obscene-drinkingcheers:

That is quite interesting, from a geek-ish perspective anyway - even though I know next-to-nothing about signal processing. Well, other than I've learned here on ST and sometimes CN.

I hadn't thought about that extra-dimensionality: that in addition to concatenating a giant time traveling pixel formula, ST is also networked for it to consult portions of itself. It just gets cooler the more you learn. :bow-yellow:

One of the last comments (by a user of both ST and PI, or former ST user, perhaps) was divvying up linear and non-linear functions, explaining the PI "preview stretch," and opining that ST essentially does the same, applying the stretch when tracking is turned off.

I wasn't quite sure how to respond, my goal being to explain ST properly and not screw it up. Or start any unwarranted software wars. :lol: But, as the example formula in the tracking page of the website does have a gamma transformation "at the end," it started seeming to me like, well, perhaps the stretch really is the last part of the equation implemented before we see the results. Kind of like an order of operations puzzle. ;)

That said, my thoughts on it have been that ST's screen isn't so much of a preview, but instead is "always live reality," as it exists based on what you have done so far.

Your last comment is rather interesting as well. So, to some extent, it is the nature of the ST signal processing formula itself that creates boundaries and guardrails against nonsense, as opposed to discrete input-output modules in which, sure, you can do anything you want to that pixel. :think:
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Modules, tracking, and linearity

Post by admin »

Mike in Rancho wrote: Thu Mar 03, 2022 8:03 am That is quite interesting, from a geek-ish perspective anyway - even though I know next-to-nothing about signal processing.
I think you are learning quickly - you're asking all the right questions! :thumbsup:
I hadn't thought about that extra-dimensionality: that in addition to concatenating a giant time traveling pixel formula, ST is also networked for it to consult portions of itself. It just gets cooler the more you learn. :bow-yellow:
That's it. With modules actively "having their say" (exerting influence), due to their prior contribution to the equation and variables, even when it seems like they're not active, image processing becomes much more of a convergent process. This is in contrast to old style PS/PI like engines, where - as you point out - you can completely mangle your image at every step. Knowledge that can be gleaned from previous steps, settings and actions is completely "forgotten" in such programs.
One of the last comments (by a user of both ST and PI, or former ST user, perhaps) was divvying up linear and non-linear functions, explaining the PI "preview stretch," and opining that ST essentially does the same, applying the stretch when tracking is turned off.
That's unfortunate. It appears this user may never have really understood StarTools and - equally regrettably - may possibly never have used it to its full potential.

Turning off tracking is simply accepting the result of the equation. A StarTools stretch is not a screen/"preview" stretch. All stretches are performed "for real". By default, PI uses a 24-bit lookup table for its screen stretch. This is sometimes not enough for high dynamic range images. StarTools does not have this problem, because no lookup tables are used; all stretches are performed for real.
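A minimal numpy illustration of why lookup-table precision matters (the asinh curve and the exact values are my own choices for demonstration; only the 24-bit figure comes from the post): quantizing the input through a finite lookup table collapses faint pixel values that a stretch computed "for real" keeps distinct.

```python
import numpy as np

def stretch(x):
    # An arbitrary non-linear stretch for illustration (asinh curve).
    return np.arcsinh(1000.0 * x) / np.arcsinh(1000.0)

def stretch_via_lut(x, bits=24):
    # Rounding the input to 2**bits levels emulates indexing a
    # 2**bits-entry lookup table instead of computing the curve directly.
    levels = 2**bits - 1
    return stretch(np.round(x * levels) / levels)

# Two faint pixels closer together than one 24-bit LUT step (~6e-8):
a, b = 1.0e-9, 4.0e-9
print(stretch(a) == stretch(b))                   # False: kept distinct
print(stretch_via_lut(a) == stretch_via_lut(b))   # True: collapsed by the LUT
```

With high dynamic range data, many such faint-but-distinct values exist, which is why a LUT-based screen stretch can misrepresent them.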

This user will be disappointed once he/she/they find out that wavelet sharpening and local contrast enhancement don't work with a screen stretch (I mean, they may possibly work, but they will work in the linear domain, yielding unexpected results once stretched for real). Not to mention that such operations will always have to come after decon. Of course, decon is recommended against in PI by the author in most cases anyway, due to a presumed lack of signal. The latter is much less of a problem in ST, because ST can - thanks to Tracking - ferret out where in the image a lack of signal exists and where it does not. Therefore the recommendation in ST is to always apply decon.
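The linear-domain point can be seen with a toy 1-D example (illustrative numpy only - the square-root stretch and 3-tap unsharp boost are stand-ins, not either program's actual algorithms): a local-contrast boost applied before a non-linear stretch gives a different image than the same boost applied after it, because the two operations do not commute.

```python
import numpy as np

def stretch(x):
    # Toy non-linear stretch; clip guards against small negative overshoots.
    return np.sqrt(np.clip(x, 0.0, None))

def sharpen(x):
    # Toy local-contrast boost: add back the residual of a 3-tap box blur.
    blur = np.convolve(x, np.ones(3) / 3.0, mode="same")
    return x + (x - blur)

rng = np.random.default_rng(0)
linear = rng.random(64) * 1e-3               # faint "linear" data

after = sharpen(stretch(linear))             # sharpen the stretched image
before = stretch(sharpen(linear))            # sharpen linear, then stretch
print(np.max(np.abs(after - before)))        # non-zero: order changes the result
```

So a sharpening result previewed under a screen stretch, but actually computed on linear data, will not look the same once the image is stretched for real.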
I wasn't quite sure how to respond, my goal being to explain ST properly and not screw it up.
It is much appreciated!
But, as the example formula in the tracking page of the website does have a gamma transformation "at the end," it started seeming to me like, well, perhaps the stretch really is the last part of the equation implemented before we see the results. Kind of like an order of operations puzzle. ;)
Ah, I can see how that might be confusing. The gamma correction at the "end" was mainly to demonstrate how everything is still made mathematically correct by expanding the equation and fitting in the "illegal" operation in the right place so it is no longer "illegal". Certainly, more operations likely follow a stretch. Say, for example, local dynamic range optimization.
That said, my thoughts on it have been that ST's screen isn't so much of a preview, but instead is "always live reality," as it exists based on what you have done so far.
That's correct. WYSIWYG. But what you see is not necessarily the only thing that goes into the next module you launch. What people have trouble getting their heads around, is that there are an infinite number of states of your image you don't see (you can get to the more useful ones by using the Restore button).
Your last comment is rather interesting as well. So, to some extent, it is the nature of the ST signal processing formula itself that creates boundaries and guardrails against nonsense, as opposed to discrete input-output modules in which, sure, you can do anything you want to that pixel. :think:
Correct! That's the user-friendly-by-mathematical-nature part; the "guardrails" are there for the engine to keep the mathematics from breaking down. The awesome (secondary) side effect is that it keeps newbies from making mistakes (hence earning StarTools a reputation as "software for beginners" in some misinformed circles), but it was never the primary design goal.
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Modules, tracking, and linearity

Post by Mike in Rancho »

Much appreciated, Ivo. :thumbsup:

I'm slowly becoming a more informed user. And hopefully a better explainer of ST. But it seems a bit of a steep climb, because, you know, so much "power and control" in all those tools of the levels-and-curves solutions. :?
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Modules, tracking, and linearity

Post by admin »

Mike in Rancho wrote: Thu Mar 03, 2022 5:46 pm Much appreciated, Ivo. :thumbsup:

I'm slowly becoming a more informed user. And hopefully a better explainer of ST. But it seems a bit of a steep climb, because, you know, so much "power and control" in all those tools of the levels-and-curves solutions. :?
Indeed, the IKEA effect (brilliant explainer here) should not be underestimated. And as they say, you cannot reason people out of a position that they did not reason themselves into.

CN in particular is - for some reason - rife with examples, with the more silly examples boasting about how long it took to process the image (I've seen multiple days quoted, as well as multi-page PDF documents with flow charts), like it is some badge of honor. It is really not; it unfortunately just shows, very publicly, that you don't know what you are doing.

Just because you expend a lot of time and effort processing an image, doesn't make your image better. It makes it worse. More (unnecessary) parts and operations allow for more points of failure, rounding errors and compromised signal. Ditto endless corrections or redoing of your previous decisions.

Similarly, you don't automatically deserve better images because you went through a learning curve.

Ask people why they need this perceived (that's all it is) control and be prepared for crickets. But they "know" they need it, because they "feel" it makes their images better, and their preferred echo chamber (mostly CN) confirms this feeling.

Fortunately, most forums and clubs are less like CN, particularly the ones with a younger audience.
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Modules, tracking, and linearity

Post by Mike in Rancho »

Well, between the explanations here, tips posted on CN, and just general googling (PI tutorials are everywhere), I'm gaining a lot more understanding of what is going on and the differences that are occurring. This often results in the ol' Leonard Nimoy single eyebrow raise. ;) And I'm also getting better just sensing it visually, especially if we are all working on the same target.

But, so be it. Back to ST and the giant formulas... :D

I think it might be handy to have a chart or cheat sheet (or even just a list) of which modules might be "re-doable," which are not so, and maybe any that are a wash, when in tracking. I know the various forms of documentation will sometimes say "only use once." That sort of thing, except you have to go hunting for it.

For example, AutoDev has the re-do stretch button, so I assume it fully replaces itself in the formulas. Color I am uncertain of, though I know there was a recent change that puts all or most Color settings back where they were before, if you enter the module again. That makes me think it is the same, a full redo from scratch? Other modules are likely add-ons for each instance, but would still be permissible -- for example different SS presets, like an isolate followed by a saturate. And finally others I'm sure are verboten, as multiple instances would not make logical sense, if the formula even tries to implement them.
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Modules, tracking, and linearity

Post by admin »

Mike in Rancho wrote: Fri Mar 04, 2022 7:35 am I think it might be handy to have a chart or cheat sheet (or even just a list) of which modules might be "re-doable," which are not so, and maybe any that are a wash, when in tracking. I know the various forms of documentation will sometimes say "only use once." That sort of thing, except you have to go hunting for it.
Indeed, the mantra is do it right, once. Applying modules multiple times will either not work very well in terms of extra benefits, or do nothing at all.
For example, AutoDev has the re-do stretch button, so I assume it fully replaces itself in the formulas.
Correct!
Color I am uncertain of, though I know there was a recent change that puts all or most Color settings back where they were before, if you enter the module again. That makes me think it is the same, a full redo from scratch?
The Color module takes whatever luminance is in the current WYSIWYG image, and applies the chrominance data to it. In that sense it's a redo of that aspect of your processing. However, the luminance in the WYSIWYG image may/will be modified if you run Color twice in succession - if you run it on a color image it will have to extract the luminance from that color image. The latter is never perfect and can even be very different, depending on the luminance modifications caused by out-of-gamut colors due to the saturation, "Style" and "LRGB Method Emulation" settings you chose the first time around.
In other words, the luminance from before the first time you ran the Color module, is not restored on any second run of the Color module.
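A numbers-only toy example of why the luminance can't be recovered (using a simple RGB mean as "luminance" and made-up channel ratios; real color models are more involved): once a saturated channel goes out of gamut and is clipped, the luminance re-extracted from the color image no longer matches the original.

```python
import numpy as np

L = 0.9                                  # original luminance
ratios = np.array([1.4, 0.8, 0.8])       # chrominance as mean-1 channel ratios
rgb = L * ratios                         # compose the color image: [1.26, 0.72, 0.72]
clipped = np.clip(rgb, 0.0, 1.0)         # red was pushed out of gamut

recovered = clipped.mean()               # luminance re-extracted from the color image
print(L, recovered)                      # 0.9 vs ~0.813 - not restored
```

The clipping is one-way: no second run of a color operation can know how much luminance was lost in the first.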
Other modules are likely add-ons for each instance, but would still be permissible -- for example different SS presets, like an isolate followed by a saturate.
Indeed, SS is pretty much the only module I would maybe run twice (I'd like to implement a chaining ability inside the module). Invocations of some modules that preferably come after the Color module (SS, Flux, Shrink) are indeed "tacked on to" the formula, so to speak. They build on preceding results and are possibly the closest to classic PI-style input->output algorithms, with the exception of course that their modifications to signal and noise are 100% tracked for use in Denoise. Algorithms like Entropy also still use the preceding formula for noise statistics, which are then used to separate detail from noise when enhancing said detail.

The latter is something I'm always working on; finding ways to use that vast body of knowledge the formula provides, to improve the results and/or capabilities of even the more basic algorithms/modules.

It means taking simple constants (sometimes provided by the user, because the correct value cannot be known by "dumb" input->output algorithms), and replacing them with robust, objective statistics that can be calculated using the formula. It's a lot of work, but the results tend to be worth it and visually detectable.
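A generic illustration of swapping a user constant for a data-derived statistic (a standard median-absolute-deviation noise estimator on synthetic data - my own example, not any specific ST module): the threshold that would otherwise be a user slider is instead computed from the data itself.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.zeros(10_000)
signal[::100] = 1.0                          # sparse "detail" spikes
noisy = signal + rng.normal(0.0, 0.05, signal.size)

# Instead of asking the user for a noise threshold, estimate the noise
# sigma robustly from the data (MAD, scaled to sigma for Gaussian noise).
sigma = 1.4826 * np.median(np.abs(noisy - np.median(noisy)))
detail = np.abs(noisy) > 3.0 * sigma         # data-derived detection threshold
print(sigma, detail.sum())
```

The MAD estimator is robust to the sparse detail itself, so the recovered sigma tracks the true noise level (0.05 here) without any user input.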

This is also partly where the "less control" fallacy comes from. Doing things through analysis, rather than user input, takes "control" away from the user over things that they should never have had control over in the first place, and were merely in control of because of failings of the algorithm, the developer, or both. If there is an objective "truth" to ferret out ("truth" as defined by physics), I see it as my job to do so on behalf of the user. Things that physics and mathematics say are not up for debate, should not be up for debate. Anything else (and there is a lot that falls in the "else" category) is fair game, of course!
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Modules, tracking, and linearity

Post by Mike in Rancho »

admin wrote: Sun Mar 06, 2022 2:53 am The Color module takes whatever luminance is in the current WYSIWYG image, and applies the chrominance data to it. In that sense it's a redo of that aspect of your processing. However, the luminance in the WYSIWYG image may/will be modified if you run Color twice in succession - if you run it on a color image it will have to extract the luminance from that color image. The latter is never perfect and can even be very different, depending on the luminance modifications caused by out-of-gamut colors due to the saturation, "Style" and "LRGB Method Emulation" settings you chose the first time around.
In other words, the luminance from before the first time you ran the Color module, is not restored on any second run of the Color module.

Indeed, SS is pretty much the only module I would maybe run twice (I'd like to implement a chaining ability inside the module). Invocations of some modules that preferably come after the Color module (SS, Flux, Shrink) are indeed "tacked on to" the formula, so to speak. They build on preceding results and are possibly the closest to classic PI-style input->output algorithms, with the exception of course that their modifications to signal and noise are 100% tracked for use in Denoise. Algorithms like Entropy also still use the preceding formula for noise statistics, which are then used to separate detail from noise when enhancing said detail.
Ah, that is actually quite instructive. :thumbsup: Based on that, am I correct in also presuming that in extracting the "new" luminance, as modified by a prior Color (and potentially other subsequent things), any prior associations of the channels (such as to your bicolor/SHO filters) are also gone? So a subsequent Color should be looked at more similarly to how it would work outside of tracking, other than maybe the noise tracking itself?

Good info, and so any "oopsie" in Color really should be handled by Undo, if you don't get more than one additional module "keep" into the workflow. Although for me that's generally Shrink, and I don't decide I'm unhappy with a color scheme until I'm into SS.

Very good to know the pre/post Color dividing line, of a sort. So one shot only on the "pre" modules, other than an AutoDev full re-stretch, and any other second thoughts should be handled by Undo or Restore.

:obscene-drinkingcheers:
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Modules, tracking, and linearity

Post by admin »

Mike in Rancho wrote: Sun Mar 06, 2022 9:58 am Based on that, am I correct in also presuming that in extracting the "new" luminance, as modified by a prior Color (and potentially other subsequent things), any prior associations of the channels (such as to your bicolor/SHO filters) are also gone?
No actually. Just the luminance is different. On second launch, the coloring is "reset" and the narrowband ratios, as imported, are preserved and ready to be re-applied/re-scaled exactly like they were the first time around.
Good info, and so any "oopsie" in Color really should be handled by Undo, if you don't get more than one additional module "keep" into the workflow.
Ideally. Unless you have very good reasons to use the Color module to modify luminance for some other purpose while tracking is on. Depending on the mode you use, the difference in luminance may be minimal though.
Ivo Jager
StarTools creator and astronomy enthusiast