Narrowband balancing, throttling, boosting

General discussion about StarTools.

Narrowband balancing, throttling, boosting

Post by Mike in Rancho »

What counts as the proper, most documentary-preserving way of doing things? :?:

Setting aside things like hue mapping and the extent of balancing - those are just the imager's choice as to preferred or best-revealing colors, and likewise how much to show of the weaker emissions.

One of the latest "rages" in PI is some Pixel Math processes by Bill Blanshan to aid in this. At first I was a bit skeptical, and there are some possible flaws (preferring starless images in order to not blow out stars being one), but after further thought and looking at what it does, it didn't seem too bad at all, and maybe even good.

So instead of throttling back and balancing relative chrominance, as we do, they just boost up the weak channels. Black points are matched, and then the median/std dev or whatnot of the Ha channel is replicated in the weak channel - OIII, or OIII + SII.

The results do tend to look good, though you have to keep an eye on raising too much noise, particularly in the OIII. How much boost was applied, however, probably can't be determined so as to explain it to the audience, the way we could explain throttling by looking at the bias in Color.
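In rough code terms, my understanding of that kind of statistic-matching boost is something like the sketch below - just my own illustration, not Bill's actual PixelMath, and the array names are made up:

```python
import numpy as np

def match_stats(weak, ref):
    """Rescale 'weak' (e.g. OIII) so its median and std dev match 'ref' (e.g. Ha).
    Both are assumed to be linear stacks scaled 0..1."""
    scaled = (weak - np.median(weak)) / np.std(weak)   # zero median, unit spread
    matched = scaled * np.std(ref) + np.median(ref)    # impose the Ha-like statistics
    return np.clip(matched, 0.0, 1.0)                  # anything pushed below 0 gets clipped

# oiii_boosted = match_stats(oiii, ha)  # 'oiii' and 'ha' would be the registered filter stacks
```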

But that had me thinking again about the NB balancing in Color, and the differences between such methods - something I've pondered in the past. While the Color module techniques make perfect sense as relative and global emissions balancing, is that still not limited to just the chrominance, applied on top of the synthetic luminance we have been processing, when that luminance has been locked in since the Compose module based on the exposures that were set?

Does that not leave us potentially weak in something like the OIII, because while we may be biasing that channel up, we aren't also raising the associated luminance? It almost makes me think of a potential luminance-chrominance disconnect, as has been discussed in other aspects of composition - usually NB+BB.

Or is something going right over my head? :confusion-shrug:

EDIT: I was up too late and forgot the term. I think it is usually referred to as SHO/HOO "normalization," in that it basically uses the same formula for normalizing that is required when using rejection during sub stacking, but applied to the different finished filter stacks, after cropping etc.

Re: Narrowband balancing, throttling, boosting

Post by admin »

Ok, I just found Bill's video on "Narrowband Normalization" and watched it.

It's... a very convoluted (and flawed) way of doing basic white balancing and background calibration, with some errors (clipping) introduced along the way.

1:50 Not relevant to the merits of the video, but as an aside: the M16 HST image is a combination of more than just three filters (the image in question also adds IR-band data from the WFC3 instrument).

2:20 "Curve adjustments to equalize..." :(

2:30 "After curve adjustments you do a channel combination" :(

That's really not how any of this works (non-linear stretching of the channels individually before combining is a big no-no). But Bill proposes another solution that is not that. So all is (still) good.

3:25 "The stars looked messed up, that's what happens when you are doing curve adjustments". No, that's just because stars don't emit much in the Ha band.

3:40 "Now I'm going to normalize it". He expands on this. I'm assuming he's got a screen stretch when he does it. Normalizing in the linear domain is totally valid of course at this stage. However it is usually redundant, as stars almost always occupy the "brightest" pixel and background calibration (performed by Wipe in ST) has already determined (and corrected for) the bottom value/bias.

6:30 "Median / Ha" I'm not sure how or why Ha or why the median. Normalizing is subtracting the minimum (not median) and scaling up again so that the maximum value found in your dataset is the absolute maximum you can represent. Subtracting the median would severely clip the signal. :confusion-shrug:

6:50 Making sure that your blackpoints match. This assumes - for some reason - that the Ha background is somehow representative of the background level in the O-III and S-II? I'm pretty sure he is just mistaken as the result would lead to some nasty color channel clipping depending on the background levels of O-III and S-II. I am assuming he's really just normalizing the three channels. EDIT: nope, he just takes the Ha for some reason. :confusion-shrug:

7:30 I think he's just explaining how automatic ("grey world") white balancing works. It's a bit unclear, but I believe in this case he says he picks the standard deviation of all three channels as the white ("grey") point. It's as valid a starting point as any. StarTools uses the mean rather than the standard deviation, as it works on the linear data and should yield a more robust/useful default balance.
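For reference, "grey world" balancing on linear data boils down to something like this (a sketch using the per-channel mean as the reference; the standard deviation variant would simply swap np.std in for the mean):

```python
import numpy as np

def grey_world(rgb):
    """Scale each channel so its mean matches the overall mean (the 'grey world' assumption).
    rgb is a linear HxWx3 float array."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # gains that equalize the channel means
    return np.clip(rgb * gains, 0.0, 1.0)
```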

9:30 The only reason normalization works the way it does here is because he removes the stars. That removes the upper constraint (stars would otherwise always define the upper point as white, whereas now the upper point is defined by the nebulosity - some of which is fake, of course).

---- Really, up until here, nothing suggests there is anything special or different about the way StarTools does its color calibration. In fact, if the dataset were processed in ST, you would have gotten pretty much an identical image in terms of color (detail / luminance is - quite intentionally - a different matter of course). The only error introduced is the calibration of the minimum against the Ha minimum, causing channel clipping and some incorrect hues that vary with brightness.

10:00 As Bill points out, the O-III is really noisy, which is a classic example of why StarTools painstakingly separates luminance and chrominance when processing these composites; the noise in the O-III data will not affect the detail/luminance and its effects on the final image will likely be negligible.

10:30 Bill says he "likes to do this on non-linear data".

13:30 An S-curve is not a linear stretch obviously, making things incorrect, but he wisely made it an option. He explains it is to counter issues with background noise as seen in the O-III data. Separating luminance and chrominance (as seen in ST) would circumvent this obviously.

14:10 "You do have green". Indeed, if Bill would not have - for inexplicable reasons - taken the Ha as the background value for all channels, this would not have happened. EDIT: and color balancing stretched data will do that too.

14:50 Not a fundamental problem with Ha - it's just a problem with his script / assumption. And it causes hues that are incorrect.

I hope this helps at all!
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Narrowband balancing, throttling, boosting

Post by Mike in Rancho »

Thanks, Ivo. :bow-yellow:

Totally did not expect a blow-by-blow analysis. I will watch again so as to pay attention to the key points you have flagged. Also had not picked up on the star removal eliminating the dataset extremes which permits the normalizing to work.

Other than noting in passing a few lines that seemed questionable, I had otherwise taken it at face value that -- within their world as limited by what PI accomplishes with pixel math -- what they are doing is boosting a weak channel up to (more or less) the relative brightness of the dominant channel.

Thus, it seemed more legit to me than other PI ways of combining NB channels that I have seen posted, such as cross-channel subtraction or "dynamic" combinations, neither of which seemed to have any documentary value. :confusion-shrug:

But, of course I was just considering his OIII boosting as relates to what we do within ST - and so was wondering if we are leaving anything on the table by altering the channel bias decrease/increase (which may be relative - I'm uncertain there) only in Color. Thus we are doing emission balancing just in the chrominance, without raising respective luminance along with it. :think:

So for example if we increase OIII bias, are we not still applying that against the synthetic luminance created way back in Compose, which would likely have had very weak contribution from the OIII file?

I may not be making any sense, of course. :? Or perhaps I need to go back and read up on Compose again, but it seemed to me the Synth L would lock in weak acquisition.

---

Anyway - no apologies needed for missing my random ramblings! I was actually just going to let this one fade into oblivion, where perhaps it belonged, until Martin's post interested me due to perhaps similar general concepts - albeit that one seems to have been truly, and most literally, photoshopped. ;)

I think the regulars here can tell when you must be very busy with things. Don't mind us, we'll only get a few really crazy ideas going without you keeping us on the straight and narrow. :lol:

Re: Narrowband balancing, throttling, boosting

Post by admin »

Mike in Rancho wrote: Wed Mar 01, 2023 7:37 am I was just considering his OIII boosting as relates to what we do within ST - and so was wondering if we are leaving anything on the table by altering the channel bias decrease/increase (which may be relative - I'm uncertain there) only in Color. Thus we are doing emission balancing just in the chrominance, without raising respective luminance along with it. :think:
That is exactly what is happening, and it's a hard-fought feature not a bug.
So for example if we increase OIII bias, are we not still applying that against the synthetic luminance created way back in Compose, which would likely have had very weak contribution from the OIII file?
Indeed, an increase in O-III bias purely changes O-III's color prevalence versus the other bands. It does not increase O-III's contribution to luminance/detail. In the case of poor O-III signal, it will piggyback on any (usually stronger) Ha or S-II luminance. That's the beauty of it.

The whole reason we do this is so that we can achieve the perfect luminance blend of all bands - the one that achieves the optimal SNR for the signal at hand. Every band is weighted exactly as it is supposed to be.

In return, you get the highest SNR luminance stack achievable from your data, with important benefits for how much detail you can wring out of your luminance (decon, etc.). There are virtually no drawbacks to this, only upsides.
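As a sketch of the idea only (assuming weights proportional to aggregate exposure time per band; this is not the Compose module's exact math):

```python
import numpy as np

def synthetic_luminance(bands, exposures):
    """Combine linear narrowband stacks into one luminance, weighted by aggregate exposure.
    bands:     dict of equally-sized float arrays, e.g. {'Ha': ha, 'OIII': oiii, 'SII': sii}
    exposures: dict of total integration time per band, in any consistent unit."""
    total = float(sum(exposures.values()))
    return sum(bands[name] * (exposures[name] / total) for name in bands)

# Hypothetical usage with 7h Ha, 2h OIII and 3h SII:
# synth_l = synthetic_luminance({'Ha': ha, 'OIII': oiii, 'SII': sii},
#                               {'Ha': 7, 'OIII': 2, 'SII': 3})
```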
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Narrowband balancing, throttling, boosting

Post by Mike in Rancho »

Thank you Ivo. Much to think about. :think:

It seems to go against what we've otherwise said about luminance-chrominance disconnects, but I need to go back and read up more on Synth L generation details and what happens from Compose to Wipe to Color. And I may be focusing too much on one sub-issue when there are greater considerations, such as the SNR.

Re: Narrowband balancing, throttling, boosting

Post by admin »

Mike in Rancho wrote: Thu Mar 02, 2023 5:23 pm Thank you Ivo. Much to think about. :think:

It seems to go against what we've otherwise said about luminance-chrominance disconnects, but I need to go back and read up more on Synth L generation details and what happens from Compose to Wipe to Color. And I may be focusing too much on one sub-issue when there are greater considerations, such as the SNR.
Hi Mike,

I was hoping that the information was just an affirmation of how you thought luminance and chrominance separation works (and how it benefits both SNR and color rendering), but if there's anything you'd like me to clarify, do let me know.
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Narrowband balancing, throttling, boosting

Post by Mike in Rancho »

admin wrote: Thu Mar 02, 2023 10:46 pm Hi Mike,

I was hoping that the information was just an affirmation of how you thought luminance and chrominance separation works (and how it benefits both SNR and color rendering), but if there's anything you'd like me to clarify, do let me know.
Appreciate that. Sorry for being such a pain. :oops:

Have I just thought myself into a rabbit hole from which there is no escape? :confusion-shrug:

For sure I find ST's workflow to be appropriate and likely superior to anything else for staying logically true to the data, without having to jump through weird hoops or the creation of masks in order to protect and preserve color properly. Hands down, for broadband.

And procedurally, I like ST's narrowband functionality as well, permitting balancing tied to filter regardless of mapping. Though of course I'll always want more color power, maybe in 1.11. ;)

Conceptually though, I am trying to think of the best "documentary" method of throttling or boosting S, H, and/or O as needed, and then explaining to the viewers how that balance was achieved.

Thus, it's the, as you say, piggybacking of, for example, OIII color on Ha luminance detail that has me perplexed, if either Ha is now throttled or OIII is boosted, or both. Similarly, I worry about a lonely clump of far-flung OIII pixels that ends up averaged into the Synth L with nothing from the Ha or SII, rather than, say, maxed. And then of course as we alter the filter weights in Color, the L remains as-is.

Of course, I may also be dreaming up situations that don't happen, or rarely ever do, in actual real data. Since OIII and SII may well only occur in conjunction with dominant Ha most of the time, and as to structure they would be shadowing each other based on foreground/background, probably in a complex manner. :think:

Anyway, yeah probably a self-inflicted rabbit hole. :lol:

Re: Narrowband balancing, throttling, boosting

Post by admin »

Mike in Rancho wrote: Sat Mar 04, 2023 6:17 am
admin wrote: Thu Mar 02, 2023 10:46 pm Hi Mike,

I was hoping that the information was just an affirmation of how you thought luminance and chrominance separation works (and how it benefits both SNR and color rendering), but if there's anything you'd like me to clarify, do let me know.
Appreciate that. Sorry for being such a pain. :oops:
Not at all!
Conceptually though, I am trying to think of the best "documentary" method of throttling or boosting S, H, and/or O as needed, and then explaining to the viewers how that balance was achieved.

Thus, it's the, as you say, piggybacking of, for example, OIII color on Ha luminance detail that has me perplexed, if either Ha is now throttled or OIII is boosted, or both. Similarly, I worry about a lonely clump of far-flung OIII pixels that ends up averaged into the Synth L with nothing from the Ha or SII, rather than, say, maxed. And then of course as we alter the filter weights in Color, the L remains as-is.
Luminance/detail is signal from all bands lumped together. Whether you boost Ha, O-III or S-II, they will all end up together as one luminance value. If weighted according to signal fidelity (e.g. aggregate exposure time) for each of the bands, this gives you the cleanest starting point for enhancing your detail. The better the aggregate signal, the more effectively you can push it. Upsetting the weighting means the stack cannot be pushed as far, as you will have made your stack noisier than needed; there is no such thing as a free lunch - you cannot simply boost (multiply) a weak O-III dataset to try to compensate for a weak signal. Multiplying the signal multiplies the noise with it, and will thus increase the aggregate noise of your stack.

Needlessly upsetting the SNR in your luminance stack would be a bad idea if your goal is merely to balance the coloring to show the location of relative emissions by means of color. If you were to boost the "traditional way" to accomplish that (e.g. just dumb scaling of the signal), then you run into the inevitable problem of boosting the noise floor with it in the luminance domain - as is demonstrated in the video.
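A quick toy demonstration of the mechanism (made-up numbers, nothing to do with any particular dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

ha   = 0.10 + rng.normal(0.0, 0.010, n)   # strong Ha: signal 0.10, noise 0.010
oiii = 0.02 + rng.normal(0.0, 0.010, n)   # weak O-III: signal 0.02, same noise floor

def snr(x):
    return x.mean() / x.std()

weighted = 0.8 * ha + 0.2 * oiii          # luminance with the weak band down-weighted
boosted  = 0.5 * ha + 0.5 * (5.0 * oiii)  # O-III multiplied up to Ha's level first

print(f"Ha alone:  SNR ~ {snr(ha):.1f}")        # ~10
print(f"Weighted:  SNR ~ {snr(weighted):.1f}")  # ~10 - the weak band barely hurts
print(f"Boosted:   SNR ~ {snr(boosted):.1f}")   # ~4  - the noise got multiplied too
```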

Take the example of the O-III clump; if the O-III signal is strong in the aggregate luminance, that clump will show up just fine. If the O-III clump is weak, though isolated, something like Sharp, Entropy or HDR will be able to enhance it (and without significantly exacerbating noise). Ditto if the clump is mired in very strong but diffuse Ha and/or S-II.

The great thing is that the coloring, by itself, is 100% enough to say: there is predominantly O-III here compared to the rest of the image. Whether the clump is bright or faint (luminance) is wholly dependent on how much O-III you acquired versus the other bands, or how much you managed to enhance any structure. You would have likely mentioned to your audience already how much of each band you acquired.

Assuming the specific case of weak O-III, the chrominance "piggybacking" on any Ha or S-II signal still allows you to conclusively claim that "there is O-III there according to my chosen emission color balance", even though the weak O-III is mired in much stronger other signal. As said though, if the S-II and Ha are diffuse and the O-III shows structure, recovering the O-III structure is trivial.

Anyhow, sometimes it is helpful to look at the effects on a terrestrial photo, just to see what is going on and whether anything untoward is happening to detail or colors.

An example;

This gentleman is feeding the duckies;
[Image attachment: feedingduckies.jpg]
This is what happens if we drop the "O-III" (blue)'s contribution to just 5% for luminance (I'm using the Red/Green/Blue Luminance Contribution sliders in the FilmDev module here and set Blue's contribution to 5%);
[Image attachment: feedingduckies_OIIIdropped.jpg]
E.g. this is the equivalent of an SHO scene where we used O-III 20x more strongly for chrominance than for luminance.

We are now relying mostly on the luminance of the "Ha" and "S-II", but the scene is still entirely coherent, balanced, informative and true to life.
We can even still sort of tell that the gentleman's pants and coat are blue. We are not lying to the viewer, we haven't changed any details, and we haven't changed the color of his coat. The only thing that happened is that any predominantly blue details (but also any associated noise!) are now harder to see. But, remember, that was the best we could do with our poor 5% O-III dataset. Not too shabby, I'd say.
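For the curious, the effect can be approximated with a few lines of numpy. This is only a stand-in for the FilmDev luminance-contribution sliders (the exact weights and the image load are assumptions), not their actual implementation:

```python
import numpy as np

def reweight_luminance(rgb, weights=(0.475, 0.475, 0.05)):
    """Replace each pixel's luminance with a re-weighted one (blue at 5%) while keeping
    its R:G:B ratios - i.e. its color - intact. rgb is a linear HxWx3 float array, 0..1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    old_l = rgb.mean(axis=-1, keepdims=True)        # simple equal-weight luminance
    new_l = (rgb * w).sum(axis=-1, keepdims=True)   # luminance with blue nearly removed
    scale = np.divide(new_l, old_l, out=np.zeros_like(new_l), where=old_l > 0)
    return np.clip(rgb * scale, 0.0, 1.0)

# import imageio.v3 as iio
# duck = iio.imread("feedingduckies.jpg").astype(float) / 255.0
# dimmed = reweight_luminance(duck)
```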

Let's now try to recover some details;
[Image attachment: feedingduckies_OIIIdropped_shadowslifted.jpg]
This is the gentleman with lifted shadows (Gamma Shadows (Lift) set to 1.25 in the HDR module). Notice the recovery of detail in his "O-III clump colored" coat. This is real detail in the O-III colored area. It will be noisier, sure. Nevertheless it is real detail. And if that detail is blue, then that detail exists in a relatively O-III dominant area. Due to the decoupling of chrominance (100%) and luminance (5%), the recovered detail will include a lot of boosted Ha or S-II as well (if any of that exists in his O-III clump colored coat). That does not, however, change the fact that the detail - whatever its makeup may be - exists in an O-III dominant area as determined by our chosen color/band balance. It remains a fact.
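(If you want to experiment outside of StarTools, a crude stand-in for a shadow lift looks something like the sketch below. It is not the HDR module's algorithm - just a gamma applied below mid-grey:)

```python
import numpy as np

def lift_shadows(lum, lift=1.25):
    """Apply a gamma of 1/lift to values below mid-grey only, leaving highlights untouched."""
    out = np.asarray(lum, dtype=float).copy()
    lo = out < 0.5
    out[lo] = 0.5 * (out[lo] / 0.5) ** (1.0 / lift)
    return out
```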

Or in other words, taking it completely to the extreme, you don't even have to show any O-III detail at all to still be able to claim through the use of color alone that an area is dominant in O-III relative to another in outer space. Your claim is valid and anyone can repeat the experiment and verify your claim. The detail in that area also exists, and anyone can verify that as well.

Or in other words, both your detail and color in the same image absolutely have documentary value, even when they are decoupled.
Of course, I may also be dreaming up situations that don't happen, or rarely ever do, in actual real data. Since OIII and SII may well only occur in conjunction with dominant Ha most of the time, and as to structure they would be shadowing each other based on foreground/background, probably in a complex manner. :think:
I can think of a few cases where O-III definitely "stands on its own", such as the Crescent Nebula, or the Giant Squid Nebula (OU 4).

The O-III in the Crescent can indeed be a little (a little!) trickier to process in luminance compared to resorting to the "sledgehammer" of multiplying O-III in the case of weak O-III, but OU 4 is a good example of O-III mostly existing against a "backdrop" of more diffuse Ha, making it trivial to enhance.
Anyway, yeah probably a self-inflicted rabbit hole. :lol:
Rabbit holes are awesome. As long as you get out of them again.
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Narrowband balancing, throttling, boosting

Post by Mike in Rancho »

Interesting. :think:

And perhaps a can of worms. How does one get out of those? :lol:

I hadn't thought through that, as a starting point, luminance intensity would have a basis in how long I had the camera stare at something with a filter, though of course I do know that's how the Synth L works with the exposure times. That seems like I'm imposing my will on the universe, rather than just collecting the cold hard facts of reality. Grrr.

But with those things in mind, yes I can see why this is all set up the way it is. And so, perhaps one solution to boosting a weak NB channel, is simply mass integration to drive up that SNR. How annoying. ;)

I did download the first jpg of Ducky-man earlier today and played with it some, so that was a good exercise. Though being broadband, query how fully analogous it is to two or three slices of narrowband? Unlike, perhaps, some regions of a bicolor or tricolor NB which are just solid colors absent blending, mapping, or variable intensity, I'm not sure there's much in the way of pure solid colors in broadband or terrestrial. And so his blue jeans and navy blue coat have some values in the R, and quite a bit more in the G. In fact, it seems the B and G are where most of the detail is.

That said, at "acquisition," Ducky-man is already sporting some decent blue in his clothes, rather than it being weak at acquisition and needing any extra boost. And though shadow lift in HDR did seem to help with recovery (I created a 5% B file too), I wonder if some of that was detail still in the stashed-away chrominance, particularly G.

After I used FilmDev to lower the B to 5%, I used Color, and being in tracking the closest-to-original I came up with was about 100% sat and 2.0 each for bright and dark sat, then saved it. Composing that file back in, as if it had been acquired that way much like weak OIII NB, it was far harder, if possible at all, to recover those details. And what I did recover was pretty poor. Of course it's also an 8-bit jpg, so that probably didn't help.

I was, however, even with that 5% file, able to bring pretty good blue into Color for those clothes with heavy throttling of R and G, so that was cool.

So, other than just way more integration on the weak filter, how does one boost or balance up the (typically) OIII, in a more aesthetically pleasing, color contrast displaying, and yet still legitimate manner?

Increase shadow linearity in OptiDev.
Structure enhancement as you indicated, so Contrast, HDR, Sharp.
Dark saturation in Color.
SS-Brighten, possibly with a switch to Color-Only, or as otherwise discussed in Martin's Chicken thread.

All viable? Though I realize SS may start becoming a bit enhancy.

I tried all except the first on some Ha+OIII Cal Neb data (not mine) and the boosted OIII seemed to come out pretty well, especially with the SS module.

As a follow-on question that I had while testing all that - several modules have options for dark/bright, shadow/highlight, such as HDR and Sharp. Are there intensity ranges or cutoffs for what falls into either? The Color module too, for saturation - is that based on the overall luminance still, or a range of bright/dark intensity within each of the color channels?

Dark sat did seem to do the trick, though I pulled back on general and bright sat, knowing that I was headed to SS-Brighten afterwards.

Thanks again. :obscene-drinkingcheers:

Re: Narrowband balancing, throttling, boosting

Post by admin »

Mike in Rancho wrote: Sun Mar 05, 2023 8:00 am Interesting. :think:

And perhaps a can of worms. How does one get out of those? :lol:
One bite at a time? Or was that some other wisdom? I can't remember... :P
I hadn't thought through that, as a starting point, luminance intensity would have a basis in how long I had the camera stare at something with a filter, though of course I do know that's how the Synth L works with the exposure times. That seems like I'm imposing my will on the universe, rather than just collecting the cold hard facts of reality. Grrr.

But with those things in mind, yes I can see why this is all set up the way it is. And so, perhaps one solution to boosting a weak NB channel, is simply mass integration to drive up that SNR. How annoying. ;)
That's the thing with non-terrestrial scenes though and especially narrowband imaging recording emissions; there is no one prescribed exposure time or "balance".
I did download the first jpg of Ducky-man earlier today and played with it some, so that was a good exercise. Though being broadband, query how fully analogous it is to two or three slices of narrowband? Unlike, perhaps, some regions of a bicolor or tricolor NB which are just solid colors absent blending, mapping, or variable intensity, I'm not sure there's much in the way of pure solid colors in broadband or terrestrial. And so his blue jeans and navy blue coat have some values in the R, and quite a bit more in the G. In fact, it seems the B and G are where most of the detail is.
Oh, there's lots of caveats here; in this scene we are dealing with reflection (not with emission), the image was already balanced, the real recorded R:G:B strength was likely something like 1:2:1, etc.

The point was to take an image with areas of distinctly different coloring and demonstrate the effects of decoupling luminance and chrominance on the documentary value of what is presented.

That said, objects in outer space tend to emit in different wavelengths at the same time (there are always exceptions obviously), so the example of an "O-III coat" exhibiting S-II and Ha detail is not uncommon.
That said, at "acquisition," Ducky-man is already sporting some decent blue in his clothes, rather than it being weak at acquisition and needing any extra boost. And though shadow lift in HDR did seem to help with recovery (I created a 5% B file too), I wonder if some of that was detail still in the stashed-away chrominance, particularly G.
Indeed, you can kill the blue channel's contribution to luminance entirely and still recover detail. That detail would be 100% devoid of any O-III structure, but the image would still be of documentary value. Mention of the complete removal of any O-III from the detail/luminance would definitely be warranted though. Something along the lines of "Color indicates relative emission concentrations of S-II, Ha and O-III, with structural detail provided by Ha and S-II". Note that this sort of thing is also done - out of pure necessity - when combining visual imagery with data from vastly different wavelengths (X-ray, gamma ray or deep infrared) for which detectors typically have very different resolutions.
After I used FilmDev to lower the B to 5%, I used Color, and being in tracking the closest-to-original I came up with was about 100% sat and 2.0 each for bright and dark sat, then saved it. Composing that file back in, as if it had been acquired that way much like weak OIII NB, it was far harder, if possible at all, to recover those details. And what I did recover was pretty poor. Of course it's also an 8-bit jpg, so that probably didn't help.
The entire point of the exercise is to explain that the Color module is categorically not for recovering detail. If you are trying to recover detail with the Color module, you are probably using the worst possible tool in StarTools to do that. In fact, it actively tries to suppress any detail enhancement as much as it can (it will never 100% succeed in all cases due to gamut issues requiring compromises; for example, to make blue look visually as bright as the brightest green, the other channels have to pitch in a lot).
I was, however, even with that 5% file, able to bring pretty good blue into Color for those clothes with heavy throttling of R and G, so that was cool.
And that was the point! :thumbsup:
Thanks to the decoupling, you can get away with very weak data and still achieve strong coloring.
So, other than just way more integration on the weak filter, how does one boost or balance up the (typically) OIII, in a more aesthetically pleasing, color contrast displaying, and yet still legitimate manner?
Increase shadow linearity in OptiDev.
Structure enhancement as you indicated, so Contrast, HDR, Sharp.
Dark saturation in Color.
SS-Brighten, possibly with a switch to Color-Only, or as otherwise discussed in Martin's Chicken thread.

All viable? Though I realize SS may start becoming a bit enhancy.
Indeed. And all preferably used before the Color module.
color contrast displaying
As long as you understand the difference between contrasting hues ("red vs blue vs green") and contrasting detail ("bright vs dark").
As a follow-on question that I had while testing all that - several modules have options for dark/bright, shadow/highlight, such as HDR and Sharp. Are there intensity ranges or cutoffs for what falls into either? The Color module too, for saturation - is that based on the overall luminance still, or a range of bright/dark intensity within each of the color channels?
If color channels are not specifically mentioned, then dark/bright, shadow/highlight pertain to the luminance component only.
"Shadow" vs "Highlight" usually exists in the context of 'Midtones'. Shadows run from darkest to grey, highlights run from grey to brightest. Midtones center around perfect grey.

Hope this helps!
Ivo Jager
StarTools creator and astronomy enthusiast