PI for preprocessing

Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

PI for preprocessing

Post by Mike in Rancho »

I'm a couple weeks into my PI trial. Yikes, what these guys subject themselves to. :shock:

Still, it's a vast product, and I intended to buy it even before trying out the trial, which I will. Star alignment is really good, there's quite a few tools for inspecting files, the blink is excellent, the plate solving and annotation is pretty cool, and after just a few tries I'm thinking WBPP will be my stacker of choice going forward. For post-processing, I've been playing (fighting?) with it, but just to make myself somewhat literate. The translation table should help me with that too.

Of course there's a learning curve, because...PI, so you have to use a lot of Google and YouTube before you can get anything stacked. But once you get past that hurdle, the features are nice. I was able to use directory load to easily toss in two separate nights of 5 filters, each with their own flats per night, just the way NINA saves them, along with my library darks (two different exposures) and bias, and everything was stacked to the single best reference. :D ASTAP, which I've been using for a year, cannot do it quite that way, requiring 5 separate stacks which are then registered, and while DSS can stack to a chosen reference, it's more hands-on and I think you have to do it 5 separate times.

The question becomes, what PI/WBPP bells and whistles should/can/shouldn't be used for post in ST, or perhaps just in general if one is concerned with data fidelity. :?: I see we don't have a PI or WBPP preferred stacking workflow.

There are two techniques of concern, I think. First is the namesake weighting of Weighted Batch Preprocessing. At first blush, that seems a little cheaty. I'm not sure how strong the increase or decrease in weighting of any particular sub can be, but is it legitimate to do such a thing, based on the quality of the sub, and for inclusion in the stack rather than full culling? That said, I suppose sort of the same thing could happen if you see an iffy sub come into the computer during acquisition, and so just tell NINA to take an extra one (or two) at the end. That's weighting, in a way. But then those would be real subs. Still, I'm not sure this one is too egregious.

Second is Local Normalization, which apparently has undergone great improvement and is now considered good as standard procedure, rather than the use-only-in-case-of-emergency tool it may have been in the beginning.

I know the ST mantra is no background normalizing in stacking, although I am having second thoughts on that a bit. I ran a blink on an hour of 60s L subs from my M78 last month. It goes like a little movie, and one option is to select the same STF stretch for all subs. It was amazing how much difference in brightness occurred in just an hour. I have to think that would turn sigma rejection and averaging into a complete mess. :think:

As far as I know it's only DSS that can turn off all normalizing, at least if using rejection. In ASTAP it's just done and you have no say in the matter.

But that's global normalizing. LN apparently does what it says, normalizing to one of the reference frames in a...spatially variant?...manner. Supposedly it's good to pick your best/darkest likely at meridian. Depending on your acquisition, there could be no difference at all from global normalizing. The stated purpose though is for multiple, or moving, gradients and/or multi-session acquisition, which of course exacerbates those even more. I have all of those in my backyard, of course. :D

I'm still trying to think my way through this. Again, at first it seems like a red flag for unwanted pre-processing manipulation, but then perhaps not, at least if it's doing it right and not artifacting. The complex mix of gradients should be simplified, and either way it's going to be hit with Wipe -- possibly with very high, or too high, aggression if the gradients are too scrambled?

Ponder ponder.

So, wondering what the official ST thoughts and guidelines on this stuff would be. As well as what's being used out there by anyone else doing PI-stack/ST-post, especially in lots of LP.
fmeireso
Posts: 367
Joined: Mon Sep 28, 2020 8:46 pm
Location: Belgium

Re: PI for preprocessing

Post by fmeireso »

I have had PI for some time now and I like it too for preprocessing. I think WBPP is quite simple; the grouping feature however took me a couple of YouTube videos, some reading, and help from CN members to get working. But it stacks really great imho, although it needs more time than DSS.
StarAlignment is also great and just works. For the rest I use SubframeSelector to evaluate FWHM, or at least to get an idea.

I tried CosmeticCorrection, dunno if it really makes a difference in the end result...

Great software, complicated, but a really nice Windows-like interface.

But post-processing? No thanks. ST really does a great job, and since I can get great results I don't see the need for anything else, or to reinvent the wheel...
admin
Site Admin
Posts: 3367
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: PI for preprocessing

Post by admin »

Mike in Rancho wrote: Sun Jan 08, 2023 8:22 am I'm a couple weeks into my PI trial. Yikes, what these guys subject themselves to. :shock:

Still, it's a vast product, and I intended to buy it even before trying out the trial, which I will. Star alignment is really good, there's quite a few tools for inspecting files, the blink is excellent, the plate solving and annotation is pretty cool, and after just a few tries I'm thinking WBPP will be my stacker of choice going forward. For post-processing, I've been playing (fighting?) with it, but just to make myself somewhat literate. The translation table should help me with that too.

Of course there's a learning curve, because...PI, so you have to use a lot of Google and YouTube before you can get anything stacked. But once you get past that hurdle, the features are nice. I was able to use directory load to easily toss in two separate nights of 5 filters, each with their own flats per night, just the way NINA saves them, along with my library darks (two different exposures) and bias, and everything was stacked to the single best reference. :D ASTAP, which I've been using for a year, cannot do it quite that way, requiring 5 separate stacks which are then registered, and while DSS can stack to a chosen reference, it's more hands-on and I think you have to do it 5 separate times.
Congrats! PI is IMHO the most capable stacking tool, hands down.
Mike in Rancho wrote: The question becomes, what PI/WBPP bells and whistles should/can/shouldn't be used for post in ST, or perhaps just in general if one is concerned with data fidelity. :?: I see we don't have a PI or WBPP preferred stacking workflow.

There are two techniques of concern, I think. First is the namesake weighting of Weighted Batch Preprocessing. At first blush, that seems a little cheaty. I'm not sure how strong the increase or decrease in weighting of any particular sub can be, but is it legitimate to do such a thing, based on the quality of the sub, and for inclusion in the stack rather than full culling? That said, I suppose sort of the same thing could happen if you see an iffy sub come into the computer during acquisition, and so just tell NINA to take an extra one (or two) at the end. That's weighting, in a way. But then those would be real subs. Still, I'm not sure this one is too egregious.
The weighting pertains to a quality "score" for each frame. Frames that are deemed higher quality (there are many characteristics and ways to establish such a quality score), are given a higher weighting during stacking.
For example, a temporary reduction in atmospheric transparency, seeing, or some other event may leave one or two frames of lesser quality (PSFs may be significantly different from the others, light pollution signal may be significantly different from the others, etc.).
It stands to reason to either reject those frames, or give them a much lower weighting when aggregating their signal.
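For intuition only, here is a minimal numpy sketch of what quality-weighted integration boils down to. The function, quality scores and numbers are made up for illustration; PI's actual weighting formula, normalization and rejection steps are far more involved.

Code: Select all

import numpy as np

def weighted_integrate(frames, weights):
    """Combine registered, calibrated frames using per-frame quality weights.
    A simplified stand-in for the weighting WBPP feeds into integration."""
    frames = np.asarray(frames, dtype=np.float64)    # shape: (n_frames, H, W)
    weights = np.asarray(weights, dtype=np.float64)  # shape: (n_frames,)
    weights = weights / weights.sum()                # normalize to sum to 1
    # Weighted average: higher-scoring frames contribute more to the stack.
    return np.tensordot(weights, frames, axes=1)

# Toy example: three subs, the middle one hit by poor transparency, so it
# gets a lower (made-up) quality score instead of being culled outright.
rng = np.random.default_rng(0)
frames = [rng.normal(100.0, 5.0, (64, 64)) for _ in range(3)]
frames[1] += 30.0   # e.g. elevated sky background in the poor sub
stack = weighted_integrate(frames, weights=[1.0, 0.4, 1.0])

The poor sub still contributes real signal; it just counts for less in the average than its better neighbours.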
Mike in Rancho wrote: Second is Local Normalization, which apparently has undergone great improvement and is now considered good as standard procedure, rather than the use-only-in-case-of-emergency tool it may have been in the beginning.

I know the ST mantra is no background normalizing in stacking, although I am having second thoughts on that a bit. I ran a blink on an hour of 60s L subs from my M78 last month. It goes like a little movie, and one option is to select the same STF stretch for all subs. It was amazing how much difference in brightness occurred in just an hour. I have to think that would turn sigma rejection and averaging into a complete mess. :think:

As far as I know it's only DSS that can turn off all normalizing, at least if using rejection. In ASTAP it's just done and you have no say in the matter.

But that's global normalizing. LN apparently does what it says, normalizing to one of the reference frames in a...spatially variant?...manner. Supposedly it's good to pick your best/darkest likely at meridian. Depending on your acquisition, there could be no difference at all from global normalizing. The stated purpose though is for multiple, or moving, gradients and/or multi-session acquisition, which of course exacerbates those even more. I have all of those in my backyard, of course. :D

I'm still trying to think my way through this. Again, at first it seems like a red flag for unwanted pre-processing manipulation, but then perhaps not, at least if it's doing it right and not artifacting. The complex mix of gradients should be simplified, and either way it's going to be hit with Wipe -- possibly with very high, or too high, aggression if the gradients are too scrambled?
I don't have a totally straightforward answer for you.

It's a tricky one. You're damned if you do (potential for noise levels and dynamic range to vary locally; can impact things ranging from final gradient removal to deconvolution) and damned if you don't (some outlier rejection algorithms will not work, or not work as well without normalization).

Much depends on your circumstances, skies, data quality and characteristics of your gear.

Times when I would be less likely to recommend LN:
  • Your datasets suffer from non-celestial uneven lighting issues (e.g. flats issues, remnant vignetting, etc.)
  • You cannot establish a good clean reference frame.
  • Data was shot over a single night under good, clean skies
  • Frame exposure times are high (lots of stars that approach the upper part of the dynamic range).
  • Narrow fields
  • Narrowband data
  • Total integration time is shorter
Times when LN would likely be beneficial:
  • You can establish a good clean reference frame
  • You shoot the same object over multiple nights
  • You shoot wide fields
  • LP is significant and sky quality is unpredictable
  • Total integration time is longer
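To make the global vs. local distinction concrete, here is a very rough numpy sketch. The function names, grid size and additive-only model are my own simplification for illustration; PI's LocalNormalization does a great deal more than this.

Code: Select all

import numpy as np

def global_normalize(frame, reference):
    """Global normalization: match the frame's overall background to the
    reference with a single offset applied to the whole frame."""
    return frame + (np.median(reference) - np.median(frame))

def local_normalize(frame, reference, box=64):
    """Rough illustration of local normalization: estimate the background of
    frame and reference on a coarse grid and apply a spatially varying offset,
    so differing gradients are matched region by region rather than globally."""
    correction = np.zeros_like(frame, dtype=np.float64)
    h, w = frame.shape
    for y in range(0, h, box):
        for x in range(0, w, box):
            ref_bg = np.median(reference[y:y + box, x:x + box])
            frm_bg = np.median(frame[y:y + box, x:x + box])
            correction[y:y + box, x:x + box] = ref_bg - frm_bg
    return frame + correction

The point is simply that a per-region correction can line up frame-to-frame gradient differences that a single global offset cannot, which is also why pixel rejection tends to behave better after it.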
I hope that is of some use!
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1141
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: PI for preprocessing

Post by Mike in Rancho »

Thanks Ivo. :thumbsup:

I will try to keep those factors in mind for go/no-go on LN, and learn how to better tune any settings and/or select the appropriate reference.

While I'd rather not manipulate (or at least not unless I have a better understanding of what it is doing), it's starting to seem that LP or moving gradients are already manipulating my data from the get-go. Between that and my struggles with Wipe extraction on troublesome L filter data, I'm starting to have a crisis of faith in the reality of my details. :lol: