Journal paper on spatially variant deconvolution?

General discussion about StarTools.
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Journal paper on spatially variant deconvolution?

Post by Mike in Rancho »

This link just popped up in the DSLR-DSO subforum of CN: https://arxiv.org/pdf/2212.02594v2.pdf

Seems to be a science journal paper (which of course means it has a punny title) for a technique that will be employed on an upcoming NASA mission for a mini space telescope? I've just briefly skimmed the paper as well as the PUNCH mission website, but it's all generally over my head anyway. It also may be focused more on coma defects than generalized deconvolution? Or maybe not.

The footnotes have no reference to Ivo or ST. :(

But they also don't mention Richardson or Lucy. Wouldn't that be the underlying footnote to begin any discussion of astronomical deconvolution, or has that just been too refined by more recent papers?
admin
Site Admin
Posts: 3369
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Journal paper on spatially variant deconvolution?

Post by admin »

Mike in Rancho wrote: Mon Jun 19, 2023 8:02 pm This link just popped up in the DSLR-DSO subforum of CN: https://arxiv.org/pdf/2212.02594v2.pdf

Seems to be a science journal paper (which of course means it has a punny title) for a technique that will be employed on an upcoming NASA mission for a mini space telescope? I've just briefly skimmed the paper as well as the PUNCH mission website, but it's all generally over my head anyway. It also may be focused more on coma defects than generalized deconvolution? Or maybe not.

The footnotes have no reference to Ivo or ST. :(

But they also don't mention Richardson or Lucy. Wouldn't that be the underlying footnote to begin any discussion of astronomical deconvolution, or has that just been too refined by more recent papers?
Hi Mike,

I came across that paper a while ago (before it was submitted). It definitely describes a form of Spatially Variant PSF deconvolution, but a somewhat "naive" one.

Compared to ST's SV Decon, there are some important differences in approach. For one, they operate mostly in frequency space (and hence require overlapping regions, need boundary effect counter-measures, and require more compute power for the same task).
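To give a rough idea of the sort of approach the paper takes (this is just a hypothetical numpy sketch, not their actual pipeline): split the frame into overlapping tiles, deconvolve each tile in frequency space with the PSF sampled at its centre, and feather the overlaps back together. The tile and overlap sizes, the psf_for_tile helper and the simple Wiener-style inversion are all made up for illustration.

import numpy as np

def deconvolve_tile(tile, psf, eps=1e-2):
    # Simple Wiener-style inversion of one tile with its local PSF.
    # The PSF is padded to the tile size with its centre moved to pixel
    # (0, 0) so the inversion doesn't introduce a spatial shift; eps is a
    # crude regularization term that keeps the division stable.
    pad = np.zeros(tile.shape)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    P = np.fft.rfft2(pad)
    T = np.fft.rfft2(tile)
    return np.fft.irfft2(T * np.conj(P) / (np.abs(P) ** 2 + eps), s=tile.shape)

def tiled_sv_deconvolve(image, psf_for_tile, tile=128, overlap=32):
    # Cover the frame with overlapping tiles, deconvolve each tile with the
    # PSF sampled at its centre, and feather the overlaps back together to
    # hide each tile's boundary artefacts. (Edge remainders that don't fit
    # a whole tile are ignored for brevity.)
    h, w = image.shape
    out = np.zeros(image.shape)
    weight = np.zeros(image.shape)
    win = np.outer(np.hanning(tile), np.hanning(tile)) + 1e-6
    step = tile - overlap
    for y0 in range(0, h - tile + 1, step):
        for x0 in range(0, w - tile + 1, step):
            patch = image[y0:y0 + tile, x0:x0 + tile]
            psf = psf_for_tile(y0 + tile // 2, x0 + tile // 2)
            out[y0:y0 + tile, x0:x0 + tile] += deconvolve_tile(patch, psf) * win
            weight[y0:y0 + tile, x0:x0 + tile] += win
    return out / np.maximum(weight, 1e-6)

Even in this toy form you can see where the extra cost comes from: every tile needs its own forward and inverse FFTs, the overlap regions get processed more than once, and the windowing/feathering is only there to paper over the boundary effects that frequency-space processing introduces in the first place.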

I'm not sure how useful the results are for AP purposes however...

I commented in the CN thread as well.

Thanks for sharing!
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Journal paper on spatially variant deconvolution?

Post by Mike in Rancho »

Thanks for the explanation, Ivo. Of course you saw it early. The deconvolution community must be pretty small.

Your short essay in the thread may induce some commentary. ;)

It looks like the OP may need your chart on the continuum from data fidelity/recovery tools and modules to those that are artistic enhancement, and the points in between.

I may have mentioned it before (brain fade), but I still think the SVD page in the features and docs could be updated with new before/afters to better show the star effects, with less "big white disk-ing". The structure detail improvement of course looks great. Those examples may still be from earlier 1.8 versions and so might benefit from 1.9 renditions? :think:
admin
Site Admin
Posts: 3369
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Journal paper on spatially variant deconvolution?

Post by admin »

Mike in Rancho wrote: Mon Jun 26, 2023 4:08 pm Thanks for the explanation, Ivo. Of course you saw it early. The deconvolution community must be pretty small.
I don't think there's so much of a community, but you can't help being exposed to other people's work when you're researching and tinkering. I don't really move in academic circles any more either (though I've been flirting on and off with the idea of doing another degree).

Being an outsider can be very refreshing though - you're less prone to "groupthink" and tend to approach things from a different angle, as you're less likely/inclined to build on other people's work (references, references, references). For example, SVDecon performs its deconvolution in the spatial domain, rather than the - usually preferred - frequency domain (as in the paper). It's the less common choice, but it comes with a bunch of benefits that dovetail nicely with Tracking and de-ringing, while (also important) being faster.
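For the curious, here's a minimal sketch of what "spatially variant convolution in the spatial domain" means - purely illustrative, not ST's implementation; psf_at is a made-up helper that would return the local kernel for a given position:

import numpy as np

def sv_convolve(image, psf_at):
    # Forward model: blur each output pixel with the PSF that applies at
    # that location. psf_at(y, x) is a hypothetical helper returning a
    # small, normalised kernel (e.g. interpolated from measured star PSFs).
    h, w = image.shape
    out = np.zeros(image.shape)
    for y in range(h):
        for x in range(w):
            k = psf_at(y, x)
            kh, kw = k.shape
            for dy in range(kh):
                for dx in range(kw):
                    yy = y + dy - kh // 2
                    xx = x + dx - kw // 2
                    if 0 <= yy < h and 0 <= xx < w:
                        out[y, x] += k[dy, dx] * image[yy, xx]
    return out

A single FFT multiplication can only apply one global kernel to the whole frame, which is why frequency-space methods have to fall back on overlapping tiles; in the spatial domain the kernel is simply free to change from pixel to pixel.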
I may have mentioned it before (brain fade), but I still think the SVD page in the features and docs could be updated with new before/afters to better show the star effects, with less "big white disk-ing". The structure detail improvement of course looks great. Those examples may still be from earlier 1.8 versions and so might benefit from 1.9 renditions? :think:
Yeah, you're absolutely right. It all could really use an overhaul. Updating the docs for 1.9 is next on my list.
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Journal paper on spatially variant deconvolution?

Post by Mike in Rancho »

Thanks Ivo. I've been following along. I notice you mentioned that SVD is not really meant for "fixing" star shapes, and was wondering if you could expound on that for us. :?:

I know we do have a spatial error slider (I'm not very practiced with it), and you've also mentioned before that adding iterations will bring things ever more towards a point source...

Also, granted we should all, first things first, attempt to get our business under control with respect to things like tilt and corrector backspacing...

Is it more that SVD is trying to clarify the non-star detail, and the stars' PSFs are just the (variable) roadmap? Thus, if a corner star is a bit of an egg, ST will understand that the proper deconvolution of a nearby Bok globule (or whatever) is reverse-egg? Whereas maybe at center FOV the deconvolution is more reverse-circle.

Of course, while we all want that, it's probably also true that star shapes can make or break an image. The eye is just drawn to any such defects. Hence probably the popularity of something like BXT, regardless of where it might fall on the deconvolution continuum (which seems more on the de-blur and warp side than the data recovery side, if one defines expansively, or somewhere in the middle if it is trying to do both).

How much power is there in true R-L deconvolution to undo things like eggy or coma stars, or is that really off the table? Maybe I haven't been thinking about SVD's purpose properly. :think:
admin
Site Admin
Posts: 3369
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Journal paper on spatially variant deconvolution?

Post by admin »

Mike in Rancho wrote: Tue Jun 27, 2023 6:54 pm Thanks Ivo. I've been following along. I notice you mentioned that SVD is not really meant for "fixing" star shapes, and was wondering if you could expound on that for us. :?:

I know we do have a spatial error slider (I'm not very practiced with it), and you've also mentioned before that adding iterations will bring things ever more towards a point source...

Also, granted we should all, first things first, attempt to get our business under control with respect to things like tilt and corrector backspacing...

Is it more that SVD is trying to clarify the non-star detail, and the stars' PSFs are just the (variable) roadmap? Thus, if a corner star is a bit of an egg, ST will understand that the proper deconvolution of a nearby Bok globule (or whatever) is reverse-egg?
That's correct! In principle, however, all pixels are treated equally (star, Bok globule or otherwise). It's all about the signal quality of the areas and how much noise and how many inaccuracies you'd be "scooping up" trying to gather all that spread-out signal and re-concentrate it.
Whereas maybe at center FOV the deconvolution is more reverse-circle.
Exactly.
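To make the "variable roadmap" idea concrete: a local kernel could, for instance, be blended from the PSFs measured at nearby sampled stars, something like the hypothetical sketch below (not ST's actual sampling or interpolation scheme; local_psf, star_positions and star_psfs are made-up names):

import numpy as np

def local_psf(y, x, star_positions, star_psfs, power=2.0):
    # Inverse-distance-weighted blend of the PSFs measured at sampled stars.
    # star_positions: (N, 2) array of (y, x); star_psfs: list of N kernels
    # of identical shape. Purely illustrative.
    d = np.hypot(star_positions[:, 0] - y, star_positions[:, 1] - x)
    w = 1.0 / (d ** power + 1e-6)
    k = sum(wi * p for wi, p in zip(w, star_psfs))
    return k / k.sum()  # keep the kernel normalised

An eggy corner star then contributes an eggy kernel to its neighbourhood, and deconvolving with that kernel is the "reverse-egg" operation you describe.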
Of course, while we all want that, it's probably also true that star shapes can make or break an image. The eye is just drawn to any such defects. Hence probably the popularity of something like BXT, regardless of where it might fall on the deconvolution continuum (which seems more on the de-blur and warp side than the data recovery side, if one defines expansively, or somewhere in the middle if it is trying to do both).

How much power is there in true R-L deconvolution to undo things like eggy or coma stars, or is that really off the table? Maybe I haven't been thinking about SVD's purpose properly. :think:
Theoretically, deconvolution can reverse any distortion. The problem is always in the precision of the data and calculations, the data being:
1. the PSF model
2. the source data ("blurred image")

The key to understanding why we can't perfectly undo blurs is this "deconvolution is an ill-posed problem" notion that gets thrown around.

Deconvolution is a prime example of an inverse problem. Wikipedia defines it well:
An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.
Scrolling down on that Wikipedia page, we get to the crux of the issue:
Mathematical and computational aspects
Inverse problems are typically ill-posed, as opposed to the well-posed problems usually met in mathematical modeling. Of the three conditions for a well-posed problem suggested by Jacques Hadamard (existence, uniqueness, and stability of the solution or solutions) the condition of stability is most often violated. In the sense of functional analysis, the inverse problem is represented by a mapping between metric spaces. While inverse problems are often formulated in infinite dimensional spaces, limitations to a finite number of measurements, and the practical consideration of recovering only a finite number of unknown parameters, may lead to the problems being recast in discrete form. In this case the inverse problem will typically be ill-conditioned. In these cases, regularization may be used to introduce mild assumptions on the solution and prevent over-fitting. Many instances of regularized inverse problems can be interpreted as special cases of Bayesian inference.
What this means in practice for deconvolution as implemented in computer algorithms is that even the tiniest variation in your PSF model, source data or calculations (rounding errors) has a *massive* effect on the "solution". Any tiny error will quickly destabilize the solution. You can only imagine how catastrophic any sort of noise or non-linearity is if even rounding errors can destabilize the solution under perfect conditions.
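You can see this for yourself with a toy example. The sketch below (plain numpy, nothing to do with ST's algorithm) blurs two point sources with a known PSF and then inverts the blur naively by dividing in frequency space - once on the clean data, and once after adding just 0.1% noise:

import numpy as np

rng = np.random.default_rng(0)

# A "true" 1-D scene: two point sources, blurred by a known Gaussian PSF.
true = np.zeros(256)
true[100], true[140] = 1.0, 0.5
taps = np.arange(-12, 13)
psf = np.exp(-0.5 * (taps / 2.0) ** 2)
psf /= psf.sum()

PSF = np.fft.rfft(psf, n=true.size)                    # PSF in frequency space
blurred = np.fft.irfft(np.fft.rfft(true) * PSF, n=true.size)

# Naive, unregularized inversion: divide by the PSF in frequency space.
restore_clean = np.fft.irfft(np.fft.rfft(blurred) / PSF, n=true.size)

# The same inversion after adding a *tiny* amount of noise (0.1% of peak).
noisy = blurred + rng.normal(0.0, 1e-3, blurred.size)
restore_noisy = np.fft.irfft(np.fft.rfft(noisy) / PSF, n=true.size)

print(np.abs(restore_clean - true).max())   # tiny residual: near-perfect recovery
print(np.abs(restore_noisy - true).max())   # enormous residual: the noise is amplified

The clean inversion is essentially perfect; the noisy one is garbage, even though the added noise is far smaller than anything a real sensor produces.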

It should also be noted that non-blind deconvolution (i.e. where you know the PSF), as implemented for example in SV Decon or the paper cited on CN, can be proven to converge on a - for all intents and purposes - unique solution. In other words, the uniqueness aspect is not so much of a problem for the deconvolution implemented in astronomical image processing software. Note, by the way, that the same cannot be said or proven for an opaque neural hallucination algorithm.

Now that we know that stability is the major issue, we can delve into this a bit deeper. The one trick we have up our sleeve to keep a solution from destabilising is called regularization. Regularization makes sure that the solution after applying deconvolution doesn't veer too much off course from the input data/image. What "too much" means is entirely down to the regularization algorithm; it's usually where the analytical smarts of a decon algorithm reside.

At the core of it, regularization tries to statistically (hence the emphasis on the Bayesian inference part) quantify the uncertainty (errors) in the source and PSF and minimize their destabilizing effect on the solution. In other words, regularization tries to determine how probable it is that something is real recovered signal, versus artefacting noise, and then weights the recovered signal accordingly.

A regularization algorithm can be as unsophisticated as re-blurring the "new" image so that artefacting is blurred away again (= spreading uncertainty over neighboring pixels, alas along with the recovered signal). Or it can be as complex as ST's algorithm, which introduces statistics from the entire processing history of a pixel to much more accurately estimate the veracity of the recovered signal versus it just being the result of noise, non-linearities or other bad influences.

As an aside, R&L proposed to repeat this process a number of times (i.e. over multiple iterations): deconvolve, pull back (regularize), then deconvolve that, pull back, then deconvolve that, pull back again, etc. As a result, an ideal deconvolution algorithm "converges" on a solution beyond which no improvements can be made. Uncertainty and improvements achieve a sort of equilibrium, where the successive deconvolving and pulling back start to roughly cancel each other out.
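In code, the textbook (unregularized) Richardson-Lucy loop, plus the crude "pull back by re-blurring" idea from above, looks something like this 1-D sketch - it assumes positive, linear data and is emphatically not ST's regularization:

import numpy as np

def rl_deconvolve(data, psf, iterations=20, reblur_sigma=None):
    # Textbook Richardson-Lucy with an optional, very crude "pull back":
    # gently re-blur each new estimate so artefacts are smeared out again,
    # at the cost of smearing some of the recovered signal along with them.
    psf_flip = psf[::-1]
    estimate = np.full_like(data, data.mean())
    for _ in range(iterations):
        model = np.convolve(estimate, psf, mode="same")          # deconvolve...
        ratio = data / np.maximum(model, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
        if reblur_sigma is not None:                             # ...then pull back
            taps = np.arange(-3 * reblur_sigma, 3 * reblur_sigma + 1)
            g = np.exp(-0.5 * (taps / reblur_sigma) ** 2)
            estimate = np.convolve(estimate, g / g.sum(), mode="same")
    return estimate

With reblur_sigma set, each iteration's gains get partially smeared away again, which is (roughly) the equilibrium described above: at some point the deconvolution step and the pull-back more or less cancel, and further iterations stop improving things.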

Now that we know what regularization does (quantifying uncertainty and using that knowledge to stabilize the solution), we can better appreciate what happens in the case of a severely deformed star, where the ideal solution (a nice point light) is very far away from what we start off with (a highly defocused or elongated mess). Unless our dataset and PSF models are pristine and highly accurate, you will find that the equilibrium is somewhere halfway between deformed and corrected. Pushing the deconvolution further would destabilize the solution too much (causing ringing, artefacts, etc.).

In ST's SVD module, you will find for example that severely deformed stars tend to have their high SNR cores corrected, but not always their low SNR "halos".

In summary, in true signal processing, there is no free lunch; the signal has to come from somewhere. If tiny bits of that signal are spread (convolved) amongst the neighbouring pixels, then you can attempt to recover said signal (deconvolve). But whatever you recovered will be subject to the noise in all those neighbouring pixels' signal. And it will further be subject to the accuracy of your model of how much to take from those neighbouring pixels (PSF).
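As a back-of-the-envelope illustration of that last point (a toy calculation, not tied to any particular software): if 100 units of flux are spread over 25 pixels that each carry 1 unit of noise, then gathering that flux back up also gathers the noise of all 25 pixels.

import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0                # per-pixel noise
spread = 25                # the signal has been spread (convolved) over 25 pixels
flux = 100.0               # total signal

# Re-concentrating the spread-out signal also sums the noise of all 25
# contributing pixels, so the noise grows by roughly sqrt(25) = 5x.
samples = flux / spread + rng.normal(0.0, sigma, (10000, spread))
recovered = samples.sum(axis=1)
print(recovered.mean(), recovered.std())   # ~100 +/- ~5, not +/- 1

You get your 100 units of signal back, but with roughly 5 units of uncertainty rather than the 1 unit a single pixel would have carried - that's the price of re-concentrating signal from noisy neighbours.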

Of course, if you use neural hallucination, you can just side step all of this and make up (hallucinate) some nice new clean substitute signal that is plausible for that area, based on the way it looks (input). It has absolutely nothing to do with deconvolution, nor with the signal that you originally captured. The original signal is re-interpreted and replaced with something "nice" in one go. None of the procedures, considerations, pitfalls, etc. of true deconvolution apply. Yay for "progress". :roll:

Hope that helps?
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1153
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Journal paper on spatially variant deconvolution?

Post by Mike in Rancho »

Ah, very helpful. :thumbsup:

I think these explanations may have moved me a couple steps forward in understanding deconvolution from both a theory and application perspective. Maybe. :D

And, sticking more to the applied and specifically applied-to-ST side of things, the question is how to utilize this to get the best of what SVD can do for us?

It's sounding like to get optimum detail recovery, lack of ringing (or at least lighter necessary use of the deringing controls), and possibly more pinpointing or circularizing of non-round stars, we have to be considering the SNR that we are feeding into SVD. So -

More integration (oh noes!). This of course helps the base underlying linear data.

Shorter subs to prevent saturation of star cores, thus also preserving the linear state. Though I have used datasets with plenty of blown cores that SVD doesn't seem bothered by at all. Odd.

And maybe there's more in our hands to change up: better Wipes and OptiDevs (perhaps less global stretch and/or shadow compression of noisy data?), and enhancing-module use (and maybe even Bin, up to a point), in order to aid the non-linear SVD result as to those factors?

Or does it all just boil down to cleaner linear data?

I'll have to try some more experimenting on the above when I get a chance.
admin
Site Admin
Posts: 3369
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne

Re: Journal paper on spatially variant deconvolution?

Post by admin »

Mike in Rancho wrote: Wed Jun 28, 2023 5:57 pm More integration (oh noes!). This of course helps the base underlying linear data.

Shorter subs to prevent saturation of star cores, thus also preserving the linear state. Though I have used datasets with plenty of blown cores that SVD doesn't seem bothered by at all. Odd.
That's great to hear, as that is usually a deconvolution algorithm's Achilles heel. ST's implementation can now deal with singularities (overexposed cores) well, but any non-linearity is a big problem (see also the 1.9 beta discussion, where it appears non-linearity is causing issues for Stefan).
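For anyone wondering why non-linearity is such a problem: deconvolution assumes the data is still "the true scene convolved with a PSF", and a non-linear stretch breaks that assumption because stretching and convolution don't commute. A quick toy check (plain numpy, nothing ST-specific):

import numpy as np

rng = np.random.default_rng(2)
signal = rng.uniform(0.1, 1.0, 256)
psf = np.ones(5) / 5.0

stretch = np.sqrt                                   # stand-in for any non-linear stretch
blur_then_stretch = stretch(np.convolve(signal, psf, mode="same"))
stretch_then_blur = np.convolve(stretch(signal), psf, mode="same")

# Convolution only commutes with linear operations; once the data has been
# stretched non-linearly, it no longer fits the "true scene convolved with
# a PSF" model that deconvolution tries to invert.
print(np.abs(blur_then_stretch - stretch_then_blur).max())   # clearly non-zero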
And maybe there's more in our hands to change up: better Wipes and OptiDevs (perhaps less global stretch and/or shadow compression of noisy data?), and enhancing-module use (and maybe even Bin, up to a point), in order to aid the non-linear SVD result as to those factors?

Or does it all just boil down to cleaner linear data?
Cleaner, linear data is definitely the #1 priority (and you can of course get cleaner data by binning).
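For completeness, software binning is as simple as averaging blocks of pixels - a 2x2 bin averages 4 pixels, which (assuming roughly independent noise per pixel) cuts the per-pixel noise roughly in half, at the cost of resolution. A minimal sketch:

import numpy as np

def bin2x2(img):
    # Average each 2x2 block: 4x fewer pixels, roughly 2x (sqrt(4)) better
    # per-pixel SNR, assuming independent noise. Odd rows/columns are trimmed.
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))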
Ivo Jager
StarTools creator and astronomy enthusiast