RC BlurXterminator …. Any comments ?

General discussion about StarTools.

Re: RC BlurXterminator …. Any comments ?

Post by Mike in Rancho »

admin wrote: Mon Jan 09, 2023 5:44 am There is indeed a lot of conjecture - even on behalf of RC - as to what the neural network is actually doing
:!:

BXT begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

[use Austrian accent]

Re: RC BlurXterminator …. Any comments ?

Post by admin »

You jest, but we're well on our way.

I just asked OpenAI's ChatGPT its opinion on using neural hallucination to enhance astronomical images, and whether neural hallucination is acceptable in documentary photography;
[Attachment: Selection_772.jpg]
All I'll say (with an Austrian accent) is, "come with me if you want to live!".

EDIT: As a bonus, here it perfectly articulates my objections to calling it "deconvolution";
[Attachments: Selection_773.jpg, Selection_775.jpg]
Well done little AI! :thumbsup:
Ivo Jager
StarTools creator and astronomy enthusiast

Re: RC BlurXterminator …. Any comments ?

Post by Mike in Rancho »

Wow.

I've seen some headlines about ChatGPT but haven't read through them to see what it's all about.

Interesting answers. I had the feeling "neural hallucination" was a bit of Ivo snark. ;) Is that term really "a thing"?

Thus, as-posed, the question could be a bit loaded. And could it have any pre-training from prior discussions with, oh, maybe Ivo, or did it spit this out entirely on its own? The topic here seems a bit arcane for it to be so authoritative.

Re: RC BlurXterminator …. Any comments ?

Post by admin »

Mike in Rancho wrote: Mon Jan 09, 2023 5:24 pm Wow.

I've seen some headlines about ChatGPT but haven't read through them to see what it's all about.

Interesting answers. I had the feeling "neural hallucination" was a bit of Ivo snark. ;) Is that term really "a thing"?
No snark; it's a real thing, and use of the term has gained a lot of popularity as a way to call out this sort of AI behavior. In fact, the term has now become so widespread that some people in the field wish to rein in its use;

https://www.forbes.com/sites/lanceeliot ... t-to-stop/

I'm personally afraid that horse has bolted, but the mechanism used by unsophisticated software like the <x>xTerminator series is an archetypal example of neural net hallucination being (ab)used (and - most gratingly - sold as something else :evil: ).

Note, by the way, this specific example of neural hallucination in the article;
When an X-ray or MRI is undertaken, there is nowadays a likely chance that some kind of AI will be used to clean up the images on a reconstruction basis or otherwise analyze the imagery. Researchers caution that this can introduce AI hallucinations into the mix: “The potential lack of generalization of deep learning-based reconstruction methods as well as their innate unstable nature may cause false structures to appear in the reconstructed image that is absent in the object being imaged. These false structures may arise due to the reconstruction method incorrectly estimating parts of the object that either did not contribute to the observed measurement data or cannot be recovered in a stable manner, a phenomenon that can be termed as hallucination” (as stated in “On Hallucinations in Tomographic Image Reconstruction” by co-authors Sayantan Bhadra, Varun A. Kelkar, Frank J. Brooks, and Mark A. Anastasio, IEEE Transactions on Medical Imaging, November 2021).
Ring a bell?
Mike in Rancho wrote: Thus, as-posed, the question could be a bit loaded. And could it have any pre-training from prior discussions with, oh, maybe Ivo, or did it spit this out entirely on its own? The topic here seems a bit arcane for it to be so authoritative.
I understand it may have appeared that way (the thing is *really* good!), but these are real, untrained (by me, that is) answers to unloaded questions. You should be able to get a substantially similar answer by posing the same question (some noise is injected on purpose to make the model vary its answers slightly and appear more natural, so your results won't be word-for-word identical).
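
(As an aside, that injected noise is essentially temperature-based sampling over the model's next-token probabilities. A minimal Python sketch of the general idea is below; the logits and numbers are made up for illustration, and this is in no way OpenAI's actual code.)

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick the next token by sampling from a temperature-scaled softmax.

    Lower temperature -> more deterministic; higher -> more varied answers.
    """
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()                      # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy run: three candidate "tokens" with made-up scores; the output varies per run.
print([sample_next_token([2.0, 1.5, 0.2]) for _ in range(10)])
```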

You can give ChatGPT a spin for free if you want;
https://chat.openai.com/chat

If you're interested at all in AI, a quick play with some of the other examples will give you a great overview of where things are heading.

Please note I am not affiliated with OpenAI, but am a big fan of their work.

Please also note that, just like the <x>Xterminator series, ChatGPT is not immune to hallucinating (though at least that's not its entire design goal), and it has the decency to apologize when called out on it.
Ivo Jager
StarTools creator and astronomy enthusiast

Re: RC BlurXterminator …. Any comments ?

Post by Mike in Rancho »

Ha! Because of where the video started, I thought the "dude" in the blue shirt was real! :oops:

I went back to the beginning to see what the setup was - basically he had a ChatGPT convo and then plugged the log of that into the...realistic speaking human simulator, I guess? As soon as those are synched into one we're all in trouble. :shock:

Googling "neural hallucination" brought up a whole mess of stuff, including what I assumed was legitimate medical research - mostly that, perhaps. So I changed the search to "AI hallucination" and came up with more relevant results. That in itself seems to split into two camps - intentional (AI "dream-like" results) and, of course, unintentional.

A Wikipedia article exists under that term, but it appears to be quite new, so it's fairly short and doesn't yet seem to cover imaging. Some amusing examples already, though. https://en.wikipedia.org/wiki/Hallucina ... elligence)

Quite amazing how legitimate and authoritative it comes off as. I think we need to get the blue shirt guy to read the AI responses to you.

Anyway, I asked Alexa what ChatGPT is, but she only gave me two short sentences. Clearly not as verbose as your chat partner there. :)

Re: RC BlurXterminator …. Any comments ?

Post by admin »

Mike in Rancho wrote: Tue Jan 10, 2023 4:28 am I changed that to AI hallucination and came up with more relevant results. That in itself seems to be split into two - intentional (AI "dream-like" results), and of course the unintentional.
The devious thing is that not all intentional hallucination is dream-like - witness inpainting, face generation, the amazing Midjourney and Stable Diffusion projects, or indeed the <x>XTerminator and Topaz AI suites. The resulting hallucinations can be quite plausible indeed, but they nevertheless remain hallucinations, whether in whole or in part.

As soon as AI is used - intentionally or unintentionally - to make up a feature or signal that cannot be proven to be a latent feature or signal of the original input, it is neural hallucination.
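
One way to make that test concrete, by the way, is to re-convolve the "restored" image with the measured PSF and compare it against the observed data; output from true deconvolution reproduces the observation to within the noise, whereas hallucinated "detail" is under no such obligation. A rough Python sketch of such a consistency check follows (the arrays and PSF are synthetic placeholders, not output from any particular tool);

```python
import numpy as np
from scipy.signal import fftconvolve

def reconvolution_residual(restored, observed, psf):
    """Re-blur the restored image with the PSF and subtract it from the observation.

    If the restored image only contains detail actually supported by the data,
    the residual should look like plain image noise; coherent structure left in
    the residual hints at detail that was invented rather than recovered.
    """
    return observed - fftconvolve(restored, psf, mode="same")

# Toy illustration with synthetic data (placeholders, not real frames).
rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[64, 64] = 1000.0                         # a single point source ("star")
psf = np.ones((7, 7)) / 49.0                   # stand-in PSF: a simple box blur
observed = fftconvolve(truth, psf, mode="same") + rng.normal(0.0, 1.0, truth.shape)

residual = reconvolution_residual(truth, observed, psf)  # a "perfect" restoration
print("residual std (should be ~1.0, i.e. just the noise):", residual.std())
```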

EDIT: When asked "What is a neural hallucination algorithm?", ChatGPT interestingly sometimes says;
The term "neural hallucination algorithm" is not a widely used or well-established term in the field of AI or machine learning.
and at other times explains in detail;
A neural hallucination algorithm is a type of machine learning algorithm that is used to generate new data based on patterns learned from a training set of data. These algorithms are typically based on artificial neural networks, which are modeled after the structure and function of the human brain. They are trained on a dataset and can generate new, similar data that is often difficult to distinguish from real data. Additionally, neural hallucination algorithm is a subtype of generative models which are focused on mimicking the data generation process. They're able to create new data instances that are similar to the training set.
I guess the terminology is still very fluid in this fast-moving field, and GPT-3's dataset is a snapshot of the Internet figuring it all out.
Ivo Jager
StarTools creator and astronomy enthusiast

Re: RC BlurXterminator …. Any comments ?

Post by dx_ron »

Interesting example posted on CN of what BX can "do" with an image: https://www.cloudynights.com/topic/8642 ... p=12501590

I'm having a hard time seeing that result as "an approximation of deconvolution", but what I know about image processing could fit on the head of a pin - so - maybe?

Re: RC BlurXterminator …. Any comments ?

Post by admin »

dx_ron wrote: Thu Feb 16, 2023 11:34 pm Interesting example posted on CN of what BX can "do" with an image: https://www.cloudynights.com/topic/8642 ... p=12501590

I'm having a hard time seeing that result as "an approximation of deconvolution", but what I know about image processing could fit on the head of a pin - so - maybe?
It's a great example of wishful-thinking "astrophotography" and how neural hallucination algorithms play into it; in this image, everything BXT (randomly) latches on to is made into "a star" and replaced by a neat point light (even galaxies, which, of course, have entirely different shapes).

Some galaxies have disappeared altogether, while the horseshoe suggests variations in luminance that don't exist in the real thing. Diffraction spikes the algorithm "accidentally" skipped are "fatter" than the point lights it conjures up, and so on.

The re-interpretative nature of the algorithm is on full display here, and true deconvolution would never ever lead to such a result.

The quality of the data simply does not support the fantastical "detail". Fooling yourself is one thing; trying to fool someone else is much, much worse. :evil:
Ivo Jager
StarTools creator and astronomy enthusiast

Re: RC BlurXterminator …. Any comments ?

Post by Mike in Rancho »

HIP or PGC...what's the difference? ;)

Now, I know we've discussed how additional iterations of real deconvolution/SVD will start bringing the stars inward towards a point, and thus also make them rounder if their shapes were a bit flawed.

There must be something about the nature of the math and the passes made by actual deconvolution which prevents point-sourcing a small galaxy? Do those pixels react differently to that reversal, even if tiny in size? Whereas this is perhaps assuming "star", and treating it as such, when it isn't. :think:

Re: RC BlurXterminator …. Any comments ?

Post by admin »

Mike in Rancho wrote: Fri Feb 17, 2023 9:38 am HIP or PGC...what's the difference? ;)
:lol:
Such an astronomer's joke. Love it.
Mike in Rancho wrote: Now, I know we've discussed how additional iterations of real deconvolution/SVD will start bringing the stars inward towards a point, and thus also make them rounder if their shapes were a bit flawed.

There must be something about the nature of the math and the passes made by actual deconvolution which prevents point-sourcing a small galaxy? Do those pixels react differently to that reversal, even if tiny in size? Whereas this is perhaps assuming "star", and treating it as such, when it isn't. :think:
That's exactly the thing; with real deconvolution, the math treats all data/pixels equally. The distant galaxies' pixels will be spread over a larger area (because they are bigger, more diffuse objects), but their constituent detail will be blurred in exactly the same way as the stars (point lights). If the noise is too high or the PSF was chosen incorrectly, any artefacts created by an R&L decon routine will be of the same kind everywhere and easy to detect as aberrant.
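
For reference, the core of a plain Richardson-Lucy loop really is just a handful of lines applied uniformly to every pixel. A rough Python sketch is below, assuming a known, spatially invariant PSF and using scipy for the convolutions (so this is the generic textbook version, not StarTools' spatially variant implementation, and the variable names are placeholders);

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Plain Richardson-Lucy deconvolution with a known, spatially invariant PSF.

    The same multiplicative update is applied to every pixel on every pass;
    nothing in the math knows or cares whether a pixel belongs to a star,
    a faint galaxy or the background.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.clip(reblurred, 1e-12, None)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Usage sketch (placeholder names): sharper = richardson_lucy(blurred_frame, measured_psf)
```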

With an algorithm that interprets (to the point of guessing) what it is seeing, not all data/pixels are treated the same. Some will end up as stars, some will be deemed "background", some will be deemed detail. With poor data like this, it is guaranteed to make arbitrary mistakes, but always with confidence - hallucinating an entirely plausible (but categorically wrong) solution. It is impossible to detect such artefacts as aberrant without another reference image.
Ivo Jager
StarTools creator and astronomy enthusiast