Discussion idea - giving StarTools stdev data from stack?


Re: Discussion idea - giving StarTools stdev data from stack

Post by Cheman »

This is my result
NGC 6946-1.jpg

Re: Discussion idea - giving StarTools stdev data from stack

Post by admin »

Cheman wrote:This is my result
NGC 6946-1.jpg
Very nice and a much more realistic stretching of the data than the versions I posted (to show the read noise difference). I notice your data looks pre-color-balanced, though (causing some color issues in the highlights/stars). Do you use dcraw from the command line to pre-convert to TIFF, and what are your DSS settings?
(see here http://www.startools.org/forum/viewtopi ... =571#p2132)

Did you notice a difference in noise between the non-stddev-corrected and corrected data sets using khyperia's stacker?
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Discussion idea - giving StarTools stdev data from stack

Post by Cheman »

admin wrote:
Cheman wrote:This is my result
NGC 6946-1.jpg
Very nice and a much more realistic stretching of the data than the versions I posted (to show the read noise difference). I notice your data looks pre-color-balanced, though (causing some color issues in the highlights/stars). Do you use dcraw from the command line to pre-convert to TIFF, and what are your DSS settings?
(see here http://www.startools.org/forum/viewtopi ... =571#p2132)

Did you notice a difference in noise between the non-stddev-corrected and corrected data sets using khyperia's stacker?
Ivo

As far as I know, all the settings in DSS are set to do no color correction, background correction etc. I've read all your recommendations on DSS settings in the past, so I think they are all set correctly. I do not use dcraw to pre-convert to TIFF. I do use the hot pixel removal in DSS as I do not use dark frames. Could that contribute to the problem? To be honest, I didn't compare the two stacks. When I tried to use khyperia's stacker (instead of DSS) with the DSS intermediate files (not the stddev stacker), it wouldn't stack them; it said it could only use 2D files. Maybe I did something wrong there, dunno. I wonder (as do others) if there is something inherent in DSS that causes slight (and not so slight) problems for some of us users. Hey, it's free and works pretty darn well. But this is why I'm hoping for an Ivo/khyperia collaboration on a stacking program tailored to ST. That would be :auto-nx: supercharged!!!!
BTW, I knew your posted versions were just to show the noise difference ;) I'm sure you could have done AT LEAST as well as my version :lol: There is a slightly different version linked in the Gallery section.
Che

Re: Discussion idea - giving StarTools stdev data from stack

Post by admin »

Cheman wrote:I wonder (as do others) if there is something inherent in DSS that causes slight (and not so slight) problems for some of us users. Hey, it's free and works pretty darn well. But this is why I'm hoping for an Ivo/khyperia collaboration on a stacking program tailored to ST. That would be :auto-nx: supercharged!!!!
This would really help circumvent the DSS issues people are having. It's really annoying that there's simply no way to keep DSS from meddling with the data (e.g. no way to stop it color balancing). Images could look so much better without it! And then there is the added benefit of having the standard deviation data. If khyperia keeps up the good work, people's images will significantly improve - virtually without any added fuss or steps!
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Discussion idea - giving StarTools stdev data from stack

Post by Cheman »

Regarding the DSS settings for FITS files from the CCD camera:
I am currently set to use Adaptive Homogeneity-Directed (AHD) interpolation.
Is that the problem?
Should I use Bilinear Interpolation,
Bayer Drizzle (no interpolation, no debayerization),
or Create super pixels from raw Bayer Matrix (no interpolation)
instead?
Thanks for your input
Che

Re: Discussion idea - giving StarTools stdev data from stack

Post by admin »

Cheman wrote:Regarding the DSS settings for FITS files from the CCD camera:
I am currently set to use Adaptive Homogeneity-Directed (AHD) interpolation.
Is that the problem?
Should I use Bilinear Interpolation,
Bayer Drizzle (no interpolation, no debayerization),
or Create super pixels from raw Bayer Matrix (no interpolation)
instead?
Thanks for your input
Che
The interpolation is not the problem (though avoiding AHD should yield a marginally better image); the color calibration is. Color balancing is a multiplicative process: each of the three channels is multiplied by its own factor (to compensate for the sensitivity of the CCD and the transmission characteristics of the color filters).

Doing color balancing too early in the process causes two problems:
1. As signal is multiplied, so is noise. Ergo, noise levels will differ between color channels in a color balanced image. For a DSLR image, luminance (brightness information) is made up from color information by simply adding red, green and blue. Exacerbated noise in one of the color channels will exacerbate noise in the final luminance component (even though we were just interested in balancing the colors).
2. How do you handle saturated levels (e.g. star cores)? Let's say we have a star core that's R:G:B 255:255:255 (pure white in an 8-bit dynamic range), and the star is a young blue star (as will be evidenced by the halo around it in the color balanced version). Let's say the image is too blue. To balance the colors, a multiplication of 1.0 : 0.85 : 0.66 needs to be applied. This is all fine and dandy for the pixels that were not saturated (just multiply their R, G and B values). However, for saturated values (e.g. values that were cut off at 255 because we couldn't record a higher value), things will go horribly wrong; all of a sudden your star core will be 255 * 1.0 : 255 * 0.85 : 255 * 0.66 = 255 : 217 : 168. The latter color comes out as a faint yellow (see the small sketch below). So now you'll be left with a well color balanced blue star with a yellow-ish core. Note that color balancing is done on the linear image and we haven't started stretching our image yet. In other applications like PixInsight, Photoshop, etc. people just happily (and I would argue arbitrarily and erroneously) stretch their color information along with their luminance information - in our case making the yellow much 'brighter' and thus very, very close to white. So no harm done. I personally find it ludicrous to suggest that the color of an object in outer space is dependent on how you choose to stretch the image, but many people seem to find this an ok practice. :confusion-shrug:
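To make that arithmetic concrete, here's a tiny Python sketch (the 8-bit values and the 1.0 : 0.85 : 0.66 factors are just the example numbers from this post, not anything DSS or StarTools actually uses):

Code:

import numpy as np

# Illustrative white-balance factors for R, G, B (from the example above).
wb = np.array([1.00, 0.85, 0.66])

# A clipped star core and an unsaturated bluish pixel, 8-bit linear values.
star_core = np.array([255.0, 255.0, 255.0])
faint_star = np.array([120.0, 140.0, 180.0])

print(np.round(star_core * wb))   # [255. 217. 168.] -> yellowish core
print(np.round(faint_star * wb))  # [120. 119. 119.] -> balanced as intended

The clipped core lost its true channel ratios before the multiplication, so no choice of factors can restore them.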
However, StarTools' Color Constancy algorithm (which you can switch off, of course) will recover color where it can detect it in the *linear* image, undoing any stretching that you performed on the color data while you were trying to bring out detail (that's why you would want to use the Color module at the end). The many merits and uses of recovering true color have been dealt with in another thread, but it should be obvious that if StarTools sees a yellow core (caused by the pre-color balancing), it will try to negate the dynamic range manipulations you made to enhance the luminance information and attempt to render the core as yellow, because that's what's in the linear data that it was given. So what you'll end up with is a color balanced blue star with an (incorrect) yellow core.


About debayering interpolation:
Super pixels (no interpolation) will yield a much better noise profile, because, as it says, there is no interpolation performed at all (Bayer drizzle only makes sense if your images are undersampled). Interpolation takes shot (Poisson) noise that is a natural part of the signal and spreads it out over multiple pixels (because it's then also part of the interpolated signal). This makes the resulting noise in your image no longer random (it makes it clumpy and/or patterned depending on the interpolation algorithm chosen), making it *much* harder to remove.
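To illustrate the super-pixel idea, here's a rough Python sketch of the concept - not DSS's actual implementation, and it assumes an RGGB layout (adjust the offsets for other sensors):

Code:

import numpy as np

def superpixel_debayer(raw):
    """Collapse each 2x2 RGGB cell into one RGB pixel - no interpolation,
    so no noise is smeared between neighbouring cells."""
    raw = np.asarray(raw, dtype=np.float64)
    h, w = raw.shape
    raw = raw[:h - h % 2, :w - w % 2]  # trim to even dimensions
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two greens
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])  # half the linear resolution of the raw frame

# Usage: rgb = superpixel_debayer(bayer_frame)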
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Discussion idea - giving StarTools stdev data from stack

Post by khyperia »

admin wrote:If khyperia keeps up the good work, people's images will significantly improve - virtually without any added fuss or steps!
Hmm. If there's that big of a need for it, I think I'll continue working on it. I didn't know DSS had such an issue with putting out the right data - I guess it tries too hard to be an all-in-one package deal. I thought it just had some accuracy issues with registration.

I've got a few minor questions to proceed.
  • Is keeping with "only FITS files" an okay route, and letting people convert to/from .fits? I know ImageMagick works great for me for this - "mogrify -format fits *.tiff" - but other people might have trouble figuring that out. (Also, I'd be pissing off /u/bersonic, haha.)
  • Along similar lines to the previous one, do I need to support bayer matrix conversion (greyscale->color)?
  • Actually, just in general, what *do* I need to support?
  • For color images (images with a bayer filter, not separate color filtered mono images), should I register the RGB separately and then stack them according to their own transformations, or should I lock the whole thing together? (locking as in calculating the transform based on a synthed L, and transforming each channel together)
  • What interpolation algorithm should I use? Essentially, the problem boils down to resampling. "For destination integer pixel coordinate X, run through the inverse transform to get the source floating coordinate X', then interpolate that floating coordinate somehow to grab integer coordinates from the source". Right now I'm using linear interpolation, as in (where x and y are values from 0 to 1):

Code:

result =
 image(0, 0) * (1 - x) * (1 - y) +
 image(1, 0) * x * (1 - y) +
 image(0, 1) * (1 - x) * y +
 image(1, 1) * x * y

Re: Discussion idea - giving StarTools stdev data from stack

Post by admin »

khyperia wrote:
admin wrote:If khyperia keeps up the good work, people's images will significantly improve - virtually without any added fuss or steps!
Hmm. If there's that big of a need for it, I think I'll continue working on it. I didn't know DSS had such an issue with putting out the right data - I guess it tries too hard to be an all-in-one package deal. I thought it just had some accuracy issues with registration.
Trust me - there *is* a big need for it - and no software package outputs the standard deviation. It's exactly the sort of odd thing I noticed when I first started looking around for processing software. For some reason people think processing is a collection of unrelated steps you need to take and filters you need to apply, one by one. That's just an incredibly dumb way of approaching processing. By measuring and taking into account stuff you've done to your image waaaay back down the 'chain', you can improve or refine the output waaay up the 'chain' (and vice versa for some operations). The standard deviation data is the perfect example of this: it's something that gets measured during one of the earliest operations on the image (stacking) and would be used the most in one of the last operations (noise reduction before switching Tracking off), as well as a little in between for decon and wavelet sharpening.
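As a rough sketch of what 'outputting the standard deviation' could look like at the stacking stage (this is my reading of the idea, not khyperia's actual code): keep a per-pixel standard deviation map alongside the mean stack and save both.

Code:

import numpy as np

def stack_with_stddev(frames):
    """Mean-stack registered frames and return a per-pixel stddev map.

    frames: a list of already-registered 2D arrays of the same shape."""
    cube = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    mean = cube.mean(axis=0)
    # Spread of each pixel across the subs; a noise reduction step much
    # later in the chain can weight pixels by this instead of guessing
    # a single global noise level.
    stddev = cube.std(axis=0, ddof=1)
    return mean, stddev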
I've got a few minor questions to proceed.
  • Is keeping with "only FITS files" an okay route, and letting people convert to/from .fits? I know ImageMagick works great for me for this - "mogrify -format fits *.tiff" - but other people might have trouble figuring that out. (Also, I'd be pissing off /u/bersonic, haha.)
:lol: /u/bersonic has made his bed by going the PS way - I'm afraid he's a lost cause... :lol:
FITS is the way to go (32-bit integer). Any AP program should be able to import it and there are no funky proprietary formats and compression algorithms to support (I'm looking at you, libtiff and DSS :roll:).
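For what it's worth, writing such a 32-bit integer FITS from Python could be as simple as the following (this assumes astropy is available; it's only a sketch of the output format, not of what the stacker actually does):

Code:

import numpy as np
from astropy.io import fits

def save_fits_int32(data, path):
    """Write an array to FITS as 32-bit integers (BITPIX = 32)."""
    hdu = fits.PrimaryHDU(np.asarray(data, dtype=np.int32))
    hdu.writeto(path, overwrite=True)

# Usage: save_fits_int32(stacked_image, "stack.fits")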
  • Along similar lines to the previous one, do I need to support bayer matrix conversion (greyscale->color)?
It'd be a good idea, as it is an area that causes a lot of sub-optimal data as well. The very good news, however, is that, for AP, this conversion actually needs to be *dumbed down* from the more advanced interpolation techniques, as the more advanced techniques actually cause pattern noise that is very hard to get rid of. Even better (IMHO) is to, by default, not do any interpolation at all and reduce the resolution to 25% so that all pixels have a real corresponding sample. Most people using a DSLR oversample their image many times over (the megapixel race has done absolutely nothing for AP because we're limited to what the atmosphere lets us resolve).
  • Actually, just in general, what *do* I need to support?
If you want to go all the way and support RAW, there is the dcraw code you can use (I think the PI guys just asked permission from Dave Coffin to include it in their source). OSC CCD cameras tend to output their data as bayered FITS already, so they just need to be debayered using the right matrix (a simple interface could be constructed to specify which pixel constitutes red, green and blue in a 2x2 window, if you want to support more outlandish configurations).
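One possible way to encode 'which pixel is which' in that 2x2 window - purely a hypothetical representation, not an existing interface:

Code:

# Map each (row, col) offset within the 2x2 Bayer cell to a channel.
# RGGB shown here; a GBRG or BGGR sensor is just a permutation of this.
BAYER_RGGB = {(0, 0): "R", (0, 1): "G",
              (1, 0): "G", (1, 1): "B"}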
  • For color images (images with a bayer filter, not separate color filtered mono images), should I register the RGB separately and then stack them according to their own transformations, or should I lock the whole thing together? (locking as in calculating the transform based on a synthed L, and transforming each channel together)
Gee, that's an interesting question! Thinking about it, locking would solve a great many glitches DSS exhibits now and then (it sometimes has trouble lining up red, green and blue in DSLR data).
  • What interpolation algorithm should I use? Essentially, the problem boils down to resampling. "For destination integer pixel coordinate X, run through the inverse transform to get the source floating coordinate X', then interpolate that floating coordinate somehow to grab integer coordinates from the source". Right now I'm using linear interpolation, as in (where x and y are values from 0 to 1):

Code:

result =
 image(0, 0) * (1 - x) * (1 - y) +
 image(1, 0) * x * (1 - y) +
 image(0, 1) * (1 - x) * y +
 image(1, 1) * x * y
I'm a big proponent of the KISS principle - keep it simple, stupid. Create a solid foundation that can be easily extended in the future. You can do fancy stuff later on, if you want to.

You'll (hopefully) find bilinear interpolation will do a great job, and I know from experimentation that most other interpolation techniques can meddle with noise response and cause non-linearity and ringing. Again, compartmentalized thinking in processing and trying to do the 'best job right now in my little realm/software/product' can yield suboptimal results when you consider the full signal path that your user has to go through (where the ringing and non-linear noise response comes back to bite the user in the ass). It's something I'm keen to banish.
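Spelled out, the resampling loop with the bilinear formula quoted above might look like this in Python (inverse_transform stands in for whatever registration transform the stacker has computed; it's a placeholder, not a real function from any existing package):

Code:

import numpy as np

def resample_bilinear(src, out_shape, inverse_transform):
    """Resample src into an image of out_shape.

    For each destination pixel (dx, dy), inverse_transform maps it to a
    floating-point source coordinate (sx, sy); the four surrounding source
    pixels are then blended with bilinear weights."""
    h, w = out_shape
    dst = np.zeros((h, w), dtype=np.float64)
    for dy in range(h):
        for dx in range(w):
            sx, sy = inverse_transform(dx, dy)
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if not (0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1):
                continue  # outside the source frame; leave as 0
            x, y = sx - x0, sy - y0  # fractional parts in [0, 1)
            dst[dy, dx] = (src[y0, x0]         * (1 - x) * (1 - y) +
                           src[y0, x0 + 1]     * x       * (1 - y) +
                           src[y0 + 1, x0]     * (1 - x) * y +
                           src[y0 + 1, x0 + 1] * x       * y)
    return dst

# Usage (identity transform just copies the image):
# out = resample_bilinear(img, img.shape, lambda dx, dy: (float(dx), float(dy)))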

Does this help?
Ivo Jager
StarTools creator and astronomy enthusiast

Re: Discussion idea - giving StarTools stdev data from stack

Post by khyperia »

admin wrote:Does this help?
Woo, yes, all those answers helped a ton. (I did not know that was actually called bilinear interpolation! No idea it was a formal thing :P )

I'm still working on the user interface bit of it, though - specifically, I don't know what type of user interface to create. Right now I've thought of a few choices.

1) Create a big, complex GUI beast with toggles and switches and widgets and gizmos to configure whatever needs to be configured. (I'm a big fan of "don't make the user input anything that can be figured out automatically", but that's not always possible - like whether or not the user wants to do hot pixel removal.) Sorta like DSS.
2) Make a sort of scripting engine, and have a REPL for the core interface (maybe with a few graphical helpers), with plugin possibilities / runtime scripts. As a programmer I would love this option, and it's what I would choose if I were coding for myself (because of its massive flexibility and power), although admittedly it might be user-unfriendly. Sorta like IRIS and IRAF.
3) Do something innovative, perhaps something weird like having a UI that has "processing blocks" and "pipelines" for data transfer between processing blocks, sorta like Lego Mindstorms http://ph-elec.com/wp-content/uploads/2 ... Move_C.jpg. I'm not a UI designer by any means, so I'm not sure what's possible or good.

Then there's another issue: what am I exactly trying to do with this program? If there's a big need for a new stacking program... what exactly does "stacking" encompass? It's anywhere from "take a list of input files, register, stack, and save it" to "registration/calibration, debayering, preview stretching, file format conversion, live preview as images are saved from the capture program" - the list can go on and on. It's hard to know where to draw the line. The UI depends a lot on what it has to do, so I guess the better question, instead of asking "what type of UI should I have", is "what is the need this program needs to fill" - and I'm honestly not sure what it is. Of course, just going off my own needs, I would stop where I am now - with a very user-unfriendly hunk of a program that works for my specific needs. I'd like to take others' needs into account too, though :)

Anywho, I'll be gone for a few days now, won't respond for a bit.

Re: Discussion idea - giving StarTools stdev data from stack

Post by admin »

IMHO, for the UI bit, try keeping the UI and functionality as separate as possible, so that you can 'skin' the core of your application any way you want at a later stage. The actual process should, again IMHO, respond gracefully to aborting *at any time*, output an error code if aborted and, while running/doing its thing, output some sort of meaningful message to the host (UI) application that represents a 'progress report', which the host application can then relay in any way it sees fit to the user (e.g. a nice progress bar). The strategy is 'start ugly', make it work, and refine the interface as you go along.
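A minimal sketch of that separation in Python (just to show the shape of the idea - the real stacker isn't written against this interface): the core takes a progress callback and an abort flag, and any host UI decides how to display or trigger them.

Code:

import threading

def stack_frames(frames, progress=lambda msg, frac: None, abort=None):
    """Core stacking routine, deliberately UI-agnostic.

    progress: callback given a human-readable message and a 0..1 fraction;
              a GUI maps it to a progress bar, a CLI just prints it.
    abort:    threading.Event the host sets to request a clean early exit."""
    abort = abort or threading.Event()
    result = None
    for i, frame in enumerate(frames):
        if abort.is_set():
            progress("aborted by host", i / max(len(frames), 1))
            return None  # the host treats None as 'aborted'
        # ... register and accumulate `frame` into `result` here ...
        progress("stacked frame %d/%d" % (i + 1, len(frames)), (i + 1) / len(frames))
    return result

# A command-line host could simply do:
# stack_frames(frames, progress=lambda msg, frac: print("[%5.1f%%] %s" % (frac * 100, msg)))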
khyperia wrote:Then there's another issue: what am I exactly trying to do with this program? If there's a big need for a new stacking program... what exactly does "stacking" encompass? It's anywhere from "take a list of input files, register, stack, and save it" to "registration/calibration, debayering, preview stretching, file format conversion, live preview as images are saved from the capture program" - the list can go on and on. It's hard to know where to draw the line. The UI depends a lot on what it has to do, so I guess the better question, instead of asking "what type of UI should I have", is "what is the need this program needs to fill" - and I'm honestly not sure what it is. Of course, just going off my own needs, I would stop where I am now - with a very user-unfriendly hunk of a program that works for my specific needs. I'd like to take others' needs into account too, though :)
First and foremost, you're tangibly improving people's joy in the hobby by making it easier to obtain better results for more people than ever before. Don't underestimate that impact!
Again, KISS - keep it simple, stupid. Do a few things and do them well. Expand later.
In application and UI design, apply the 80/20 rule, which goes like this: optimise your application for the 20% of functionality that people will be using 80% of the time when they fire up your application. If any stuff in your application doesn't fit the 80/20 rule, you hide it in a sub-menu, a preset button, or some other means for the user to accomplish the 'more exotic' task.
Apart from delivering better results, don't underestimate the need for convenience. If some core thing/process takes too much time/clicks to accomplish, you'll lose users (e.g. why should they use your application over, for example, DSS?).
Given the state of some of the stackers out there and the strong evidence that people struggle with these applications, you have a lot of leeway to improve the state of the art.
Ivo Jager
StarTools creator and astronomy enthusiast