
New to Startools - can you do better with this 53 hour O3 master light?

Posted: Tue Sep 08, 2020 10:02 am
by ramdom
Hello all! I'm a long time (few years) PI user, new to StarTools.

I've always been attracted to the philosophy of StarTools, and in my ideal world, if I could do all the processing with the push of a button, I'd be happy to hand it over to a program that did that. Because of this, I've downloaded and tried StarTools now and again over the last year or two, but I never get past the initial steps or produce anything usable.

Here are some issues I've run into:

1. The software runs slowly. Right now I'm using a brand-new, top-of-the-line MacBook Pro (4 TB SSD, 64 GB RAM, etc.) and response time is still slow. Is this common/normal? Maybe I should try 1.7?

2. Time to investigate. I have been using PI for a few years now, so I know it quite well. To become as proficient in ST, I'd presumably have to spend at least a significant fraction of that time, and it's unclear if it'll be worth it. For me to consider it worth it, it has to produce noticeably better images with about the same or less effort (far less effort and about the same images would work too).

I'm working on a difficult/faint target, Ou4 (aka the Squid Nebula), and I've collected ~53 hours of data in the O3 channel alone (632 x 5m = 3160m; this will be at least a 100+ hour series of images when all is said and done). The plan is to showcase Ou4 in O3 on an RGB background, Ou4 in the Bat (Sh2-129) narrowfield, and Ou4 in the Bat widefield. This is about the best I've been able to do using only PI (uncropped), and I'm reasonably satisfied with it:

http://ram.org/images/space/downloads/ou4_O3.v0.26.jpg

I've been trying to see if I can replicate this, or even do better, in StarTools. I've read a lot of info on the ST page aimed at PI users and PI-equivalent operations, and also the tutorials here (https://www.startools.org/links--tutorials), and I'll keep trying (I'm not a big fan of video tutorials). But so far my efforts have been poor. Anyway, I'll keep at it, but I wanted to throw this out:

Now, this is the kind of request I really dislike making, since I believe I should be willing to investigate and figure it out for myself. But no matter what, in the time I have, I won't be able to do it justice, and given that I've probably spent more than a dozen hours trying to make this work in ST and gotten nowhere, I'm offering this challenge: take the 32-bit integrated FITS (or XISF if you prefer) and do better using only StarTools:

http://ram.org/images/space/downloads/ou4_O3.v2.fit
http://ram.org/images/space/downloads/ou4_O3.v2.xisf

Please note that v2 isn't exactly the same as v0 (from which the first JPEG image is derived), since I don't always calibrate with flats even though I take them. Since the Dos and Don'ts say "take flats", I've provided a master light accordingly: v2 is an integration of 632 frames calibrated with a master flat, and v0 is without any flats. Otherwise everything else is the same. I find flats either don't help or even slightly reduce SNR for this setup. I keep my optical train super clean, so I usually don't have to deal with dust bunnies*, and as for light drop-off, again, it's not a problem I've really had to deal with. If you want the flatless integrated FITS (so we're starting from the identical image), just replace "v2" with "v0" in the FITS download link.

Thanks to all who consider or do make an effort! I appreciate it and I'm looking forward to your productions.

--Ram

* There is, however, a dust bunny here, but given that the main object of interest is the Squid and the background is made dark, it hasn't been a problem.

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Tue Sep 08, 2020 3:06 pm
by admin
Hi Ram,

Thanks for uploading.
It's getting late here so I'll have a more comprehensive response tomorrow (Melbourne, Australia time).
However I had a real quick peek.

Two things stand out:
  • The lack of calibration with flats rather mars the amazing effort you are making to collect signal; you will see the issues in your dataset with AutoDev and the subsequent Wipe operation. In a nutshell, it is hard to discern what is an artefact and what is (or could be) real faint nebulosity. Flats really are not optional, particularly when targeting faint objects like these. You can try working around the issue in post-processing, but it is sub-optimal, subjective, and may lead to signal destruction.
  • The dynamic range in the image is *huge*; it appears all frames have been added together, rather than averaged with some form of outlier rejection applied (do correct me if I'm wrong). This does not make a whole lot of sense for a number of reasons (for example, over-exposing stars in the scene don't add "enough", as their cores are singularities, while philosophically the whole point of post-processing is condensing the vast dynamic range of outer space into something a human can see and comprehend - this starting point is even further away from that goal). Can you tell us what sort of stacking algorithm you are using? See the quick diagnostic sketch below.
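
For what it's worth, this is easy to check outside of any processing package; a minimal diagnostic sketch, assuming the linked master light and Python with astropy/numpy installed (this is not how StarTools measures anything, just a sanity check):

[code]
import numpy as np
from astropy.io import fits

# Load the master light (filename from this thread; adjust the path as needed).
data = fits.getdata("ou4_O3.v2.fit").astype(np.float64)

# A mean/median-combined stack of normalised frames should sit roughly
# within [0, 1]; a straight sum of N frames ends up ~N times larger.
print("min / max          :", data.min(), data.max())
print("99.9th percentile  :", np.percentile(data, 99.9))
print("pixels above unity :", np.mean(data > 1.0))
[/code]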


Thank you!

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Tue Sep 08, 2020 6:13 pm
by ramdom
admin wrote: Tue Sep 08, 2020 3:06 pm
Hi Ivo, thanks, no rush, take your time! I appreciate your willingness to look. As I said, v2 is the one with flats applied (that's the one I've linked as FITS/XISF). The non-flats version was used to generate the comparison image, and I bet that with PI I could produce a highly similar image from both v0 and v2. I hope this is making sense: the linked FITS and XISF initial integration is with flats; the completed image, which I consider the "best I can do", was done without flats. If you want to download v0 (the non-flats FITS), you need to replace "v2" with "v0" in the file name (the non-flats FITS isn't available as a link in the post above).

v0 has the best SNR according to ImageAnalysis->SNR of all the options I tried, and I tried over a dozen ways to integrate. v2 has lower SNR, which is why I didn't use it, but it is done with flats (and it does remove that one dust bunny). The difference in SNR is small, however. Since, as you note, I don't have a signal Tracking engine in PI, I have to go with what I have at the moment, and I do it based on my own NN (intuition). It has rarely been the case that flats made a visual or SNR difference in PI (and I have carried out full processing both ways). I have my own ideas about modern CMOS sensors and the use of flats... but anyway, it's a moot issue here since I've provided a link to the flats-calibrated version. If you can do better with that, that's fine. :)

--

ImageIntegration: It was PI's ImageIntegration in 1.8.8-6. I used the default options; the pixel rejection algorithm was the Generalized Extreme Studentized Deviate (ESD) Test, and a lot of outliers were rejected since I had to deal with a bunch of issues due to clouds, my imaging near the pole, etc., so it was no simple adding. Everything else was default, but I find this rejection algorithm to be superior for very large datasets like this one. If you download the XISF file and look at the FITS header, you can see all the information is in there (it might be available in the FITS file too).
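
For anyone who wants to check without PI, something like this should dump those records (a sketch assuming astropy; I believe PI writes its settings as HISTORY cards, but the exact keys may vary):

[code]
from astropy.io import fits

with fits.open("ou4_O3.v2.fit") as hdul:
    header = hdul[0].header
    # PixInsight typically records ImageIntegration settings as HISTORY
    # cards; print them all and look for the rejection algorithm.
    for card in header.get("HISTORY", []):
        print(card)
[/code]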

I even have an adaptive normalisation integration if you'd like; it produced an overall better SNR, but when I cropped out just the nebula portion, the SNR dropped, so this was the best option.

Thanks a lot!

--Ram

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Tue Sep 08, 2020 6:51 pm
by ramdom
One thing I want to add: this is an integration of all 632 frames, so they were weighted according to PI's noise evaluation, but I also did a 40-hour integration which had the best SNR (I rejected the worst 13 hours of data using the criteria FWHM, Eccentricity, Median, SNRWeight, and Stars). I processed fully with both sets of data, and SNR was 60.31 dB for the v0 integration and 60.48 dB for the 40-hour integration. So the end result had a tiny difference, and I decided to use all the data I had. Visually, too, they look very, very similar. This is how I know I've converged in my processing - I've spent over a month processing this target in PI in all kinds of ways, and when I say "best I can do", it's literally the best over hundreds of parameter choices. I've thrown everything I know in PI at it.
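
(For perspective, that 0.17 dB gap is tiny in linear terms - a quick sanity check, assuming the figures use the usual 20*log10 amplitude convention; if it's 10*log10 power, roughly double the percentage:)

[code]
snr_all_db = 60.31  # full 53 h stack
snr_40h_db = 60.48  # best-40-hours stack

# dB = 20 * log10(linear ratio), so invert to compare linearly.
ratio = 10 ** ((snr_40h_db - snr_all_db) / 20)
print(f"linear SNR ratio: {ratio:.4f}")  # ~1.02, i.e. about a 2% difference
[/code]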

Let me know if you want me to upload the 40-hour integration - I feel I shouldn't throw out data, especially as PI's ImageIntegration really handles outliers amazingly well, but it did produce a slightly better integration according to SNR. The 40-hour set may be a better one to start from.

But I would add that regardless of which is better mathematically/theoretically (flats vs. non-flats, throwing out bad frames ahead of time vs. bunging it all in, etc.), in the end PI is able to handle it and produce a visually pleasing image, and trying out other things doesn't seem to affect the SNR much. Is there a way I can see what the SNR is according to StarTools? Can I see the output of the Tracking algorithm?

--Ram

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Wed Sep 09, 2020 1:27 am
by admin
Hi Ram,

Thank you.

Dynamic range

The reason I am saying/thinking the enormous dynamic range is anomalous is that the sensors of the cameras in your signature cannot produce a dynamic range this vast. Indeed, I can multiply the linear signal by a factor of 100x and not see any (meaningful) change in the stellar profiles, nor destroy any (meaningful) data in the highlights. The upper range appears to be artificially extended beyond unity for no good reason.

I cannot say with 100% certainty, but the stellar profiles (halos) around over-exposing star cores also seem somewhat exaggerated and may be connected to the strange dynamic range issue.

Regardless, it is causing a couple of issues in StarTools that are also anomalous in response:
  • It makes a number of operations a bit slower, as there will be some areas that register as extremely low signal versus the "high" signal of the star cores. Datasets with a poor "relative" signal require more processing power, as the signal Tracking engine will try to compensate by increasing the size of the local neighborhood areas used for establishing/measuring spatial correlation, in order to keep noise grain propagation in check.
  • It makes the GPU-accelerated version behave a bit problematically (arguably an issue that I need to solve in the alpha), as GPUs vastly prefer 32-bit floating point operations over 64-bit operations, which does not cut it if this sort of vast (useless) dynamic range is to be preserved.
Note also that, as I alluded to in the CN thread, SNR measurements (there are a few methods for establishing such a measure) are subject to a very narrow set of conditions to be useful. For example, comparing measurements of noise vs. signal between crops of the same image will not be particularly useful, because flat calibration will have multiplied the signal (and its noise component!) by different amounts in different parts of the image to make up for uneven lighting.
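
To make that concrete, here is a toy numpy illustration with made-up numbers (not how any particular SNR estimator works): flat division restores even illumination but multiplies the noise along with the signal, so noise measured in one crop is no longer comparable to noise measured in another:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy frame: flat sky of 100 units, vignetted 30% towards the right edge,
# plus read noise that does NOT scale with the vignetting.
sky = 100.0
flat = np.tile(np.linspace(1.0, 0.7, 200), (200, 1))
raw = sky * flat + rng.normal(0.0, 5.0, size=(200, 200))

calibrated = raw / flat  # flat-fielding evens out the illumination...

# ...but the noise was divided along with the signal, so a noise estimate
# from a left-edge crop no longer matches one from the right edge.
print("left-crop stddev :", calibrated[:, :50].std())   # ~5
print("right-crop stddev:", calibrated[:, -50:].std())  # ~7 (about 5 / 0.72)
[/code]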

I'd be very interested to know why the dynamic range is extended as much as it is.

Flats

I am aware the v2 datasets were supposed to be calibrated by flats, hence my surprise at seeing this after the first diagnostic AutoDev:
[attachment: Selection_197.jpg]
This is not normal, nor is it celestial signal (incidentally, in this very first step of the workflow, ST is already proving its worth :) ).

Indeed, Wipe - with default settings and after cropping away the artefact in the lower left - yields this (with its automatic temporary diagnostic AutoDev):
[attachment: Selection_198.jpg]
Note that Wipe - by design - uses a very different approach from the less sophisticated (arbitrary) sample-setting modelling found in PI, Siril, APP, etc.
It relies on gradient undulation frequency to separate (real!) gradient from celestial signal. This is incredibly important when trying to retain faint signal such as IFN, which is otherwise easily destroyed using manual/arbitrary methods.

The remnants you are seeing are purposefully left in by Wipe, as they do not undulate slowly enough to fit the criteria for being a gradient; they are real "features" of your dataset. If you imagine the IFN around, for example, M81/M82 (example 1, 2, 3 of various detections), then you can see why this behavior is so important.
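
In very rough terms (a crude stand-in only; Wipe's actual algorithm is more involved and this is not it - a Gaussian blur plays the role of the slow-undulation model here):

[code]
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_slow_gradient(image: np.ndarray, scale_px: float = 300.0) -> np.ndarray:
    """Model and subtract only undulations slower than ~scale_px pixels.

    Anything varying faster than the chosen scale (stars, nebulosity, IFN)
    is left alone; only the slow, gradient-like component is removed.
    """
    background = gaussian_filter(image, sigma=scale_px)
    return image - background + np.median(background)

# A larger scale_px is more conservative: more faint structure survives.
[/code]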

I know that Wipe is probably the source of the most frustration for newbies, as they expect Wipe to act like some sort of synthetic flats generator. While Wipe can fill that role to some extent (use the Vignetting preset and things will look a little "better" already), as with all modules in StarTools, the design goal is to improve on the state of the art for the purpose of signal fidelity. Contrary to what you may have heard in some circles (and contrary to what you alluded to in your initial post), StarTools is not designed to "do all the processing with a push of a button". The fact that it can often produce decent images quickly is thanks to its signal processing and flow rigor (in science things should be repeatable with an expectation to achieve similar results - no matter who performs the experiment, as long as the experiment is conducted within the same bounds of a set of parameters). However ST's behavior in this matter is a side effect of that, and not the main goal. The main goal is signal fidelity and preservation.

Continuing on, it is entirely possible to process this image further by working around the issues using a less-standard workflow (from which you will learn much less going forward, once you produce better data). This involves creating a mask for Wipe that shows it which areas are off-limits, and then bumping up its Aggressiveness to start killing faster-undulating detail as well. This is starting to get closer to manual sample setting as in DBE (though it still does not work quite the same):
[attachment: StarTools_199.jpg]
Wavelet Sharpening will accentuate the outline of the shape. Note that most objects that emit due to ionisation have distinct concentrations/"shells"/"shockfronts" originating from the source of the ionizing radiation and expanding into space, buffeted by stellar winds. As such, their edges tend to show a higher luminosity, as you are looking through higher concentrations of emissions there. E.g. depicting such objects as "opaque", as if the whole object were uniformly filled with emitting gas, tends to belie reality. Because of this, it is important to be able to trust the data you have acquired, and not have to second-guess whether a concentration is native to the object or the result of some sort of gradient/artefact caused by uneven lighting and poor/incorrect/no calibration. This is where sample-setting is so dangerous.
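
The geometry behind those bright edges is easy to demonstrate: for a uniformly emitting hollow shell, surface brightness is proportional to the line-of-sight chord length through the shell, which peaks near the rim (a small sketch under that assumption):

[code]
import numpy as np

def shell_path_length(r, r_out=1.0, r_in=0.8):
    """Chord length through a hollow shell at projected radius r.

    The chord (and hence the brightness of a uniformly emitting shell)
    peaks just inside the outer radius: classic limb brightening.
    """
    outer = 2.0 * np.sqrt(np.clip(r_out**2 - np.asarray(r)**2, 0.0, None))
    inner = 2.0 * np.sqrt(np.clip(r_in**2 - np.asarray(r)**2, 0.0, None))
    return outer - inner

print(shell_path_length(np.linspace(0.0, 1.0, 6)))
# [0.4  0.41 0.45 0.54 1.2  0.  ] - the rim is ~3x brighter than the centre.
[/code]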

The Life module's default Isolate preset (which is also a scale-based detail manipulation module, but works in extremely large scales), helps push back noise and a busy star field, while re-focusing the attention on any larger scale structures in your image.

At the end of the day, with the dataset as it currently stands, I would not be comfortable pushing it beyond something like this:
[attached image: processed result]
So, to progress from here, let's try to ascertain what is going on with the dynamic range issue, and let's try to find out why your flats are not working!

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Wed Sep 09, 2020 2:41 am
by ramdom
First of all, thank you for taking the time to do this. I am unclear as to what you're saying when you wrote "This is not normal, nor is it celestial signal". Are you referring to the fact that there's still a gradient? The middle is dark and the outside isn't? (All that goes away once I do a CanonBandingReduction, BTW, and especially with a DBE on top; even just the CBR makes a huge difference to the look of the master light.)

Unfortunately, I think the problem is with including all 632 frames (which is the only way I did the ones with flats; I could regenerate all the data but it'd be a PITA), which were taken under a large number of different conditions, including all kinds of clouds, wind, dew, etc. Perhaps you're seeing that.

I can give you a cleaner set of only 40 hours, but without flats, and you can check it and see if it helps/makes a difference. The SNR on this set is better. The only way to do 40 hours with flats is to redo everything, since I only saved the source files.

What you've produced is pretty decent - I could push it a lot in PI but again the work involved in pushing is where I've spent a lot of my past month.

--

Other random items:

I think that for me the purpose of doing AP is both to collect observational data that I can then process however I want AND to generate aesthetically pleasing pictures in the end, without regard to things like SNR. For all my SNR measurements, in the end the final image I chose was a lower-SNR one since it "looked better". Of course, a lot of this is subjective, but you saw the JPEG I produced - I've seen many, many images of the Squid and I'm pretty satisfied with that one. It may be a tad overprocessed, but if you step back and look at it before the overprocessing, it still looks great. I think it'll look great set with RGB stars or set within the Bat and so on. So ultimately there is something to being a purist, but I'm willing to drop it in pursuit of a great goal (as a scientist, this near-perfectionism has served me well in life). Different strokes, but I'd say a lot of AP folks are some mix of both.

I understand your point about the design of StarTools - I'm saying that's the reason I personally would switch if it would make processing life easier or produce better looking images. And in theory the philosophy of StarTools should allow that - the lack of back propagation means a lot of trial and error in other applications should not be necessary.

More randomly, I would say that what I'm most frustrated by is not knowing what parameters to use for each of these steps - unless you think the "default" options are fine. And since the response time was slow, playing with toggles was difficult, but if you're saying it is due to the data in my stack, I can try another stack, perhaps with a different camera or more homogeneous data. My recent targets have been very heterogeneous like this, done over multiple seasons, conditions, etc. Do you think that's causing the issue?

Also, when I talked about comparing crops, I'm talking about comparing the same crop - i.e., both versions of the same image have been cropped identically. For instance, I could crop without DBE, or apply DBE and then crop identically, and then compare SNR. Are you saying that can't be done? I made sure to leave about 80% of the sky; I only cropped the outer 20% of the edges.

--Ram

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Wed Sep 09, 2020 2:44 am
by ramdom
If you are talking about the banding when you say "this is not normal": since you have PI, just run CBR with defaults on it - the banding entirely disappears. I had some banding issues but found I could correct them by applying CBR on the master light (I also did CBR after calibration and then integrated, but there was no difference in SNR).

--Ram

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Wed Sep 09, 2020 4:12 am
by admin
ramdom wrote: Wed Sep 09, 2020 2:41 am First of all, thank you for taking the time to do this. I am unclear as to what you're saying when you wrote "This is not normal, nor is it celestial signal". Are you referring to the fact that there's still a gradient?
Correct! What I meant is that this is not normal for a stack that is supposed to be calibrated. It is not signal from the sky; it is introduced by incorrect calibration. Indeed, the background is very uneven and does not correspond with real detail in O-III (or any other band) that I can readily see in other renditions. Even before gradient modelling, as you describe, "the middle is dark and the outside is not", which is not (cannot be) a feature of a well-calibrated stack, unless this truly describes the object (I think we both agree that it does not).

I'm surprised to learn Canon Banding Reduction would help here at all - I don't see any banding here? Again, DBE may be abused to destroy signal that you "don't like" (if you are aware of it in the first place), but what is or isn't real becomes very hard to discern for people (and algorithms) when the unwanted signal is faint and very local.

If going for faint objects like this, your calibration needs to be impeccable!
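
For reference, the standard calibration arithmetic that every stacker implements in some form (a sketch only; exact dark/bias handling varies per package):

[code]
import numpy as np

def calibrate(light, master_dark, master_flat, flat_dark):
    """Standard frame calibration: dark-subtract, then flat-divide.

    The flat is dark-subtracted and normalised to its mean, so division
    corrects the illumination pattern without rescaling the signal level.
    All arguments are 2-D numpy arrays from the same sensor/optics setup.
    """
    flat = (master_flat - flat_dark).astype(np.float64)
    flat /= flat.mean()
    return (light - master_dark) / flat
[/code]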
Unfortunately, I think the problem is with including all 632 frames (which is the only way I did the ones with flats; I could regenerate all the data but it'd be a PITA), which were taken under a large number of different conditions, including all kinds of clouds, wind, dew, etc. Perhaps you're seeing that.
I'm not sure - with such a vast number of frames, I would expect outlier rejection algorithms to make mincemeat of any troublesome frames.
What you've produced is pretty decent - I could push it a lot in PI but again the work involved in pushing is where I've spent a lot of my past month.
Pushing clean data is infinitely easier. I would spend some time seeing if you can find out why your flats are not working properly, perhaps on a bright, easy target - just for diagnostic purposes, and to make sure everything is working correctly and in unison.
I think that for me the purpose of doing AP is both to collect observational data that I can then process however I want AND to generate aesthetically pleasing pictures in the end, without regard to things like SNR. For all my SNR measurements, in the end the final image I chose was a lower-SNR one since it "looked better". Of course, a lot of this is subjective, but you saw the JPEG I produced - I've seen many, many images of the Squid and I'm pretty satisfied with that one. It may be a tad overprocessed, but if you step back and look at it before the overprocessing, it still looks great. I think it'll look great set with RGB stars or set within the Bat and so on. So ultimately there is something to being a purist, but I'm willing to drop it in pursuit of a great goal (as a scientist, this near-perfectionism has served me well in life). Different strokes, but I'd say a lot of AP folks are some mix of both.
Indeed - I like to think AP is 50% science, 50% art. You use the science part to establish your dataset in accordance with best practices (using tools and measures like SNR). Then you transform your dataset into an image, according to your personal artistic vision.
More randomly, I would say that what I'm most frustrated by is not knowing what parameters to use for each of these steps - unless you think the "default" options are fine.
Indeed, with a good, clean dataset you can breeze through the modules using mostly defaults (see further below).
If you find StarTools slow on your machine, be sure to Bin your dataset if oversampled and use smaller previews where available. As said, the 1.7 alpha versions are fully GPU accelerated, though with your particular dataset (exhibiting the extreme dynamic range) the single precision floating point operations cause some headaches.
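
(For the curious, software binning is just block-averaging; a minimal sketch of 2x2 binning and why it helps SNR:)

[code]
import numpy as np

def bin2x2(image: np.ndarray) -> np.ndarray:
    """Average 2x2 blocks: halves resolution, ~doubles per-pixel SNR.

    Averaging four pixels reduces uncorrelated noise by sqrt(4) = 2
    while preserving the mean signal - a good trade if oversampled.
    """
    h = image.shape[0] // 2 * 2
    w = image.shape[1] // 2 * 2
    return image[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
[/code]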
if you're saying it is due to the data in my stack, I can try another stack, perhaps with a different camera or more homogeneous data. My recent targets have been very heterogeneous like this, done over multiple seasons, conditions, etc. Do you think that's causing the issue?
It is always possible this may have something to do with it. A short imaging session on a brighter object should turn up any areas or issues that need attention or optimizing. Keeping things simple and breaking things down makes it much easier to pinpoint any issues. Once everything works smoothly, you can build on those foundations.
Also, when I talked about comparing crops, I'm talking about comparing the same crop - i.e., both versions of the same image have been cropped identically. For instance, I could crop without DBE, or apply DBE and then crop identically, and then compare SNR. Are you saying that can't be done? I made sure to leave about 80% of the sky; I only cropped the outer 20% of the edges.
Indeed, applying DBE likely makes SNR comparisons moot, depending on how SNR is measured and on what DBE is configured to do; in essence, it models a signal and then subtracts from (or divides) the original signal to compensate for the modelled signal. In the case of light pollution, it can model and subtract the unwanted signal (but not the noise component, which remains the same!). Therefore, in this strict sense, it by definition changes the signal-to-noise ratio - if measured in those terms.
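
A toy numpy illustration of that point (made-up numbers, not DBE's actual model):

[code]
import numpy as np

rng = np.random.default_rng(1)

# Target of 10 units sitting on 50 units of sky glow; the glow brings
# shot-like noise with it (stddev ~ sqrt(50)).
frame = 10.0 + 50.0 + rng.normal(0.0, np.sqrt(50.0), size=(200, 200))

subtracted = frame - 50.0  # subtract a perfect background model

# The unwanted *signal* is gone; its *noise* is not.
print("mean  :", subtracted.mean())  # ~10  (target level recovered)
print("stddev:", subtracted.std())   # ~7.1 (sqrt(50) - noise remains)
[/code]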

To get to grips with ST quicker, you could perhaps try a freely available dataset, for example the just-released SHO dataset from the Spanish FLO Ikarus Observatory project (see this thread)? In that thread you will see an - on purpose - very minimal workflow with defaults (proving that a standard workflow works for most datasets, even complex SHO composites). It should make for a useful side-by-side workflow and algorithm effectiveness comparison with PI. Data is here, and you can find a video by David Wills processing this dataset in PixInsight. David has also uploaded his PixInsight workspace with notes, etc.

Any comments, trouble, questions, etc. with regards to any of this, please do let me know!

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Wed Sep 09, 2020 5:39 am
by ramdom
Hi Ivo, since banding is the issue you're referring to: it is indeed due to the large-scale banding that is occurring, and it was particularly bad for this target. You can see it quite clearly if you step back a bit; it's a dark/light/dark/light gradient - it's really bands, not a hole in the middle. It's not the typical close-up banding; it's on a different scale, and CBR mostly fixes it and DBE gets the rest. For whatever reason, this has been happening with my camera almost since inception.

I check the SNR after CBR and DBE - CBR almost always doesn't budge it, and DBE perhaps reduces SNR ever so slightly. In this case at least, I obtain spectacular results after doing these two operations, where the background is essentially flat. That's how I was able to do the processing I did. Here are the two files after rotation, CBR and DBE:

http://ram.org/images/space/downloads/ou4_O3.v0.1.fit
http://ram.org/images/space/downloads/ou4_O3.v2.1.fit

You can also try it out: just run CBR with the defaults on the v0 image after it is rotated 180 degrees and you'll see a remarkable change when you redo the STF. I'd say 80-90% of the banding goes away after CBR, and DBE gets the rest.
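
(My understanding is that CBR boils down to subtracting a robust per-row offset; a rough sketch of that idea, not the actual script:)

[code]
import numpy as np

def reduce_row_banding(image: np.ndarray) -> np.ndarray:
    """Subtract each row's median offset to suppress horizontal banding.

    Assumes the banding is additive and constant along each row; bright
    objects should ideally be masked out of the median first.
    """
    row_offsets = np.median(image, axis=1, keepdims=True)
    return image - row_offsets + row_offsets.mean()
[/code]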

As far as why this occurs: that kind of large-scale banding has been with the camera since I bought it. Perhaps I should've returned it and gotten another one, but I've learnt in AP to make do with what you have, and I was excited to get started so I just went with it. Many people have tried to help me calibrate out the amp glow and the banding, but to no avail. I've resorted to just masking it out and dealing with it. I fully understand the theory behind proper calibration, but I think the reality is that these cameras from ZWO and QHY aren't always perfect, not to mention software like SharpCap, which has its own share of bugs, or it could be the QHY SDK, which is what Robin complains about. I have plans to address all of these later, but that's just kicking the can down the road.

At the end, I'm not sure where this leaves us. Time is precious and it's easier to just work around issues. I do think, though, that I will give ST a try with a cleaner set of mine (now that you've confirmed the defaults should be good, I don't have to fiddle with anything other than pushing the buttons). I can try other people's sets, and I 100% believe what you describe - there will be sets where ST does better. However, I have to deal with the gear I have and the problems it generates, and I really would've liked to see a difficult use case, since what I've observed looking at other people's data is that generating clean data is difficult (and this is probably my most difficult data set ever). I had long suspected some electrical interference was causing the banding, and I think it's the power cable from my laptop with incorrect or improper shielding, so going forward some of it will be better. I'm so used to these hacks that I've not bothered to spend time debugging it.

But as a favour, since you did put in the work: can you please send me your final version (the one you said you wouldn't be comfortable pushing beyond) as a 32-bit FITS? I can then examine it properly. Thanks a lot!

--Ram

Re: New to Startools - can you do better with this 53 hour O3 master light?

Posted: Wed Sep 09, 2020 5:52 am
by ramdom
I did, however, take a look at the thread you pointed to on SGL, and the images done using ST are amazing - it is good to know that can be accomplished. But the conditions for that set are amazing too - they spent 100 hours on data collection and used 40 hours of it, throwing out 60 hours' worth! That's not a realistic proposition, right? We have to be able to do this without that effort - what would've happened had all 100 hours been used? But I do have other data sets of varying difficulty and cleanliness, and I can compare and contrast.

So I did similar versions with my set and yes, the 40-hour set (throwing out 13 hours) is a teensy bit better, but just marginally. Nonetheless, the problems you've specifically identified are fixed with CBR. I only wish CBR as part of calibration fixed it completely, which is what would make sense - but it doesn't. CBR applied afterwards does, though, as you can see in the v0.1.fit link I provided.

--Ram