RFC : New pixeldeblur module. #16337
Conversation
If you haven't previously cloned the submodule onto your machine, it may take a while to download -- its repo is over 8 GB by now.
You're mixing floats and doubles, which causes extra conversions and makes vectorization less effective. Use float literals (1.0f instead of 1.0) and float versions of mathematical functions (sqrtf instead of sqrt).
@ralfbrown: Wow - 8 GB, that was what the problem was. It timed out a few times too. To all: I forgot to say you will want to make sure you have denoise, lens correction and chromatic aberrations corrected in the pipeline, or you'll see all of the uncorrected chromatic aberrations amplified by this module.
My first thought was: "what does this bring that is not already available in darktable?" We really avoid having duplicate modules, so we are deprecating some from time to time to have one module for one action. Having this module would only be ok if there is no way to have a D&S or "local contrast" preset doing the same. Don't take this as a no-go for integration, but we really need a lot more information on why this is so needed.
Would be nice to share a RAW + XMP with D&S and this new module for us to play with and see if we cannot get a better result with D&S. Until now I have always been able to avoid the halo, but maybe there are some specific cases I have not found. TIA. I'm sure we'll have more comments from other devs too.
Will do. I actually have 17 images I've been playing with to get a range of contrast & noise -- about 0.5 GB. I am out for the day now but will get to this later.
Some first ideas / questions
A first pass on the style.
@weltyj You have to make the PR against the master branch, BTW.
This would indeed be much appreciated by CPU-only users who want decent sharpening without having to deal with D&S.
@TurboGit -- Thanks very much for the code style comments. You clearly spent some real time looking at the code. I learned with Kernighan and Ritchie, first edition -- when it was new. Those "{}" styles still come out of my fingers that way. I'll go back through your suggestions and see if I can spot some other areas where my formatting was on "1980s automatic pilot", as well as address your other comments.

@jenshannoschwalm -- Yes, this is 3-10 times faster than D&S. I have, at times, wondered if deblurring should happen in the demosaic step. For a low-noise image in particular, it seems like you could do a straight interpolation on the R, G, B components independently, then a transverse (lateral) chromatic aberration adjustment, then some amount of deblurring on each component layer. If you also knew how the lens performed, you could vary the deblurring amount by distance from the image center. It could get even more complicated though; for example, the lens distortion correction alters the number of actual pixels used to estimate the corrected output pixels, resulting in a variable loss of sharpness (i.e. at the center of the image, an output window of 100 pixels may use 100 pixels from the demosaic algorithm, but at the edge of the image the final 100 pixels may only be using 80 pixels from the demosaic algorithm).

In regards to performance on the CPU as it stands now on a little 810x587 test image:
I think pixeldeblur could be sped up more -- I really am pathetically inexperienced with OpenMP. But before anything else happens, other people should be looking critically at pixeldeblur and see if collectively we agree it is worth adding to darktable. My personal impression is that D&S does a better job at "texture", a little worse at noise suppression, and definitely worse with halos on high-contrast edges when you want maximum sharpness.

I will get my test images/XMPs out on Google Drive today. I'll also add some brief comments about where in the images I've been looking, but not make any comments about whether I think pixeldeblur or D&S is better. Much of this is subjective and I think it's best if you look at it and develop your own opinion first.

One final thought -- I think I'll drop the "Advanced Parameters". The larger radius and the smooth deblurring were options I had for some previous version of the algorithm, and now seem unnecessary. Thanks all for your comments!
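Since OpenMP inexperience came up: the usual pattern for per-pixel loops is a single `#pragma omp parallel for` over image rows, which are independent. A minimal sketch (the function and buffer names are illustrative, not from the PR; without -fopenmp the pragma is ignored and the loop simply runs serially):

```c
#include <stddef.h>

// Hypothetical per-pixel pass: scale every pixel of a single-channel
// width x height buffer by `gain`. Each row touches disjoint memory, so
// the outer loop parallelizes safely with no locking.
static void scale_image(const float *in, float *out,
                        int width, int height, float gain)
{
#ifdef _OPENMP
#pragma omp parallel for schedule(static)
#endif
  for(int row = 0; row < height; row++)
  {
    const size_t offs = (size_t)row * (size_t)width;
    for(int col = 0; col < width; col++)
      out[offs + col] = in[offs + col] * gain;
  }
}
```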
And probably an OpenCL implementation would also be nice. |
https://drive.google.com/drive/folders/17NijwiXVTGCYzVpXOzybne0mTJ6ofOjE?usp=drive_link

Theoretically, that is a link to all my test files and the corresponding XMPs. It's about 0.5 GB. I was a meanie and reset the diffuse params so you can tackle it the way you think is best with diffuse without getting distracted by my settings. I'll leave you all to form your own opinions now. I'm certainly open to answering any questions you may have, or entertaining ideas, or ...
@weltyj: Don't have access to your link.
Give the link another try. Should be able to access it now.
@weltyj: Did some testing on the images you provided. I see that the deblur algo has a tendency to burn the highlights a bit more than D&S. The more annoying issue of this PR is that the rendering is very dependent on the actual zoom level. This is not good: if you see the image at 100% it looks good, but when the fit zoom is used the image looks quite blurry. Hard to evaluate an image with this.
Also, please rebase your PR on top of master if you want more devs to test this. You'll also need to fix the code for it to compile:
TIA. |
@TurboGit
> The more annoying issue of this PR is that the rendering is very dependent on the actual zoom level. This is not good as if you see the image at 100% it looks good but when the fit zoom is used the image looks quite blurry. Hard to evaluate an image with this.

Yes -- I haven't figured this out. If the strength of the deblur is the same regardless of the zoom, then the "fit" version is absurdly oversharpened, which doesn't make sense to me; it seems like "fit" should be delivering the pixels you see on the screen to the module. Clearly a lack of understanding on my part. Working on the rebase now -- thanks.
The zoom issue arises because the iop works on the preview (scaled down from the RAW), and some algorithms (like the deblur) behave differently depending on whether they see all the detail of the image (full RAW) or only a preview.
@@ -0,0 +1,1411 @@
/*
This file is part of darktable,
Copyright (C) 2009-2024 darktable developers.
2009 should be removed or changed, depending on whether you started working on this in 2024 or 2023.
Keep in mind that at any zoom less than 100% (or close to it), your module is not getting 1:1 pixels from the original file, but a downsampled version. So you also need to scale the effect radius accordingly. Ninja'd by TurboGit... :)
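A sketch of what "scale the effect radius" could look like. In darktable iops the effective zoom is typically derived from the region-of-interest scale (commonly `roi_in->scale / piece->iscale`); the helper below is illustrative, not the module's code:

```c
// Hypothetical helper: shrink a kernel radius tuned for 1:1 pixels so it
// matches the downsampled preview. roi_scale / iscale is < 1.0f when the
// pipe delivers a scaled-down ("fit") view of the image.
static int scaled_radius(int base_radius, float roi_scale, float iscale)
{
  const float scale = roi_scale / iscale;
  int r = (int)(base_radius * scale + 0.5f);  // round to nearest pixel
  return r < 1 ? 1 : r;                       // never drop below one pixel
}
```

At 100% zoom the radius is unchanged; at fit zoom it shrinks in proportion, which is the usual way modules like sharpen keep their look consistent across zoom levels.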
Second quick review.
g->noise_threshold = dt_bauhaus_slider_from_params(self, N_("noise_threshold"));
dt_bauhaus_slider_set_digits(g->noise_threshold, 2);
gtk_widget_set_tooltip_text(g->noise_threshold, _("std deviations from local mean to be considered a noise pixel\n- very small values will blur the image"));
std -> standard
g->halo_control = dt_bauhaus_slider_from_params(self, N_("halo_control"));
dt_bauhaus_slider_set_digits(g->halo_control, 3);
gtk_widget_set_tooltip_text(g->halo_control, _("0: Allow halos\n1: No halos\n With a large number of iterations can make this smaller"));
No capital letters
g->iterations = dt_bauhaus_slider_from_params(self, N_("iterations"));
dt_bauhaus_slider_set_digits(g->iterations, 0);
gtk_widget_set_tooltip_text(g->iterations, _("increase for better halo control, especially on noisy pixels.\n Usually 3 is enough"));
No capital letters... probably others to be checked.
g->gaussian_strength = dt_bauhaus_slider_from_params(self, N_("gaussian_strength"));
dt_bauhaus_slider_set_digits(g->gaussian_strength, 3);
gtk_widget_set_tooltip_text(g->gaussian_strength, _("higher strength blur window will \'soften\' results"));
No need for the backslashes here. And the characters should be ` and ' (they are not identical).
void tiling_callback(struct dt_iop_module_t *self,
struct dt_dev_pixelpipe_iop_t *piece,
Indentation.
int n_in_window=0 ;
int w=radius*2+1 ;

for(int i=0 ; i < radius ; i++) {
{ should be below the for (many other occurrences)
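For clarity, the excerpt above restyled the way the review asks: opening brace on its own line below the `for`, spacing normalized around operators. The function wrapper and loop body are hypothetical, added only to make the fragment compile:

```c
// Illustrative wrapper around the reviewed fragment; counts `radius` rows
// of a (2*radius+1)-wide window.
static int window_rows(const int radius)
{
  int n_in_window = 0;
  const int w = radius * 2 + 1;

  for(int i = 0; i < radius; i++)
  {
    n_in_window += w;  // hypothetical body: accumulate one row of the window
  }
  return n_in_window;
}
```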
// allocate space for n-component image of size width x height
static inline lab_image new_lab_image(int width, int height, int ch)
const for all parameters
// allocate space for 1-component image of size width x height
static inline gray_image new_gray_image(const int width,
all static routines must start with an underscore (many other occurrences)
The thing is, there is no radius. It only considers adjacent pixels as delivered by the pixel pipe; the only parameter available to alter is the "deblur strength". I also don't understand exactly how downsampled pixels are created. It seems like one of the outcomes of downsampling is that some deblurring occurs as groups of pixels are combined. So I'm struggling to understand how to give a reasonable deblur strength for the preview (unless the zoom shown in the preview window is exactly the same as what is intended for the final export zoom/scale).

There is probably a way to address this -- I wonder if one could just use a checkerboard grid, analyze how the gradient between squares changes at different zoom scales, fit a little equation that predicts how much gradient "improvement" to expect from the downsampling, then back that improvement out of the deblur strength.
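The calibration idea above can be sketched as a toy experiment (illustrative only, in 1-D): build a black/white stripe signal, downsample it by block averaging, and compare the maximum pixel-to-pixel gradient before and after. The resulting attenuation ratio is the kind of number one could fit per zoom scale and then back out of the deblur strength.

```c
// Maximum absolute difference between adjacent samples -- a proxy for
// "edge sharpness" of the stripe pattern.
static float max_gradient(const float *buf, const int n)
{
  float g = 0.0f;
  for(int i = 1; i < n; i++)
  {
    const float d = buf[i] - buf[i - 1];
    const float a = d < 0.0f ? -d : d;
    if(a > g) g = a;
  }
  return g;
}

// Downsample by averaging non-overlapping blocks of `factor` samples
// (a crude stand-in for the pipe's scaling); returns the new length.
static int downsample(const float *in, const int n, const int factor,
                      float *out)
{
  const int m = n / factor;
  for(int j = 0; j < m; j++)
  {
    float s = 0.0f;
    for(int k = 0; k < factor; k++) s += in[j * factor + k];
    out[j] = s / (float)factor;
  }
  return m;
}
```

When a stripe edge falls mid-block, the step of 1.0 in the original becomes a step of 0.5 after 2x averaging, i.e. the downsampling itself "softens" the gradient, which is exactly the effect one would want to subtract from the preview's deblur strength.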
Closed as superseded by #16368 (based on master). |
Uses the backward diffusion model. Gradient reversals are explicitly controlled to prevent halos.
This is like a "mini" diffuse or sharpen module. With halo control set to 1.0, it will not make a halo, but if a halo was present before running the module it will aggressively use it.
This is intended for small changes -- situations where the focus was very slightly off, or the camera lens is just a little bit soft. Or you are going to make a print that will be viewed at a large enough distance that more aggressive sharpening without halos is desired.
For low-noise images it can be pushed to the limit where you see aliased edges, but no halos. I found it was easier to soften the result to put a little anti-aliasing back rather than try to fine-tune the deblurring strength.
I believe this expands the latitude on sharpening very close to the theoretical limit.
I attempted to make contact on the darktable IRC channel, but got nothing back so I just decided to make a PR here.
More details only if you are interested:
The code looks a little strange right now -- many "printf"s. I was collecting info on how it was working, and I built a little "opengl-glui" interface to compile it outside of darktable to make debugging and comparing different methods easy, so there are some strange-looking ifdefs in the code to handle that. It does compile cleanly inside the darktable environment as is.
It runs as fast in OpenMP CPU-only mode as diffuse or sharpen does in OpenCL on my laptop.
I had trouble cloning the repo -- running "git submodule update" just hung in the src/tests/integrations submodule, so I ended up removing that submodule from my local repo.
The module source is at src/iop/pixeldeblur.c
I've been using darktable for many, many years now. I've been writing code much longer than I care to admit ;-)