RFC : New pixeldeblur module. by weltyj · Pull Request #16337 · darktable-org/darktable

RFC : New pixeldeblur module. #16337


Closed
wants to merge 4 commits

Conversation

weltyj
@weltyj weltyj commented Feb 16, 2024

Uses the backward diffusion model. Gradient reversals are explicitly controlled to prevent halos.

This is like a "mini" diffuse or sharpen module. With halo control set to 1.0 it will not create a halo, but if a halo was present before running the module it will aggressively exploit it.
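(Editorial sketch to make the idea concrete -- this is not the PR's actual code, and the function and parameter names are made up:)

#include <math.h>

// One halo-controlled backward-diffusion iteration: sharpen by inverting a
// diffusion step, then clamp the result toward the local 3x3 min/max --
// overshoot past the local extrema is what shows up as a halo on edges.
static void deblur_iteration(const float *in, float *out,
                             const int width, const int height,
                             const float strength, const float halo_control)
{
  for(int y = 1; y < height - 1; y++)
    for(int x = 1; x < width - 1; x++)
    {
      const int i = y * width + x;
      // 4-neighbour Laplacian of the input
      const float lap = in[i - 1] + in[i + 1] + in[i - width] + in[i + width]
                        - 4.0f * in[i];
      // backward (inverse) diffusion step
      const float v = in[i] - strength * lap;
      // local extrema of the 3x3 neighbourhood
      float lo = in[i], hi = in[i];
      for(int dy = -1; dy <= 1; dy++)
        for(int dx = -1; dx <= 1; dx++)
        {
          const float n = in[i + dy * width + dx];
          lo = fminf(lo, n);
          hi = fmaxf(hi, n);
        }
      // halo_control = 1: clamp fully into [lo, hi], so no new halos;
      // halo_control = 0: allow overshoot of up to one local range per side
      const float slack = (1.0f - halo_control) * (hi - lo);
      out[i] = fminf(fmaxf(v, lo - slack), hi + slack);
    }
}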

This is intended for small changes -- situations where the focus was very slightly off, or the camera lens is just a little bit soft. Or you are going to make a print that will be viewed at a large enough distance that more aggressive sharpening without halos is desired.

For low-noise images it can be pushed to the limit where you see aliased edges but no halos. I found it was easier to soften the result to put a little anti-aliasing back rather than try to fine-tune the deblurring strength.

I believe this expands the latitude on sharpening very close to the theoretical limit.

I attempted to make contact on the darktable IRC channel, but got nothing back so I just decided to make a PR here.

  • First order of business is for other people to give this a try and see what they think.
  • Second thing, if you all like it, is for me to clean up the code, and get another experienced person to review and correct my attempt at OpenMP -- not a lot of code...
  • Third thing is to do the OpenCL implementation. I can do the OpenCL code.
  • I am not sure exactly how to manage scales < 100%; I have a rough guess at it.
  • I am happy to maintain this code in the future.

More details only if you are interested:

The code looks a little strange right now -- many "printf"s -- I was collecting info on how it was working, and I built a little "opengl-glui" interface to compile it outside of darktable to make debugging and comparing different methods easy, so there are some strange-looking ifdefs in the code to handle that. It does compile cleanly inside the darktable environment as is.

It runs as fast in OpenMP CPU-only mode as diffuse or sharpen does in OpenCL on my laptop.

I had trouble cloning the repository -- running "git submodule update" just hung on the src/tests/integrations submodule, so I ended up removing that submodule in my local repo.

The module source is at src/iop/pixeldeblur.c

I've been using darktable for many, many years now. I've been writing code much longer than I care to admit ;-)

@ralfbrown
Collaborator

If you haven't previously cloned the submodule onto your machine, it may take a while to download -- its repo is over 8GB by now....

@ralfbrown
Collaborator

You're mixing floats and doubles, which causes extra conversions and makes vectorization less effective. Use float literals (1.0f instead of 1.0) and float versions of mathematical functions (sqrtf instead of sqrt).
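(For example -- editorial illustration of the point, not code from the PR:)

#include <math.h>

// mixed: 1.0 and sqrt() are double, so each call round-trips
// float -> double -> float and blocks clean vectorization
float norm_mixed(const float x, const float y)
{
  return sqrt(x * x + y * y + 1.0);
}

// all-float: stays in single precision throughout
float norm_float(const float x, const float y)
{
  return sqrtf(x * x + y * y + 1.0f);
}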

@weltyj
Author
weltyj commented Feb 17, 2024

@ralfbrown: Wow -- 8GB, that was the problem. It timed out a few times too.
Everything is float. I missed a few float literals. I need to go back and check for those, as well as for sqrtf() and fabsf(). There are probably places where arrays could be aligned too. Thanks!

To all: I forgot to say you will want to make sure you have denoise, lens correction and chromatic aberration corrections applied in the pipeline, or you'll see all of the uncorrected chromatic aberrations amplified by this module.

@TurboGit
Member

My first thought was: "what does this bring that is not already available in darktable?"

We really avoid having duplicate modules, so we deprecate some from time to time to keep one module for one action. Having this module would only be OK if there is no way to have a D&S or "local contrast" preset doing the same.

Don't take this as a no-go for integration, but we really need a lot more information on why this is so needed.

@weltyj
Author
weltyj commented Feb 17, 2024

I completely understand not wanting duplicate modules. The problem with D&S is that it can make halos, sometimes extremely bad ones. I altered the D&S module to get a more fine-grained "sharpness" parameter, but could not prevent halos on some of my images no matter how hard I pushed the edge sensitivity/edge threshold. Increasing the luminance masking threshold will prevent halos, but also prevents reaching the desired sharpness. Local contrast just doesn't get anywhere close to what D&S or pixeldeblur can do.

I do like the D&S module, and except for the halos I would prefer to have it rather than an additional module. Another argument for pixeldeblur is that it has only one purpose -- sharpening for 100% crops -- so it's easier for a user to understand and has fewer controls.

Here is an example of where D&S makes a bad halo, and what pixeldeblur can do, shown as 100% crops. Unsharpened on the left, D&S in the middle, pixeldeblur on the right.
[image: ds_pd_compare -- 100% crops: unsharpened / D&S / pixeldeblur]

@TurboGit
Member

It would be nice to share a RAW + XMP with D&S and this new module for us to play with, to see if we cannot get a better result with D&S. Until now I have always been able to avoid the halo, but maybe there are some specific cases I have not found. TIA.

I'm sure we'll have more comments from other devs too.

@weltyj
Author
weltyj commented Feb 17, 2024

Will do. I actually have 17 images I've been playing with to get a range of contrast & noise -- about 0.5 gigs. I am out for the day now but will get to this later.

@jenshannoschwalm
Collaborator

Some first ideas / questions

  1. If this module is tuned for 1:1 pixel processing, might it be well suited a) for sharpening in the demosaicer as a post-processing step, or b) while exporting?
  2. If I understand correctly, this could be a "replacement" for the old sharpen too?
  3. If it is much faster than D&S, that would certainly be a pro argument, especially for CPU-only users?

Member
@TurboGit TurboGit left a comment


A first pass on the style.

@victoryforce
Collaborator

@weltyj You have to make the PR against the master branch, BTW.

@HansBull
Contributor

This would indeed be much appreciated by CPU-only users who want decent sharpening without having to deal with D&S.

@Christian-Bouhon
2. If I understand correctly, this could be a "replacement" for the old sharpen too?

Just one idea among many: I suggest incorporating this module into the sharpen module.
Personally, I've come back to it because I get very good results with my K-1. I admit that you shouldn't push the radius too much (max for me 1,000).
[image: sharpen module settings]

@weltyj
Author
weltyj commented Feb 18, 2024

@TurboGit -- Thanks very much for the code style comments; you clearly spent some real time looking at the code. I learned with Kernighan and Ritchie, first edition, when it was new, and those "{}" styles still come out of my fingers that way. I'll go back through your suggestions and see if I can spot other areas where my formatting was on "1980s automatic-pilot", as well as address your other comments.

@jenshannoschwalm -- Yes, this is 3-10 times faster than D&S. I have, at times, wondered if deblurring should happen in the demosaic step. If you had a low-noise image in particular, it seems like you could do a straight interpolation on the R,G,B components independently, then a transverse (lateral) chromatic aberration adjustment, then some amount of deblurring on each component layer. If you also knew how the lens performed, you could vary the deblurring amount by distance from the image center. It could get even more complicated though; for example, the lens distortion correction alters the number of actual pixels used to estimate the corrected output pixels, resulting in a variable loss of sharpness (i.e. at the center of the image, an output window of 100 pixels may use 100 pixels from the demosaic algorithm, but at the edge of the image the final 100 pixels may only be using 80 pixels from the demosaic algorithm).

In regard to performance on the CPU as it stands now, on a little 810x587 test image:

  • The old sharpen module took 0.013 seconds.
  • Pixeldeblur took 0.114 seconds (2 iterations).
  • D&S took 0.343 seconds (13 iterations -- in order to get similar sharpness, but a lot more noise than pixeldeblur).
  • D&S took 1.213 seconds (66 iterations -- in order to get a similar sharpness/noise level as pixeldeblur).

I think pixeldeblur could be sped up more -- I really am pathetically inexperienced with OpenMP.

But before anything else happens -- other people should look critically at pixeldeblur and see if collectively we agree it is worth adding to darktable. My personal impression is that D&S does a better job at "texture", a little worse at noise suppression, and definitely worse with halos on high-contrast edges when you want maximum sharpness. I will get my test images/XMPs out on Google Drive today. I'll also add some brief comments about where in the images I've been looking, but not comment on whether I think pixeldeblur or D&S is better. Much of this is subjective and I think it's best if you look at it and develop your own opinion first.

One final thought -- I think I'll drop the "Advanced Parameters". The larger radius and the smooth deblurring were options I had for some previous version of the algorithm, and now seem unnecessary.

Thanks all for your comments!

@TurboGit
Member

I think pixeldeblur could be sped up more -- I really am pathetically inexperienced with OpenMP.

And probably an OpenCL implementation would also be nice.

@TurboGit TurboGit changed the title New pixeldeblur module. RFC : New pixeldeblur module. Feb 18, 2024
@TurboGit TurboGit added the labels wip (pull request in making, tests and feedback needed), priority: low (core features work as expected, only secondary/optional features don't) and scope: image processing (correcting pixels) Feb 18, 2024
@weltyj
Author
weltyj commented Feb 19, 2024

https://drive.google.com/drive/folders/17NijwiXVTGCYzVpXOzybne0mTJ6ofOjE?usp=drive_link

Theoretically, there is a link to all my test files and the corresponding XMPs. It's about 0.5 gigs.

I was a meanie and reset the diffuse params so you can tackle it the way you think best with diffuse, without getting distracted by my settings.
I also found out what was going wrong with a little part of the algorithm that was destroying some of the image texture (sometimes known as noise ;-) ), so I fixed that, which was a big improvement for a few of my test images.

I'll leave you all to form your own opinions now. I'm certainly open to answering any questions you may have, or entertaining ideas, or ...

@TurboGit
Member

@weltyj : Don't have access to your link.

@weltyj
Author
weltyj commented Feb 19, 2024

Give the link another try. Should be able to access it now.

@TurboGit
Member

@weltyj : Did some testing on the images you provided. I see that the deblur algo has a tendency to burn the highlights a bit more than D&S (using the sharpness: strong preset).

The more annoying issue with this PR is that the rendering is very dependent on the actual zoom level. This is not good: if you look at the image at 100% it looks good, but when the fit zoom is used the image looks quite blurry. It is hard to evaluate an image with this.

@TurboGit
Member

Also, please rebase your PR on top of master if you want more devs to test this. You'll also need to fix the code for it to compile:

diff --git a/src/iop/pixeldeblur.c b/src/iop/pixeldeblur.c
index 70e500229c..e8be09b482 100644
--- a/src/iop/pixeldeblur.c
+++ b/src/iop/pixeldeblur.c
@@ -323,7 +323,7 @@ static inline gray_image new_gray_image(const int width,
                                         const int height)
 {
   gray_image gi ;
-  gi.data = dt_alloc_align(64, sizeof(float) * width * height) ;
+  gi.data = dt_alloc_aligned(sizeof(float) * width * height) ;
   gi.width = width ;
   gi.height = height ;
   gi.stride = 1 ; // 1 is the stride

TIA.
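(Editorial note: a minimal usage sketch, assuming darktable's usual allocator pairing, where dt_alloc_aligned() takes only the byte size and such buffers are released with dt_free_align():)

// assumption: dt_alloc_aligned()/dt_free_align() are the paired helpers
float *buf = dt_alloc_aligned(sizeof(float) * width * height);
if(buf)
{
  // ... fill and process the buffer ...
  dt_free_align(buf);
}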

@weltyj
Author
weltyj commented Feb 21, 2024

@TurboGit :
The more annoying issue with this PR is that the rendering is very dependent on the actual zoom level. This is not good: if you look at the image at 100% it looks good, but when the fit zoom is used the image looks quite blurry. It is hard to evaluate an image with this.

Yes -- I haven't figured this out. If the strength of the deblur is the same regardless of the zoom, then the "fit" version is absurdly oversharpened, which doesn't make sense to me; it seems like the fit view should be delivering the pixels you see on the screen to the module. Clearly a lack of understanding on my part.

Working on the rebase now -- thanks.

@TurboGit
Member

The zoom issue is because the iop works on the preview (scaled down from the RAW), and some algorithms (like the deblur) work differently depending on whether all the details of the image are present (full RAW) or it is working on a preview.

@@ -0,0 +1,1411 @@
/*
This file is part of darktable,
Copyright (C) 2009-2024 darktable developers.
Member


2009 should be removed or changed, depending on whether you started working on this in 2024 or 2023.

@ralfbrown
Collaborator
ralfbrown commented Feb 21, 2024

Keep in mind that at any zoom less than 100% (or close to it), your module is not getting 1:1 pixels from the original file, but a downsampled version. So you also need to scale the effect radius accordingly.

Ninja'd by TurboGit.... :)
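(For reference, a hedged sketch of the usual pattern in existing iops such as sharpen -- the field names assume the standard process() signature, and d->radius here is a hypothetical pixel-distance parameter:)

// ratio between the pixels this iop currently sees and the full-resolution
// input; roughly 1.0 at 100% zoom, < 1.0 when zoomed out to fit
const float zoom = roi_in->scale / piece->iscale;

// scale any pixel-distance parameter by the same factor
const float effective_radius = fmaxf(0.25f, d->radius * zoom);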

Member
@TurboGit TurboGit left a comment


Second quick review.


g->noise_threshold = dt_bauhaus_slider_from_params(self, N_("noise_threshold"));
dt_bauhaus_slider_set_digits(g->noise_threshold, 2);
gtk_widget_set_tooltip_text(g->noise_threshold, _("std deviations from local mean to be considered a noise pixel\n- very small values will blur the image"));
Member


std -> standard


g->halo_control = dt_bauhaus_slider_from_params(self, N_("halo_control"));
dt_bauhaus_slider_set_digits(g->halo_control, 3);
gtk_widget_set_tooltip_text(g->halo_control, _("0: Allow halos\n1: No halos\n With a large number of iterations can make this smaller"));
Member


No capital letters


g->iterations = dt_bauhaus_slider_from_params(self, N_("iterations"));
dt_bauhaus_slider_set_digits(g->iterations, 0);
gtk_widget_set_tooltip_text(g->iterations, _("increase for better halo control, especially on noisy pixels.\n Usually 3 is enough"));
Member


No capital letters... probably others to be checked.


g->gaussian_strength = dt_bauhaus_slider_from_params(self, N_("gaussian_strength"));
dt_bauhaus_slider_set_digits(g->gaussian_strength, 3);
gtk_widget_set_tooltip_text(g->gaussian_strength, _("higher strength blur window will \'soften\' results"));
Member


No need for a backslash here. And the characters should be ` and ' (they are not identical).



void tiling_callback(struct dt_iop_module_t *self,
struct dt_dev_pixelpipe_iop_t *piece,
Member


Indentation.

int n_in_window=0 ;
int w=radius*2+1 ;

for(int i=0 ; i < radius ; i++) {
Member


{ should be below for (many other occurrences)



// allocate space for n-component image of size width x height
static inline lab_image new_lab_image(int width, int height, int ch)
Member


const for all parameters



// allocate space for 1-component image of size width x height
static inline gray_image new_gray_image(const int width,
Member


all static routines must start with an underscore (many other occurrences)

@weltyj
Author
weltyj commented Feb 21, 2024

Keep in mind that at any zoom less than 100% (or close to it), your module is not getting 1:1 pixels from the original file, but a downsampled version. So you also need to scale the effect radius accordingly.

Ninja'd by TurboGit.... :)

The thing is, there is no radius. It only considers adjacent pixels as delivered by the pixel pipe. The only parameter available to alter is the "deblur strength".

I also don't understand exactly how downsampled pixels are created. It seems like one of the outcomes of downsampling is that some deblurring occurs as groups of pixels are combined. So I'm struggling to understand how to choose a reasonable deblur strength for the preview (unless the zoom shown in the preview window is exactly the same as what is intended for the final export zoom/scale).

There is probably a way to address this -- I wonder if one could just use a checkerboard grid, analyze how the gradient between squares changes at different zoom scales, fit a little equation that predicts how much gradient "improvement" to expect from the downsampling, then back that improvement out of the deblur strength.
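(A minimal, self-contained sketch of that experiment -- editorial illustration only, using a 1D step edge as a stand-in for the checkerboard and a box downsample as a stand-in for the preview scaler:)

#include <math.h>
#include <stdio.h>

#define N 64

// steepest sample-to-sample gradient in a 1D signal
static float max_gradient(const float *s, const int n)
{
  float g = 0.0f;
  for(int i = 1; i < n; i++) g = fmaxf(g, fabsf(s[i] - s[i - 1]));
  return g;
}

int main(void)
{
  float edge[N], down[N / 2];
  // soft step edge: 0 -> 1 ramped over a few pixels, simulating lens blur
  for(int i = 0; i < N; i++)
    edge[i] = 1.0f / (1.0f + expf(-(float)(i - N / 2) / 2.0f));
  // 2x box downsample, standing in for the pipeline's preview scaler
  for(int i = 0; i < N / 2; i++)
    down[i] = 0.5f * (edge[2 * i] + edge[2 * i + 1]);
  // > 1 means the downsample already steepened the edge "for free";
  // that gain could be backed out of the deblur strength for previews
  printf("gradient gain from downsampling: %.3f\n",
         max_gradient(down, N / 2) / max_gradient(edge, N));
  return 0;
}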

@TurboGit
Member

Closed as superseded by #16368 (based on master).

@TurboGit TurboGit closed this Feb 22, 2024