
Re: About DGDenoise

Posted: Sat Mar 11, 2017 6:33 am
by Guest
+1 as well

Re: About DGDenoise

Posted: Sun Mar 12, 2017 6:44 am
by hydra3333
Thank you, the new GPU-based filters work nicely.

OK, I did a test or two displaying an interleaved (source,result1,result2) clip in vsedit.
Reasonably clean, if blocky, 15 MB source: https://drive.google.com/open?id=0B5RV2 ... Hk1UlA5c2c
Card: 750Ti, CPU: i3820 (due for replacement next year; likely a cheapie Ryzen 1800X and a 1050Ti, though not a 1070, given the touted v2's of the 1060 and 1080)

Confirming, per some other posts, e.g. viewtopic.php?f=8&t=506&start=170#p6074, that the default setting (strength=0.15) for DGDenoise seems to yield a more "softened" visual result compared to KNLMeansCL at defaults (d=1 temporal), at least on my test source. So the easy fix is to use 0.06 or something.

Sometimes a really old movie can be a bit of a shocker with grain and whatnot, and the best thing I've found to date has been the old MDegrain1/2/3, e.g.
video=MDegrain3i2(video,blksize=8,overlap=4,dct=0)
which I suppose does OK due to temporal processing?
I don't have such a source to hand to try, so what would be your thoughts re DGDenoise / KNLMeansCL(d=3) / MDegrain for those types of sources?
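
For reference, a minimal sketch of what the underlying MVTools2 calls typically look like, assuming MDegrain3i2 wraps the usual MSuper/MAnalyse/MDegrain3 chain (thSAD=400 is just an illustrative value):

super = MSuper(video, pel=2)
bv1 = MAnalyse(super, isb=true,  delta=1, blksize=8, overlap=4)
fv1 = MAnalyse(super, isb=false, delta=1, blksize=8, overlap=4)
bv2 = MAnalyse(super, isb=true,  delta=2, blksize=8, overlap=4)
fv2 = MAnalyse(super, isb=false, delta=2, blksize=8, overlap=4)
bv3 = MAnalyse(super, isb=true,  delta=3, blksize=8, overlap=4)
fv3 = MAnalyse(super, isb=false, delta=3, blksize=8, overlap=4)
video = MDegrain3(video, super, bv1, fv1, bv2, fv2, bv3, fv3, thSAD=400) # higher thSAD = stronger temporal smoothing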

I see with interest that you considered the possibility of a temporal option viewtopic.php?f=8&t=506&start=210#p6137, then had some hesitation viewtopic.php?f=8&t=506&start=220#p6153, and then a possible position of sticking with spatial viewtopic.php?f=8&t=506&start=260#p6253
... I'm hoping the DG density matrix is flagging that GPU temporal denoising still has a significant possibility of a state of existence :) ... no plans for temporal?
(Google gives the words but unfortunately not the intellect to comprehend it.)

Just came across the tips from this crowd, although I don't really know what their credentials are
http://www.cineticstudios.com/blog/2015 ... ction.html

VapourSynth x64 shouldn't make any difference, e.g.

Code:

import vapoursynth as vs 
##import havsfunc as haf # http://forum.doom9.org/showthread.php?t=166582 
import havsfuncTS as haf # this version uses vanilla TemporalSoften instead of TemporalSoften2, as it will be "better" over time 
import mvsfunc as mvs  # http://forum.doom9.org/showthread.php?t=172564 
import finesharp as finesharp # http://forum.doom9.org/showthread.php?p=1777815#post1777815 http://avisynth.nl/index.php/FineSharp 
core = vs.get_core(accept_lowercase=True) # leave off threads=8 so it auto-detects threads 
# the r'' indicates do not treat special characters and accept backslashes 
core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\vapoursynth64\plugins\dll-to-choose-from\KNLMeansCL.dll') # the r'' indicates do not treat special characters and accept backslashes 
# LOAD 64 bit AVISYNTH  plugins  into the avs namespace 
core.avs.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll')  
def main(): 
    video = core.avs.DGSource(r'.\z.dgi',deinterlace=1,resize_w=720,resize_h=576) # deinterlace=1 means single rate deinterlacing 
    videooriginal = video 
    video1 = core.knlm.KNLMeansCL(video, device_type="gpu", d=1, a=2) 
    video2 = core.avs.DGDenoise(video,strength=0.06)
    #video = core.avs.DGSharpen(video,strength=0.40) # 
    video = core.std.Interleave([core.text.Text(videooriginal, 'Source clip'), core.text.Text(video1, 'KNLMeansCL clip'), core.text.Text(video2, 'DGDenoise clip')])
    video.set_output() 
    return True 
main() 

Re: About DGDenoise

Posted: Sun Mar 12, 2017 7:27 am
by admin
hydra3333 wrote:Sometimes a really old movie can be a bit of a shocker with grain and whatnot, and the best thing I've found to date has been the old MDegrain1/2/3, e.g.
video=MDegrain3i2(video,blksize=8,overlap=4,dct=0)
which I suppose does OK due to temporal processing?
I don't have such a source to hand to try, so what would be your thoughts re DGDenoise / KNLMeansCL(d=3) / MDegrain for those types of sources?
It's not worth speculating about it in the absence of a clip. When you have one to share I would be happy to discuss comparative results.
I see with interest that you considered the possibility of a temporal option viewtopic.php?f=8&t=506&start=210#p6137, then had some hesitation viewtopic.php?f=8&t=506&start=220#p6153, and then a possible position of sticking with spatial viewtopic.php?f=8&t=506&start=260#p6253
... I'm hoping the DG density matrix is flagging that GPU temporal denoising still has a significant possibility of a state of existence :) ... no plans for temporal?
I've asked several times for a clip that clearly shows the benefits of temporal processing, either alone or in conjunction with spatial processing. Do you have one? That is what I would need to find the motivation to do something temporally. Meanwhile, there is always TemporalSoften(). My personal perspective (which can be altered by evidence) is that ghosting is anathema and I would avoid it like the plague in the absence of palpable overriding considerations. Yeah, yeah, scene change detection. But a frame that isn't a scene change can still be full of motion, and hence ghosting. Please refute my view with some evidence, i.e., a clip.
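
For example, a minimal sketch of that TemporalSoften() fallback (the thresholds and source name are purely illustrative):

dgsource("sample.dgi")
TemporalSoften(2, 4, 8, scenechange=15, mode=2) # radius=2 with mild luma/chroma thresholds; scenechange only skips whole scene cuts, so motion within a scene can still ghost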

If you want to consider motion compensation to combat ghosting, that's perfectly valid. But that has to wait for my CUDA motion estimation stuff.

DG density matrix. That's clever. Not everyone will grok it, though, especially with negative off-diagonal terms.
Just came across the tips from this crowd, although I don't really know what their credentials are
http://www.cineticstudios.com/blog/2015 ... ction.html
It's just warmed-over wives' tales, aka, conventional "wisdom", justifying some commercial product recommendations. I can't take seriously an article that provides a tip on noise reduction that advises to add noise. :facepalm:

Monty Python video enhancement algorithm:

1. Remove noise.
2. Add noise.

That tip is there only to push some commercial tools. The tip to prefer temporal processing is also just a push for a commercial denoiser. They conveniently neglect to mention ghosting. And it's strange that not a single image is given to support anything they say.

Re: DGDenoise

Posted: Sat Apr 08, 2017 5:44 am
by Sharc
I am getting ugly artefacts (broad diagonal color stripes) with chroma=true in DGDenoise.
My source is interlaced video. I separate the fields before applying the filter. Unless I am missing something you should be able to duplicate the issue with any source clip.

Re: DGDenoise

Posted: Sat Apr 08, 2017 6:50 am
by admin
Yikes! I'll get on it this morning. Thanks for the report.

Re: DGDenoise

Posted: Sat Apr 08, 2017 7:59 am
by admin
All fixed. Thanks for reporting it.

Re: DGDenoise

Posted: Sat Apr 08, 2017 8:48 am
by Sharc
Confirmed. It's working.

Re: DGDenoise

Posted: Sun Apr 09, 2017 11:29 am
by admin
Cool, thanks again Sharc.

Re: DGDenoise

Posted: Tue Apr 18, 2017 11:45 am
by Guest
Does DGDenoiseNV handle what is in this sample (film grain or mosquito noise, I believe)?
http://www.mediafire.com/file/1djsss779 ... sample.mkv

If not, can you recommend an x64 filter?

Re: DGDenoise

Posted: Tue Apr 18, 2017 1:03 pm
by admin
Something is broken. Investigating...

Re: DGDenoise

Posted: Tue Apr 18, 2017 1:57 pm
by admin
Oy, that last slipstream was brain-dead. :oops:

OK, I fixed the DGDenoise problem. Now I have one little issue in DGTelecide. Fixes coming soon...

Thanks for pointing this out, gonca.

Re: DGDenoise

Posted: Tue Apr 18, 2017 2:47 pm
by Guest
Thanks for creating the filters

Re: DGDenoise

Posted: Tue Apr 18, 2017 3:07 pm
by admin
Thanks. Of course, it would be better if they worked properly. ;)

I just slipstreamed the fix. Heading over to the binaries thread to announce it.

Re: DGDenoise

Posted: Tue Apr 18, 2017 3:09 pm
by admin
This works pretty well on your sample, gonca:

dgsource("sample.dgi")
dgdenoise(strength=2.0,searchw=9,chroma=true)
dgsharpen()

That's industrial strength. Reduce strength and/or searchw, or turn off chroma, if you like.

Re: DGDenoise

Posted: Tue Apr 18, 2017 3:27 pm
by Guest
I have read the documentation, but would you recommend searchw=9 or chroma=true over the defaults in general terms?

Re: DGDenoise

Posted: Tue Apr 18, 2017 3:42 pm
by admin
Generally, it's a tradeoff of speed against thoroughness (aka, "quality").

Take chroma. Make this script:

dgsource("sample.dgi")
UtoY() # visualize the chroma U plane

and compare it to this:

dgsource("sample.dgi")
grayscale() # visualize the luma plane

You will see that there is lots of noise in the luma but only a relatively small amount in the chroma. That's pretty typical. So adding chroma=true will slow things down while not doing much for the perceived result.
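
If you prefer to eyeball both in a single script, something like this should also work (the BilinearResize upscale of the U view is only there so the two clips can be interleaved):

v = dgsource("sample.dgi")
u = BilinearResize(UToY(v), Width(v), Height(v)) # blow the U plane view back up to frame size
Interleave(Grayscale(v), u)                      # step frame by frame: luma view, then chroma view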

Regarding searchw: the bigger it is, the better the smoothing, but you lose some detail and it takes longer.

I usually just go with searchw=5 and chroma=false and then set the strength as desired. I don't work with very noisy sources like your sample, however. The knobs are there for you to tweak for noisier sources.
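
In script form, those usual settings would look something like this (strength=0.15 is just the default mentioned earlier in the thread; set it to taste):

dgsource("sample.dgi")
dgdenoise(strength=0.15, searchw=5, chroma=false)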

Re: DGDenoise

Posted: Tue Apr 18, 2017 4:15 pm
by Guest
Thanks for the explanation and information.
I normally don't work with sources that noisy, but I do have some older movies that seem to be from the era of "Movie Grain is King".

Re: DGDenoise

Posted: Tue Apr 18, 2017 4:24 pm
by Guest
Running an encode, and it looks like it's all working as it should.

Re: DGDenoise

Posted: Tue Apr 18, 2017 4:28 pm
by admin
Sweet, thanks for the report and testing. I tested this version pretty well too.

Re: DGDenoise

Posted: Thu Apr 20, 2017 11:41 am
by Sharc
admin wrote:Generally, it's a tradeoff of speed against thoroughness (aka, "quality").

Take chroma. Make this script:

dgsource("sample.dgi")
UtoY() # visualize the chroma U plane

and compare it to this:

dgsource("sample.dgi")
grayscale() # visualize the luma plane
Why is the chroma U plane view shown at half frame width? Is it because of the YV12 4:2:0 colorspace?

Re: DGDenoise

Posted: Thu Apr 20, 2017 12:21 pm
by admin
It's also half height. Yes, it is that way because there is one U sample (and one V sample) for each block of 4 luma samples.

Y Y
Y Y
U
V
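
So, for example, on a 720x576 YV12 (4:2:0) clip the U plane view comes out at 360x288 (the 720x576 size is just for illustration):

dgsource("sample.dgi") # say, 720x576 YV12 (4:2:0)
UToY()                 # U plane view: 360x288 (half width, half height)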

Re: DGDenoise

Posted: Thu Apr 20, 2017 2:27 pm
by Sharc
Ah, the penny dropped as to why I got the horizontal reduction only: my source was actually 4:2:2 rather than 4:2:0.
Thanks.

Re: DGDenoise

Posted: Fri Feb 23, 2018 6:40 am
by Sharc
A cosmetic issue with DGDenoise:

When I set strength=0.0 I am getting an almost black picture for chroma=false, and a green picture for chroma=true.
Well, one should not set strength to zero, right? :roll:

Re: DGDenoise

Posted: Fri Feb 23, 2018 7:41 am
by admin
Right. I'll add a check for that. Thanks for pointing it out.

Re: DGDenoise

Posted: Sun Aug 19, 2018 5:52 pm
by mparade
Hello,

Sorry for asking, but how can one access the CUDA filters packaged into DGDecodeNV?

I have been using DGDecNV for a few years and have now opted to give DGDenoise a try to speed up my scripts fed into x265.

Thank you in advance for the support.