DGDecomb

These CUDA filters are packaged into DGDecodeNV, which is part of DGDecNV.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

@Sharc

Thank you. Investigating...

@gonca

Ah, OK. Well then, everything is working as expected. No sequences needed.

One thing: You don't need that last line, as it's already in YV12 at 23.976 fps coming out of DGDecimate().
Guest

Re: DGDecomb

Post by Guest »

One thing: You don't need that last line, as it's already in YV12 at 23.976 fps coming out of DGDecimate().
Old habits die hard :scratch: , or it's just that I am finally getting the hang of very basic avisynth usage and still need more experienced users to point those things out to me :facepalm:
Seriously though, it is no problem for me to set up a file hosting account, if you think it will make things easier all around.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

@Sharc

The deinterlacer is broken. I will fix it.

@gonca

Thanks again for the hosting offer. Don't worry I have a fine host. ;)
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

@Sharc

Please re-download and re-test. Note that there is now a new DGTelecide option called blend. Set it to true for blend deinterlacing or to false (default) for interpolate deinterlacing.
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

Yep, the deinterlacer works now!
I noticed however some chroma artefacts; see for example the red stripes near the guy's hand in the bottom left corner of frames 248, 260 and 261 of my testclip. They are visible for both blend=false and blend=true.
I don't get these chroma artefacts (though I get some others instead) with the classic Telecide() or the classic FieldDeinterlace(). Perhaps something to look into?

Edit:
There may be an issue with the setting of pthresh, e.g. for telecined 3:2 footage (without decimation, for my testing):
a) For pthresh=0.0: five clean frames like a b c d d (as expected). No deinterlacing.
b) For 0<pthresh<1.0: three clean frames and two combed frames, like a b c x x # x="dirty" (weak crosstalk of the fields; partial deinterlacing).
c) For 1.0<pthresh<10 (I didn't test >10): same as a), i.e. five clean frames, no deinterlacing.

Hmm... more tests ongoing ....

Edit2:
Does pthresh=0.0 disable the deinterlacing? I thought it would rather deinterlace everything... I guess this was my misunderstanding.
Still, the chroma artefact remains in the case of deinterlacing.
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

admin wrote:I notice you did not comment on its speed, even though I specifically asked about it.
Sorry... I totally missed that one.
The algorithm is "complex" and probably does a lot more computation than a basic/standard one. My goal wasn't really speed (even if I tried everything I could to speed it up); yours is probably light-years faster. After adding MT to my filter, I get a whole-process time of around 5 minutes on a 1080p anime episode (read data + DGDecode + IVTC filter + UT Video encode + write data), whereas the process time without the filter (read data + DGDecode + UT Video encode + write data) is around 2 minutes.
But the first purpose of my post was just to share my root idea: instead of searching for the highest correlation value to detect which frames are telecined, do the telecine on the frames and check whether the correlation value decreases. That's what I had in mind when I wrote my filter, and after doing it I got the feeling it was a better approach than the standard way.
Then adapt/test this idea with the resources you are using.
And doing the telecine before computing the correlation takes no time, because if you have a function like ComputeCorrelation(*src_top,*src_bottom,....), you just have to pass src_top/src_bottom from different frames to act "as if" you had IVTC'd the frame. Though it's true that instead of one correlation computation per frame, you then have at least two correlation computations per frame.
I've PM'd you the FTP.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Good morning, jp. Thank you for the stream sent via PM. This script makes it look very sweet IMHO:

loadplugin("dgdecodenv.dll")
avisource("SP_IVTC.avi").assumetff()
dgtelecide(pthresh=2.5,blend=true) # postprocessing needed for some fade transitions
dgdecimate()
dgdenoise(strength=0.2)
dgsharpen(strength=1.0)

Note that with AVISource you have to set the field order correctly. Here, your stream is TFF but the default for AVISource() is BFF. To be on the safe side, always set the field order explicitly with AVISource(). This easily plays in real time in MPC-HC. Finally, I prefer blend for interlaced fades. It looks better to me when playing at normal speed than interpolate.

Regarding your algorithm, your second explanation is clearer for me but some things remain unclear.
Instead of searching for the highest correlation value to detect which frames are telecined
What are you correlating and how? BTW, I don't try to determine "which frames are telecined". I simply try two field matches and pick the best match to assemble the output frame.
do the telecine on the frames and check whether the correlation value decreases
What do you mean by "do the telecine"? It's already telecined and we are trying to undo it. And again, what correlation?

It would help to have a pseudo code explanation. Following is mine. If you could give something like that all doubts would vanish.

For each input frame:

1. Calculate two difference metrics: the difference between the top field of the current frame and the bottom field of the current frame, and the difference between the top field of the current frame and the bottom field of the previous frame. Note that the difference calculation is not a simple pixel difference. It involves shifting of the bottom field to align it with the top field and blurring of the fields to suppress noise. Tricky coding allows it to be performed very fast. ;)

2. Choose the lowest difference and call it either current-current or current-previous.

3. If current-current, output the input frame. If current-previous, assemble an output frame from the top field of the current frame and the bottom field of the previous frame.

That's just for the matching and does not include the postprocessing.
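The matching steps above can be sketched in Python (a minimal toy with hypothetical helper names; the real filter also shifts and blurs the fields and runs on CUDA, none of which is shown here):

```python
def field_diff(top, bottom):
    """Toy difference metric: sum of absolute pixel differences between
    two fields, each given as a list of rows. The real metric also shifts
    the bottom field into alignment and blurs both fields to suppress
    noise; that is omitted in this sketch."""
    return sum(abs(t - b)
               for t_row, b_row in zip(top, bottom)
               for t, b in zip(t_row, b_row))

def match_frame(cur_top, cur_bottom, prev_bottom):
    """Steps 1-3 above: compute the two difference metrics, pick the
    lower one, and assemble the output frame accordingly."""
    cc = field_diff(cur_top, cur_bottom)    # current-current
    cp = field_diff(cur_top, prev_bottom)   # current-previous
    if cc <= cp:
        return cur_top, cur_bottom          # deliver the input frame as-is
    return cur_top, prev_bottom             # rebuild from the previous bottom field
```

For example, if the current frame's bottom field actually belongs to the next source frame, the current-previous match wins and the frame is reassembled from the previous bottom field.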
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Good morning, Sharc.

Yes, pthresh 0.0 disables postprocessing. I have added that to the document.

Ah yes, chroma artifacts. Silly me, I deinterlaced only the luma. :oops: Thanks for pointing it out.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

OK, Sharc, please re-download and re-test. Getting close to goodness now.

I was thinking of providing a way to dynamically change pthresh for different scenes. Something like a config file:

0: 1.5
100: 3.2
1050: 0.5

The pthresh value would change at the given frame number and remain until changed again. I'm not sure it's worth the bother, though.
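A sketch of how such a config file could be parsed and looked up, assuming the "frame: value" format above (the format is only a proposal here; nothing in this snippet is an actual DGDecodeNV feature):

```python
def parse_pthresh_config(text):
    """Parse lines of the form 'frame: pthresh' into a sorted list of
    (frame, value) change points. Blank lines are ignored."""
    points = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        frame, value = line.split(":")
        points.append((int(frame), float(value)))
    return sorted(points)

def pthresh_at(points, frame):
    """Return the pthresh in effect at a frame: the value set by the
    last change point at or before it (None before the first point)."""
    current = None
    for start, value in points:
        if start <= frame:
            current = value
    return current
```

With the example file, frames 0-99 would use 1.5, frames 100-1049 would use 3.2, and everything from 1050 on would use 0.5.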
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

First, a mistake on my part: "do IVTC", not "do telecine".
It seems the English word doesn't mean what I thought - a "false friend", or maybe it doesn't even exist.
"Correlation" = sum of abs(X-Y): the closer X and Y are, the lower it is, where X and Y are a line of data.
So, the classic algorithms I knew choose the top field of frame N for X and the bottom field of frame N for Y. The two frames which produce the highest values are the telecined frames.
My idea was: if the frame is telecined, the IVTC-produced frame will yield a lower computed value than the original frame.
... I'm starting to hit my English limitations trying to explain... Sorry for the sometimes bad English.
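So the metric is a plain sum of absolute differences over a line of pixels; a small Python illustration of why matched fields score low and mismatched fields score high (toy data, not jpsdr's actual code):

```python
def sad(x, y):
    """Sum of absolute differences between two lines of pixel data.
    The closer the lines, the lower the value."""
    return sum(abs(a - b) for a, b in zip(x, y))

# In a progressive frame the top and bottom fields come from the same
# moment, so corresponding lines nearly agree and the SAD is low.
top_line    = [10, 20, 30, 40]
bottom_line = [11, 19, 31, 40]

# In a telecined frame the fields come from different source frames,
# so the SAD between them is high.
foreign_line = [90, 80, 70, 60]
```

Here sad(top_line, bottom_line) is 3, while sad(top_line, foreign_line) is 200: the telecined pairing stands out clearly.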
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

OK, basically what you're doing follows the same idea as mine, now that I've read your description. Except that in my case it's a simple pixel difference, and the denoising part is just done by removing the 2 LSBs... :D
Beyond that, I do a lot of other things: validate the IVTC pattern (the 2 frames must be contiguous), and if the data is not validated well enough (the decision thresholds are not met) I keep the previous pattern and apply it to the current sequence, etc.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Thank you for the explanation. It seems to me that your solution is limited to fixed, regular 3:2 pulldown and cannot operate correctly on streams with irregular pulldown. Also, noise is not always limited to the two LSBs, but it's a reasonable thing to do.

BTW, the correct term for what you are doing is not "correlation" but "sum of absolute differences".

Did you like my results on your stream?
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

I haven't tested it yet; I'll do it properly a little later when I have more time (very probably this weekend).
Yes, I think it's limited to 3:2, but honestly I don't remember exactly... :scratch:
I made it "especially" for anime, which has "standard" 3:2 pulldown but where the pattern changes at each scene, or even within the same scene (very rare), while still staying 3:2. It may be limited, but it increases the robustness of the detection.
And finally, for scenes where nothing works, there is the "manual mode". I feed a text file to my plugin telling it on which frames to force the pattern. For example, I might feed something like this:
1000 1500 3
5000 5030 1
10001 10033 4
....
This means:
Automatic detection until frame 999.
Force the IVTC pattern to 3 between frames 1000 and 1500.
Automatic detection from frame 1501 to 4999.
Force the IVTC pattern to 1 between frames 5000 and 5030.
etc.
Because my goal is NOT to deinterlace the result: I want it to be "perfect" progressive, without blurring from a deinterlace.
But in these files fades are very often a mess, mixing 2 IVTC patterns between scenes, or being field-based instead of frame-based for black/white fades, which makes them impossible to IVTC; whatever you do, they stay interlaced.
That's where the "manual mode" of my deinterlace filter comes in...
I feed it a text file the same way as for my IVTC filter, with for example:
100 110 1
1000 1020 2
.....
This means:
- Deinterlace between frames 100 and 110 with "mode 1".
- Deinterlace between frames 1000 and 1020 with "mode 2".
In between: do nothing.
With both these tools and their manual modes, I'm able to get a fully IVTC'd result without deinterlacing (except for the fading parts).
But the working time to get this result is between 40 minutes and 2 hours for a 20-minute episode.
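Both manual-mode files share the same three-column "start end value" layout, so one parser would cover them; a sketch under that assumption (illustrative code, not jpsdr's plugin):

```python
def parse_overrides(text):
    """Parse 'start end value' lines into (start, end, value) ranges.
    Frames outside every range fall back to automatic detection."""
    ranges = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank lines and '....' filler
        start, end, value = map(int, parts)
        ranges.append((start, end, value))
    return ranges

def forced_value(ranges, frame):
    """Return the forced pattern/mode for a frame, or None when the
    frame is not covered and automatic detection applies."""
    for start, end, value in ranges:
        if start <= frame <= end:
            return value
    return None
```

With the IVTC example file, frame 999 would use automatic detection, frame 1000 would be forced to pattern 3, and frame 5031 would be automatic again.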
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

Euh... I was too curious and grabbed the latest 2053 version, but I get an error message saying there is no such function as dgtelecide... :?:
I've opened an avs file using DGSource and it works fine, but only if I put the license.txt file in the avisynth plugin directory (otherwise I get a green output with something small and unreadable at the top left of the screen); it wasn't like that before... :scratch:
Same result under Windows 7 SP1 x64 with avs+ r2440 and under Windows 7 SP1 x86 with Avisynth 2.6.1.
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

admin wrote:OK, Sharc, please re-download and re-test. Getting close to goodness now.

I was thinking of providing a way to dynamically change pthresh for different scenes. Something like a config file:

0: 1.5
100: 3.2
1050: 0.5

The pthresh value would change at the given frame number and remain until changed again. I'm not sure it's worth the bother, though.
All good now. Chroma problem solved. Thanks :D

P.S.
The config file for dynamically changing pthresh would have to be elaborated manually, by stepping through the file and inspecting the show=true result, right?
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

jpsdr wrote:Euh... I was too curious and grabbed the latest 2053 version, but I get an error message saying there is no such function as dgtelecide... :?:
It has not been slipstreamed yet, so you need the Beta.rar linked earlier.

http://rationalqm.us/misc/Beta.rar
I've opened an avs file using DGSource and it works fine, but only if I put the license.txt file in the avisynth plugin directory (otherwise I get a green output with something small and unreadable at the top left of the screen); it wasn't like that before... :scratch:
Same result under Windows 7 SP1 x64 with avs+ r2440 and under Windows 7 SP1 x86 with Avisynth 2.6.1.
Yes, I added protection to the DLL. You should read the Readme.txt file. :D
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Sharc wrote:The config file for dynamically changing pthresh would have to be elaborated manually, by stepping through the file and inspecting the show=true result, right?
Yes. Some people are picky enough that they would be willing to do it.
Guest

Re: DGDecomb

Post by Guest »

Thanks to D.G. and his CUDA tools I am getting the hang of very basic avisynth scripts - well, kind of, sort of.
So as I read the documentation some questions arise, apart from "what did he say?" :lol:
If there are no objections I will ask the questions in the appropriate subforum, in the hope that the more experienced users will help me out.

If I may ask an opinion
Which would you recommend for DGTelecide
blend=true or blend=false
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Which would you recommend, coffee or tea? It depends a lot on personal taste. Here is my thinking. Blend hides aliasing (stairstepping artifacts) better, but ghosting is produced that can be visible for large motions. However, for me when you play at normal speeds the ghosting is not apparent (that's why we have blend decimation for hybrid content). So, in general, I prefer blend. Some people hate blending. That's why I made the default blend=false, so people would not hate on me. Or at least fewer people, haters gonna hate.

One final point: for IVTC the number of postprocessed frames is likely small, so it may not matter much which mode you use.

Looking forward to your other questions.
Guest

Re: DGDecomb

Post by Guest »

Which would you recommend, coffee or tea?
Depends on whether I am going to mix in some good rum or not.

I know, let your eyes be the judge; I'm just trying to get some pointers.
Thanks for the information.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Irish whiskey in the coffee.
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

Just for my understanding:
The classic Decomb plugin includes the 3 functions
a. FieldDeinterlace(), an adaptive deinterlacer for interlaced video
b. Telecide(), for field matching of telecined footage with deinterlacing postprocessor
c. Decimate(), for decimation of 1 in every N frames

Now we have the GPU CUDA supported similar functions
d. deinterlacing or bobbing invoked via DGSource(), which is always acting on all frames, as I understand it.
e. DGTelecide(), a substitute for the classic b.
f. DGDecimate(), a substitute for the classic c.

A substitute for a. (adaptive deinterlacing of interlaced video) seems to be missing in the CUDA world. I could perhaps try DGTelecide(pthresh=...) postprocessing, but that always includes field matching, which is probably not recommended for interlaced video. Of course I can always use the classic FieldDeinterlace() or similar if need be.

Or am I missing something?
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Correct, Sharc, I have not implemented DGFieldDeinterlace yet. I will do it fairly soon if not straight-away.
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

OK, I've got the beta; I'll test this evening when I'm back home.

Ohh... another shared idea I'm using to detect telecine (still 3:2-oriented).
The following:
A B C D
0 1 2 3
will result after telecine in (one possible pattern):
A/A B/B B/C C/D D/D
0 1 2 3 4
So, if both SADs (and not "correlations" this time ;) ) of (top field of frames 1 and 2) and (bottom field of frames 3 and 4) are "null", then frames 2 and 3 are telecined.
This is something I check in a second pass, the plan B, if the first tests (plan A) are not "good enough". "Null" here means below a noise/compression-artifact threshold.
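The check can be demonstrated numerically with toy one-pixel fields (hypothetical code, not the filter itself):

```python
def sad(x, y):
    """Sum of absolute differences between two lines of pixel data."""
    return sum(abs(a - b) for a, b in zip(x, y))

# Source frames A B C D become, after one possible 3:2 pulldown pattern,
# five frames of (top, bottom) field pairs: A/A B/B B/C C/D D/D.
A, B, C, D = [0], [10], [20], [30]
frames = [(A, A), (B, B), (B, C), (C, D), (D, D)]

top = [f[0] for f in frames]
bottom = [f[1] for f in frames]

# Top fields of frames 1 and 2 match (both B) and bottom fields of
# frames 3 and 4 match (both D), so frames 2 and 3 are the telecined pair.
telecined_2_and_3 = (sad(top[1], top[2]) == 0 and
                     sad(bottom[3], bottom[4]) == 0)
```

In real footage the two SADs would not be exactly zero, only below the noise/compression threshold mentioned above.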

Another thing: as my IVTC filter is 3:2-oriented (though I'm not sure it's limited to that...), it basically works on packs of 5 frames at a time, so the decision algorithm is run on each pack of 5 frames.
I can't wait to be back to check the result.
.......... Euh... I've just read the documentation, and... I found out that your filter's result is probably totally different from mine, and why mine is more 3:2-oriented.
Your filter deinterlaces:
blend: bool (default: false)
When blend=true postprocessed frames are deinterlaced using field blending. When blend=false postprocessed frames are deinterlaced using field interpolation.
My filter reconstructs the frames.
Unless... I've misunderstood something.

To process the file I provided to you in VirtualDub with my filter, use the following parameters:
Color format YV12 for input and output.
In the filter chain, put IVTC v6.2.0 and Remove frame v1.2.4 (with default settings for both).
Run/save the process; do not preview or anything, or it will not work.
Alternatively, I can put the result on my server this evening when I'm back home.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

jp, you have totally misunderstood. C'mon, I'm not an idiot. I do not perform IVTC by deinterlacing the frames. And my previous post to you clearly explained that I reconstruct the frames when possible. You even posted saying I am doing what you are doing.

Field matching is performed. If there is a good match, that is delivered. It is not deinterlaced. If there is no good match, that is, a clean frame cannot be reconstructed (possibly due to bad edit) then the frame is deinterlaced (if postprocessing is enabled). A "postprocessed frame" is one that did not find a good field match.

I suppose it was too much to think people would be familiar with Decomb, so I will copy over material from its documentation to the DGDecodeNV document.