DGDecomb

These CUDA filters are packaged into DGDecodeNV, which is part of DGDecNV.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Thanks for bringing this to my attention, jpsdr.

Do you have a clip that shows your method performing better than the traditional approach? And how does your method compare in performance? From what you said it sounds like it would be way slower.

I had a look at your code. It is so complex and extensive, and so sparsely commented, that I have no hope of figuring out what you are doing. And to be honest your previous post is rather unclear. Finally, telling me about this after I've completed my implementation is a bit perverse. ;)
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

Sorry, no bad/perverse intention... :?
I'll PM you an FTP account later with a clip I'm using for my tests, but I don't remember if it performs better on this specific clip. I seem to remember it performs better than the automatic IVTC included in VDub...

If my post wasn't clear, I'll try again to explain the idea:
N.o : odd field of frame N (bottom field, lines 1,3,5,...).
N.e : even field of frame N (top field, lines 0,2,4,...).
- Compute correlation data between N.o and N.e : value A.
- Compute correlation data between N.e and (N-1).o : value B.
Two correlation values are computed for each frame:
One computed from the whole frame.
One computed only on the zones detected as interlaced, using the same idea/method as your smart deinterlacer. The map of interlaced zones is built from the original frame, and then both correlation values (call them A' and B') are computed only on the zones in that map.
As the filter is old, it was made at a time when VDub filters worked only on RGB32 data, so all computations are done on RGB data.
To remove noise from the correlation data and greatly increase the accuracy, the 2 LSBs are removed (from the RGB data).
If A' and B' are "good": if B' << A', the frame is a telecined frame, otherwise not. If A' and B' are "not good", A and B are used instead.
To validate a telecine pattern, the two detected frames must be contiguous, except when a scene change is detected.
The filter has a pipeline structure and computes data only on the current frame. This means it works only when run through the whole file; random frame display doesn't work, which is why it has no preview function.
Another thing in my program: once it finds an IVTC pattern, it stays locked on it as long as no "strong" new pattern is detected. This is typical for anime, where a character talks without moving and only a small mouth is "moving" in the picture. That is another reason preventing any "preview" from working: the history/past affects the present.
These are the rough ideas.
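
To make the decision rule concrete, here is a rough C++ sketch of it (the function name and the numeric margin standing in for "B' << A'" are purely illustrative, not the actual filter source):

#include <cstdint>

// Illustrative sketch of the decision described above -- not the real filter code.
// A and B are the whole-frame correlation values; Aint and Bint are A' and B',
// the same values restricted to the map of zones detected as interlaced.
// Lower value means the two fields agree better.
bool looks_telecined(uint64_t A, uint64_t B, uint64_t Aint, uint64_t Bint,
                     bool zone_values_reliable)
{
    const double margin = 2.0;        // stand-in for the "B' << A'" threshold
    if (zone_values_reliable)
        return Bint * margin < Aint;  // prefer the zone-restricted values A'/B'
    return B * margin < A;            // otherwise fall back to the whole-frame A/B
}
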
If you want to play with it, put the following filters in the VDub filter chain:
IVTC (with default settings)
Remove frame (with default settings)

then run the process and look at the saved result file.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Thanks, jpsdr. Looking forward to your IVTC torture clip(s).

As I mentioned, for a CUDA implementation my focus is on speed and your algorithm would be both difficult to implement and rather slow for me. I notice you did not comment on its speed, even though I specifically asked about it. Nevertheless, thank you for the further explanation.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Folks, please do some testing with this beta of DGTelecide/DGDecimate:

http://rationalqm.us/misc/Beta.rar

If no outright bugs are found I'll slipstream it. Remember, this has no bells and whistles. Based on results, I'll enhance it as needed.
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

First quick tests with DGTelecide():
I am getting strong residual combing even though show=true reports that the frame has been deinterlaced.
I don't get such combing with the classic Telecide().

What is the valid range of pthresh? 0.0 to 1.0? (The documentation calls it "strength" btw.)
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Hi Sharc, please provide the source clip and your script. It's the only way I can analyze it. Thank you.

There is no "normal" range for pthresh. It depends on the source clip. Use show to see the metrics. It's not limited to 1.0; it can go as high as you need.

Thanks for the document correction.
Guest

Re: DGDecomb

Post by Guest »

I just ran a quick test on a clip that was 100% interlaced according to DGIndex, and to my untrained eyes it looks good at default settings.
I saved the first 300 frames from each result as bitmaps. If needed or desired I will set up an account on a file hosting site, no ads, for easy access.
The three tests were:
DGSource()
DGSource() + DGTelecide(pthresh=3.5)
DGSource() + DGTelecide(pthresh=3.5) + DGDecimate()
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Hi gonca.

You shouldn't do IVTC on interlaced material! Nevertheless, if you do, it will likely deinterlace every frame, as long as your pthresh is low enough. Better to just use deinterlace=1 in DGSource().
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

Here is a testclip. It starts with a telecined segment and changes to interlaced video.
http://www.mediafire.com/file/clztmro8l ... ybrid.m2ts

Script:
clip=DGSource("....hybrid.dgi")
ivtc=clip.DGTelecide(pthresh=1.0,show=true)#.DGDecimate(cycle=5) #for testing IVTC of the telecined segment
return ivtc
Guest

Re: DGDecomb

Post by Guest »

Please forgive my inexperience with the terminology.
To be more precise, this was a 4-minute clip I created from a DVD of an NTSC TV series which I bought for my collection.
DGSource vs DGSource + DGTelecide show the same frame sequence (i.e. the same exact images) minus what I believe is the interlacing.
However, the result of DGTelecide clearly showed repeat frames in every group of 5.
So for giggles I did DGSource + DGTelecide + DGDecimate.
The result was that the interlacing and repeat frames are gone.
As I said, I can use a file hosting site so you or any other member can analyze the sequences.
Here is the script I used:

Code:

LoadPlugin("C:\Program Files (Portable)\dgdecnv\DGDecodeNV.dll")
DGSource("I:\test.DGI", fieldop=0)
DGTelecide(pthresh=3.5)
DGDecimate()
#DGDenoise()
#DGSharpen()
ConvertToYV12().AssumeFPS(24000,1001)
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

@Sharc

Thank you. Investigating...

@gonca

Ah, OK. Well then, everything is working as expected. No sequences needed.

One thing: You don't need that last line, as it's already in YV12 at 23.976 fps coming out of DGDecimate().
Guest

Re: DGDecomb

Post by Guest »

admin wrote:One thing: You don't need that last line, as it's already in YV12 at 23.976 fps coming out of DGDecimate().
Old habits die hard :scratch: , or it's just that I am finally getting the hang of very basic AviSynth usage and still need more experienced users to point these things out to me :facepalm:
Seriously though, it is no problem for me to set up a file hosting account, if you think it will make things easier all around.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

@Sharc

The deinterlacer is broken. I will fix it.

@gonca

Thanks again for the hosting offer. Don't worry, I have a fine host. ;)
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

@Sharc

Please re-download and re-test. Note that there is now a new DGTelecide option called blend. Set it to true for blend deinterlacing or to false (default) for interpolate deinterlacing.
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

Yep, the deinterlacer works now!
I noticed some chroma artefacts, however; see for example the red stripes near the guy's hand in the bottom left corner of frames 248, 260 and 261 of my testclip. They are visible for both blend=false and blend=true.
I don't get these chroma artefacts - well, some others instead - with the classic Telecide() or the classic FieldDeinterlace(). Perhaps something to look into?

Edit:
There may be an issue with the setting of pthresh, e.g. for telecined 3:2 footage (without decimation for my testing):
a) For pthresh=0.0: five clean frames, like a b c d d (as expected). No deinterlacing.
b) For 0<pthresh<1.0: three clean frames and two combed frames, like a b c x x, where x = "dirty" (weak crosstalk of the fields; partial deinterlacing).
c) For 1.0<pthresh<10 (I didn't test >10): same as a), i.e. five clean frames, no deinterlacing.

Hmm... more tests ongoing ....

Edit2:
Does pthresh=0.0 disable the deinterlacing? I thought it would rather deinterlace everything... I guess this was my misunderstanding.
Still, the chroma artefact remains when deinterlacing occurs.
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

admin wrote:I notice you did not comment on its speed, even though I specifically asked about it.
Sorry... I totally missed that one.
The algorithm is "complex" and probably does a lot more computation than a basic/standard one. My purpose wasn't really speed (even if I've tried everything I can to speed it up). Yours is probably light-speed faster. After adding MT to my filter, I get a whole-process time of around 5 minutes for a 1080p anime episode (read data + DGDecode + IVTC filter + UT Video encode + write data), while the process time without the filter (read data + DGDecode + UT Video encode + write data) is around 2 minutes.
But the first purpose of my post was just to share my root idea: instead of searching for the highest correlation value to detect which frames are telecined, do the telecine on the frames, and check if the correlation value decreases. It's with this in mind that I made my filter. After doing it, I got the feeling it was a better approach than the standard way.
Then adapt/test this idea with the resources you are using.
And doing the telecine before computing the correlation takes no extra time, because if you have a function like ComputeCorrelation(*src_top,*src_bottom,....), you just have to pass src_top/src_bottom from different frames to act "as if" you had IVTC'd the frame. But it's true that instead of one correlation computation per frame, you have at least two correlation computations per frame.
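
A minimal sketch of that trick (the body of ComputeCorrelation here is just an assumed stand-in; the real filter may differ):

#include <cstdint>
#include <cstdlib>

// Assumed helper matching the ComputeCorrelation(*src_top, *src_bottom, ...) idea:
// summed absolute difference of two RGB32 fields, with the 2 LSBs dropped for noise.
uint64_t ComputeCorrelation(const uint8_t* src_top, const uint8_t* src_bottom,
                            int width, int field_height, int pitch)
{
    uint64_t sum = 0;
    for (int row = 0; row < field_height; ++row)
        for (int i = 0; i < width * 4; ++i)
            sum += std::abs((src_top[row * pitch + i] & 0xFC) -
                            (src_bottom[row * pitch + i] & 0xFC));
    return sum;
}

// The same routine gives both values; only the bottom-field pointer changes.
// A: N.e against N.o (the frame as stored).
// B: N.e against (N-1).o (the frame "as if" it had been re-paired by IVTC).
void evaluate_frame(const uint8_t* top_N, const uint8_t* bot_N, const uint8_t* bot_prev,
                    int width, int field_height, int pitch, uint64_t& A, uint64_t& B)
{
    A = ComputeCorrelation(top_N, bot_N,    width, field_height, pitch);
    B = ComputeCorrelation(top_N, bot_prev, width, field_height, pitch);
}
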
I've PMed you the FTP account.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Good morning, jp. Thank you for the stream sent via PM. This script makes it look very sweet IMHO:

loadplugin("dgdecodenv.dll")
avisource("SP_IVTC.avi").assumetff()
dgtelecide(pthresh=2.5,blend=true) # postprocessing needed for some fade transitions
dgdecimate()
dgdenoise(strength=0.2)
dgsharpen(strength=1.0)

Note that with AVISource you have to set the field order correctly. Here, your stream is TFF but the default for AVISource() is BFF. To be on the safe side, always set the field order explicitly with AVISource(). This easily plays in real time in MPC-HC. Finally, I prefer blend for interlaced fades; it looks better to me than interpolate when playing at normal speed.

Regarding your algorithm, your second explanation is clearer for me but some things remain unclear.
instead of searching for the highest correlation value to detect which frames are telecined
What are you correlating and how? BTW, I don't try to determine "which frames are telecined". I simply try two field matches and pick the best match to assemble the output frame.
do the telecine on the frames, and check if the correlation value decreases
What do you mean by "do the telecine"? It's already telecined and we are trying to undo it. And again, what correlation?

It would help to have a pseudo-code explanation. Following is mine. If you could give something like that, all doubts would vanish.

For each input frame:

1. Calculate two difference metrics: the difference between the top field of the current frame and the bottom field of the current frame, and the difference between the top field of the current frame and the bottom field of the previous frame. Note that the difference calculation is not a simple pixel difference. It involves shifting of the bottom field to align it with the top field and blurring of the fields to suppress noise. Tricky coding allows it to be performed very fast. ;)

2. Choose the lowest difference and call it either current-current or current-previous.

3. If current-current, output the input frame. If current-previous, assemble an output frame from the top field of the current frame and the bottom field of the previous frame.

That's just for the matching and does not include the postprocessing.
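
A bare-bones sketch of that matching step, just to pin down the logic (the plain absolute difference here stands in for the real shifted-and-blurred metric; it is not the actual DGTelecide code):

#include <cstdint>
#include <cstdlib>
#include <vector>

using Row   = std::vector<uint8_t>;
using Frame = std::vector<Row>;   // even rows = top field, odd rows = bottom field

// Difference between the top field of 'top_src' and the bottom field of 'bottom_src'.
// The real metric also aligns the bottom field and blurs both fields; omitted here.
static uint64_t field_difference(const Frame& top_src, const Frame& bottom_src)
{
    uint64_t sum = 0;
    for (size_t y = 0; y + 1 < top_src.size(); y += 2)
        for (size_t x = 0; x < top_src[y].size(); ++x)
            sum += std::abs(int(top_src[y][x]) - int(bottom_src[y + 1][x]));
    return sum;
}

// Assemble an output frame from the top field of 'top_src' and the bottom field of 'bottom_src'.
static Frame weave(const Frame& top_src, const Frame& bottom_src)
{
    Frame out = top_src;
    for (size_t y = 1; y < out.size(); y += 2)
        out[y] = bottom_src[y];
    return out;
}

// Steps 1-3: compute both metrics, pick the lower one, build the output frame.
Frame match_frame(const Frame& current, const Frame& previous)
{
    uint64_t cc = field_difference(current, current);   // current top vs current bottom
    uint64_t cp = field_difference(current, previous);  // current top vs previous bottom
    return (cc <= cp) ? current : weave(current, previous);
}
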
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Good morning, Sharc.

Yes, pthresh 0.0 disables postprocessing. I have added that to the document.

Ah yes, chroma artifacts. Silly me, I deinterlaced only the luma. :oops: Thanks for pointing it out.
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

OK, Sharc, please re-download and re-test. Getting close to goodness now.

I was thinking of providing a way to dynamically change pthresh for different scenes. Something like a config file:

0: 1.5
100: 3.2
1050: 0.5

The pthresh value would change at the given frame number and remain until changed again. I'm not sure it's worth the bother, though.
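
For illustration only, here is how such an override list might be read and applied (purely hypothetical, since the feature is only an idea at this point):

#include <fstream>
#include <iterator>
#include <map>
#include <sstream>
#include <string>

// Read "frame: pthresh" lines into a sorted map.
std::map<int, double> load_pthresh_overrides(const std::string& path)
{
    std::map<int, double> overrides;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        int frame; char colon; double value;
        if (ss >> frame >> colon >> value && colon == ':')
            overrides[frame] = value;
    }
    return overrides;
}

// Value in effect for a given frame: the last override at or before it.
double pthresh_for(const std::map<int, double>& overrides, int frame, double fallback)
{
    auto it = overrides.upper_bound(frame);
    if (it == overrides.begin()) return fallback;   // before the first override
    return std::prev(it)->second;
}

With the example list above, frames 0-99 would get 1.5, frames 100-1049 would get 3.2, and frames from 1050 onward would get 0.5.
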
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

First, a mistake on my part: it should be "do IVTC" and not "do telecine".
It seems the English word doesn't mean what I thought; a "false friend", or maybe it doesn't even exist.
"Correlation" = sum of abs(X-Y): the closer X and Y are, the lower it is. X and Y are lines of data.
So, the classic algorithms I knew choose the top field of frame N for X and the bottom field of frame N for Y. The two frames which produce the highest values are the telecined frames.
My idea was: if the frame is telecined, the IVTC'd frame will produce a lower computed value than the original frame.
... I'm starting to hit the limits of my English here... Sorry for the sometimes bad English.
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

OK, after reading your description, what you're doing basically follows the same idea as mine. In my case it's a simple pixel difference, and the denoising part is just done by removing the 2 LSBs... :D
After that, I do a lot of things: validate the IVTC pattern (the 2 frames must be contiguous), and if no data is validated well enough (the decision thresholds are not met), I keep the previous pattern and apply it to the current sequence, etc...
admin
Posts: 4551
Joined: Thu Sep 09, 2010 3:08 pm

Re: DGDecomb

Post by admin »

Thank you for the explanation. It seems to me that your solution is limited to fixed regular 3:2 pulldown and you cannot operate correctly on streams with irregular pulldown. Also, noise is not always limited to the two LSBs, but it's a reasonable thing to do.

BTW, the correct term for what you are doing is not "correlation" but "sum of absolute differences".

Did you like my results on your stream?
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

I haven't tested it yet; I'll do it properly a little later when I have more time (very probably this weekend).
Yes, I think it's limited to 3:2, but honestly I don't remember exactly... :scratch:
I made this "especially" for anime, which has "standard" 3:2 pulldown, but where the pattern changes at each scene, or even within the same scene (very rare), while still staying 3:2. It may be limited, but it increases the robustness of the detection.
And finally, for scenes where nothing works, there is the "manual mode". I feed a text file to my plugin telling it on which frames I force the pattern. For example, I'd feed something like this:
1000 1500 3
5000 5030 1
10001 10033 4
....
Means :
Automatic detection until frame 999.
Force IVTC pattern 3 between frames 1000 and 1500.
Automatic detection from frame 1501 to 4999.
Force IVTC pattern 1 between frames 5000 and 5030.
etc...
Because my purpose is to NOT deinterlace the result, I want it to be "perfect" progressive, without "blurring" caused by a deinterlace.
But in these files the fades are very often a mess, mixing 2 IVTC patterns between scenes, or being field-based instead of frame-based on black/white fades, making them impossible to IVTC; whatever you do, interlacing remains there.
That's where the "manual mode" of my deinterlace filter comes in...
I feed it with a text file the same way as my IVTC filter, for example:
100 110 1
1000 1020 2
.....
Means :
- Deinterlace between frames 100 and 110 with "mode 1".
- Deinterlace between frames 1000 and 1020 with "mode 2".
In between: do nothing.
With both of these tools and their manual modes, I'm able to get a fully IVTC'd result without deinterlacing (except for the fade parts).
But the working time to get this result is between 40 minutes and 2 hours for a 20-minute episode.
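
For illustration, a tiny sketch of how such a range file's entries could be represented and queried (hypothetical, not the actual plugin code):

#include <optional>
#include <vector>

// One line of the manual-mode file: force an IVTC pattern (or deinterlace mode)
// on an inclusive frame range; frames outside every range stay automatic.
struct Override { int first_frame; int last_frame; int value; };

// Returns the forced value for 'frame', or nothing if automatic processing applies.
std::optional<int> forced_value(const std::vector<Override>& overrides, int frame)
{
    for (const Override& o : overrides)
        if (frame >= o.first_frame && frame <= o.last_frame)
            return o.value;
    return std::nullopt;   // automatic detection / no forced deinterlacing
}

With the IVTC example above, {1000, 1500, 3} forces pattern 3 on frames 1000-1500 and {5000, 5030, 1} forces pattern 1 on frames 5000-5030, while everything else stays in automatic detection.
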
jpsdr
Posts: 214
Joined: Tue Sep 21, 2010 4:16 am

Re: DGDecomb

Post by jpsdr »

Er... I was too curious and got the latest 2053 version, but I get an error message telling me that there is no such function as DGTelecide... :?:
I've opened an AVS file using DGSource and it works fine, but only if I put the license.txt file in the AviSynth plugin directory (otherwise I get a green output with something small and unreadable at the top left of the screen); it wasn't like that before... :scratch:
Same result under Windows 7 SP1 x64 with AviSynth+ r2440 and under Windows 7 SP1 x86 with AviSynth 2.6.1.
Sharc
Posts: 233
Joined: Thu Sep 23, 2010 1:53 pm

Re: DGDecomb

Post by Sharc »

admin wrote:OK, Sharc, please re-download and re-test. Getting close to goodness now.

I was thinking of providing a way to dynamically change pthresh for different scenes. Something like a config file:

0: 1.5
100: 3.2
1050: 0.5

The pthresh value would change at the given frame number and remain until changed again. I'm not sure it's worth the bother, though.
All good now. Chroma problem solved. Thanks :D

P.S.
The config file for dynamically changing pthresh would have to be created manually, by stepping through the file and inspecting the show=true result, right?