DGDenoise

These CUDA filters are packaged into DGDecodeNV, which is part of DGDecNV.
Sharc
Distinguished Member
Posts: 181
Joined: Thu Sep 23, 2010 1:53 pm

Re: About DGDenoise

Post by Sharc » Fri Mar 10, 2017 9:08 am

Is YV12 a CUDA/Nvidia restriction? I'm asking because I try to avoid colorspace conversion to YV12, due to the chroma upsampling error and the chroma interlace problem.

admin
Site Admin
Posts: 4002
Joined: Thu Sep 09, 2010 3:08 pm

Re: About DGDenoise

Post by admin » Fri Mar 10, 2017 9:24 am

What color space would you like supported? Can you post a typical script?

These filters are designed to be used with DGSource() so obviously I implemented only YV12. That could change, however.
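For reference, the intended pairing looks something like this (a minimal sketch; the .dgi index filename is a placeholder):

```avisynth
DGSource("video.dgi")   # DGSource() delivers YV12 frames (placeholder index file)
DGDenoise()             # CUDA denoiser, operates on YV12
DGSharpen()             # CUDA sharpener, also YV12
```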

Sharc
Distinguished Member
Posts: 181
Joined: Thu Sep 23, 2010 1:53 pm

Re: About DGDenoise

Post by Sharc » Fri Mar 10, 2017 9:31 am

My tape captures are normally YUV or YUY2, 4:2:0 or 4:2:2. Hmm, now I seem to remember that lossless 4:2:2 decoding is not supported by Nvidia ...

Script like:
AviSource("xxxx.avi")                        # typically interlaced YUY2 4:2:2
Bob()                                        # optional deinterlace
Crop(....)
Resize(....)                                 # optional
AddBorders(...)                              # optional
ConvertToYV12()                              # for the DG filters
DGDenoise().DGSharpen()
SeparateFields().SelectEvery(4,0,3).Weave()  # optional re-interlace, when bobbed before

Then encode with x264 interlaced

Edit:
I am aware that x264 will convert to YV12 anyway; however, I thought it would be better, if possible, to keep YUY2 for all the editing and filtering until the last step.

admin
Site Admin
Posts: 4002
Joined: Thu Sep 09, 2010 3:08 pm

Re: About DGDenoise

Post by admin » Fri Mar 10, 2017 12:52 pm

If your source is interlaced, then you have to tell all the downstream filters so they can handle it properly. So DGDenoise() would need to be told the same, either indirectly via ConvertToYV12() or directly via a parameter if I added one.

And I would of course have to convert internally if I were to support YUY2 input, which is exactly why I would need a parameter telling me whether the source is interlaced.

So I don't really follow your thinking. You can't get away from telling filters whether the source is interlaced or progressive, unless of course you rely on the defaults. But in that case, won't ConvertToYV12() default too?

Finally, you have to deinterlace first anyway unless you want trash as output.
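To illustrate the two ways of signaling interlacing described above (a sketch; the filename is a placeholder and the direct parameter is hypothetical, it does not exist today):

```avisynth
AviSource("capture.avi")          # placeholder: interlaced YUY2 capture

# Today: flag interlacing at the colorspace conversion step
ConvertToYV12(interlaced=true)    # interlaced-aware 4:2:2 -> 4:2:0
DGDenoise()

# Hypothetical: if DGDenoise() accepted YUY2 directly, it would need
# its own flag, e.g. DGDenoise(interlaced=true) -- not a real parameter.
```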

Please help me understand what you are asking for here and why, in light of the above. I don't always grok things straightaway.

Sharc
Distinguished Member
Posts: 181
Joined: Thu Sep 23, 2010 1:53 pm

Re: About DGDenoise

Post by Sharc » Fri Mar 10, 2017 1:31 pm

I thought leaving the YUY2 4:2:2 colorspace untouched would a priori avoid certain chroma problems. For example, interlaced YV12 needs to be cropped mod-4 vertically, whereas YUY2 accepts mod-2 vertical cropping without chroma damage, if I am not mistaken.
Maybe it's a moot point, as I have to deinterlace or separate the fields anyway to apply most of the filters, and putting ConvertToYV12(interlaced=true) early in the script is the way to go.
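A sketch of the cropping constraint mentioned above (placeholder filename; assuming an interlaced source):

```avisynth
AviSource("capture.avi")          # interlaced capture (placeholder)

# YUY2 (4:2:2): chroma is subsampled horizontally only, so a mod-2
# vertical crop is safe even for interlaced content:
Crop(0, 2, 0, -2)

# Interlaced YV12 (4:2:0): each field carries its own chroma lines, so
# vertical crop values must be mod-4 to avoid chroma damage:
# ConvertToYV12(interlaced=true)
# Crop(0, 4, 0, -4)
```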

admin
Site Admin
Posts: 4002
Joined: Thu Sep 09, 2010 3:08 pm

Re: About DGDenoise

Post by admin » Fri Mar 10, 2017 1:43 pm

Sharc wrote:I have to deinterlace or separate the fields anyway for applying most of the filters, and putting converttoYV12(interlaced=true) early in the script is the way to go.
Yes, Sharc, I agree. Let's keep things as they are for now. You're right that 4:2:2 support would be useful, and I'd certainly want to support it when CUVID does. People are hopeful about nVidia Video SDK 8.0, but it has yet to be released. On the other hand, this is only a limitation of CUVID decoding; I could still make the CUDA filters 4:2:2 capable. Let's see what nVidia Video SDK 8.0 brings us.

admin
Site Admin
Posts: 4002
Joined: Thu Sep 09, 2010 3:08 pm

Re: About DGDenoise

Post by admin » Fri Mar 10, 2017 5:13 pm

hydra3333 wrote: This wasn't me (it hung on the shack wall for 30 years) but it may as well have been https://drive.google.com/open?id=0B5RV2 ... WhpcjZnY3M
Oh why not, a relative of mine is in this one https://drive.google.com/open?id=0B5RV2 ... HRZYWtFbDQ
The spirit of adventure and discovery shines. So blessed you are to know such people. I wish they were forum members. :)

Aleron Ives
Distinguished Member
Posts: 113
Joined: Fri May 31, 2013 8:36 pm

Re: About DGDenoise

Post by Aleron Ives » Fri Mar 10, 2017 8:22 pm

This might be a dumb question, but since you're working on adding new filters, are there any prospects for replicating the functionality of Decomb this way? That's my most-used filter, and getting a CUDA speedup for it would be welcome. I'm not sure how closely related this would be to your recent work on DGBobIM or whether CUDA is even appropriate for this purpose... :?

admin
Site Admin
Posts: 4002
Joined: Thu Sep 09, 2010 3:08 pm

Re: About DGDenoise

Post by admin » Fri Mar 10, 2017 8:54 pm

Great idea, Aleron! Sure, we could speed up field matching and decimation. Why didn't I think of these obvious things? :scratch: Thank you for the valuable suggestion.

DGBobIM was mostly a proof-of-concept for how screwed up the IMSDK API is. Not exciting to work on that stuff.

BTW, Aussies swim real good. We may have to keep an eye on them.

Sharc
Distinguished Member
Posts: 181
Joined: Thu Sep 23, 2010 1:53 pm

Re: About DGDenoise

Post by Sharc » Sat Mar 11, 2017 2:41 am

Aleron Ives wrote:This might be a dumb question, but since you're working on adding new filters, are there any prospects for replicating the functionality of Decomb this way? That's my most-used filter, and getting a CUDA speedup for it would be welcome. I'm not sure how closely related this would be to your recent work on DGBobIM or whether CUDA is even appropriate for this purpose... :?
+1 for implementing Decomb (especially the field matching/IVTC functionality of Telecide()).

P.S.
I also like the show= and debug= options of FieldDeinterlace(); speed is not critical for visual analysis anyway.

Post Reply