[RESOLVED] not able to feed the encoder
Re: [RESOLVED] not able to feed the encoder
Thanks, Luis, but I have already started setting it up on my machine.
Re: [RESOLVED] not able to feed the encoder
If there is anything I can do to help let me know
Re: [RESOLVED] not able to feed the encoder
That was easier than I hoped.
You only have to rebuild the AvsCompat project and then use the AvsCompat.dll to replace the one in the Vapoursynth install directory. With two minor changes (one already given above) I have this script opening and displaying just fine in VirtualDubFilterMod (delivers P016 to VDFM):
import vapoursynth as vs
core = vs.get_core()
core.avs.LoadPlugin(path="D:/Don/Programming/C++/DGDecNV/DGDecodeNV/x64/Release/DGDecodeNV.dll")
video = core.avs.DGSource('d:/tmp/vapoursynth/beach10bit.dgi',fulldepth=True)
video.set_output()
I'll just do a bit more testing and then I will release the patched DLL (with source changes).
Re: [RESOLVED] not able to feed the encoder
That was quick
Re: [RESOLVED] not able to feed the encoder
At my age there's no time to waste.
Re: [RESOLVED] not able to feed the encoder
I released the package for high-bit-depth from Avisynth filters via Vapoursynth (see the Binaries Notification thread). If anyone would like to do additional testing that would be helpful.
Re: [RESOLVED] not able to feed the encoder
admin wrote: ↑Tue Oct 17, 2017 5:34 pm
You only have to rebuild the AvsCompat project and then use the AvsCompat.dll to replace the one in the Vapoursynth install directory. With two minor changes (one already given above) I have this script opening and displaying just fine in VirtualDubFilterMod (delivers P016 to VDFM):
import vapoursynth as vs
core = vs.get_core()
core.avs.LoadPlugin(path="D:/Don/Programming/C++/DGDecNV/DGDecodeNV/x64/Release/DGDecodeNV.dll")
video = core.avs.DGSource('d:/tmp/vapoursynth/beach10bit.dgi',fulldepth=True)
video.set_output()
I'll just do a bit more testing and then I will release the patched DLL (with source changes). Of course we would all prefer that Myrsloik absorbs the changes.
Thank you. I am not in a position to test high bit depth, but I look forward to looking into it when I have a source like that (albeit not in the near future).
I do roll my own patched AvsCompat.dll just to reference some of your new GPU functions. I see you have included the updated Vapoursynth avisynth_compat.cpp and avisynth.h in the latest binaries zip, so I can compare them with Vapoursynth's originals and add your patches to my minor ones (provided it doesn't break any licensing). Nice going!
I really do like it here.
Re: [RESOLVED] not able to feed the encoder
Cool! If you want to tell me the tweaks you made, I can include them in my version too.
Re: [RESOLVED] not able to feed the encoder
I have just started my tests using StaxRip, Avisynth+, and the latest DGDecNV with the latest NVIDIA drivers on my 10-bit HDR10 HEVC sources and cards with the Pascal architecture. Next week I will buy an HDR TV as well so that I can do more advanced testing.
Re: [RESOLVED] not able to feed the encoder
Thanks, mparade! In addition to the CUVID fix it would be helpful for you to test the Vapoursynth interface (using the AvsCompat fix I released).
Edit: If you get the latest test build of Vapoursynth R40-test1 you will not need my version of AvsCompat.dll.
Re: [RESOLVED] Apply project range to audio (for MKV)
OK, if you are referring to the last item in this list then I must have been reading it wrongly.
Is avfs (Avisynth Virtual File System) not something different and unrelated to loading Avisynth plugins like yours?
r40:
fixed rgb output sometimes being flipped in avisource
added alpha output settings to avisource, the default is no alpha output
fixed gamma being infinite if not set in levels, bug introduced in r39
removed the hack needed to support avisynth mvtools, the native mvtools has been superior for several years now and removing the hack makes avisynth filter creation much faster
added avisynth+ compatibility
only do prefetching in avfs with vs script when linear access is detected, fixes excessive prefetching that could make opening scripts very slow
Ah, sorry, you mean in regard to handling fulldepth rather than in regard to prefetch, which is what I was thinking, oops.
Right, wrong, or indifferent, my patched version of avisynth_compat.cpp ended up with this snippet in regard to prefetch.
// MPEG2DEC
SOURCE(MPEG2Source)
PREFETCHR0(LumaYV12)
PREFETCHR0(BlindPP)
PREFETCHR0(Deblock)
// Meow
SOURCE(DGSource)
PREFETCHR0(DGDenoise)
PREFETCHR0(DGSharpen)
// DGBob yadif based deinterlacer http://rationalqm.us/board/viewtopic.php?f=0&t=463&p=6712&hilit=dgbob#p6712
// http://rationalqm.us/board/viewtopic.php?f=14&t=559&p=6732&hilit=dgbob#p6732
temp = int64ToIntS(vsapi->propGetInt(in, "mode", 0, &err));
//PREFETCH(DGBob, (temp > 0) ? 2 : 1, 1, -2, 2) // close enough?
// I don't think so. DG said always previous + current + next ... fudge it a bit for doublerate, what the heck
switch (temp) {
case 0:
PREFETCH(DGBob, 1, 1, -1, 1); break; // single framerate deinterlacing -1,current,+1
case 1:
PREFETCH(DGBob, 1, 1, -2, 2); break; // double framerate deinterlacing -2,current,+2 although I reckon hydra3333 got it wrong and it's actually -1,current,+1 ... -2,2 can't hurt too badly I guess
case 2:
PREFETCH(DGBob, 1, 1, -2, 2); break; // double framerate deinterlacing to single rate (makes slow motion) -2,current,+2 although I reckon hydra3333 got it wrong and it's actually -1,current,+1 ... -2,2 can't hurt too badly I guess
}
// not sure what to do about PVBob either
// which is DG's cuda based PureVideo deinterlacer http://rationalqm.us/board/viewtopic.php?f=14&t=559&start=240#p6786 ... act like DGBob for the time being
temp = int64ToIntS(vsapi->propGetInt(in, "mode", 0, &err));
switch (temp) {
case 0:
PREFETCH(PVBob, 1, 1, -1, 1); break; // single framerate deinterlacing -1,current,+1
case 1:
PREFETCH(PVBob, 1, 1, -2, 2); break; // double framerate deinterlacing -2,current,+2 although I reckon hydra3333 got it wrong and it's actually -1,current,+1 ... -2,2 can't hurt too badly I guess
case 2:
PREFETCH(PVBob, 1, 1, -2, 2); break; // double framerate deinterlacing to single rate (makes slow motion) -2,current,+2 although I reckon hydra3333 got it wrong and it's actually -1,current,+1 ... -2,2 can't hurt too badly I guess
}
BROKEN(IsCombed)
PREFETCHR0(FieldDeinterlace)
PREFETCH(Telecide, 1, 1, -2, 10) // not good
PREFETCH(DGTelecide, 1, 1, -2, 10) // also not good
temp = int64ToIntS(vsapi->propGetInt(in, "cycle", 0, &err));
PREFETCH(DGDecimate, temp - 1, temp, -(temp + 3), temp + 3) // probably suboptimal
PREFETCH(Decimate, temp - 1, temp, -(temp + 3), temp + 3) // probably suboptimal too
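For anyone puzzling over what those PREFETCH numbers mean, here is a rough mental model, in Python just for illustration. This is my own interpretation, not the actual avisynth_compat.cpp implementation: a request for output frame n maps to a source base frame via the first two arguments, and the last two arguments give a window of source frames to prefetch around that base, clamped to the clip.

```python
def prefetch_window(n, div, mul, frm, to, num_frames):
    """Rough model of PREFETCH(name, div, mul, frm, to):
    output frame n maps to source base frame n // div * mul,
    and frames [base + frm, base + to] are prefetched,
    clamped to the valid range of the clip.
    (Illustrative only; the real semantics live in avisynth_compat.cpp.)"""
    base = n // div * mul
    return [f for f in range(base + frm, base + to + 1) if 0 <= f < num_frames]

# DGBob mode=1 above uses PREFETCH(DGBob, 1, 1, -2, 2):
print(prefetch_window(10, 1, 1, -2, 2, 100))  # [8, 9, 10, 11, 12]
print(prefetch_window(0, 1, 1, -2, 2, 100))   # [0, 1, 2] (clamped at the start)
```

Under this model the "previous + current + next" rule DG described is exactly the (-1, 1) window, and the (-2, 2) windows in the snippet simply over-fetch by one frame on each side, which is wasteful but harmless.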
I really do like it here.
Re: [RESOLVED] Apply project range to audio (for MKV)
Yes, the prefetches were what I was interested in. Thanks!
Re: [RESOLVED] not able to feed the encoder
Hello, a couple of dumb user questions:
I am mis-understanding what's going on with HDR 10bit -> 8bit and "dithering down" to 8bit in DGSource, so a hint or a link would be appreciated if someone could spare the time.
DGSource() manual indicates:
I gather HEVC HDR (10/12 bit) to SDR (8 bit) requires a "tonemap" type conversion, a la https://forum.doom9.org/showthread.php?p=1831247 ?
fulldepth: true/false (default: false)
When fulldepth=true and the encoded video is HEVC 10-bit or 12-bit, then DGSource() delivers 16-bit data to Avisynth with the unused lower bits zeroed. The reported pixel format is CS_YUV420P16. If either of the two conditions are not met, then DGSource() delivers 8-bit YV12 or I420 data, as determined by the i420 parameter. When fulldepth=false and the video is HEVC 10-bit or 12-bit, then the video is dithered down to 8-bit for delivery. If you need a reduced color space (less than 16 bits) for your high-bit-depth processing, you can use ConvertBits() as needed after your DGSource() call.
Hence I don't understand how to interpret "When fulldepth=false and the video is HEVC 10-bit or 12-bit, then the video is dithered down to 8-bit for delivery", which seems to "dither down" rather than "tonemap" (poor wording, I know, sorry)?
So I wonder, does 10bit/12bit here not mean HDR10 ? https://en.wikipedia.org/wiki/High-dynamic-range_video
Unless I'm 100% off base, H.264 AVC seems to do 10-bit and 12-bit HDR too (e.g. for uploading HDR content to YouTube, and maybe there are HDR phones/cameras around now or coming soon?) - does that mean DGSource (and say a 1050Ti or 750Ti card; well, a 1050Ti I guess) is capable of decoding MPEG-4 AVC 10-bit as well?
Oh, just spotted this
so I suppose 10/12 bit AVC decoding is off the table.
And ...
Um, does this imply that gpu accelerated HDR10/HDR10+ -> SDR "tonemap conversion" function is a possibility inside DGSource or other DG software ?
Re: [RESOLVED] not able to feed the encoder
I use the term "dither" in a general way to just mean a bit depth reduction. I don't know the details of the nVidia dithering. It may use "tonemapping". In any case, if it does not serve your needs then you can deliver full-depth and reduce it with other filters.
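To make the "dither just means bit depth reduction" point concrete, here is a toy 10-bit-to-8-bit reduction in Python: plain truncation versus truncation with a small random offset added first (one of the simplest dither schemes, which trades banding for noise). This is only an illustration of the general idea, not what the NVIDIA decoder actually does.

```python
import random

def reduce_10_to_8(sample10, dither=False, rng=None):
    """Reduce a 10-bit sample (0..1023) to 8 bits (0..255).
    With dither=False the two lowest bits are simply dropped;
    with dither=True a random offset in [0, 3] is added first,
    so rounding errors become noise instead of visible banding."""
    if dither:
        rng = rng or random.Random(0)
        sample10 = min(1023, sample10 + rng.randrange(4))
    return sample10 >> 2  # drop the two lowest bits

print(reduce_10_to_8(1023))  # 255
print(reduce_10_to_8(512))   # 128
```

Note that neither variant changes the transfer curve; a true HDR-to-SDR tonemap is a separate operation on top of (or instead of) the bit-depth reduction.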
Hence I don't understand how to interpret this "When fulldepth=false and the video is HEVC 10-bit or 12-bit, then the video is dithered down to 8-bit for delivery" which seems to "dither down" rather than "tonemap" (poor wording I know, sorry) ?
See above.
Um, does this imply that gpu accelerated HDR10/HDR10+ -> SDR "tonemap conversion" function is a possibility inside DGSource or other DG software ?
See above.
Your remaining questions were answered in a reply to your post at the forum you cited.
Re: [RESOLVED] not able to feed the encoder
OK. Thank you.
In regard to a tonemap like the one here https://github.com/ifb/vapoursynth-tonemap and "DG software", what I meant to wonder was: would it be faster in a CUDA-accelerated sense, or does it seem trivial in terms of CPU cost and thus not worth a new CUDA plugin?
I wondered because tonemapped HDR -> SDR seems reasonably likely to be a handy thing into the mid-term future. As may be AV1 support, I guess, although that would seem to be problematic and depend on the users' cards and NVIDIA's software.
Yes they were answered there by some kind soul.
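For the curious, the per-pixel math behind a tonemap plugin like the one linked above is simple; one of the standard curves is the classic Reinhard operator, which maps linear HDR luminance L to L / (1 + L), rolling highlights off smoothly into [0, 1). A minimal sketch for illustration only (a real plugin also has to handle the PQ/HLG transfer functions and color matrices around this step):

```python
def reinhard(l):
    """Classic Reinhard tone mapping: compress HDR luminance l >= 0
    into [0, 1). l = 1.0 maps to 0.5; very bright values approach 1.0."""
    return l / (1.0 + l)

print(reinhard(0.0))   # 0.0  (black stays black)
print(reinhard(1.0))   # 0.5  (reference white is halved)
print(reinhard(99.0))  # 0.99 (extreme highlights compressed, not clipped)
```

Since this is one divide and one add per pixel, the curve itself is cheap on a CPU; the case for a CUDA version would mainly be avoiding an extra GPU-to-CPU round trip when the frames are already on the card.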
Re: [RESOLVED] not able to feed the encoder
That's an interesting idea, hydra3333. I'll look into it. Thanks.
Re: [RESOLVED] not able to feed the encoder
If you have an application that accepts 16 bit you can omit the convertbits call. Or you can change it to convertbits(10) if your application accepts 10 bit.
x265 accepts both 16-bit and 10-bit input, but it is still not clear to me which would finally give better results in the case of HDR input:
1. by using ConvertBits(10) and feeding x265 with 10-bit input or
2. by omitting the call and feeding x265 with 16-bit input.
Output depth in x265 is set to 10-bit.
Thank you very much for any kind of help.
Re: [RESOLVED] not able to feed the encoder
The results will be the same. There is no information (all zeros) in the lower 6 bits when delivering 10-bit content as YUV420P16.
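The "all zeros in the lower 6 bits" point is easy to verify numerically: MSB-aligning a 10-bit sample into a 16-bit container is a left shift by 6, so dropping those bits again to get back to 10-bit loses nothing, and the two delivery paths carry identical information. A quick sketch in plain Python, just to illustrate the arithmetic:

```python
def to_p16(sample10):
    """MSB-align a 10-bit sample (0..1023) into a 16-bit container,
    as described for fulldepth=true: the lower 6 bits stay zero."""
    return sample10 << 6

def back_to_10(sample16):
    """Drop the (empty) lower 6 bits again."""
    return sample16 >> 6

s = 1023
assert to_p16(s) & 0x3F == 0       # lower 6 bits really are zero
assert back_to_10(to_p16(s)) == s  # the round trip is lossless
print(to_p16(1023))  # 65472 (10-bit peak white in a 16-bit container)
```

So whether x265 receives the 16-bit container or the same data reduced back to 10 bits, it is encoding the same sample values.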
Re: [RESOLVED] not able to feed the encoder
You are most welcome and thank you for your contributions over the years.