I think to maximize things you need a mobo with two PCIe x16 slots and lots of PCIe lanes.

An AMD Ryzen or Threadripper compatible mobo will carry enough PCIe x16 slots, and the CPU will support the required lanes.

Ah, so simple.

Simple minds, simple solutions.

Ditto that! But you only live once and it's only money; gotta have some fun along the way, and this is one of the ways.
Code:
static void VS_CC avisynthFilterInit(VSMap *in, VSMap *out, void **instanceData, VSNode *node, VSCore *core, const VSAPI *vsapi) {
    WrappedClip *clip = (WrappedClip *)*instanceData;
    if (!clip->preFetchClips.empty())
        clip->fakeEnv->uglyNode = clip->preFetchClips.front();
    const VideoInfo &viAvs = clip->clip->GetVideoInfo();
    ::VSVideoInfo vi;
    vi.height = viAvs.height;
    vi.width = viAvs.width;
    vi.numFrames = viAvs.num_frames;
    vi.fpsNum = viAvs.fps_numerator;
    vi.fpsDen = viAvs.fps_denominator;
    vs_normalizeRational(&vi.fpsNum, &vi.fpsDen);
    if (viAvs.IsYV12())
        vi.format = vsapi->getFormatPreset(pfYUV420P8, core);
    else if (viAvs.IsYV24())
        vi.format = vsapi->getFormatPreset(pfYUV444P8, core);
    else if (viAvs.IsYV16())
        vi.format = vsapi->getFormatPreset(pfYUV422P8, core);
    else if (viAvs.IsYV411())
        vi.format = vsapi->getFormatPreset(pfYUV411P8, core);
    else if (viAvs.IsColorSpace(VideoInfo::CS_YUV9))
        vi.format = vsapi->getFormatPreset(pfYUV410P8, core);
    else if (viAvs.IsY8())
        vi.format = vsapi->getFormatPreset(pfGray8, core);
    else if (viAvs.IsYUY2())
        vi.format = vsapi->getFormatPreset(pfCompatYUY2, core);
    else if (viAvs.IsRGB32())
        vi.format = vsapi->getFormatPreset(pfCompatBGR32, core);
    else {
        // Unsupported format: report the error and bail out before
        // setVideoInfo() is reached with an uninitialized vi.format.
        vsapi->setError(out, "Avisynth Compat: only planar YUV, Y8, YUY2 and RGB32 are supported");
        return;
    }
    vi.flags = 0;
    vsapi->setVideoInfo(&vi, 1, node);
}
Code:
else if (viAvs.IsColorSpace(VideoInfo::CS_YUV420P16))
    vi.format = vsapi->getFormatPreset(pfYUV420P16, core);
admin wrote: ↑ Tue Oct 17, 2017 5:34 pm
You only have to rebuild the AvsCompat project and then use the resulting AvsCompat.dll to replace the one in the VapourSynth install directory.

With two minor changes (one already given above), I have this script opening and displaying just fine in VirtualDubFilterMod (it delivers P016 to VDFM):
import vapoursynth as vs
core = vs.get_core()
core.avs.LoadPlugin(path="D:/Don/Programming/C++/DGDecNV/DGDecodeNV/x64/Release/DGDecodeNV.dll")
video = core.avs.DGSource('d:/tmp/vapoursynth/beach10bit.dgi',fulldepth=True)
video.set_output()
I'll just do a bit more testing and then I will release the patched DLL (with source changes). Of course we would all prefer that Myrsloik absorbs the changes.
Thank you. I am not in a position to test high bit depth, but I look forward to trying it when I have a source like that (albeit not in the near future).
I have just started my tests using StaxRip, Avisynth+ and the latest DGDecNV with the latest NVIDIA drivers, on my 10-bit HDR10 HEVC sources and cards with the Pascal architecture. Next week I will buy an HDR TV as well so that I can do more advanced testing.
Is avfs (Avisynth Virtual File System) not something different from, and unrelated to, loading Avisynth plugins like yours?

r40:
fixed rgb output sometimes being flipped in avisource
added alpha output settings to avisource, the default is no alpha output
fixed gamma being infinite if not set in levels, bug introduced in r39
removed the hack needed to support avisynth mvtools, the native mvtools has been superior for several years now and removing the hack makes avisynth filter creation much faster
added avisynth+ compatibility
only do prefetching in avfs with vs script when linear access is detected, fixes excessive prefetching that could make opening scripts very slow
Code:
// MPEG2DEC
SOURCE(MPEG2Source)
PREFETCHR0(LumaYV12)
PREFETCHR0(BlindPP)
PREFETCHR0(Deblock)
// Meow
SOURCE(DGSource)
PREFETCHR0(DGDenoise)
PREFETCHR0(DGSharpen)
// DGBob, a yadif-based deinterlacer: http://rationalqm.us/board/viewtopic.php?f=0&t=463&p=6712&hilit=dgbob#p6712
// http://rationalqm.us/board/viewtopic.php?f=14&t=559&p=6732&hilit=dgbob#p6732
temp = int64ToIntS(vsapi->propGetInt(in, "mode", 0, &err));
//PREFETCH(DGBob, (temp > 0) ? 2 : 1, 1, -2, 2) // close enough?
// I don't think so. DG said always previous + current + next ... fudge it a bit for double rate, what the heck.
switch (temp) {
case 0:
    PREFETCH(DGBob, 1, 1, -1, 1); break; // single-rate deinterlacing: -1, current, +1
case 1:
    PREFETCH(DGBob, 1, 1, -2, 2); break; // double-rate deinterlacing: -2, current, +2 (I reckon hydra3333 got it wrong and it's really -1, current, +1, but -2..+2 can't hurt too badly)
case 2:
    PREFETCH(DGBob, 1, 1, -2, 2); break; // double-rate deinterlacing delivered at single rate (makes slow motion): same range as mode 1
}
// Not sure what to do about PVBob either, which is DG's CUDA-based PureVideo deinterlacer:
// http://rationalqm.us/board/viewtopic.php?f=14&t=559&start=240#p6786 ... act like DGBob for the time being.
temp = int64ToIntS(vsapi->propGetInt(in, "mode", 0, &err));
switch (temp) {
case 0:
    PREFETCH(PVBob, 1, 1, -1, 1); break; // single-rate deinterlacing: -1, current, +1
case 1:
    PREFETCH(PVBob, 1, 1, -2, 2); break; // double-rate deinterlacing: -2, current, +2 (see the DGBob note above)
case 2:
    PREFETCH(PVBob, 1, 1, -2, 2); break; // double-rate deinterlacing delivered at single rate: same range as mode 1
}
BROKEN(IsCombed)
PREFETCHR0(FieldDeinterlace)
PREFETCH(Telecide, 1, 1, -2, 10) // not good
PREFETCH(DGTelecide, 1, 1, -2, 10) // also not good
temp = int64ToIntS(vsapi->propGetInt(in, "cycle", 0, &err));
PREFETCH(DGDecimate, temp - 1, temp, -(temp + 3), temp + 3) // probably suboptimal
PREFETCH(Decimate, temp - 1, temp, -(temp + 3), temp + 3) // probably suboptimal too
I gather HEVC HDR (10/12-bit) to SDR (8-bit) requires a "tonemap" type conversion, a la https://forum.doom9.org/showthread.php?p=1831247 ?

fulldepth: true/false (default: false)
When fulldepth=true and the encoded video is HEVC 10-bit or 12-bit, DGSource() delivers 16-bit data to Avisynth with the unused lower bits zeroed. The reported pixel format is CS_YUV420P16. If either of the two conditions is not met, DGSource() delivers 8-bit YV12 or I420 data, as determined by the i420 parameter. When fulldepth=false and the video is HEVC 10-bit or 12-bit, the video is dithered down to 8 bits for delivery. If you need a reduced color space (less than 16 bits) for your high-bit-depth processing, use ConvertBits() as needed after your DGSource() call.
So I suppose 10/12-bit AVC decoding is off the table.
Um, does this imply that a GPU-accelerated HDR10/HDR10+ -> SDR "tonemap conversion" function is a possibility inside DGSource or other DG software?
I use the term "dither" in a general way to mean just a bit-depth reduction. I don't know the details of the NVIDIA dithering; it may use tonemapping. In any case, if it does not serve your needs, you can deliver full depth and reduce it with other filters.
Hence I don't understand how to interpret "When fulldepth=false and the video is HEVC 10-bit or 12-bit, then the video is dithered down to 8-bit for delivery", which seems to "dither down" rather than "tonemap" (poor wording, I know, sorry)?

See above.

Um, does this imply that a GPU-accelerated HDR10/HDR10+ -> SDR "tonemap conversion" function is a possibility inside DGSource or other DG software?

See above.