CUDASynth
Thanks for the suggestion. I'll see what I can do. Probably for a separate filter, though, rather than being part of tweak.
CUDASynth
thechaoscoder wrote: ↑Wed Mar 13, 2024 7:13 am: Sounds great. Will it support features like auto gain, auto balance? This filter here http://avisynth.nl/index.php/AutoAdjust is one of the better ones, but it's also a bit broken (no source code)
I have wanted a good autolevels/autoadjust for such a long time that I'd forgotten about it.
Anyone remember HDRAGC ?
I really do like it here.
CUDASynth
Ha ha, again I was the pioneer. I made the first such desktop multimedia filters back in July 2001 for VirtualDub. With dithering!
https://www.rationalqm.us/histogram.html
https://www.rationalqm.us/winhistogram.html
Ah, the good old days.
OK, so guys, what do you mean by auto balance, or color balance? The luma thing I get.
Sadly, I must tell you that Bullwinkle's uncle Moosetache was tragically murdered in cold blood by an Iditarod racer. Please keep him in your hearts during this trying time. When will human hubris reach its zenith and begin a decline?
- thechaoscoder
- Posts: 49
- Joined: Tue Jul 14, 2020 8:34 am
CUDASynth
auto_balance [default: false] => Enable automatic color balance
auto_gain [default: false] => Enable automatic luminance gain
So yeah luma and chroma
But the best feature of AutoAdjust was temporal averaging.
temporal_radius [default: 20]
-----------------------------
Radius for temporal stabilization in each direction [useful range: 10~30]
0 = no temporal averaging
xx = number of frames used in averaging
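As a guess at what auto_balance could mean in practice, here is a minimal sketch of the common "gray world" white-balance heuristic: scale each channel so its mean matches the overall mean. This is purely illustrative (AutoAdjust's source is unavailable, so its actual algorithm is unknown), and `gray_world_gains` is a made-up name:

```python
# Hypothetical sketch of an automatic colour-balance heuristic ("gray world"):
# assume the scene averages to neutral gray and scale each channel so its
# mean matches the overall mean. Not AutoAdjust's actual algorithm.

def gray_world_gains(r, g, b):
    """Per-channel gains that equalize the three channel means."""
    means = [sum(c) / len(c) for c in (r, g, b)]
    overall = sum(means) / 3.0
    return [overall / m if m > 0 else 1.0 for m in means]

# A frame with a blue cast: the blue mean is higher than red/green,
# so the blue gain comes out below 1 and red/green above 1.
gains = gray_world_gains([100, 110], [100, 110], [140, 150])
```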
CUDASynth
Well thanks, but you haven't told me what they do!
What is automatic color balance?
What is temporal stabilization?
Example vids (before/after) with scripts would be ideal.
CUDASynth
If we are talking about automatic color adjustments, auto white balance would probably be very useful as well. Whenever I've had to do any VHS or DV restoration, white balance has been an issue.
Maybe this plugin/script combo would be something to investigate if you are interested.
https://forum.doom9.org/showthread.php?t=174411
CUDASynth
DGSharpen is still integrated, so don't worry. Say we have a frame in buffer gpu0. If we sharpen the frame, saving each pixel back into gpu0, that is in-place. If we save to a new buffer gpu1 (and then deliver gpu1), that is not in-place. Typically, you can work in-place only if a pixel does not depend on surrounding pixels. But sharpen does depend on surrounding pixels. In practice the result was still mostly OK when done in-place due to CUDA parallelism, but for full correctness it needed to be made not in-place.
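To illustrate the in-place issue in a toy 1-D form (NumPy on the CPU, illustrative names; nothing to do with the actual CUDA kernel): a sharpen whose output depends on neighbouring pixels gives a different answer when written back into the buffer it is reading from, because some neighbours have already been overwritten.

```python
import numpy as np

# Illustrative 1-D unsharp step: out[i] = p[i] + s*(p[i] - (p[i-1]+p[i+1])/2).
# Done out-of-place, every tap reads original pixels; done naively in place,
# the left neighbour has already been sharpened, changing the result.

def sharpen_out_of_place(p, s=0.3):
    src = p.astype(np.float64)
    dst = src.copy()
    for i in range(1, len(src) - 1):
        dst[i] = src[i] + s * (src[i] - (src[i - 1] + src[i + 1]) / 2)
    return dst

def sharpen_in_place(p, s=0.3):
    buf = p.astype(np.float64)
    for i in range(1, len(buf) - 1):
        # reads buf[i-1], which was already overwritten on the previous step
        buf[i] = buf[i] + s * (buf[i] - (buf[i - 1] + buf[i + 1]) / 2)
    return buf

row = np.array([10, 50, 90, 50, 10], dtype=np.float64)
```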
Sorry, didn't get a chance to look at your variability report. I'll wait for your further clarification.
- thechaoscoder
CUDASynth
I think color balance means white balance in this case.
Basically, if you only apply "auto correction" on a per-frame basis it can lead to a sort of flickering: one frame gets a bit too dark, the next a bit too bright, etc. Averaging X frames leads to more consistent results across a scene, so no "flickering". (Parameter description from the wiki:)
temporal_radius [default: 20]
-----------------------------
Radius for temporal stabilization in each direction [useful range: 10~30]
0 = no temporal averaging
xx = number of frames used in averaging
I don't know what numbers need to be averaged, that's for the experts to figure out.
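One plausible guess at "what numbers need to be averaged" (illustrative only, not the real AutoAdjust internals): compute a per-frame correction factor, then replace each one with the mean over a +/- temporal_radius window, so consecutive corrections cannot jump around from frame to frame.

```python
# Sketch of the temporal stabilization idea: smooth per-frame correction
# factors over a window of +/- temporal_radius frames. Names are
# illustrative; AutoAdjust's actual internals are not public.

def smooth(values, temporal_radius=20):
    out = []
    for i in range(len(values)):
        lo = max(0, i - temporal_radius)
        hi = min(len(values), i + temporal_radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

raw = [1.00, 1.20, 0.85, 1.15, 0.90]   # noisy per-frame gains -> flicker
calm = smooth(raw, temporal_radius=2)  # much smaller frame-to-frame swings
```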
There was a perfect example on D9 but I can't find it. Will post it when I find it.
CUDASynth
OK, there's something very strange going on and I do not have the IQ nor the base knowledge to begin to diagnose it.
Further to the variability in FPS when denoising https://rationalqm.us/board/viewtopic.p ... 413#p20413
Referring to the attached .pdf tables of very strange results (filename had to be .txt to upload, it is really a .pdf),
A.
I ran the tests on the 8x VHS videos using DGSource dn_enable=1, dn_quality="good" ...
Without and with stack_horizontal of "before" and "after" videos indicates
- fps is stable when the "stackhorizontal" is absent
- fps is quite variable when the "stackhorizontal" is in place, i.e. when run at different times, quite different fps arise
So, is stackhorizontal somehow an issue, I thought ... but wait ...
B.
I ran the same tests on the 8x VHS videos using DGSource dn_enable=1, dn_quality="best" ...
Here, without and with stack_horizontal of "before" and "after" videos indicates
- fps is stable when the "stackhorizontal" is absent
- fps is stable when the "stackhorizontal" is in place
Strange ... the variability disappears or is well masked when dn_quality is changed from "good" to "best".
One finger points to stackhorizontal,
Another finger points to denoise with dn_quality = "good"
... and yet each test is a separate commandline (i.e. a separate non-concurrent process using portable vapoursynth) which should in theory remove process memory issues between runs on the same input files
C.
I ran the tests on the 8x VHS videos using DGSource dn_enable=1, dn_quality="good" ...
This time instead of stackhorizontal I used Interleave and goodness me was I surprised !!!
Here, without and with INTERLEAVE of "before" and "after" videos indicates
- fps is stable when the "INTERLEAVE" is absent
- fps is stable when the "INTERLEAVE" is in place
BUT THE SURPRISE WAS ...
just a plain set_output on the "after" clip yields circa 570 fps on average
whereas a set_output on the interleaved_video = core.std.Interleave( [before_video, after_video] )
YIELDS 900+ fps!
EDIT: But I've made a very bad assumption somewhere. Every 2nd frame in the interleaved output file is green. I guess it's something about the format. That's probably the reason for the 900+ fps.
So,
- stacking slows it down, how much depends on denoise="good" or "best"
- denoise="good" makes the fps unreasonably variable with a stackhorizontal
- denoise="best" with a stackhorizontal evens out or masks the unreasonably variable fps
- interleaving (denoise="good") speeds it up like absolute lightning to 950+ fps (instead of 570 fps)
Is that vapoursynth, python, or perhaps how the 2x DGSource lines may be delivering frames ?
Who knows.
Attachment: Z-TEST_RESULTS_2.PDF.txt (90.35 KiB)
- thechaoscoder
CUDASynth
I would first try to remove the core.avs.LoadPlugin() line and see if it changes anything.
CUDASynth
@hydra3333
First I want to fix whatever green screen stuff you get. Tell me how to reproduce that in the simplest way possible.
Regarding FPS stuff, how are you measuring fps? Is this for your full transcode pipeline or just simple decoding?
While I appreciate the detailed report, it's very hard to chew. Please keep things simple for me, as you have introduced so many variables into this. Just start with one simple case and say what you think is off. Don't try to debug things and multiply everything by God knows how many factors. One simple case and what you think is off. Thank you.
CUDASynth
OK. You have the source file(s). Hope you may be able to do something with the .bat below.
Interestingly,
- of the cudasynth downloads, TEST6 works (no green frames), TEST7 works (no green frames)
- with released v252 both interleave and stackhorizontal yield green tinged frames, both with and without the 2nd core.avs.LoadPlugin
Let me know if you need anything else.
Code:
@ECHO ON
@setlocal ENABLEDELAYEDEXPANSION
@setlocal enableextensions
Set "root=G:\HDTV\DGtest"
Set "root_test2=!root!\TEST2"
Set "vapoursynth_root=!root_test2!\Vapoursynth-x64"
Set "root_dg=!vapoursynth_root!\DGIndex"
Set "dgi_file=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI"
Set "mpg_input=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.qsf.mpg"
Set "mp4_output=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.result.mp4"
DIR "!mpg_input!"
DEL /F "!dgi_file!"
"!root_dg!\DGIndexNV.exe" -version
"!root_dg!\DGIndexNV.exe" -i "!mpg_input!" -e -h -o "!dgi_file!"
REM ONLY FOR INTERLACED MATERIAL SINCE IT DEINTERLACES
DIR "!vapoursynth_root!\DGIndex\DGDecodeNV.dll"
REM @ECHO OFF
set "_VPY_file=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.vpy"
DEL /F "!_VPY_file!"
ECHO import vapoursynth as vs # this allows use of constants eg vs.YUV420P8 >> "!_VPY_file!" 2>&1
ECHO from vapoursynth import core # actual vapoursynth core >> "!_VPY_file!" 2>&1
ECHO core.std.LoadPlugin^(r'!root_dg!\DGDecodeNV.dll'^) # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765 >> "!_VPY_file!" 2>&1
ECHO core.avs.LoadPlugin^(r'!root_dg!\DGDecodeNV.dll'^) # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765 >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO before_video = core.dgdecodenv.DGSource^( r'!dgi_file!', deinterlace=1, use_top_field=True, use_pf=False ^) >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO after_video = core.dgdecodenv.DGSource^( r'!dgi_file!', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 ^) >> "!_VPY_file!" 2>&1
ECHO #after_video = core.dgdecodenv.DGSource^( r'!dgi_file!', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="best", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 ^) >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO # INTERLEAVED >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO #interleaved_video = core.std.Interleave^( [before_video, after_video] ^) >> "!_VPY_file!" 2>&1
ECHO #interleaved_video = core.std.AssumeFPS^( interleaved_video, fpsnum=25, fpsden=1 ^) >> "!_VPY_file!" 2>&1
ECHO #interleaved_video.set_output^(^) >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO # STACKED >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO stacked_video = core.std.StackHorizontal^( [before_video, after_video] ^) >> "!_VPY_file!" 2>&1
ECHO stacked_video.set_output^(^) >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO # AFTER CLIP ONLY >> "!_VPY_file!" 2>&1
ECHO # >> "!_VPY_file!" 2>&1
ECHO #after_video = core.std.AssumeFPS^( after_video, fpsnum=25, fpsden=1 ^) >> "!_VPY_file!" 2>&1
ECHO #after_video.set_output^(^) >> "!_VPY_file!" 2>&1
@ECHO ON
TYPE "!_VPY_file!"
"!vapoursynth_root!\VSPipe.exe" --version
REM this vspipe yields fps
"!vapoursynth_root!\VSPipe.exe" --filter-time --container y4m "!_VPY_file!" --
REM this yields a .mp4 video
DEL /F "!mp4_output!"
"!vapoursynth_root!\VSPipe.exe" --container y4m --filter-time "!_VPY_file!" - | "!vapoursynth_root!\ffmpeg_OpenCL.exe" -hide_banner -v verbose -nostats -f yuv4mpegpipe -i pipe: -probesize 200M -analyzeduration 200M -fps_mode passthrough -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -strict experimental -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 -rc:v vbr -cq:v 0 -b:v 6000000 -minrate:v 500000 -maxrate:v 12000000 -bufsize 12000000 -profile:v high -level 5.2 -movflags +faststart+write_colr -y "!mp4_output!"
pause
goto :eof
I'm still assuming I've made bad assumptions, perhaps about formats or something.
CUDASynth
I converted the bat file to a simple VPY script that I can open in VirtualDub2. I duplicated the green screen. Investigating...
I preferred your old avatar.
Code:
import vapoursynth as vs
from vapoursynth import core
core.std.LoadPlugin(r'D:\Don\Programming\C++\DGDecNV\DGDecodeNV\x64\Release\DGDecodeNV.dll')
before_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False)
after_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3)
stacked_video = core.std.StackHorizontal([before_video, after_video])
stacked_video.set_output()
CUDASynth
I broke DGSharpen() in the last slipstream. Will fix ASAP.
CUDASynth
Slipstream released.
BTW, DGTweak() is well under way. And I have ideas for the balance stuff.
CUDASynth
OK, I'm a happy camper now.
The green frames are gone.
The variability in fps has largely disappeared in the case of outputting only the "after" clip. Maybe it's magic, maybe it's Windows, who knows.
(The fps variability remains when using stackhorizontal over separate consecutive runs, but who cares. Blame Windows.)
So, looks good! Thank you!
This command yields these tables after 3 test runs of 8 VHS .mpg files against each .vpy (stacked / interleaved / after-only):
Code:
"G:\HDTV\DGtest\Vapoursynth-x64\VSPipe.exe" --filter-time --container y4m "G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.vpy" --
with variations of this script:
import vapoursynth as vs # this allows use of constants eg vs.YUV420P8
from vapoursynth import core # actual vapoursynth core
core.std.LoadPlugin(r'G:\HDTV\DGtest\Vapoursynth-x64\DGIndex\DGDecodeNV.dll') # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765
core.avs.LoadPlugin(r'G:\HDTV\DGtest\Vapoursynth-x64\DGIndex\DGDecodeNV.dll') # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765
#
before_video = core.dgdecodenv.DGSource( r'G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI', deinterlace=1, use_top_field=True, use_pf=False )
#
after_video = core.dgdecodenv.DGSource( r'G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 )
#after_video = core.dgdecodenv.DGSource( r'G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="best", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 )
#
# INTERLEAVED
#
#interleaved_video = core.std.Interleave( [before_video, after_video] )
#interleaved_video = core.std.AssumeFPS( interleaved_video, fpsnum=25, fpsden=1 )
#interleaved_video.set_output()
#
# STACKED
#
#stacked_video = core.std.StackHorizontal( [before_video, after_video] )
#stacked_video.set_output()
#
# AFTER CLIP ONLY
#
after_video = core.std.AssumeFPS( after_video, fpsnum=25, fpsden=1 )
after_video.set_output()
Well, sheep are mentioned in the link. As are squirrels... cough
https://www.newsweek.com/21-animals-tha ... ht-1571299
Two minutes rather than two years, is more my speed
- thechaoscoder
CUDASynth
It makes sense that Interleave is faster than StackHorizontal. Interleave should have almost zero processing overhead; your "list of frames" simply gets twice as long.
The other difference is that StackHorizontal needs to process 2 filters (or 2 frames) per frame. So twice the work per frame.
I was curious and tested avisynth, and the fps drop about the same.
And fps with or without interleave are about the same as well.
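That analysis can be modelled in a few lines (NumPy arrays as stand-ins for frames; this is a model of the cost difference, not VapourSynth's actual internals): stacking materializes a composite with two frame copies per output frame, while interleaving just alternates references to frames that already exist.

```python
import numpy as np

# Rough model of why Interleave is cheaper per output frame than
# StackHorizontal. Frame dimensions here are arbitrary.

before = [np.full((480, 720), 16, dtype=np.uint8) for _ in range(4)]
after = [np.full((480, 720), 17, dtype=np.uint8) for _ in range(4)]

# StackHorizontal: each output frame copies both inputs into a new buffer.
stacked = [np.hstack([b, a]) for b, a in zip(before, after)]

# Interleave: no pixel copies at all, just a reordered list of the frames.
interleaved = [f for pair in zip(before, after) for f in pair]
```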
CUDASynth
hydra3333 wrote: ↑Sat Mar 16, 2024 1:14 am: Well, sheep are mentioned in the link. As are squirrels... cough
https://www.newsweek.com/21-animals-tha ... ht-1571299
Bullwinkle will not be happy.
@thechaoscoder
I concur with your analysis, thank you. For stack we have two frame copies into a composite frame.
CUDASynth
I would have expected the plain set_output() of the filtered clip to be faster than the clip which interleaves the filtered clip and the unfiltered clip.
The result, ~60% of the fps of the interleaved clip (which notionally does more work), seems a bit counter-intuitive.
That's a massive amount of fps being burned somewhere
Oh well.
CUDASynth
I did some basic testing with just timing how long things take to play in VirtualDub.
Script 1: Decode before and after, assumefps 5000 for after, loop 20 times. Time to play: 48 seconds
Script 2: Decode before and after, interleave before and after, assumefps 5000, loop 20 times. Time to play: 126 seconds
Now, Script 2 has twice as many frames as Script 1, so we divide its time by two, giving us:
Not interleaved: 48 seconds
Interleaved: 63 seconds
or, using the frame counts and times, we get:
Not interleaved: 1065 fps (RTX 4090 haha)
Interleaved: 812 fps
Looks reasonable to me. The overhead of interleave is greater than one might think because it has to generate frames from the before clip, whereas Script 1 never does. I prefer not to get into your methodology as it brings in lots of extra factors and assumptions I know nothing about. Please frame further discussion using this testing paradigm, where everything is on the table and there are no black boxes.
Script 1:
Code:
import vapoursynth as vs
from vapoursynth import core
core.std.LoadPlugin(r'D:\Don\Programming\C++\DGDecNV\DGDecodeNV\x64\Release\DGDecodeNV.dll')
before_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False)
after_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3)
after_video = core.std.AssumeFPS(after_video, fpsnum=5000, fpsden=1)
after_video = core.std.Loop(after_video, 20)
after_video.set_output()
Script 2:
Code:
import vapoursynth as vs
from vapoursynth import core
core.std.LoadPlugin(r'D:\Don\Programming\C++\DGDecNV\DGDecodeNV\x64\Release\DGDecodeNV.dll')
before_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False)
after_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3)
interleaved_video = core.std.Interleave([before_video, after_video])
interleaved_video = core.std.AssumeFPS(interleaved_video, fpsnum=5000, fpsden=1)
interleaved_video = core.std.Loop(interleaved_video, 20)
interleaved_video.set_output()
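For what it's worth, the reported numbers are self-consistent under that divide-by-two reasoning (the clip's frame count is inferred from the reported fps, not known directly):

```python
# Cross-check of the figures above. Script 2 emits two frames (before +
# after) for every "after" frame Script 1 emits, so a fair comparison
# halves its wall time. Frame counts are inferred as fps * seconds.

frames_script1 = 1065 * 48        # ~51k frames in 48 s
frames_script2 = 812 * 126        # ~102k frames in 126 s, roughly double

per_pair_not_interleaved = 48.0   # seconds for all frame pairs, Script 1
per_pair_interleaved = 126.0 / 2  # 63 seconds for the same pairs, Script 2
```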
CUDASynth
Edit: understood now, thanks. I got as far as better measuring elapsed times with PowerShell before applying the necessary thought to what you said.
There's an end to it.
CudaSynth rules.
Code:
powershell.exe -executionpolicy bypass -Command "Measure-Command {G:\HDTV\DGtest\TEST3-DG\Vapoursynth-x64\VSPipe.exe --filter-time --container y4m G:\HDTV\DGtest\TEST3-DG\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT_normal.vpy -- | Out-Default}"