CUDASynth

These CUDA filters are packaged into DGDecodeNV, which is part of DGDecNV.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

Thanks for the suggestion. I'll see what I can do. Probably for a separate filter, though, rather than being part of tweak.
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

thechaoscoder wrote:
Wed Mar 13, 2024 7:13 am
Sounds great. Will it support features like auto gain, auto balance? This filter here http://avisynth.nl/index.php/AutoAdjust is one of the better ones, but it's also a bit broken (no source code)
I have wanted a good autolevels/autoadjust for such a long time that I'd forgotten about it :)
Anyone remember HDRAGC ? ;)
I really do like it here.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

Ha ha, again I was the pioneer. :ugeek: I made the first such desktop multimedia filters back in July 2001 for VirtualDub. With dithering!

https://www.rationalqm.us/histogram.html
https://www.rationalqm.us/winhistogram.html

Ah, the good old days.

OK, so guys, what do you mean by auto balance, or color balance? The luma thing I get.

Sadly, I must tell you that Bullwinkle's uncle Moosetache was tragically murdered in cold blood by an Iditarod racer. Please keep him in your hearts during this trying time. When will human hubris reach its zenith and begin a decline?
thechaoscoder
Posts: 49
Joined: Tue Jul 14, 2020 8:34 am

CUDASynth

Post by thechaoscoder »

Rocky wrote:
Thu Mar 14, 2024 11:06 am

OK, so guys, what do you mean by auto balance, or color balance? The luma thing I get.
auto_balance [default: false] => Enable automatic color balance
auto_gain [default: false] => Enable automatic luminance gain

So yeah luma and chroma

But the best feature of AutoAdjust was temporal averaging.

temporal_radius [default: 20]
-----------------------------
Radius for temporal stabilization in each direction [useful range: 10~30]
0 = no temporal averaging
xx = number of frames used in averaging
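As a concrete illustration of what an automatic luminance gain might do, here is a textbook range stretch in Python. This is hypothetical code, not AutoAdjust's actual algorithm; the function name and the `lo`/`hi`/`max_gain` parameters are my own.

```python
# Hypothetical auto-gain sketch -- NOT AutoAdjust's real code.
# For 8-bit TV-range luma, stretch the frame's observed range onto
# [16, 235], clamping the gain so a near-flat frame can't blow up noise.

def auto_gain(luma, lo=16.0, hi=235.0, max_gain=4.0):
    """Return luma values rescaled so the observed range maps onto [lo, hi]."""
    src_min, src_max = min(luma), max(luma)
    spread = src_max - src_min
    if spread <= 0:
        return list(luma)  # flat frame: nothing to stretch
    gain = min((hi - lo) / spread, max_gain)
    return [lo + (v - src_min) * gain for v in luma]
```

A frame already spanning full TV range passes through unchanged; a low-contrast frame gets stretched, but no further than `max_gain` allows.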
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

Well thanks, but you haven't told me what they do!

What is automatic color balance?
What is temporal stabilization?

Example vids (before/after) with scripts would be ideal.
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

<snip> google can be handy ...
I really do like it here.
Boulder
Posts: 113
Joined: Fri Jul 29, 2011 7:22 am

CUDASynth

Post by Boulder »

If we are talking about automatic color adjustments, auto white balance would probably be very useful as well. Whenever I've had to do any VHS or DV restoration, white balance has been an issue.

Maybe this plugin/script combo would be something to investigate if you are interested.
https://forum.doom9.org/showthread.php?t=174411
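For reference, the classic textbook approach to auto white balance is gray-world. A minimal sketch follows; this is my own illustration of the general method, not necessarily what the plugin at that link does.

```python
# Gray-world white balance sketch (a standard textbook method, not
# necessarily the doom9 plugin's algorithm): assume the scene averages
# to neutral gray, so scale each RGB channel toward the overall mean.

def gray_world(r, g, b):
    """Scale each channel so its mean equals the mean of all three channels."""
    means = [sum(chan) / len(chan) for chan in (r, g, b)]
    target = sum(means) / 3.0
    gains = [target / m if m > 0 else 1.0 for m in means]
    return tuple([v * gain for v in chan]
                 for chan, gain in zip((r, g, b), gains))
```

A frame with a red or blue cast gets its channel means pulled together, which is exactly the kind of correction VHS/DV restoration tends to need.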
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

hydra3333 wrote:
Thu Mar 14, 2024 8:36 pm
Could you please clarify what that * means ?
DGSharpen is still integrated, so don't worry. Say we have a frame in buffer gpu0. If we sharpen the frame, saving each pixel in gpu0, that is in-place. If we save to a new buffer gpu1 (and then deliver gpu1), that is not in-place. Typically, you can do in-place if a pixel does not depend on surrounding pixels. But sharpen depends on surrounding pixels. In practice the result was still mostly OK when done in-place due to CUDA parallelism, but for full correctness it needed to be made not in-place.
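The hazard can be shown with a toy 1-D sharpen. This is illustrative Python, not DGSharpen's CUDA code: when the output overwrites the input buffer, later pixels read already-sharpened neighbours instead of the original data.

```python
# Toy 1-D sharpen -- illustrative only, not DGSharpen. Each output pixel
# adds back a high-pass of its 3-pixel neighbourhood.

def sharpen_out_of_place(src, strength=0.5):
    dst = list(src)  # separate output buffer: reads always see original data
    for i in range(1, len(src) - 1):
        highpass = src[i] - (src[i - 1] + src[i] + src[i + 1]) / 3.0
        dst[i] = src[i] + strength * highpass
    return dst

def sharpen_in_place(buf, strength=0.5):
    for i in range(1, len(buf) - 1):
        # buf[i - 1] was overwritten on the previous iteration, so this
        # read sees sharpened data, not the original pixel
        highpass = buf[i] - (buf[i - 1] + buf[i] + buf[i + 1]) / 3.0
        buf[i] = buf[i] + strength * highpass
    return buf
```

The first touched pixel comes out identical either way; everything after it diverges, which is why the in-place version was only "mostly OK".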

Sorry, didn't get a chance to look at your variability report. I'll wait for your further clarification.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

Boulder wrote:
Fri Mar 15, 2024 12:35 am
If we are talking about automatic color adjustments, auto white balance would probably be very useful as well.
Thank you Boulder for your information.
thechaoscoder
Posts: 49
Joined: Tue Jul 14, 2020 8:34 am

CUDASynth

Post by thechaoscoder »

Rocky wrote:
Thu Mar 14, 2024 12:21 pm
Well thanks, but you haven't told me what they do!

What is automatic color balance?
What is temporal stabilization?

Example vids (before/after) with scripts would be ideal.
I think color balance means white balance in this case.
(parameter description from wiki)
temporal_radius [default: 20]
-----------------------------
Radius for temporal stabilization in each direction [useful range: 10~30]
0 = no temporal averaging
xx = number of frames used in averaging
Basically, if you apply "auto correction" only on a per-frame basis, it can lead to a sort of flickering: one frame gets a bit too dark, the next a bit too bright, etc. Averaging X frames leads to more consistent results across a scene => no "flickering".

I don't know what numbers need to be averaged, that's for the experts to figure out. :scratch:

There was a perfect example on Doom9 but I can't find it. I'll post it when I find it.
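The flicker-vs-averaging idea can be sketched in a few lines. This is hypothetical code modeled on the temporal_radius parameter described above, not AutoAdjust's real implementation.

```python
# Hypothetical temporal stabilization sketch -- not AutoAdjust's code.
# Given one correction gain per frame, average each gain with its
# neighbours within +/- radius frames. radius=0 reproduces plain
# per-frame correction (and its possible flicker).

def smooth_gains(gains, radius):
    """Temporally average per-frame gains over a window of +/- radius frames."""
    if radius == 0:
        return list(gains)
    out = []
    for i in range(len(gains)):
        lo, hi = max(0, i - radius), min(len(gains), i + radius + 1)
        out.append(sum(gains[lo:hi]) / (hi - lo))
    return out
```

Alternating too-dark/too-bright gains get pulled toward a common value, so consecutive frames receive similar corrections and the scene stops pulsing.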
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

OK, there's something very strange going on and I do not have the IQ nor the base knowledge to begin to diagnose it.

Further to the variability in FPS when denoising https://rationalqm.us/board/viewtopic.p ... 413#p20413

Referring to the attached .pdf tables of very strange results (filename had to be .txt to upload, it is really a .pdf),

A.
I ran the tests on the 8x VHS videos using DGSource dn_enable=1, dn_quality="good" ...

Without and with stack_horizontal of "before" and "after" videos indicates
- fps is stable when the "stackhorizontal" is absent
- fps is quite variable when the "stackhorizontal" is in place, i.e. when run at different times, quite different fps arise

So, is stackhorizontal somehow an issue, I thought ... but wait ...

B.
I ran the same tests on the 8x VHS videos using DGSource dn_enable=1, dn_quality="best" ...
Here, without and with stack_horizontal of "before" and "after" videos indicates
- fps is stable when the "stackhorizontal" is absent
- fps is stable when the "stackhorizontal" is in place

Strange ... the variability disappears or is well masked when dn_quality is changed from "good" to "best".

One finger points to stackhorizontal,
Another finger points to denoise with dn_quality = "good"

... and yet each test is a separate commandline (i.e. a separate non-concurrent process using portable vapoursynth) which should in theory remove process memory issues between runs on the same input files

C.
I ran the tests on the 8x VHS videos using DGSource dn_enable=1, dn_quality="good" ...
This time instead of stackhorizontal I used Interleave and goodness me was I surprised !!!

Here, without and with INTERLEAVE of "before" and "after" videos indicates
- fps is stable when the "INTERLEAVE" is absent
- fps is stable when the "INTERLEAVE" is in place

BUT THE SURPRISE WAS ...
just a plain set_output on the "after" clip yields circa 570 fps on average
whereas a set_output on the interleaved_video = core.std.Interleave( [before_video, after_video] )
YIELDS 900+ fps !!!!!!!!!!!!

EDIT: But I've made a very bad assumption somewhere. Every 2nd frame in the interleaved output file is green. I guess it's about the format. That's probably the reason for 900+ fps.

So,
- stacking slows it down, how much depends on denoise="good" or "best"
- denoise="good" makes the fps unreasonably variable with a stackhorizontal
- denoise="best" with a stackhorizontal evens out or masks the unreasonably variable fps
- interleaving (denoise="good") speeds it up like absolute lightning to 950+ fps (instead of 570 fps)

Is that vapoursynth, python, or perhaps how the 2x DGSource lines may be delivering frames ?

Who knows.
Attachments
Z-TEST_RESULTS_2.PDF.txt
(90.35 KiB) Downloaded 14 times
I really do like it here.
thechaoscoder
Posts: 49
Joined: Tue Jul 14, 2020 8:34 am

CUDASynth

Post by thechaoscoder »

I would first try to remove the core.avs.LoadPlugin() line and see if it changes anything.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

@hydra3333

First I want to fix whatever green screen stuff you get. Tell me how to reproduce that in the simplest way possible.

Regarding FPS stuff, how are you measuring fps? Is this for your full transcode pipeline or just simple decoding?

While I appreciate the detailed report, it's very hard to chew. Please keep things simple for me, as you have introduced so many variables into this. Just start with one simple case and say what you think is off. Don't try to debug things and multiply everything by God knows how many factors. One simple case and what you think is off. Thank you.
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

OK. You have the source file(s). Hope you may be able to do something with the .bat below.

Interestingly,
- of the cudasynth downloads, TEST6 works (no green frames), TEST7 works (no green frames)
- with released v252 both interleave and stackhorizontal yield green tinged frames, both with and without the 2nd core.avs.LoadPlugin

Code: Select all

@ECHO ON
@setlocal ENABLEDELAYEDEXPANSION
@setlocal enableextensions

Set "root=G:\HDTV\DGtest"
Set "root_test2=!root!\TEST2"

Set "vapoursynth_root=!root_test2!\Vapoursynth-x64"
Set "root_dg=!vapoursynth_root!\DGIndex"

Set "dgi_file=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI"
Set "mpg_input=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.qsf.mpg"
Set "mp4_output=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.result.mp4"

DIR "!mpg_input!"
DEL /F "!dgi_file!"
"!root_dg!\DGIndexNV.exe" -version 
"!root_dg!\DGIndexNV.exe" -i "!mpg_input!" -e -h -o "!dgi_file!"

REM ONLY FOR INTERLACED MATERIAL SINCE IT DEINTERLACES
DIR "!vapoursynth_root!\DGIndex\DGDecodeNV.dll"
REM @ECHO OFF
set "_VPY_file=!root_test2!\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.vpy"
DEL /F "!_VPY_file!"
ECHO import vapoursynth as vs		# this allows use of constants eg vs.YUV420P8 >> "!_VPY_file!" 2>&1
ECHO from vapoursynth import core	# actual vapoursynth core >> "!_VPY_file!" 2>&1
ECHO core.std.LoadPlugin^(r'!root_dg!\DGDecodeNV.dll'^) # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765 >> "!_VPY_file!" 2>&1
ECHO core.avs.LoadPlugin^(r'!root_dg!\DGDecodeNV.dll'^) # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765 >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO before_video = core.dgdecodenv.DGSource^( r'!dgi_file!', deinterlace=1, use_top_field=True, use_pf=False ^) >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO after_video = core.dgdecodenv.DGSource^( r'!dgi_file!', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 ^) >> "!_VPY_file!" 2>&1
ECHO #after_video = core.dgdecodenv.DGSource^( r'!dgi_file!', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="best", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 ^) >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO # INTERLEAVED >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO #interleaved_video = core.std.Interleave^( [before_video, after_video] ^) >> "!_VPY_file!" 2>&1
ECHO #interleaved_video = core.std.AssumeFPS^( interleaved_video, fpsnum=25, fpsden=1 ^) >> "!_VPY_file!" 2>&1
ECHO #interleaved_video.set_output^(^) >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO # STACKED >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO stacked_video = core.std.StackHorizontal^( [before_video, after_video] ^) >> "!_VPY_file!" 2>&1
ECHO stacked_video.set_output^(^) >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO # AFTER CLIP ONLY >> "!_VPY_file!" 2>&1
ECHO #  >> "!_VPY_file!" 2>&1
ECHO #after_video = core.std.AssumeFPS^( after_video, fpsnum=25, fpsden=1 ^) >> "!_VPY_file!" 2>&1
ECHO #after_video.set_output^(^) >> "!_VPY_file!" 2>&1
@ECHO ON
TYPE "!_VPY_file!"

"!vapoursynth_root!\VSPipe.exe" --version 
REM this vspipe yields fps
"!vapoursynth_root!\VSPipe.exe" --filter-time --container y4m "!_VPY_file!" -- 

REM this yields a .mp4 video
DEL /F "!mp4_output!"
"!vapoursynth_root!\VSPipe.exe" --container y4m --filter-time "!_VPY_file!" - | "!vapoursynth_root!\ffmpeg_OpenCL.exe" -hide_banner -v verbose -nostats -f yuv4mpegpipe -i pipe: -probesize 200M -analyzeduration 200M  -fps_mode passthrough -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -strict experimental -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 -rc:v vbr -cq:v 0 -b:v 6000000 -minrate:v 500000 -maxrate:v 12000000 -bufsize 12000000 -profile:v high -level 5.2 -movflags +faststart+write_colr -y  "!mp4_output!"

pause
goto :eof
Let me know if you need anything else.

I'm still assuming I've made bad assumptions, perhaps about formats or something.

Clipboard_03-16-2024_01.jpg
I really do like it here.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

I converted the bat file to a simple VPY script that I can open in VirtualDub2. I duplicated the green screen. Investigating...

Code: Select all

import vapoursynth as vs
from vapoursynth import core
core.std.LoadPlugin(r'D:\Don\Programming\C++\DGDecNV\DGDecodeNV\x64\Release\DGDecodeNV.dll')
before_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False)
after_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3)
stacked_video = core.std.StackHorizontal([before_video, after_video])
stacked_video.set_output()
I preferred your old avatar. :cry:
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

I broke DGSharpen() in the last slipstream. Will fix ASAP.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

Slipstream released.

BTW, DGTweak() is well under way. And I have ideas for the balance stuff.
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

OK, I'm a happy camper now.
The green frames are gone.
The variability in fps has largely disappeared in the case of outputting only the "after" clip. Maybe it's magic, maybe it's Windows, who knows.
(The fps variability remains when using stackhorizontal, over separate consecutive runs, but who cares. Blame Windows.)

So, looks good ! Thank you !

This

Code: Select all

"G:\HDTV\DGtest\Vapoursynth-x64\VSPipe.exe" --filter-time --container y4m "G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.vpy" -- 
with variations of this:
import vapoursynth as vs		# this allows use of constants eg vs.YUV420P8 
from vapoursynth import core	# actual vapoursynth core 
core.std.LoadPlugin(r'G:\HDTV\DGtest\Vapoursynth-x64\DGIndex\DGDecodeNV.dll') # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765 
core.avs.LoadPlugin(r'G:\HDTV\DGtest\Vapoursynth-x64\DGIndex\DGDecodeNV.dll') # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765 
#  
before_video = core.dgdecodenv.DGSource( r'G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI', deinterlace=1, use_top_field=True, use_pf=False ) 
#  
after_video = core.dgdecodenv.DGSource( r'G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 ) 
#after_video = core.dgdecodenv.DGSource( r'G:\HDTV\DGtest\TEST2\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.DGI', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="best", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3 ) 
#  
# INTERLEAVED 
#  
#interleaved_video = core.std.Interleave( [before_video, after_video] ) 
#interleaved_video = core.std.AssumeFPS( interleaved_video, fpsnum=25, fpsden=1 ) 
#interleaved_video.set_output() 
#  
# STACKED 
#  
#stacked_video = core.std.StackHorizontal( [before_video, after_video] ) 
#stacked_video.set_output() 
#  
# AFTER CLIP ONLY 
#  
after_video = core.std.AssumeFPS( after_video, fpsnum=25, fpsden=1 ) 
after_video.set_output() 
Yields these tables after 3 test runs of 8 VHS .mpg files against each .vpy stacked/interleaved/after-only.
test2_results.jpg
Rocky wrote:
Fri Mar 15, 2024 8:24 pm
I preferred your old avatar. :cry:
Well, sheep are mentioned in the link. As are squirrels... cough ;)
https://www.newsweek.com/21-animals-tha ... ht-1571299
Two minutes rather than two years, is more my speed :)
I really do like it here.
thechaoscoder
Posts: 49
Joined: Tue Jul 14, 2020 8:34 am

CUDASynth

Post by thechaoscoder »

It makes sense that Interleave is faster than StackHorizontal. Interleave should have almost zero processing overhead; your "list of frames" simply gets twice as long.

The other difference is that StackHorizontal needs to fetch and process two frames (one from each clip) per output frame, so it's twice the work per frame.

I was curious and tested avisynth, and the fps drop about the same.
And fps with or without interleave are about the same as well.
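A toy model of the upstream frame requests makes the difference concrete. This is my own sketch of the counting argument, not VapourSynth internals.

```python
# Toy request-count model -- NOT VapourSynth internals. For N output
# frames: Interleave maps output frame n to frame n // 2 of clip n % 2,
# one source fetch per output frame. StackHorizontal needs frame n from
# BOTH clips (plus a copy of each into a composite), two fetches.

def interleave_requests(n_output_frames):
    """(clip_index, source_frame) fetched for each interleaved output frame."""
    return [(n % 2, n // 2) for n in range(n_output_frames)]

def stack_requests(n_output_frames):
    """(clip_index, source_frame) fetched for each stacked output frame."""
    return [(clip, n) for n in range(n_output_frames) for clip in (0, 1)]
```

So for the same number of output frames, the stacked graph does twice the source work per frame, plus the composite copies, which lines up with the fps drop observed above.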
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

hydra3333 wrote:
Sat Mar 16, 2024 1:14 am
Well, sheep are mentioned in the link. As are squirrels... cough ;)
https://www.newsweek.com/21-animals-tha ... ht-1571299
Bullwinkle will not be happy.

@thechaoscoder

I concur in your analysis, for which thank you. For stack we have two frame copies into a composite frame.
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

I would have expected the plain set_output() of the filtered clip to be faster than the clip which interleaves the filtered clip and the unfiltered clip.
Getting only ~60% of the fps of the interleaved clip, which notionally does more work, seems a bit counter-intuitive.
That's a massive amount of fps being burned somewhere :?
Oh well.
I really do like it here.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

I did some basic testing with just timing how long things take to play in VirtualDub.

Script 1: Decode before and after, assumefps 5000 for after, loop 20 times. Time to play: 48 seconds

Code: Select all

import vapoursynth as vs
from vapoursynth import core
core.std.LoadPlugin(r'D:\Don\Programming\C++\DGDecNV\DGDecodeNV\x64\Release\DGDecodeNV.dll')
before_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False)
after_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3)
after_video = core.std.AssumeFPS(after_video, fpsnum=5000, fpsden=1)
after_video = core.std.Loop(after_video, 20)
after_video.set_output()
Script 2: Decode before and after, interleave before and after, assumefps 5000, loop 20 times. Time to play: 126 seconds

Code: Select all

import vapoursynth as vs
from vapoursynth import core
core.std.LoadPlugin(r'D:\Don\Programming\C++\DGDecNV\DGDecodeNV\x64\Release\DGDecodeNV.dll')
before_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False)
after_video = core.dgdecodenv.DGSource(r'00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT.dgi', deinterlace=1, use_top_field=True, use_pf=False, dn_enable=1, dn_quality="good", dn_strength=0.06, dn_cstrength=0.06, dn_tthresh=75.0, dn_show=0, sh_enable=1, sh_strength=0.3)
interleaved_video = core.std.Interleave([before_video, after_video])
interleaved_video = core.std.AssumeFPS(interleaved_video, fpsnum=5000, fpsden=1)
interleaved_video = core.std.Loop(interleaved_video, 20)
interleaved_video.set_output()
Now, Script 2 has twice as many frames as Script 1, so we divide its time by two, giving us:

Not interleaved: 48 seconds
Interleaved: 63 seconds

or, using the frame counts and times, we get:

Not interleaved: 1065 fps (RTX 4090 haha)
Interleaved: 812 fps

Looks reasonable to me. The overhead of interleave is greater than one might think because it has to generate frames from the before clip, whereas Script 1 never does. I prefer not to get into your methodology as it brings in lots of extra factors and assumptions I know nothing about. Please frame further discussion using this testing paradigm, where everything is on the table and there are no black boxes.
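For reference, the arithmetic behind those figures. The frame count is inferred from the reported 1065 fps over 48 seconds, since the clip length itself isn't stated in the thread.

```python
# Arithmetic behind the comparison above. frames_script1 is inferred from
# the reported figures (1065 fps * 48 s), not measured here.
frames_script1 = 1065 * 48            # ~51,120 frames played by Script 1
frames_script2 = 2 * frames_script1   # interleaving doubles the frame count
t1, t2 = 48.0, 126.0                  # measured play times, seconds

print(frames_script1 / t1)   # 1065.0 fps, after clip only
print(frames_script2 / t2)   # ~811.4 fps, interleaved
print(t2 / 2)                # 63.0 s, Script 2 normalized to Script 1's frames
```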
hydra3333
Posts: 406
Joined: Wed Oct 06, 2010 3:34 am
Contact:

CUDASynth

Post by hydra3333 »

edit: understood now. Thanks. I got as far as measuring elapsed times better with PowerShell

Code: Select all

powershell.exe -executionpolicy bypass -Command "Measure-Command {G:\HDTV\DGtest\TEST3-DG\Vapoursynth-x64\VSPipe.exe --filter-time --container y4m G:\HDTV\DGtest\TEST3-DG\00_PostcardsFromMannum_sample-unprocessed_interlaced_CUT_normal.vpy -- | Out-Default}" 
before applying the necessary thought to what you said.
There's an end to it.
CudaSynth rules.
I really do like it here.
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

Glad we're on the same page now!
Rocky
Posts: 3621
Joined: Fri Sep 06, 2019 12:57 pm

CUDASynth

Post by Rocky »

The standalone CUDA version of DGTweak() is working in 8-bits. Next going to add 16-bit and then integrate it into the DGSource() CUDASynth framework. After all that, some form of white balance functionality.