HDR -> SDR conversion
Re: HDR -> SDR tonemapping
Sweet! Thanks DJ.
Re: HDR -> SDR tonemapping
Also checked without up-conversion to 16 bits; that works as intended.
Code:
LwlibavVideoSource("D:\ACCEL_WORLD_INFINITE_BURST_UHD\BDMV\STREAM\00004.m2ts")
ConvertFromDoubleWidth(bits=10)
z_ConvertFormat(pixel_type="RGBPS",colorspace_op="2020ncl:st2084:2020:l=>rgb:linear:2020:l", dither_type="none")
DGReinhard(contrast=0.55, bright=2.35)
z_ConvertFormat(pixel_type="YV12",colorspace_op="rgb:linear:2020:l=>709:709:709:l",dither_type="ordered")
PC: RTX 2070 | Ryzen R9 5950X (no OC) | 64 GB RAM
Notebook: RTX 4060 | Ryzen R9 7945HX | 32 GB RAM
Re: HDR -> SDR tonemapping
In approximately two hours I'll be able to post some comparative images of both filters versus an SDR sample.
Re: HDR -> SDR tonemapping
New comparisons
Note
DGHable and DGReinhard at default
HDR clip was not resized to 1080p
sdr 1
DGHable 1
DGReinhard 1
Re: HDR -> SDR tonemapping
sample 2
sdr
DGHable
DGReinhard
Re: HDR -> SDR tonemapping
sample 3
sdr
DGHable
DGReinhard
Re: HDR -> SDR tonemapping
sample 4
sdr
DGHable
DGReinhard
Re: HDR -> SDR tonemapping
Thank you, gonca. Great variety of scene types.
Maybe Hable needs to come down a notch in exposure to 2.0. Both operators work acceptably. I'm not sure which I prefer.
Re: HDR -> SDR tonemapping
It might be relevant that those images are in JPG format; the forum doesn't allow BMP.
Re: HDR -> SDR tonemapping
I think default settings really don't do it justice. SDR is first.
Settings I used (adjusted for other scenes too): exposure=1.6, b=0.40, c=0.11, d=0.30, e=0.019, w=20
I prefer Hable because it's more flexible, while Reinhard only has contrast/white adjustments.
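For anyone who wants to try them, those values drop into the filter call like this (parameter names taken from the plugin's DGHable signature; a and f left at their defaults):

```avisynth
DGHable(exposure=1.6, b=0.40, c=0.11, d=0.30, e=0.019, w=20)
```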
Re: HDR -> SDR tonemapping
I feel that without an HDR monitor as a reference, every setting would be a guesstimate. Trying to pick the right settings is quite a challenge.
Re: HDR -> SDR tonemapping
I have an HDR TV being used as a monitor.
The problem is that I can't capture a picture from the UHD source while retaining the original's HDR metadata to post.
I can compare by playing the clips, but then no one else sees them.
Re: HDR -> SDR tonemapping
HDR color mapping is set by the user, and in the case of monitors/TVs by the manufacturer (LG, Sony, Samsung, etc.).
So there is no single correct way. It's all your way.
Re: HDR -> SDR tonemapping
Small note about nvhsp: it only patches the first header and is only meant for the output of NVEncC, but it's not really needed nowadays since NVEncC itself supports HDR-related signaling.
@admin: Are you planning to port DGTonemap to Vapoursynth? (I'd like a gpu based alternative to tonemap)
Cu Selur
Re: HDR -> SDR tonemapping
NVEncC has accepted HDR signalling through the command line for a few versions now.
Both work, so they are both good, depending on workflow
Re: HDR -> SDR tonemapping
Ready for a splash of ice water in the face, guys? I already took one.
Here's the deal. A CUDA version of DGTonemap is currently useless, for two reasons: a) the overhead of transferring three (R, G, B) float images to/from the GPU makes the CUDA version perform worse than simply doing things in SW with a decent prefetch, and b) the overall time is heavily dominated by the SW conversions, so that even if the tonemapping part performed better with CUDA, it would have little overall effect.
So, what is the moral of this story? Everything has to be done on the GPU, i.e., ship up one 16-bit frame, convert to 709 and tonemap on the GPU, and ship the frame back. Then we could expect a large speedup. Got my work cut out for me.
Re: HDR -> SDR tonemapping
Ready for a splash of ice water in the face, guys? I already took one.
Don't go swimming in Lake Erie yet.
As for the rest, if anyone can do it, you can.
Don't forget your paper reviews, or to relax every so often.
The software version already gives us a good solution
Re: HDR -> SDR tonemapping
Reviews are done, but I'm going to take your advice to relax.
Re: HDR -> SDR tonemapping
I really do like it here.
Re: HDR -> SDR tonemapping
I have a question. In the DGTonemap manual, the formula for Hable is
Can you tell me what the "x" variable means? Thank you.
Code:
hable(x) = ((x*(a*x+c*b)+d*e) / (x*(a*x+b)+d*f)) - e/f
Re: HDR -> SDR tonemapping
It's easiest just to give you the code.
Code:
#include "windows.h"
#include "avisynth.h"
#include "stdio.h"
class DGReinhard : public GenericVideoFilter
{
float contrast;
float bright;
public:
DGReinhard(PClip _child, float _contrast, float _bright, IScriptEnvironment* env) : GenericVideoFilter(_child)
{
if (vi.pixel_type != VideoInfo::CS_RGBPS)
{
env->ThrowError("DGReinhard: input must be CS_RGBPS");
}
contrast = _contrast;
bright = _bright;
}
PVideoFrame __stdcall GetFrame(int n, IScriptEnvironment* env);
};
PVideoFrame __stdcall DGReinhard::GetFrame(int n, IScriptEnvironment* env)
{
PVideoFrame src = child->GetFrame(n, env);
PVideoFrame dst = env->NewVideoFrame(vi);
float *srcpR, *srcpG, *srcpB;
float *dstpR, *dstpG, *dstpB;
const int src_pitch = src->GetPitch(PLANAR_R) / 4;
const int dst_pitch = dst->GetPitch(PLANAR_R) / 4;
const int row_size = dst->GetRowSize(PLANAR_R);
const int width = vi.width;
const int height = dst->GetHeight(PLANAR_R);
const float offset = (1.0f - contrast) / contrast; // curve knee derived from contrast
const float factor = (bright + offset) / bright; // normalizes so an input of 'bright' maps to 1.0
srcpR = (float *)src->GetReadPtr(PLANAR_R);
dstpR = (float *)dst->GetWritePtr(PLANAR_R);
srcpG = (float *)src->GetReadPtr(PLANAR_G);
dstpG = (float *)dst->GetWritePtr(PLANAR_G);
srcpB = (float *)src->GetReadPtr(PLANAR_B);
dstpB = (float *)dst->GetWritePtr(PLANAR_B);
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
dstpR[x] = srcpR[x] / (srcpR[x] + offset) * factor;
dstpG[x] = srcpG[x] / (srcpG[x] + offset) * factor;
dstpB[x] = srcpB[x] / (srcpB[x] + offset) * factor;
}
srcpR += src_pitch;
srcpG += src_pitch;
srcpB += src_pitch;
dstpR += dst_pitch;
dstpG += dst_pitch;
dstpB += dst_pitch;
}
return dst;
}
class DGHable : public GenericVideoFilter
{
float exposure, a, b, c, d, e, f, w;
public:
DGHable(PClip _child, float _exposure,
float _a,
float _b,
float _c,
float _d,
float _e,
float _f,
float _w,
IScriptEnvironment* env) : GenericVideoFilter(_child)
{
if (vi.pixel_type != VideoInfo::CS_RGBPS)
{
env->ThrowError("DGHable: input must be CS_RGBPS");
}
exposure = _exposure;
a = _a;
b = _b;
c = _c;
d = _d;
e = _e;
f = _f;
w = _w;
}
float hable(float in)
{
return (in * (in * a + b * c) + d * e) / (in * (in * a + b) + d * f) - e / f;
}
PVideoFrame __stdcall GetFrame(int n, IScriptEnvironment* env);
};
PVideoFrame __stdcall DGHable::GetFrame(int n, IScriptEnvironment* env)
{
PVideoFrame src = child->GetFrame(n, env);
PVideoFrame dst = env->NewVideoFrame(vi);
float *srcpR, *srcpG, *srcpB;
float *dstpR, *dstpG, *dstpB;
const int src_pitch = src->GetPitch(PLANAR_R) / 4;
const int dst_pitch = dst->GetPitch(PLANAR_R) / 4;
const int row_size = dst->GetRowSize(PLANAR_R);
const int width = vi.width;
const int height = dst->GetHeight(PLANAR_R);
srcpR = (float *)src->GetReadPtr(PLANAR_R);
dstpR = (float *)dst->GetWritePtr(PLANAR_R);
srcpG = (float *)src->GetReadPtr(PLANAR_G);
dstpG = (float *)dst->GetWritePtr(PLANAR_G);
srcpB = (float *)src->GetReadPtr(PLANAR_B);
dstpB = (float *)dst->GetWritePtr(PLANAR_B);
float whitescale = 1.0f / hable(w); // rescale so the white point w maps to 1.0
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
dstpR[x] = hable(exposure * srcpR[x]) * whitescale;
dstpG[x] = hable(exposure * srcpG[x]) * whitescale;
dstpB[x] = hable(exposure * srcpB[x]) * whitescale;
}
srcpR += src_pitch;
srcpG += src_pitch;
srcpB += src_pitch;
dstpR += dst_pitch;
dstpG += dst_pitch;
dstpB += dst_pitch;
}
return dst;
}
AVSValue __cdecl Create_DGReinhard(AVSValue args, void* user_data, IScriptEnvironment* env)
{
return new DGReinhard(args[0].AsClip(),
(float)args[1].AsFloat(0.3f),
(float)args[2].AsFloat(5.0f),
env);
}
AVSValue __cdecl Create_DGHable(AVSValue args, void* user_data, IScriptEnvironment* env)
{
return new DGHable(args[0].AsClip(),
(float)args[1].AsFloat(2.0f),
(float)args[2].AsFloat(0.15f),
(float)args[3].AsFloat(0.50f),
(float)args[4].AsFloat(0.10f),
(float)args[5].AsFloat(0.20f),
(float)args[6].AsFloat(0.02f),
(float)args[7].AsFloat(0.30f),
(float)args[8].AsFloat(11.20f),
env);
}
const AVS_Linkage *AVS_linkage = 0;
extern "C" __declspec(dllexport) const char* __stdcall AvisynthPluginInit3(IScriptEnvironment* env, AVS_Linkage* vectors)
{
AVS_linkage = vectors;
env->AddFunction("DGReinhard", "c[contrast]f[bright]f", Create_DGReinhard, 0);
env->AddFunction("DGHable", "c[exposure]f[a]f[b]f[c]f[d]f[e]f[f]f[w]f", Create_DGHable, 0);
return 0;
}