How to correctly work with the range of depth data obtained from render-to-texture?
asked by jiandingzhe
I'm doing some customized 2D rendering to a depth buffer that is backed by a texture with internal format GL_DEPTH_STENCIL. In the fragment shader, the normalized Z value (only 0.0 to 1.0 is used, I'm lazy) is explicitly written by some process:
in float some_value;
uniform float max_dist;

void main()
{
    // some_process() is defined elsewhere and yields a distance
    float dist = some_process( some_value );
    // write window-space depth, kept in [0.0, 1.0]
    gl_FragDepth = clamp( dist / max_dist, 0.0, 1.0 );
}
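
For reference, my setup is roughly like the following sketch (in C, untested; tex, fbo, width and height are illustrative names, and GL_DEPTH24_STENCIL8 is assumed as the sized format behind GL_DEPTH_STENCIL):

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, tex, 0);
/* no color attachment, so disable color draw/read buffers */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);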
Now I need to perform further processing on the resultant bitmap on the CPU side. However, glGetTexImage gives you the packed GL_UNSIGNED_INT_24_8 binary format for depth-stencil data. What should I do with the 24-bit depth component? How does the normalized floating-point Z value ([-1.0, 1.0] in NDC, which becomes the [0.0, 1.0] window-space depth written to gl_FragDepth) map to the 24-bit integer?
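
My current guess, as a sketch (in C, untested; tex, width and height are the illustrative names from above): with GL_UNSIGNED_INT_24_8, each texel comes back as one 32-bit word with the depth in the most significant 24 bits and the stencil in the least significant 8, and the fixed-point depth maps [0.0, 1.0] linearly onto [0, 2^24 - 1]:

#include <stdint.h>
#include <stdlib.h>

uint32_t *pixels = malloc((size_t)width * height * sizeof(uint32_t));
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, pixels);

for (size_t i = 0; i < (size_t)width * height; ++i) {
    uint32_t depth24 = pixels[i] >> 8;        /* depth: high 24 bits */
    uint8_t  stencil = pixels[i] & 0xFFu;     /* stencil: low 8 bits */
    /* unsigned normalized: [0, 2^24 - 1] maps back to [0.0, 1.0] */
    float depth = (float)depth24 / 16777215.0f;
    (void)stencil; (void)depth;               /* further processing here */
}
free(pixels);

Alternatively, glGetTexImage should also accept format GL_DEPTH_COMPONENT with type GL_FLOAT for a depth-stencil texture, in which case the driver does this conversion for you.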
opengl
framebuffer
depth-buffer
render-to-texture