When using the multi-viewport / docking feature of dear imgui (available on the main repo in the docking branch), dear imgui uses desktop-relative coordinates instead of window-relative ones. This causes the rendering to get "offset" if not handled correctly.
Before this change the rendering matrix wasn't used at all; the vertex shader has now been changed to use it.
Note that this change is fully backwards compatible with window-relative coordinates as well: the upper-left corner is always set to 0,0 in that case, so it works with both versions.
I also added some changes to skip rendering when it isn't needed (based on the other backend implementations for dear imgui, such as the OpenGL one).
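Roughly, the fix means the vertex shader transforms by the view matrix (an ortho projection built on the CPU from ImDrawData's DisplayPos/DisplaySize) instead of deriving clip space by hand. A minimal sketch of the resulting bgfx-style vertex shader; treat it as illustrative rather than the exact change:

    $input a_position, a_texcoord0, a_color0
    $output v_color0, v_texcoord0

    #include <bgfx_shader.sh>

    void main()
    {
        // The ortho matrix is built from DisplayPos, so desktop-relative
        // coordinates land in the right place. With window-relative
        // coordinates DisplayPos is (0,0) and this reduces to the old
        // behavior.
        gl_Position = mul(u_modelViewProj, vec4(a_position.xy, 0.0, 1.0));
        v_texcoord0 = a_texcoord0;
        v_color0    = a_color0;
    }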
* display bokeh sample pattern, add bokeh shape, improve look
draw sample pattern to texture and display in ui to see the number of samples and their arrangement
add bokeh shape controls (see the sketch after these notes)
remove the ad hoc 'sqrt' pattern, since the display makes the existing pattern easier to understand, and it looks nicer.
switch to a floating point color texture and leave lighting results in linear space until after dof is performed. this provides better results, and bright spots make bokeh shapes more noticeable.
change default values to take more samples at reduced resolution so the initial experience when loading the sample is a better looking image
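For the shape controls, one common approach (a guess at what the sample does, not taken from it) is to scale each circular sample offset by the polar radius of a regular n-gon; 'blades' is a hypothetical UI parameter:

    // Distance from center to the edge of a regular n-gon with unit
    // circumradius, at polar angle 'theta'. Multiplying a unit-disk
    // sample offset by this squeezes the circular bokeh into a bladed
    // shape.
    float polygonShape(float blades, float theta)
    {
        float segment = 6.2831853 / blades;                // angle spanned by one edge
        float local   = mod(theta, segment) - 0.5*segment; // angle relative to edge center
        return cos(0.5*segment) / cos(local);              // 1.0 at vertices, apothem at edge centers
    }

    // usage: offset = vec2(cos(ang), sin(ang)) * radius * polygonShape(u_blades, ang);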
* update screenshot, minor change to ui
fix height of ui element so a scrollbar is not required by the default layout
update screenshot
* fix typo in texturev
at least, i'm pretty sure that's a typo; i don't see a reason to set width twice
- Fix Android build
- "entry_android.cpp:157:29: error: no member named 'kErrorRederWriterEof' in namespace 'bx'; did you mean 'kErrorReaderWriterEof'?"
* Implement bokeh depth of field
Implement bokeh depth of field as described in the blog post here:
https://blog.tuxedolabs.com/2018/05/04/bokeh-depth-of-field-in-single-pass.html
Additionally, implement the optimizations discussed in the closing paragraph. Apply the effect in multiple passes: calculate the circle of confusion and store it in the alpha channel while downsampling the image; then compute depth of field at this lower resolution, storing sample size in alpha; then composite the blurred image based on the sample size. Compositing from the lower resolution like this can lead to blocky edges where there's a depth discontinuity and the blur is just barely enough to show. May be an area to improve on.
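A sketch of the circle-of-confusion term computed during the downsample, along the lines of the blog post's getBlurSize (MAX_BLUR_SIZE and the focus parameters are illustrative names):

    // Circle of confusion from linear depth. The signed value is
    // negative in front of the focus plane and positive behind it; its
    // magnitude, scaled to a pixel radius, is what gets stored in the
    // alpha channel of the downsampled color.
    float blurSize(float depth, float focusPoint, float focusScale)
    {
        float coc = clamp((1.0/focusPoint - 1.0/depth) * focusScale, -1.0, 1.0);
        return abs(coc) * MAX_BLUR_SIZE;
    }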
Provide an alternate means of determining the radius of the current sample when blurring. I find the blog post's sample pattern difficult to reason about directly: it is not obvious, given the parameters, how many samples will be taken, and it can be very many samples, though the results are good. The 'sqrt' pattern chosen here looks alright and allows the number of samples to be set directly. If you are going to use this in a project, it may be worth exploring additional sample patterns, and certainly update the shader to remove the pattern choice from inside the sample loop.
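The 'sqrt' pattern is presumably a uniform-density disk along these lines, with the sample count as a direct parameter (u_sampleCount, maxBlurSize, and GOLDEN_ANGLE are illustrative names):

    // Uniform-density disk: radius grows with the square root of the
    // sample index while the angle advances by the golden angle, so
    // samples spread evenly over the disk. Unlike the blog post's
    // spiral, the number of samples is set directly.
    for (int i = 0; i < int(u_sampleCount); ++i)
    {
        float t   = (float(i) + 0.5) / u_sampleCount;
        float r   = sqrt(t) * maxBlurSize;
        float ang = float(i) * GOLDEN_ANGLE; // 2.39996... radians
        vec2 offset = vec2(cos(ang), sin(ang)) * r;
        // ...accumulate color/coc at texCoord + offset * u_viewTexel.xy...
    }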
* fix typo in shader of denoise example
copy/paste error: the y offset was being applied to the x component instead
* add denoise example
/*
* Implement SVGF style denoising as bgfx example. Goal is to explore various
* options and parameters, not produce an optimized, efficient denoiser.
*
* Starts with deferred rendering scene with very basic lighting. Lighting is
* masked out with a noise pattern to provide something to denoise. There are
* two options for the noise pattern. One is a fixed 2x2 dither pattern to
* stand-in for lighting at quarter resolution. The other is the common
* shadertoy random pattern as a stand-in for some fancier lighting without
* enough samples per pixel, like ray tracing.
*
* First, a temporal denoising filter is applied. The temporal filter uses
* only normals to reject previous samples. The SVGF paper also describes
* using depth comparison to reject samples, but that is not implemented here.
*
* Followed by some number of spatial filters. These are implemented like in the
* SVGF paper. As an alternative to the 5x5 Edge-Avoiding A-Trous filter, a
* 3x3 filter can be selected instead (a sketch of one pass follows this
* comment). The 3x3 filter takes fewer samples and covers a smaller area,
* but takes less time to compute. From a loosely eyeballed comparison, N 5x5
* passes look similar to N+1 3x3 passes. The wider spatial
* filters take a fair chunk of time to compute. I wonder if it would be a good
* idea to interleave the input texture before computing, after the first pass
* which skips zero pixels.
*
* I have not implemented the variance-guided part.
*
* There's also an optional TXAA pass to be applied after. I am not happy with
* its implementation yet, so it defaults to off here.
*/
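A rough sketch of one edge-avoiding à-trous pass as described above, using only the normal term for edge-stopping (sampler and uniform names are illustrative; the full SVGF filter adds depth and luminance weights):

    // One pass of the 5x5 edge-avoiding a-trous filter. u_stepSize
    // doubles each pass (1, 2, 4, ...) so the footprint widens while
    // the tap count stays at 25.
    vec4 atrousPass(vec2 texCoord)
    {
        float kernel[3]; // B3-spline weights for taps 0, +/-1, +/-2
        kernel[0] = 3.0/8.0;
        kernel[1] = 1.0/4.0;
        kernel[2] = 1.0/16.0;

        vec3 centerNormal = texture2D(s_normal, texCoord).xyz;

        vec4  sum         = vec4_splat(0.0);
        float totalWeight = 0.0;
        for (int y = -2; y <= 2; ++y)
        {
            for (int x = -2; x <= 2; ++x)
            {
                vec2 tc = texCoord + vec2(float(x), float(y)) * u_stepSize * u_viewTexel.xy;

                // Edge-stopping: down-weight taps whose normal diverges
                // from the center pixel's.
                vec3  normal = texture2D(s_normal, tc).xyz;
                float weight = kernel[abs(x)] * kernel[abs(y)]
                             * pow(max(0.0, dot(centerNormal, normal)), 32.0);

                sum         += texture2D(s_color, tc) * weight;
                totalWeight += weight;
            }
        }
        return sum / max(totalWeight, 1e-5);
    }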
/*
* References:
* Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for
* Path-Traced Global Illumination, by Christoph Schied et al.
* - SVGF denoising algorithm
*
* Streaming G-Buffer Compression for Multi-Sample Anti-Aliasing,
* by E. Kerzner and M. Salvi.
* - details about history comparison for temporal denoising filter
*
* Edge-Avoiding À-Trous Wavelet Transform for Fast Global Illumination
* Filtering, by Holger Dammertz et al.
* - details about a-trous algorithm for spatial denoising filter
*/
* screen space shadows sample
implement screen space shadows. requires deferred rendering or a depth prepass. convert rendered depth to linear depth to avoid reconstructing it multiple times when doing the shadow test.
project the light into screen space to find the direction from each pixel to the light. walk through the screen space texture towards the light, sampling depth to reconstruct the position represented by each sample pixel and comparing it to the position along the interpolated ray from the pixel to the light. if the position represented by depth is closer to the eye than the light ray, the initial pixel is in shadow. (see the sketch after these notes.)
specify distance of shadow ray via world units or pixels in screen space.
optionally offset the initial sample position by noise to reduce banding.
demonstrate other ways to reduce hard edge of screen space shadow.
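A simplified sketch of that march, with illustrative names; it lerps linear depth along the ray, which a real implementation would replace with a perspective-correct interpolation (e.g. lerping view-space position or 1/z):

    // March from the shaded pixel toward the light in screen space.
    // startUv/startDepth describe the pixel; endUv/endDepth the end of
    // the shadow ray (the light projected into screen space, clamped
    // to the requested ray distance). 'noise' in [0,1) jitters the
    // first step to reduce banding.
    float screenSpaceShadow(vec2 startUv, float startDepth,
                            vec2 endUv,   float endDepth,  float noise)
    {
        for (int i = 1; i <= int(u_sampleCount); ++i)
        {
            float t = (float(i) - noise) / u_sampleCount;

            vec2  uv       = mix(startUv,    endUv,    t); // sample pixel along the ray
            float rayDepth = mix(startDepth, endDepth, t); // ray's linear depth there

            // Linear depth of the geometry covered by this sample pixel.
            float sceneDepth = texture2D(s_linearDepth, uv).x;

            // Scene closer to the eye than the ray: the path from the
            // pixel to the light is blocked.
            if (sceneDepth < rayDepth - u_bias)
            {
                return 0.0; // in shadow
            }
        }
        return 1.0; // unoccluded
    }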
* clean out denoise sample for pull request...
* rename folder to 44-; add missing file