https://web.maths.unsw.edu.au/~josefdick/MCQMC_Proceedings/MCQMC_Proceedings_2012_Preprints/100_Keller_tutorial.pdf
https://zhuanlan.zhihu.com/p/20197323?columnSlug=graphics
Efficiently generating uniformly distributed random numbers in high-dimensional spaces is a very common building block of computer programs. For any algorithm that relies on sampling, uniformly distributed numbers translate into a better sample distribution. The simulation of light transport (rendering) is based on Monte Carlo integration, a process in which sampling is ubiquitous, so a good sample distribution directly affects how fast the integration converges.

Compared with ordinary pseudo-random numbers, low discrepancy sequences are very widely used in graphics and even in finance. Besides being more uniformly distributed in high-dimensional spaces, they have many other properties that benefit the execution of rendering programs.

In the figure above, the left and right images are rendered with 32 samples drawn from a Sobol sequence and from a pseudo-random number generator, respectively; the left image shows noticeably less noise than the right.

The definitions of the common low discrepancy sequences are introduced below.

What is Discrepancy?
First, what does "uniform" mean in "uniformly distributed" here? For an intuitive picture, see the image below: the left side is a 2D point set made of pseudo-random numbers, while the right side, a point set from a low discrepancy sequence, covers the whole space much more completely.
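Both point sets can be generated with a few lines of code. Here is a minimal sketch in Python (NumPy assumed; the point count is arbitrary), using the Halton sequence as the low discrepancy example; its coordinates are the radical inverses of the sample index in bases 2 and 3:

```python
import numpy as np

def radical_inverse(i: int, base: int) -> float:
    """Mirror the base-b digits of the index i around the radix point."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def halton_2d(n: int) -> np.ndarray:
    """First n points of the 2D Halton sequence (bases 2 and 3)."""
    return np.array([(radical_inverse(i, 2), radical_inverse(i, 3))
                     for i in range(n)])

pseudo_random = np.random.default_rng(0).random((256, 2))  # left point set
low_discrepancy = halton_2d(256)                           # right point set
```

Plotting the two arrays side by side reproduces the qualitative difference in coverage described above.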

The quasi-Monte Carlo method, similar to Monte Carlo but resting on a different theoretical foundation, has also developed rapidly in recent years. The "Hua–Wang" method proposed by the Chinese mathematicians Hua Luogeng and Wang Yuan is one example. Its basic idea is to replace the random number sequences of the Monte Carlo method with deterministic, highly uniform sequences (known mathematically as low discrepancy sequences). For some problems, this method can in practice be hundreds of times faster than the Monte Carlo method, and its accuracy can be computed.

Abstract This self-contained tutorial surveys the state of the art in quasi-Monte Carlo rendering algorithms as used for image synthesis in the product design and movie industry. Based on the number theoretic constructions of low discrepancy sequences, it explains techniques to generate light transport paths to connect cameras and light sources. Summing up their contributions on the image plane results in a consistent numerical algorithm, which due to the superior uniformity of low discrepancy sequences often converges faster than its (pseudo-) random counterparts. In addition, its deterministic nature allows for simple and efficient parallelization while guaranteeing exact reproducibility. The underlying techniques of parallel quasi-Monte Carlo integro-approximation, the high speed generation of quasi-Monte Carlo points, treating weak singularities in a robust way, and high performance ray tracing have many applications outside computer graphics, too.
1 Introduction
“One look is worth a thousand words” characterizes best the expressive power of
images. Being able to visualize a product in a way that cannot be distinguished
from a real photograph before realization can greatly help to win an audience. As
ubiquitous in many movies, a sequence of such images can tell whole stories in a
captivating and convincing way. As a consequence of the growing demand and benefit
of synthetic images, a substantial amount of research has been dedicated to finding
more efficient rendering algorithms.
The achievable degree of realism depends on the physical correctness of the
model and the consistency of the simulation algorithms. While modeling is beyond
the focus of this tutorial, we review the fundamentals in Sec. 2. The paradigm of
consistency is discussed in the next Sec. 1.1 as it is key to the quasi-Monte Carlo

techniques in Sec. 3 that are at the heart of the deterministic rendering algorithms
explored in Sec. 4.
On a historical note, the investigation of quasi-Monte Carlo methods in computer
graphics goes back to Shirley [69] and Niederreiter [54], and received early
industrial attention [60]. This comprehensive tutorial surveys the state of the art, includes
new results, and is applicable far beyond computer graphics, as for example
in financial mathematics and general radiation transport simulation.
1.1 Why Consistency matters most
Analytic solutions in light transport simulation are only available for problems too
simple to be of practical relevance, although some of these settings are useful in understanding
and testing algorithms [31]. In practical applications, functions are high-dimensional
and contain discontinuities that cannot be located efficiently. Therefore
approximate solutions are computed using numerical algorithms. In the following
paragraphs, we clarify the most important notions, as they are often confused, especially
in marketing.
Consistency
Numerical algorithms, whose approximation error vanishes as the sample size increases,
are called consistent. Note that consistency is not a statement with respect
to the speed of convergence. Within computer graphics, consistency guarantees image
synthesis without persistent artifacts such as discretization artifacts introduced
by a rendering algorithm; the results are consistent with the input model and in that
sense the notion of consistency is understandable without any mathematical background.
While many commercial implementations of rendering algorithms required
expert knowledge to tweak a big set of parameters until artifacts due to intermediate
approximations became invisible, the design of many recent rendering algorithms
follows the paradigm of consistency. As a result, users can concentrate on content
creation, because light transport simulation has become as simple as pushing the
“render”-button in an application.
Unbiased Monte Carlo Algorithms
The bias of an algorithm using random numbers is the difference between the mathematical
object and the expectation of the estimator of the mathematical object to be
approximated. If this difference is zero, the algorithm is called unbiased. However,
this property alone is not sufficient, because an estimator can be unbiased but not
consistent, thus even lacking convergence. In addition, biased but consistent algorithms
can handle problems that unbiased algorithms cannot handle: For example,
density estimation allows for efficiently handling the problem of “insufficient techniques”
(for the details see Sec. 4.4.1).
The theory of many unbiased Monte Carlo algorithms is based on independent
random sampling, which is used at the core of many proofs in probability theory
and allows for simple parallelization and for estimating the variance as a measure
of error.
Physically Based Modeling
Physically based modeling subsumes the creation of input for image synthesis algorithms,
where physical entities such as measured data for light sources and optical
properties of matter or analytic models thereof are used for the input specification.
Modeling with such entities and relying on consistent light transport simulation is,
to many users, much more natural compared to tweaking lights and materials in
order to deliver photorealistic results.
Although often confused in computer graphics, physically correct rendering is
not equivalent to unbiased Monte Carlo algorithms: Even non-photorealistic images
can be rendered using unbiased Monte Carlo algorithms. In addition, so far none of
the physically based algorithms can claim to comply with all the laws of physics,
because they are simply not able to efficiently simulate all effects of light transport
and therefore cannot be physically correct.
Deterministic Consistent Numerical Algorithms
While independence and unpredictability characterize random numbers, these properties
often are undesirable for computer simulations: Independence compromises
the speed of convergence and unpredictability disallows the exact repetition of a
computer simulation. Mimicking random numbers by pseudo-random numbers generated
by deterministic algorithms, computations become exactly repeatable, however,
arbitrarily jumping ahead in such sequences as required in scalable parallelization
often is inefficient due to the goal of emulating unpredictability.
In fact, deterministic algorithms can produce samples that approximate a given
distribution much better than random numbers can. By their deterministic nature,
such samples must be correlated and predictable. The lack of independence is not
an issue, because independence is not visible in an average anyhow and consistency
can be shown using number theoretic arguments instead of probabilistic ones. In
addition, partitioning such sets of samples and leaping in such sequences of samples
can be highly efficient.
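To make "partitioning" and "leaping" concrete, here is a minimal sketch using the base-2 van der Corput sequence, the one-dimensional building block of the Halton sequence (the function names are illustrative). Jumping to an arbitrary index i costs only the digit expansion of i, which is why both schemes parallelize without communication:

```python
def radical_inverse(i: int, base: int = 2) -> float:
    """van der Corput: mirror the base-b digits of i around the radix point."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def block_samples(worker: int, block_size: int) -> list:
    """Partitioning: each worker consumes one contiguous block of the sequence."""
    start = worker * block_size
    return [radical_inverse(i) for i in range(start, start + block_size)]

def leapfrog_samples(worker: int, num_workers: int, count: int) -> list:
    """Leaping: worker j consumes every num_workers-th sample, offset by j."""
    return [radical_inverse(worker + k * num_workers) for k in range(count)]
```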
As will be shown throughout the article, the advantages of such deterministic consistent
numerical algorithms are improved convergence, exact reproducibility, and
simple communication-avoiding parallelization. Besides rendering physically based
models, these methods also apply to rendering non-physical models that often are
chosen to access artistic freedom or to speed up the rendering process. The illustration in Fig. 1 provides some initial intuition of the concepts and facts discussed in
this section.


Fig. 1 Illustration of the difference between unbiased and deterministic consistent uniform sampling:
The top row shows four independent sets of 18 points each and their union as generated by
a pseudo-random number generator. The middle row shows independent realizations of so-called
stratified samples with their union that result from uniformly partitioning the domain and independently
sampling inside each resulting interval in order to increase uniformity. However, points
can come arbitrarily close together along interval boundaries and there is no guarantee for their
union to improve upon uniformity. The bottom row shows the union of four contiguous blocks of
18 points of the Halton sequence. As opposed to the pseudorandom number generator and stratified
sampling, the samples of the Halton sequence are more uniform, nicely complement each
other in the union, and provide a guaranteed minimum distance and intrinsic stratification along
the sequence.
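The stratified samples of the middle row can be sketched as follows (a minimal Python sketch; the square grid resolution is a simplifying assumption, since the 18-point sets of Fig. 1 would require a non-square partition):

```python
import numpy as np

def stratified_2d(res: int, rng: np.random.Generator) -> np.ndarray:
    """One independent uniform sample inside each cell of a res-by-res grid."""
    cells = np.stack(np.meshgrid(np.arange(res), np.arange(res)),
                     axis=-1).reshape(-1, 2)
    return (cells + rng.random(cells.shape)) / res  # jitter within each cell

samples = stratified_2d(4, np.random.default_rng(1))  # 16 stratified points
```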

2 Principles of Light Transport Simulation
Implementing the process of taking a photo on a computer involves the simulation of
light transport. This in turn requires a mathematical model of the world: A boundary
representation with attached optical properties describes the surfaces of the objects
to be visualized. Such a model may be augmented by the optical properties of volumes,
spectral properties, consideration of interference, and many more physical
phenomena. Once the optical properties of the camera system and the light sources
are provided, the problem specification is complete.
The principles of light transport simulation are well covered in classic textbooks
on computer graphics: Currently, [66] is the most up-to-date standard reference, [16]
is a classic reference available for free on the internet, and [70] can be considered


a primer and kick start. Recent research is well surveyed in [82, 22, 6] along with
profound investigations of numerical algorithms and their issues.
2.1 Light Transport along Paths
Light transport simulation consists of identifying all paths that connect cameras and
light sources and integrating their contribution to form the synthetic image. Fig. 2
illustrates the principles of exploring path space.
One way of generating light transport paths is to follow the trajectories of photons
emitted from the light sources along straight line segments between the interactions
with matter. However, no computational device can simulate a number of photons
sufficiently large to represent reality and hence the direct simulation often is not
efficient.
When applicable, light transport paths can be reversed due to the Helmholtz reciprocity
principle and trajectories can be traced starting from the camera sensor or
eye. Most efficient algorithms connect such camera and light path segments and
therefore are called bidirectional.
Vertices of paths can be connected by checking their mutual visibility with respect
to a straight line or by checking their mutual distance with respect to a suitable
metric. While checking the mutual visibility is precise, it does not allow for
efficiently simulating some important contributions of light caused by surfaces that
are highly specular and/or transmissive, which is known as the problem of insufficient
techniques [42]. In such cases, connecting paths by merging two vertices that
are sufficiently close helps. The resulting bias can be controlled by the maximum
distance allowed for merging vertices.

The interactions with matter need to be modeled: Bidirectional scattering distribution
functions (BSDFs) describe the properties of optical interfaces, while scattering
and absorption cross sections determine when to scatter in volume using the
distribution given by a phase function [66]. Similarly, the optical properties of the
light sources and sensors have to be mathematically modeled. For cameras, models
range from a simple pinhole to complete lenses allowing for the simulation of depth
of field and motion blur. Light sources often are characterized by so-called light
profiles. All these physical properties can be provided in measured form, too, which
in many cases provides quality superior to the current analytic models.
Beyond that, optical properties can be modeled as functions of wavelength across
the spectrum of light in order to overcome the restriction of the common approach
using only three selected wavelengths to represent red, green, and blue and to enable
dispersion and fluorescence. The simulation of effects due to polarization and the
wave character of light are possible to a certain extent, however, are subject to active
research.
While modeling with real entities is very intuitive, it must be noted that certain
violations of physics can greatly help the efficiency of rendering and/or help tell
stories at the cost of systematic errors.
2.2 Accelerated Ray Tracing and Visibility
The boundary of the scene often is stored as a directed acyclic graph, which allows
for referencing parts of the scene multiple times to instance them at multiple positions
in favor of a compact representation. Complex geometry, for example hair,
fur, foliage, or crowds, often is generated procedurally, in which case the call graph
implicitly represents the scene graph. Triangles, quadrangles, or multi-resolution
surfaces, which include subdivision surfaces, are the most common geometric primitives
used for boundary representation.
The vertices of a light transport path are connected by straight line segments.
First, these can be found by tracing rays from a point x into a direction ω to identify
the closest point of intersection h(x,ω) with the scene boundary. A second way to
construct paths is to connect two vertices x and y of two different path segments.
This can be accomplished by checking the mutual visibility V(x, y), which is zero
if the straight line of sight between the points x and y, a so-called shadow ray, is
occluded, one otherwise. As a third operation, two vertices can be merged, if their
distance with respect to a metric is less than a threshold. Efficient implementations
of the three operations all are based on hierarchical culling (see [39, 35] for a very
basic primer).
In order to accelerate ray tracing, the list of objects and/or space are recursively
partitioned. Given a ray to be traced, traversal is started from the root node descending
into a subtree, whenever the ray intersects this part of the scene. Most parts
of the scene thus are hierarchically culled and never touched. In case the cost of
the construction of such an auxiliary acceleration hierarchy can be amortized over

tracing many paths, it makes sense to store it partially or completely. Checking the
mutual visibility by a shadow ray is even more efficient, since the traversal can be
stopped upon any intersection with the boundary, while tracing a ray requires finding
the intersection closest to its origin.
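The difference between the two queries can be sketched as follows (a minimal Python sketch over an unstructured list of primitives; the acceleration hierarchy is omitted, and intersect is a hypothetical helper returning the ray parameter t of a hit, or None):

```python
def closest_hit(primitives, origin, direction, intersect):
    """Ray tracing: must examine every candidate to find the nearest hit."""
    best_t, best = float("inf"), None
    for prim in primitives:
        t = intersect(prim, origin, direction)
        if t is not None and t < best_t:
            best_t, best = t, prim
    return best

def occluded(primitives, origin, direction, t_max, intersect):
    """Shadow ray: any intersection before t_max suffices, so stop early."""
    for prim in primitives:
        t = intersect(prim, origin, direction)
        if t is not None and t < t_max:
            return True  # early termination: the closest hit is not needed
    return False
```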
Efficiently merging vertices follows the same principle of hierarchical culling
[35]: Given two sets of points in space, the points of the one set that are at a maximum
given distance from the points of the other set are found by hierarchically
subdividing space and pruning the search for partitions of space that cannot overlap
within the given distance.
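A minimal sketch of the same idea, using a uniform hash grid instead of the hierarchical subdivision of [35] (a common alternative): binning points into cells whose side equals the merge radius r guarantees that every partner within distance r of a query lies in the 3x3x3 block of neighboring cells.

```python
from collections import defaultdict
import math

def build_grid(points, r: float):
    """Bin 3D points into a dictionary keyed by integer cell coordinates."""
    grid = defaultdict(list)
    for p in points:
        cell = tuple(int(math.floor(c / r)) for c in p)
        grid[cell].append(p)
    return grid

def neighbors_within(grid, query, r: float):
    """All stored points within distance r of the query point."""
    cx, cy, cz = (int(math.floor(c / r)) for c in query)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for p in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if math.dist(p, query) <= r:
                        found.append(p)
    return found
```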
3 Principles of Quasi-Monte Carlo Integro-Approximation
Image synthesis can be considered an integro-approximation problem of the form

g(y) = \int_X f(x, y) \, d\mu(x) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} f(x_i, y),

where f(x,y) is the measurement contribution to a location y by a light transport
path identified by x. We will focus on deterministic linear algorithms [78] to consistently
determine the whole image function g for all pixels y using one low discrepancy
sequence x_i of deterministic sample points. The principles of such quasi-Monte
Carlo methods have been introduced to a wide audience in [55], which started a
series of MCQMC conferences, whose proceedings contain almost all recent developments
in quasi-Monte Carlo methods. Many of the results and developments are
summarized in recent books [72, 49, 10].
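A minimal sketch of the resulting deterministic linear algorithm (f and the pixel set are hypothetical stand-ins for the measurement contribution and the image plane; any low discrepancy generator, such as the Halton sketch earlier, can supply the points x_i):

```python
def render(f, pixels, samples):
    """Approximate g(y) for every pixel y by averaging f(x_i, y) over one
    shared sequence of low discrepancy sample points."""
    n = len(samples)
    return {y: sum(f(x, y) for x in samples) / n for y in pixels}
```

Note that all pixels reuse the same deterministic sequence, which is what makes the computation exactly reproducible.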
Before reviewing the algorithms to generate low discrepancy sequences in Sec. 3.1
and techniques resulting from their number theoretic construction in Sec. 3.2, error
bounds are discussed with respect to measures of uniformity.

Uniform Sampling, Stratification, and Discrete Density Approximation
A common way to generate a discrete approximation of a density comprises the
creation of uniformly distributed samples that are transformed [25, 9]. For many
such transformations, an improved uniformity results in a better discrete density
approximation. Measures of uniformity often follow from proofs of error bounds
(see the next paragraph) as a result of the attempt to bound the error by a product of
properties of the sampling points and the function as used in for example Thm. 1.
For the setting of computer graphics, where X is a domain of integration, B are the
Borel sets over X, and µ the Lebesgue measure, a practical measure of uniformity
is given by

D(\mathcal{B}, x_0, \dots, x_{n-1}) := \sup_{B \in \mathcal{B}} \left| \mu(B) - \frac{1}{n} \sum_{i=0}^{n-1} \chi_B(x_i) \right|,

where \chi_B is the characteristic function of the set B.
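For intuition, this supremum can be estimated by brute force when the sets B are restricted to axis-aligned boxes anchored at the origin of the unit square (a minimal Python sketch; restricting the box corners to the sample coordinates is a common heuristic rather than the exact supremum):

```python
import numpy as np

def star_discrepancy_estimate(points: np.ndarray) -> float:
    """Estimate the discrepancy of 2D points over boxes [0,a) x [0,b)."""
    n = len(points)
    worst = 0.0
    for a in points[:, 0]:
        for b in points[:, 1]:
            volume = a * b  # Lebesgue measure of the box in the unit square
            inside = np.sum((points[:, 0] < a) & (points[:, 1] < b))
            worst = max(worst, abs(volume - inside / n))
    return worst
```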