A quick note:

Implementing a function in Muli3D that samples a cube texture:

result CMuli3DCubeTexture::SampleTexture( vector4 &o_vColor, float32 i_fU,
    float32 i_fV, float32 i_fW, const vector4 *i_pXGradient,
    const vector4 *i_pYGradient, const uint32 *i_pSamplerStates )
{
    // Determine face and local u/v coordinates ...
    // source: http://developer.nvidia.com/object/cube_map_ogl_tutorial.html

    // major axis
    // direction     target                              sc     tc    ma
    // ----------    ---------------------------------   ---    ---   ---
    //  +rx          GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT   -rz    -ry   rx
    //  -rx          GL_TEXTURE_CUBE_MAP_NEGATIVE_X_EXT   +rz    -ry   rx
    //  +ry          GL_TEXTURE_CUBE_MAP_POSITIVE_Y_EXT   +rx    +rz   ry
    //  -ry          GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_EXT   +rx    -rz   ry
    //  +rz          GL_TEXTURE_CUBE_MAP_POSITIVE_Z_EXT   +rx    -ry   rz
    //  -rz          GL_TEXTURE_CUBE_MAP_NEGATIVE_Z_EXT   -rx    -ry   rz

    float32 fCU, fCV, fInvMag;
    m3dcubefaces Face;

    const float32 fAbsU = fabsf( i_fU );
    const float32 fAbsV = fabsf( i_fV );
    const float32 fAbsW = fabsf( i_fW );

    if( fAbsU >= fAbsV && fAbsU >= fAbsW )
    {
        if( i_fU >= 0.0f )
        {
            // major axis direction: +rx
            Face = m3dcf_positive_x;
            fCU = -i_fW; fCV = -i_fV; fInvMag = 1.0f / fAbsU;
        }
        else
        {
            // major axis direction: -rx
            Face = m3dcf_negative_x;
            fCU = i_fW; fCV = -i_fV; fInvMag = 1.0f / fAbsU;
        }
    }
    else if( fAbsV >= fAbsU && fAbsV >= fAbsW )
    {
        if( i_fV >= 0.0f )
        {
            // major axis direction: +ry
            Face = m3dcf_positive_y;
            fCU = i_fU; fCV = i_fW; fInvMag = 1.0f / fAbsV;
        }
        else
        {
            // major axis direction: -ry
            Face = m3dcf_negative_y;
            fCU = i_fU; fCV = -i_fW; fInvMag = 1.0f / fAbsV;
        }
    }
    else //if( fAbsW >= fAbsU && fAbsW >= fAbsV )
    {
        if( i_fW >= 0.0f )
        {
            // major axis direction: +rz
            Face = m3dcf_positive_z;
            fCU = i_fU; fCV = -i_fV; fInvMag = 1.0f / fAbsW;
        }
        else
        {
            // major axis direction: -rz
            Face = m3dcf_negative_z;
            fCU = -i_fU; fCV = -i_fV; fInvMag = 1.0f / fAbsW;
        }
    }

    // s = ( sc/|ma| + 1 ) / 2
    // t = ( tc/|ma| + 1 ) / 2
    fInvMag *= 0.5f;
    const float32 fU = /*fSaturate*/( fCU * fInvMag + 0.5f );
    const float32 fV = /*fSaturate*/( fCV * fInvMag + 0.5f );

    return m_ppCubeFaces[Face]->SampleTexture( o_vColor, fU, fV, 0,
        i_pXGradient, i_pYGradient, i_pSamplerStates );
}

The underlying principle is explained in this article: https://scalibq.wordpress.com/2013/06/23/cubemaps/

Cubemap addressing

A cubemap is a collection of 6 square 2D textures, each mapped to a face of a cube (hence the name). It can be visualized like this:

For a texture lookup, a 3D vector is used as a texture coordinate. Think of this vector as a ray from the center of the cube, going through one of the faces. The texture is sampled at the intersection of the ray with the face. Or more intuitively: the viewer is at the center of the cube, looking in a certain direction. The vector is that direction.

Cubemaps are often previewed in 2D in a cross arrangement, like this:

Here you can clearly see how the 6 faces encode an entire environment, for the full 360 degrees. On MSDN there is a nice overview of how the texture coordinates are mapped out:

As you can see, the cube is axis-aligned, so the faces correspond with the X, Y and Z axis, where each axis has a negative and a positive face in the cubemap (-x, +x, -y, +y, -z and +z respectively).

It does not explain how the texture coordinates are actually calculated, however. You can find that on an old nVidia page covering cubemaps (although it is OpenGL-oriented, so it uses (S,T) rather than (U,V) as texture coordinates). The texture mapping works in two stages:

1) The face is selected by looking at the absolute values of the components of the 3d vector (|x|, |y|, |z|). The component with the absolute value of the largest magnitude determines the major axis. The sign of the component selects the positive or negative direction.

2) The selected face is addressed as a regular 2D texture with U, V coordinates within a (0..1) range. The U and V are calculated from the two components that were not the major axis. So for example, if we have +x as our cubemap face, then Y and Z will be used to calculate U and V:

U = ((-Z/|X|) + 1)/2

V = ((-Y/|X|) + 1)/2

Since X had the largest magnitude, dividing Y and Z by |X| will bring them within a (-1..1) range. This is then translated to a (0..2) range, and finally scaled to (0..1) range. Now we can do a regular 2D texture lookup. You can work out the formulas for the other 5 faces by looking at the mappings above. You can see which axis maps to U and V in each case, and you can see the direction in which the axis goes, compared to U/V, to see whether you need to flip the sign or not.
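The two-stage lookup described above can be sketched as a standalone function. This is a minimal illustration with hypothetical names (`CubeFace`, `CubeMapUV`), not Muli3D's API; it follows the same sc/tc/ma table as the code at the top of this post:

```cpp
#include <cassert>
#include <cmath>

enum CubeFace { POS_X, NEG_X, POS_Y, NEG_Y, POS_Z, NEG_Z };

// Stage 1: pick the face from the component with the largest magnitude.
// Stage 2: map the two remaining components (sc, tc) into (0..1).
CubeFace CubeMapUV( float x, float y, float z, float &u, float &v )
{
    const float ax = std::fabs( x ), ay = std::fabs( y ), az = std::fabs( z );
    CubeFace face;
    float sc, tc, ma;

    if( ax >= ay && ax >= az )
    {
        ma = ax; face = ( x >= 0 ) ? POS_X : NEG_X;
        sc = ( x >= 0 ) ? -z : z; tc = -y;
    }
    else if( ay >= ax && ay >= az )
    {
        ma = ay; face = ( y >= 0 ) ? POS_Y : NEG_Y;
        sc = x; tc = ( y >= 0 ) ? z : -z;
    }
    else
    {
        ma = az; face = ( z >= 0 ) ? POS_Z : NEG_Z;
        sc = ( z >= 0 ) ? x : -x; tc = -y;
    }

    u = ( sc / ma + 1.0f ) * 0.5f;  // U = (sc/|ma| + 1) / 2
    v = ( tc / ma + 1.0f ) * 0.5f;  // V = (tc/|ma| + 1) / 2
    return face;
}
```

For example, the vector (2, -1, 1) has |x| as its largest component, so it selects the +x face, with U = ((-1/2) + 1)/2 = 0.25 and V = ((1/2) + 1)/2 = 0.75.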

Note that since the texture coordinates are scaled by the major axis, it is not required to normalize the 3D vector. Regardless of the length of the vector, the coordinates will always end up in (0..1) range. Normalizing has no effect, it is just another scaling operation, which is made redundant by the texture coordinate calculation. In fact, on early hardware, cubemaps were often used to normalize vectors for per-pixel operations (the cubemap would just contain a normalized vector encoding the lookup vector for each texel). This early hardware would either not be able to perform a normalization per-pixel at all, or a cubemap lookup may have been cheaper than an arithmetic solution.
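The scale-invariance claim is easy to check numerically. Using just the +x-face formulas from above (a tiny illustration, with a made-up helper name):

```cpp
#include <cassert>
#include <cmath>

// +x face: U = ((-z/|x|) + 1)/2, V = ((-y/|x|) + 1)/2.
// Because both coordinates are divided by |x|, scaling the input vector
// (including normalizing it) leaves U and V unchanged.
void PosXFaceUV( float x, float y, float z, float &u, float &v )
{
    u = ( -z / std::fabs( x ) + 1.0f ) * 0.5f;
    v = ( -y / std::fabs( x ) + 1.0f ) * 0.5f;
}
```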

Note also that the texture coordinates have to be calculated on a per-pixel basis. If you were to do this per-vertex, you might have the problem that not all three vertices will look at the same face, and therefore not all vertices will address the same 2D texture. The face can change at any point inside a triangle.

Dynamic cubemaps

There are other types of environment maps than cubemaps. A popular one is the spherical map:

This type of environment map was very popular in the early days, because it is very easy to calculate texture coordinates for it, without requiring special hardware support (and also quite efficient in software, very popular for faking phong shading in the early 90s). Namely, if you take a normalized vector (x,y,z) as your view direction, you can derive the texture coordinates like this:

U = (x + 1) / 2

V = (y + 1) / 2
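As a sketch, the sphere-map lookup is just these two lines; unlike the cubemap case, the input direction must be normalized first (function name is illustrative):

```cpp
// Sphere-map texture coordinates from a *normalized* view direction.
// The z component does not appear in the formulas; only the screen-facing
// x and y components are remapped from (-1..1) to (0..1).
void SphereMapUV( float x, float y, float &u, float &v )
{
    u = ( x + 1.0f ) * 0.5f;
    v = ( y + 1.0f ) * 0.5f;
}
```

Looking straight ahead, with direction (0, 0, 1), lands at (0.5, 0.5), the center of the map.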

However, the cubemap has some advantages over other environment mappings. As you can already see in this spherical image, the resolution is very poor towards the edges. A cubemap has a very uniform mapping of pixels in all directions.

Another advantage is that cubemaps can very easily be updated in realtime with render-to-texture. You can render an environment map by just doing a renderpass for each cube face.  Each face is square, and there are 4 faces going round horizontally in 360 degrees (front, right, back, left), so each face covers a 90 degree viewing angle (and then the extra 2 faces going round vertically, top and bottom). All you have to do is set up a camera positioned where you want the center of your environment map to be, facing in the right direction, with field-of-view of 90 degrees horizontally and vertically, and render to the respective face.
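The six passes can be sketched as a table of look/up directions plus a 90-degree projection. This is a generic outline (the struct and names are made up, not tied to any particular API); the look/up pairs follow the usual axis-aligned cubemap convention:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// One render pass per face: camera at the cubemap center, looking down
// each axis, with a matching up vector.
struct FacePass { Vec3 look; Vec3 up; };

static const FacePass g_FacePasses[6] = {
    { {  1.0f, 0.0f, 0.0f }, { 0.0f,  1.0f,  0.0f } },  // +x
    { { -1.0f, 0.0f, 0.0f }, { 0.0f,  1.0f,  0.0f } },  // -x
    { {  0.0f, 1.0f, 0.0f }, { 0.0f,  0.0f, -1.0f } },  // +y
    { {  0.0f,-1.0f, 0.0f }, { 0.0f,  0.0f,  1.0f } },  // -y
    { {  0.0f, 0.0f, 1.0f }, { 0.0f,  1.0f,  0.0f } },  // +z
    { {  0.0f, 0.0f,-1.0f }, { 0.0f,  1.0f,  0.0f } },  // -z
};

// With a 90-degree FOV, tan(fov/2) = 1, so the projection's focal length
// is exactly 1 and the four side faces tile the full 360-degree horizon.
float FocalLength( float fovRadians )
{
    return 1.0f / std::tan( fovRadians * 0.5f );
}
```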

For dynamically updating other types of maps, such as spherical maps, you would first need to render to a set of 2D textures, and then you would need an extra pass with a special mesh to compose them into a spherical map with the correct warping.

Previewing a cubemap in realtime

I wanted to create a 2D preview of the contents of a cubemap, in the usual cross arrangement. The first idea then would be to just create 6 quads for the cross geometry, with 2D textures on them. So, a cubemap consists of 2D textures. Or does it? Well, conceptually it does. But not all 3D APIs actually let you manipulate the individual faces as textures. That is the problem I ran into. In Direct3D 10+, there is no specific cubemap datatype. There is a generic texture array datatype, and if you create an array of 6 2D textures, it can be used as a cubemap. But it’s still an array of 2D textures, so you can still use the faces directly as regular 2D textures (or rendertargets).

In Direct3D 9 however, a cubemap is a specific type of texture. You can access the individual faces as surfaces, but not as textures. You could create a set of 2D textures of the same dimensions, and then copy each cubemap surface to a texture, but that will clearly have some extra overhead.

So instead I wanted to map the cubemap onto the cross directly. For that, I had to derive the proper 3D vectors at each vertex of the cross. Since we already know that they do not have to be normalized, it can be done within the range of (-1..1) for each vector component, which is more intuitive than having to deal with normalized vectors.

So, I assigned indices to each vertex in the layout, and worked out which vertices fit to where, and then derived the components of the view vector. The indices are in orange, the view vectors in blue:

Every view vector is shared by 3 faces. So we need to have the following sets of vertices that share the same vector:

  • 0, 2, 6
  • 1, 5
  • 7, 11, 12
  • 10, 13

The rest of the vertices are interior to the cross, and sharing is done automatically.

Normally my vertex formats would only have 2D texture coordinates. If I were to store the view vector as texture coordinates, I would need to define a new vertex format. However, since the per-vertex normal has no meaning for this preview mesh, I decided to just encode the view vector in the normal vector of each vertex instead. This way I would not need any custom vertex format for a cubemap preview. I would just need a pixel shader that sampled the cubemap with the normal vector.

For the positions, I decided to put the cross inside a unit square. The width has a (-0.5..0.5) range. The height has a (-0.375..0.375) range, since it is 4 faces wide but only 3 faces high. This gives me the following vertices (this is the list I was hoping to find and just copy-paste into my own code):

vertexBuffer[] = {
    { -0.25f,  0.375f, 0.0f, -1,  1, -1 },
    {   0.0f,  0.375f, 0.0f,  1,  1, -1 },
    {  -0.5f,  0.125f, 0.0f, -1,  1, -1 },
    { -0.25f,  0.125f, 0.0f, -1,  1,  1 },
    {   0.0f,  0.125f, 0.0f,  1,  1,  1 },
    {  0.25f,  0.125f, 0.0f,  1,  1, -1 },
    {   0.5f,  0.125f, 0.0f, -1,  1, -1 },
    {  -0.5f, -0.125f, 0.0f, -1, -1, -1 },
    { -0.25f, -0.125f, 0.0f, -1, -1,  1 },
    {   0.0f, -0.125f, 0.0f,  1, -1,  1 },
    {  0.25f, -0.125f, 0.0f,  1, -1, -1 },
    {   0.5f, -0.125f, 0.0f, -1, -1, -1 },
    { -0.25f, -0.375f, 0.0f, -1, -1, -1 },
    {   0.0f, -0.375f, 0.0f,  1, -1, -1 }
};

And with that mesh, I can finally preview my cubemaps directly. It is an elegant solution as well, compared to having to use 6 separate textures (which means you need to either abuse multitexture heavily, or render each plane with a separate call, because you have to switch textures). And it works for all 3D APIs, since you are actually using the cubemap itself, and there is no need for a workaround copying the texture to a separate 2D texture.
