Table of Contents

  • 1. Introduction
  • 2. Depth Map Display
  • 3. Depth Occlusion
    • 3.1 Processing Flow
    • 3.2 Code
  • 4. Conclusion

1. Introduction

ARCore's depth rendering consists of two parts: the first is depth map display, and the second is depth occlusion (i.e., making real objects occlude virtual ones). This article analyzes both features.

2. Depth Map Display

When depth map display is enabled, the depth of the entire field of view is visualized on screen: regions that are farther away have larger depth values and are shown in red, otherwise in blue. The steps are fairly simple:

1) ARCore acquires the environment depth map and passes it to the shader as "_CurrentDepthTexture". Because only part of the texture region is valid for sampling, the _UvTopLeftRight and _UvBottomLeftRight corner values are passed in as well; interpolating between them yields the actual texture UV:

inline float2 ArCoreDepth_GetUv(float2 uv)
{
    float2 uvTop = lerp(_UvTopLeftRight.xy, _UvTopLeftRight.zw, uv.x);
    float2 uvBottom = lerp(_UvBottomLeftRight.xy, _UvBottomLeftRight.zw, uv.x);
    return lerp(uvTop, uvBottom, uv.y);
}
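
As a concrete example, using the default corner values declared in the occlusion shader later in this article (_UvTopLeftRight = (0, 1, 1, 1), _UvBottomLeftRight = (0, 0, 1, 0)): a screen UV of (0, 0) maps to texture UV (0, 1), and (1, 1) maps to (1, 0) — a vertical flip. In general, the four corners encode whatever rotation or crop the current screen orientation requires.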

2) In the fragment shader, the current pixel's depth is computed from the UV and the depth texture, as shown below. Since the computation follows ARCore's internal depth encoding, it is not analyzed in further detail here:

inline float ArCoreDepth_GetMeters(float2 uv)
{
    // The depth texture uses TextureFormat.RGB565.
    float4 rawDepth = tex2Dlod(_CurrentDepthTexture, float4(uv, 0, 0));
    float depth = (rawDepth.r * ARCORE_FLOAT_TO_5BITS * ARCORE_RGB565_RED_SHIFT)
                + (rawDepth.g * ARCORE_FLOAT_TO_6BITS * ARCORE_RGB565_GREEN_SHIFT)
                + (rawDepth.b * ARCORE_FLOAT_TO_5BITS);
    depth = min(depth, ARCORE_MAX_DEPTH_MM);
    depth *= ARCORE_DEPTH_SCALE;
    return depth;
}
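
In effect, the function reconstructs a 16-bit millimeter depth value from the 5-6-5 bit RGB channels (the shift constants move the red and green channels up to their bit positions), clamps it to ARCORE_MAX_DEPTH_MM, and scales the result from millimeters to meters.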

3) With the depth value in hand, sample a 256×1 color ramp texture: the larger the depth, the redder the sampled color. (The fragment shader divides the depth by 3.0 before sampling, so the ramp effectively spans depths of about 0–3 m.)

Shader "ARCore/EAP/Camera Color Ramp Shader"
{Properties{_ColorRamp("Color Ramp", 2D) = "white" {}}SubShader{// No culling or depthCull OffZWrite OnZTest LEqualTags { "Queue" = "Background+1" }Pass{CGPROGRAM#pragma vertex vert#pragma fragment frag#include "UnityCG.cginc"#include "../../../../SDK/Materials/ARCoreDepth.cginc"struct appdata{float4 vertex : POSITION;float2 uv : TEXCOORD0;};struct v2f{float2 uv : TEXCOORD0;float4 vertex : SV_POSITION;};sampler2D _ColorRamp;// Vertex shader that scales the quad to full screen.v2f vert(appdata v){v2f o;o.vertex = float4(v.vertex.x * 2.0f, v.vertex.y * 2.0f, 1.0f, 1.0f);o.uv = ArCoreDepth_GetUv(v.uv);return o;}// This shader displays the depth buffer data as a color ramp overlay// for use in debugging.float4 frag(v2f i) : SV_Target{// Unpack depth texture value.float d = ArCoreDepth_GetMeters(i.uv);// Zero means no raw data available, render black.if (d == 0.0f){return float4(0, 0, 0, 1);}// Use depth as an index into the color ramp texture.return tex2D(_ColorRamp, float2(d / 3.0f, 0.0f));}ENDCG}}
}

How is the depth map drawn in front of everything else and made to fill the screen?
ARCore relies on Unity's built-in quad mesh (unit size, bottom-left vertex at (-0.5, -0.5), top-right vertex at (0.5, 0.5)). The vertex shader maps it directly into clip space: with the depth fixed so that z/w = 1, the x and y values are scaled into the range (-1, 1), so the quad is drawn in front and covers the entire screen. For example, the vertex (0.5, 0.5) maps to the clip position (1, 1, 1, 1). The relevant code:

 o.vertex = float4(v.vertex.x * 2.0f, v.vertex.y * 2.0f, 1.0f, 1.0f);

You can experiment in the shader by changing z to 0.5, or by leaving z unchanged and setting w to 2, and observing the effect (see the sketch after this paragraph).
Normally, something drawn at the very front should have a depth value of 0, yet here z/w is 1. The exact reason is unclear; it is probably related to depth precision and the 1/z conversion. (One plausible factor: on many platforms, such as D3D, Metal, and Vulkan, Unity uses a reversed depth buffer, in which a depth of 1 corresponds to the near plane.)
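
A quick sketch of those two experiments (the exact scale factors here are my own, not from the SDK):

    // z/w = 0.5 by changing z directly:
    o.vertex = float4(v.vertex.x * 2.0f, v.vertex.y * 2.0f, 0.5f, 1.0f);

    // z/w = 0.5 by changing w instead; x and y are also divided by w in the
    // perspective divide, so they need an extra factor of 2 to keep the quad
    // covering the full screen:
    o.vertex = float4(v.vertex.x * 4.0f, v.vertex.y * 4.0f, 1.0f, 2.0f);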

3. Depth Occlusion

Depth occlusion is more involved. The basic idea is to obtain two depth maps — one from the Unity-rendered scene and one from the real environment — and compute their difference. A post-processing step (OnRenderImage) then uses the result to decide, per pixel, whether to show the virtual rendering or the camera-feed background.

3.1 Processing Flow

The concrete steps are as follows:
1) Before opaque objects are rendered (CameraEvent.BeforeForwardOpaque), capture the camera-feed background using the background renderer's material:

    m_BackgroundRenderer = FindObjectOfType<ARCoreBackgroundRenderer>();
    if (m_BackgroundRenderer == null)
    {
        Debug.LogError("BackgroundTextureProvider requires ARCoreBackgroundRenderer " +
                       "anywhere in the scene.");
        return;
    }

    m_BackgroundBuffer = new CommandBuffer();
    m_BackgroundBuffer.name = "Camera texture";
    m_BackgroundTextureID = Shader.PropertyToID(BackgroundTexturePropertyName);
    m_BackgroundBuffer.GetTemporaryRT(m_BackgroundTextureID,
        /*width=*/ -1, /*height=*/ -1,
        /*depthBuffer=*/ 0, FilterMode.Bilinear);

    var material = m_BackgroundRenderer.BackgroundMaterial;
    if (material != null)
    {
        m_BackgroundBuffer.Blit(material.mainTexture, m_BackgroundTextureID, material);
    }

    m_BackgroundBuffer.SetGlobalTexture(BackgroundTexturePropertyName, m_BackgroundTextureID);
    m_Camera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, m_BackgroundBuffer);
    m_Camera.AddCommandBuffer(CameraEvent.BeforeGBuffer, m_BackgroundBuffer);

The command buffer is registered for both BeforeForwardOpaque and BeforeGBuffer so that it works under either the forward or the deferred rendering path; only the event matching the active path actually fires. For ARCore background rendering itself, see the earlier article on ARCore background rendering.
2) In Update(), refresh the real-scene depth map (Frame.CameraImage.UpdateDepthTexture(ref m_DepthTexture)). Then, once opaque objects have finished rendering, a CommandBuffer reads the virtual scene's depth from _CameraDepthTexture, processes it, and stores the result in the texture's alpha channel for the next step. The code below also nominally blurs the occlusion map, but as written it merely rebinds the un-blurred map under the name _OcclusionMapBlurred, so the blur is limited or, effectively, absent.

    m_Camera = Camera.main;
    m_Camera.depthTextureMode |= DepthTextureMode.Depth;

    m_DepthBuffer = new CommandBuffer();
    m_DepthBuffer.name = "Auxilary occlusion textures";

    // Creates the occlusion map.
    int occlusionMapTextureID = Shader.PropertyToID("_OcclusionMap");
    m_DepthBuffer.GetTemporaryRT(occlusionMapTextureID, -1, -1, 0, FilterMode.Bilinear);

    // Pass #0 renders an auxilary buffer - occlusion map that indicates the
    // regions of virtual objects that are behind real geometry.
    m_DepthBuffer.Blit(BuiltinRenderTextureType.CameraTarget,
        occlusionMapTextureID, m_DepthMaterial, /*pass=*/ 0);

    // Blurs the occlusion map.
    m_DepthBuffer.SetGlobalTexture("_OcclusionMapBlurred", occlusionMapTextureID);

    m_Camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, m_DepthBuffer);
    m_Camera.AddCommandBuffer(CameraEvent.AfterGBuffer, m_DepthBuffer);

Pass 0 of the OcclusionImageEffect shader does the depth processing. It first samples the real depth and the virtual depth, then computes an occlusionAlpha value: when the two depths differ significantly and the real depth is the smaller one, occlusionAlpha is 1; in the opposite case it is 0; and when the two are very close it falls somewhere between 0 and 1.

    float occlusionAlpha =
        1.0 - saturate(0.5 * (depthMeters - virtualDepth) /
                       (_TransitionSizeMeters * virtualDepth) + 0.5);
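
For example, with _TransitionSizeMeters = 0.1 and virtualDepth = 1.0 m: a real depth of 2.0 m gives saturate(0.5 * 1.0 / 0.1 + 0.5) = 1, so occlusionAlpha = 0 and the virtual pixel stays visible; a real depth of 0.5 m gives saturate(0.5 * (-0.5) / 0.1 + 0.5) = 0, so occlusionAlpha = 1 and the pixel is occluded; equal depths give occlusionAlpha = 0.5, a soft transition.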

3) In post-processing (OnRenderImage), the occlusionAlpha computed in step 2 determines whether each virtual pixel is displayed.

3.2 Code

The C# implementation (DepthEffect) is shown below:

    // Required namespaces for CommandBuffer, ARCore types, and Unity engine types.
    using GoogleARCore;
    using UnityEngine;
    using UnityEngine.Rendering;

    [RequireComponent(typeof(Camera))]
    public class DepthEffect : MonoBehaviour
    {
        /// <summary>
        /// The global shader property name for the camera texture.
        /// </summary>
        public const string BackgroundTexturePropertyName = "_BackgroundTexture";

        /// <summary>
        /// The image effect shader to blit every frame with.
        /// </summary>
        public Shader OcclusionShader;

        /// <summary>
        /// The blur kernel size applied to the camera feed. In pixels.
        /// </summary>
        [Space]
        public float BlurSize = 20f;

        /// <summary>
        /// The number of times occlusion map is downsampled before blurring. Useful for
        /// performance optimization. The value of 1 means no downsampling, each next one
        /// downsamples by 2.
        /// </summary>
        public int BlurDownsample = 2;

        /// <summary>
        /// Maximum occlusion transparency. The value of 1.0 means completely invisible when
        /// occluded.
        /// </summary>
        [Range(0, 1)]
        public float OcclusionTransparency = 1.0f;

        /// <summary>
        /// The bias added to the estimated depth. Useful to avoid occlusion of objects anchored
        /// to planes. In meters.
        /// </summary>
        [Space]
        public float OcclusionOffset = 0.08f;

        /// <summary>
        /// Velocity occlusions effect fades in/out when being enabled/disabled.
        /// </summary>
        public float OcclusionFadeVelocity = 4.0f;

        /// <summary>
        /// Instead of a hard z-buffer test, allows the asset to fade into the background
        /// gradually. The parameter is unitless, it is a fraction of the distance between the
        /// camera and the virtual object where blending is applied.
        /// </summary>
        public float TransitionSize = 0.1f;

        private static readonly string k_CurrentDepthTexturePropertyName = "_CurrentDepthTexture";
        private static readonly string k_TopLeftRightPropertyName = "_UvTopLeftRight";
        private static readonly string k_BottomLeftRightPropertyName = "_UvBottomLeftRight";

        private Camera m_Camera;
        private Material m_DepthMaterial;
        private Texture2D m_DepthTexture;
        private float m_CurrentOcclusionTransparency = 1.0f;
        private ARCoreBackgroundRenderer m_BackgroundRenderer;
        private CommandBuffer m_DepthBuffer;
        private CommandBuffer m_BackgroundBuffer;
        private int m_BackgroundTextureID = -1;

        /// <summary>
        /// Unity's Awake() method.
        /// </summary>
        public void Awake()
        {
            m_CurrentOcclusionTransparency = OcclusionTransparency;

            Debug.Assert(OcclusionShader != null, "Occlusion Shader parameter must be set.");
            m_DepthMaterial = new Material(OcclusionShader);
            m_DepthMaterial.SetFloat("_OcclusionTransparency", m_CurrentOcclusionTransparency);
            m_DepthMaterial.SetFloat("_OcclusionOffsetMeters", OcclusionOffset);
            m_DepthMaterial.SetFloat("_TransitionSize", TransitionSize);

            // Default texture, will be updated each frame.
            m_DepthTexture = new Texture2D(2, 2);
            m_DepthTexture.filterMode = FilterMode.Bilinear;
            m_DepthMaterial.SetTexture(k_CurrentDepthTexturePropertyName, m_DepthTexture);

            m_Camera = Camera.main;
            m_Camera.depthTextureMode |= DepthTextureMode.Depth;

            m_DepthBuffer = new CommandBuffer();
            m_DepthBuffer.name = "Auxilary occlusion textures";

            // Creates the occlusion map.
            int occlusionMapTextureID = Shader.PropertyToID("_OcclusionMap");
            m_DepthBuffer.GetTemporaryRT(occlusionMapTextureID, -1, -1, 0, FilterMode.Bilinear);

            // Pass #0 renders an auxilary buffer - occlusion map that indicates the
            // regions of virtual objects that are behind real geometry.
            m_DepthBuffer.Blit(BuiltinRenderTextureType.CameraTarget,
                occlusionMapTextureID, m_DepthMaterial, /*pass=*/ 0);

            // Blurs the occlusion map.
            m_DepthBuffer.SetGlobalTexture("_OcclusionMapBlurred", occlusionMapTextureID);

            m_Camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, m_DepthBuffer);
            m_Camera.AddCommandBuffer(CameraEvent.AfterGBuffer, m_DepthBuffer);

            m_BackgroundRenderer = FindObjectOfType<ARCoreBackgroundRenderer>();
            if (m_BackgroundRenderer == null)
            {
                Debug.LogError("BackgroundTextureProvider requires ARCoreBackgroundRenderer " +
                               "anywhere in the scene.");
                return;
            }

            m_BackgroundBuffer = new CommandBuffer();
            m_BackgroundBuffer.name = "Camera texture";
            m_BackgroundTextureID = Shader.PropertyToID(BackgroundTexturePropertyName);
            m_BackgroundBuffer.GetTemporaryRT(m_BackgroundTextureID,
                /*width=*/ -1, /*height=*/ -1,
                /*depthBuffer=*/ 0, FilterMode.Bilinear);

            var material = m_BackgroundRenderer.BackgroundMaterial;
            if (material != null)
            {
                m_BackgroundBuffer.Blit(material.mainTexture, m_BackgroundTextureID, material);
            }

            m_BackgroundBuffer.SetGlobalTexture(BackgroundTexturePropertyName, m_BackgroundTextureID);
            m_Camera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, m_BackgroundBuffer);
            m_Camera.AddCommandBuffer(CameraEvent.BeforeGBuffer, m_BackgroundBuffer);
        }

        /// <summary>
        /// Unity's Update() method.
        /// </summary>
        public void Update()
        {
            m_CurrentOcclusionTransparency +=
                (OcclusionTransparency - m_CurrentOcclusionTransparency) *
                Time.deltaTime * OcclusionFadeVelocity;

            m_CurrentOcclusionTransparency =
                Mathf.Clamp(m_CurrentOcclusionTransparency, 0.0f, OcclusionTransparency);
            m_DepthMaterial.SetFloat("_OcclusionTransparency", m_CurrentOcclusionTransparency);
            m_DepthMaterial.SetFloat("_TransitionSize", TransitionSize);
            Shader.SetGlobalFloat("_BlurSize", BlurSize / BlurDownsample);

            // Gets the latest depth map from ARCore.
            Frame.CameraImage.UpdateDepthTexture(ref m_DepthTexture);

            // Updates the screen orientation for each material.
            _UpdateScreenOrientationOnMaterial();
        }

        /// <summary>
        /// Unity's OnEnable() method.
        /// </summary>
        public void OnEnable()
        {
            if (m_DepthBuffer != null)
            {
                m_Camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, m_DepthBuffer);
                m_Camera.AddCommandBuffer(CameraEvent.AfterGBuffer, m_DepthBuffer);
            }

            if (m_BackgroundBuffer != null)
            {
                m_Camera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, m_BackgroundBuffer);
                m_Camera.AddCommandBuffer(CameraEvent.BeforeGBuffer, m_BackgroundBuffer);
            }
        }

        /// <summary>
        /// Unity's OnDisable() method.
        /// </summary>
        public void OnDisable()
        {
            if (m_DepthBuffer != null)
            {
                m_Camera.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, m_DepthBuffer);
                m_Camera.RemoveCommandBuffer(CameraEvent.AfterGBuffer, m_DepthBuffer);
            }

            if (m_BackgroundBuffer != null)
            {
                m_Camera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, m_BackgroundBuffer);
                m_Camera.RemoveCommandBuffer(CameraEvent.BeforeGBuffer, m_BackgroundBuffer);
            }
        }

        private void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Only render the image when tracking.
            if (Session.Status != SessionStatus.Tracking)
            {
                return;
            }

            // Pass #1 combines virtual and real cameras based on the occlusion map.
            Graphics.Blit(source, destination, m_DepthMaterial, /*pass=*/ 1);
        }

        /// <summary>
        /// Updates the screen orientation of the depth map.
        /// </summary>
        private void _UpdateScreenOrientationOnMaterial()
        {
            var uvQuad = Frame.CameraImage.TextureDisplayUvs;
            m_DepthMaterial.SetVector(
                k_TopLeftRightPropertyName,
                new Vector4(uvQuad.TopLeft.x, uvQuad.TopLeft.y,
                    uvQuad.TopRight.x, uvQuad.TopRight.y));
            m_DepthMaterial.SetVector(
                k_BottomLeftRightPropertyName,
                new Vector4(uvQuad.BottomLeft.x, uvQuad.BottomLeft.y,
                    uvQuad.BottomRight.x, uvQuad.BottomRight.y));
        }
    }

The shader is OcclusionImageEffect:

Shader "Hidden/OcclusionImageEffect"
{
    Properties
    {
        _MainTex ("Main Texture", 2D) = "white" {}  // Depth texture.
        _UvTopLeftRight ("UV of top corners", Vector) = (0, 1, 1, 1)
        _UvBottomLeftRight ("UV of bottom corners", Vector) = (0, 0, 1, 0)
        _OcclusionTransparency ("Maximum occlusion transparency", Range(0, 1)) = 1
        _OcclusionOffsetMeters ("Occlusion offset [meters]", Float) = 0
        _TransitionSizeMeters ("Transition size [meters]", Float) = 0.05
    }
    SubShader
    {
        Cull Off ZWrite Off ZTest Always

        CGINCLUDE
        #include "UnityCG.cginc"

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv : TEXCOORD0;
        };

        struct v2f
        {
            float2 uv : TEXCOORD0;
            float4 vertex : SV_POSITION;
        };

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = v.uv;
            return o;
        }
        ENDCG

        // Pass #0 renders an auxilary buffer - occlusion map that indicates the
        // regions of virtual objects that are behind real geometry.
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "../../../../SDK/Materials/ARCoreDepth.cginc"

            sampler2D _CameraDepthTexture;
            sampler2D _BackgroundTexture;
            bool _UseDepthFromPlanes;
            float _TransitionSizeMeters;

            fixed4 frag (v2f i) : SV_Target
            {
                float depthMeters = 0.0;
                if (_UseDepthFromPlanes)
                {
                    depthMeters = tex2Dlod(_CurrentDepthTexture, float4(i.uv, 0, 0)).r
                        * ARCORE_MAX_DEPTH_MM;
                    depthMeters *= ARCORE_DEPTH_SCALE;
                }
                else
                {
                    float2 depthUv = ArCoreDepth_GetUv(i.uv);
                    depthMeters = ArCoreDepth_GetMeters(depthUv);
                }

                float virtualDepth =
                    LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv)) -
                    _OcclusionOffsetMeters;

                // Far plane minus near plane.
                float maxVirtualDepth =
                    _ProjectionParams.z - _ProjectionParams.y;

                float occlusionAlpha =
                    1.0 - saturate(0.5 * (depthMeters - virtualDepth) /
                                   (_TransitionSizeMeters * virtualDepth) + 0.5);

                // Masks out only the fragments with virtual objects.
                occlusionAlpha *= saturate(maxVirtualDepth - virtualDepth);

                // At this point occlusionAlpha is equal to 1.0 only for fully
                // occluded regions of the virtual objects.
                fixed4 background = tex2D(_BackgroundTexture, i.uv);
                return fixed4(background.rgb, occlusionAlpha);
            }
            ENDCG
        }

        // Pass #1 combines virtual and real cameras based on the occlusion map.
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            sampler2D _MainTex;
            sampler2D _OcclusionMapBlurred;
            sampler2D _BackgroundTexture;
            fixed _OcclusionTransparency;

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 input = tex2D(_MainTex, i.uv);
                fixed4 background = tex2D(_BackgroundTexture, i.uv);
                fixed4 occlusionBlurred = tex2D(_OcclusionMapBlurred, i.uv);
                float objectMask = occlusionBlurred.a;

                // The virtual object mask is blurred, we make the falloff
                // steeper to simulate erosion operator. This is needed to make
                // the fully occluded virtual object invisible.
                float objectMaskEroded = pow(objectMask, 10);

                // occlusionTransition equal to 1 means fully occluded object.
                // This operation boosts occlusion near the edges of the virtual
                // object, but does not affect occlusion within the object.
                float occlusionTransition =
                    saturate(occlusionBlurred.a * (2.0 - objectMaskEroded));

                // Clips occlusion if we want to partially show occluded object.
                occlusionTransition = min(occlusionTransition, _OcclusionTransparency);

                return lerp(input, background, occlusionTransition);
            }
            ENDCG
        }
    }
}

4. Conclusion

1) The processing flow is fairly complex; for simple use cases there is still room for optimization.
2) It demonstrates a translucency effect that does not require blending (Blend): CommandBuffers capture the frame at different stages, and the images are then interpolated according to an alpha value.
3) As for 3D occlusion: full 3D mesh reconstruction solves it naturally, but the reconstructed mesh then has to be rendered, which raises performance concerns. One approach is to reconstruct the environment mesh but draw it the way Unity's ShadowCaster pass does — writing only depth, rendering no color — so that the depth test hides occluded virtual objects automatically. Equivalently, render the mesh with ColorMask 0, which writes depth only and outputs no color (a sketch follows below).
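
A minimal depth-only occluder along those lines (the shader name is made up for illustration; this is a sketch, not part of the ARCore SDK). Assigned to the reconstructed environment mesh, it fills the depth buffer without drawing any color, so virtual objects behind real geometry fail the depth test and disappear while the camera feed stays visible:

    Shader "Custom/DepthOnlyOccluder"
    {
        SubShader
        {
            // Draw before ordinary opaque geometry so depth is in place first.
            Tags { "Queue" = "Geometry-10" }

            Pass
            {
                ZWrite On     // write depth as usual
                ColorMask 0   // but output no color at all
            }
        }
    }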
