How I implemented my own Augmented Reality Beauty Mode

I was looking for a personal project to keep me busy during lockdown and decided to have a crack at implementing a Beauty Mode similar to Snapchat's and TikTok's, that is, a skin-smoothing filter that runs in real time on video. It was more challenging than I anticipated; here's a breakdown of how I did it.

How do professionals do it?

‘Improving’ skin in Photoshop is common knowledge nowadays and it's quite easy to find resources on the topic. There are surprisingly many different techniques to make skin look better, and most share a common foundation: frequency separation.

Original image

What we are trying to achieve is to remove the various blemishes from the base picture above, the small variations in shape or colour on the skin, while keeping the overall look similar. Separating the small variations from the large ones can be achieved by separating the high-frequency signal in the picture, which contains the blemishes, from the low-frequency signal, the smooth skin.

To obtain the so-called high pass, we first apply a Gaussian blur with a radius just large enough to make the undesired small imperfections disappear (e.g. 10 pixels).

Original image with Gaussian blur

Then, we subtract the blurred image from the original one and add 0.5 grey (128 at 8-bit colour depth); that's our high pass. In Gimp, the grain extract layer mode does exactly that.
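As a sketch of that operation on normalised [0, 1] channel values (illustrative Python, not the shader code), the grain extract is just a clamped per-channel subtraction re-centred on 0.5 grey:

```python
def grain_extract(original, blurred):
    """High pass: original minus blurred, re-centred on 0.5 grey.

    Both inputs are flat lists of normalised channel values in [0, 1];
    the result is clamped back into that range.
    """
    return [min(1.0, max(0.0, o - b + 0.5)) for o, b in zip(original, blurred)]
```

Wherever the blurred image agrees with the original (smooth areas), the high pass sits at neutral 0.5 grey; blemishes show up as deviations from it.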

High pass

For the next step, we want to somehow attenuate the details from the high pass on the original image through some layering mode. If we overlay the high pass on the original image, we get sharper skin with even more visible imperfections, the opposite of what we want.

High pass overlaid on original image

To get the desired result, we simply invert the high-pass colours; now we get nice smooth skin!

Inverted high pass overlaid on original image

Although… we also smoothed parts of the image without skin.

Here’s a GLSL snippet to implement the overlay layering mode:

```glsl
vec4 overlay(vec4 a, vec4 b) {
    vec4 x = vec4(2.0) * a * b;
    vec4 y = vec4(1.0) - vec4(2.0) * (vec4(1.0) - a) * (vec4(1.0) - b);
    vec4 result;
    result.r = mix(x.r, y.r, float(a.r > 0.5));
    result.g = mix(x.g, y.g, float(a.g > 0.5));
    result.b = mix(x.b, y.b, float(a.b > 0.5));
    result.a = mix(x.a, y.a, float(a.a > 0.5));
    return result;
}
```
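For reference, here is the same per-channel logic as a scalar Python sketch (illustrative only); it also shows why a 0.5 grey layer leaves the base image untouched:

```python
def overlay(a, b):
    """Overlay blend of layer value b on base value a, both in [0, 1].

    Below mid-grey the base is darkened (multiply-like); above it,
    lightened (screen-like). A layer value of 0.5 returns a unchanged.
    """
    if a <= 0.5:
        return 2.0 * a * b
    return 1.0 - 2.0 * (1.0 - a) * (1.0 - b)
```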

How do we only smooth the skin?

For Photoshop work, an artist would then use brush tools to create a mask, applying the smooth skin only where desired and keeping sharp details elsewhere: eyes, hair, mouth, etc. In our filter, we can't do that easily.

Luckily, there is a type of blur that can preserve sharp edges such as eyes and hair: it's called Surface Blur in Photoshop, Selective Blur in Gimp and the Bilateral Filter in the literature. Applying it to our original image by itself gives a result close to what we want out of the box:

Bilateral filter on original image

Let’s use the same frequency separation technique as above and simply swap the Gaussian blur with the Bilateral Filter:

Frequency separation with bilateral filter

Pretty good! We can mix the raw bilateral-filtered image with the result of the frequency separation process to balance skin smoothness against detail preservation. In the image above, I bring 10% of the former back.
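That blend is a plain linear interpolation, sketched below in Python (the function names are illustrative, not from the actual filter code):

```python
def mix(a, b, t):
    # Linear interpolation with GLSL mix() semantics: t=0 gives a, t=1 gives b.
    return a * (1.0 - t) + b * t

def balance(freq_sep_value, bilateral_value, amount=0.1):
    # Bring `amount` of the raw bilateral-filtered value back into the
    # frequency separation result (applied per channel).
    return mix(freq_sep_value, bilateral_value, amount)
```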

Meh.. can we do better?

Of course we can! Despite all our efforts, while the skin looks pretty good, we've lost quite a few details in other areas that we'd like to keep. Remember, in Photoshop an artist would create a mask to apply the effect selectively.

Let's try to generate a mask of the skin using some heuristics based on its colour.

CIELAB colour space

For that we’re going to use the CIELAB colour space. It’s designed to take into account how humans perceive colours and has convenient properties for what we’re trying to do.

Consider the 3 axes of the colour space, normalised to the range [0, 1]. The L axis defines luminance: darker towards 0, lighter towards 1. The a axis represents the amount of red (1) or green (0), and the b axis the amount of yellow (1) or blue (0).

Our goal is to mask out pixels with colours that are unlikely to represent skin. We don't want to be too strict on a video or photo of the real world, as a number of factors will affect the final colour of the pixels: lighting conditions, lens, exposure and post-processing from the camera. In particular, indoor lighting is generally not neutral and will shift the hue of perceived colours quite a lot.

So rather than aiming to isolate skin colour exactly, we can use some heuristics to reduce the range of colours we're going to pick, based on the probability of a colour representing some skin.

The possible range of human skin colours is well understood: any possible skin tone has a mix of red and yellow tints. That is, these tones are reflected by the skin, while the tones opposite on the colour wheel are essentially absorbed.

If we consider the plane defined by the a and b axes in Lab, there is clearly little chance that skin will have a significant blue or green tint. We can discard colours with small a values and small b values. To get a smooth mask, we can use a smoothstep operator over a small range around 0.5, the middle of the colour axes.

Let's now consider the other boundary of the a and b axes: values close to 1 are the most saturated yellows and reds. Skin doesn't reflect that much light (it has a moderate albedo), so we can discard the highest values with a reversed smoothstep over the range [0.9, 1.0]. For the same reason, we can attenuate the highest luminance values on the L axis with a reversed smoothstep over the range [0.98, 1.02].

Putting this all together, we get the following GLSL snippet:

```glsl
float skin_mask(in vec4 color) {
    vec3 lab = rgb2lab(color.rgb); // returns values in the range [0, 1]
    float a = smoothstep(0.45, 0.55, lab.g);
    float b = smoothstep(0.46, 0.54, lab.b);
    float c = 1.0 - smoothstep(0.9, 1.0, length(lab.gb));
    float d = 1.0 - smoothstep(0.98, 1.02, lab.r);
    return min(min(min(a, b), c), d);
}
```
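For clarity, here is an equivalent Python sketch of the same mask, with smoothstep reimplemented by hand; it assumes the rgb2lab conversion has already happened and that L, a and b are normalised to [0, 1]:

```python
def smoothstep(edge0, edge1, x):
    # GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth Hermite between.
    t = min(1.0, max(0.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def skin_mask(L, a, b):
    # Keep reddish/yellowish tints (a and b above the 0.5 midpoint),
    # reject the most saturated colours and the brightest pixels.
    keep_red = smoothstep(0.45, 0.55, a)
    keep_yellow = smoothstep(0.46, 0.54, b)
    not_saturated = 1.0 - smoothstep(0.9, 1.0, (a * a + b * b) ** 0.5)
    not_blown_out = 1.0 - smoothstep(0.98, 1.02, L)
    return min(keep_red, keep_yellow, not_saturated, not_blown_out)
```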

The colour space conversion needs to be handled carefully. Photos and videos are typically encoded in the sRGB colour space; properly converting to CIELAB looks like this: sRGB to linear RGB to CIEXYZ to CIELAB.
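As a sketch, the per-channel sRGB-to-linear step (and its inverse, applied once at the very end) looks like this in Python, following the standard sRGB transfer function:

```python
def srgb_to_linear(c):
    # One channel, normalised to [0, 1]: undo the sRGB gamma encoding.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Inverse transform: re-apply the sRGB gamma encoding.
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1.0 / 2.4) - 0.055
```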

Final result

Now we can use the skin mask to mix in the smoothed skin, and that looks much better! Note how the hair, eyes and mouth look more natural.

One last thing about colour spaces: it's generally a good idea to work in linear RGB when manipulating colours, so we should convert the input image from sRGB to linear RGB, compute all our processing in linear space, including the blur/filter, and convert the final result back to sRGB.

OK, that looks great, but the bilateral filter is slow!

Indeed, if you ever tried the Selective Blur in Gimp, you'll have noticed it takes its time to compute. A brute-force implementation of the bilateral filter looks like this pseudo code:

```
For each pixel m in image:
    result = 0
    total_weight = 0
    For each neighbour pixel n:
        weight = spatial_filter(spatial_distance(m, n))
        weight *= range_filter(range_distance(image(m), image(n)))
        total_weight += weight
        result += weight * image(n)
    filtered_image(m) = result / total_weight
```
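Here is a direct Python transcription of that pseudo code, reduced to a 1-D signal and assuming Gaussian kernels for both filters (illustrative, not the production shader):

```python
import math

def gaussian(d, sigma):
    # Unnormalised Gaussian weight; normalisation cancels in the weighted average.
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def bilateral_1d(signal, sigma_space=3.0, sigma_range=0.1):
    # Every sample becomes a weighted average of all the others, weighted
    # by distance both in position and in value: smooth, but edge-preserving.
    out = []
    for m, vm in enumerate(signal):
        total_weight = 0.0
        result = 0.0
        for n, vn in enumerate(signal):
            w = gaussian(m - n, sigma_space) * gaussian(vm - vn, sigma_range)
            total_weight += w
            result += w * vn
        out.append(result / total_weight)
    return out
```

A hard step in the signal survives the filter, because samples on the far side of the edge get a near-zero range weight.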

We can use a Gaussian kernel for both the spatial_filter() and range_filter() operators: the former returns a weight based on pixel distance in space (coordinates), and the latter based on distance in pixel luminance (we can use the L axis of the CIELAB colour space). Each kernel is defined with its own fixed standard deviation σ.

In the worst case (a large neighbourhood), for each pixel we would read every other pixel of the image, for a quadratic complexity O(N²)! Even with a moderate neighbourhood of 10 pixels, that takes too much time.

Fortunately, researchers found a way to compute a bilateral filter… in real time! The paper “Real-Time O(1) Bilateral Filtering” by Q. Yang et al. describes a simple technique where bilateral filtering is computed as a linear combination of a constant number of spatially filtered images. Note that O(1) here refers to the complexity of the filter in terms of kernel size; applying the filter to an image still yields a linear complexity O(N) with respect to the number of pixels.

Basically, we divide the range of luminance into a constant number K of image ‘slabs’ (called ‘PBFIC’ in the paper) and apply a spatial kernel filter to them (a.k.a. a blur), for which we know an O(1) implementation. The result is obtained by linear interpolation between the filtered slabs based on the luminance of the original image. The pseudo code for this approach looks like this:

```
For each slab l in range(0, K):
    slab_range = l / (K - 1)
    For each pixel m in image:
        weights(m) = range_filter(range_distance(image(m), slab_range))
    blurred_slabs[l] = gaussian_blur(image * weights) / gaussian_blur(weights)
combine_slabs(blurred_slabs, image)
```

The combine_slabs() operation linearly interpolates between the 2 nearest (in range) images from the array blurred_slabs, based on the luminance of each pixel in the original image and the slab_range value computed for each slab.
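To make the idea concrete, here is a toy 1-D Python sketch of the slab approach; a simple box blur stands in for the real O(1) spatial filter, and the names are illustrative:

```python
import math

def range_weight(d, sigma=0.1):
    # Gaussian range kernel on luminance distance.
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def box_blur(values, radius=1):
    # Cheap stand-in for the O(1) spatial filter (a real implementation
    # would use a proper separable Gaussian blur).
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def approx_bilateral_1d(signal, K=5):
    # Build K slabs (PBFICs): blur the range-weighted signal and the weights
    # at each luminance level, then interpolate between the 2 nearest slabs.
    slabs = []
    for l in range(K):
        level = l / (K - 1)
        w = [range_weight(v - level) for v in signal]
        num = box_blur([wi * vi for wi, vi in zip(w, signal)])
        den = box_blur(w)
        slabs.append([a / b if b > 0.0 else 0.0 for a, b in zip(num, den)])
    out = []
    for i, v in enumerate(signal):
        x = v * (K - 1)               # position of this luminance in slab space
        lo = min(K - 2, int(x))
        t = x - lo
        out.append(slabs[lo][i] * (1.0 - t) + slabs[lo + 1][i] * t)
    return out
```

Even this toy version preserves a hard step while smoothing flat regions, which is the behaviour we pay for with the brute-force filter.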

For our application, we can get good results with 5 slabs, a Gaussian kernel of σ=3 as the spatial filter (in pixels) and σ=0.1 as the range filter (on the CIELAB L axis).

Gaussian kernel filters have known O(1) implementations on both CPU and GPU, and since the number of slabs K is constant, applying such a solution to an image offers a linear O(N) complexity. The gaussian_blur() operation can be implemented efficiently on the GPU with the optimised separable Gaussian filter described by Filip Strugar on his Intel blog: https://software.intel.com/content/www/us/en/develop/blogs/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms.html

An OpenGL implementation

We need 11 textures: one for the original image, and two for each of the 5 slabs of the bilateral filter. Multiple GLSL render passes are required, so we should use Framebuffer Objects to hold intermediate results.

Here's the order of operations; a separate pixel shader is used per pass and rendered over the entire image domain:

  1. Prepare 5 slab textures from the original image converted to linear RGB
  2. Run separable Gaussian blur horizontally on each slab texture
  3. Run separable Gaussian blur vertically on the output of the previous step
  4. Combine slabs, apply frequency separation and convert back to sRGB

The Gaussian blur pass can render all slabs at the same time by writing to an array of output FBOs, to which we attach 5 of the allocated textures. That's why we need two textures per slab: one for the input, one for the output. The slabs can be rendered at half resolution without visible degradation, for a massive performance boost.

This is still a lot of texture reads, but we can hardly go faster without compromising the quality of the result. This runs on 720p video at 60fps and on 1080p at over 30fps on an average GPU (Radeon Pro 555). Note that this is with a Python/OpenGL implementation running on OSX; an optimised implementation in C++/Vulkan or Swift/Metal could probably perform better.

Possible improvements

I tested this video filter on various images, videos and a live webcam with different people. The result looks quite robust: the effect noticeably smooths the skin while not producing strange results elsewhere.

If we're picky, we could consider the skin smoothed a bit too much, looking like baby skin. To alleviate that, a popular Photoshop technique is to blur the high pass a bit, by just a pixel or two, before layering it down with overlay. This has the effect of smoothing out the medium-sized features while keeping the very thin ones. I tried that and got interesting results on sharp photographs, but consumer-grade videos from webcams and phones tend to be too blurry for it to be worth it. We can also simply mix a little of the original image back into the result to recover some sharpness, or apply a subtle sharpening filter to the end result.

Also, the effect can sometimes pick up things other than skin where the colour is similar. A wooden hard floor, for example, will lose its fine patterns:

Smooth skin filter applied on wood

The wood still looks like wood, so it's not really a problem. If that's annoying, we could restrict the skin smoothing filter to faces by tracking them with a facial landmark detection technique… but that will be a topic for a future article!

Source: https://medium.com/swlh/how-i-implemented-my-own-augmented-reality-beauty-mode-3bf3b74e5507
