Paper notes: Blind Super-Resolution With Iterative Kernel Correction
Paper: Blind Super-Resolution With Iterative Kernel Correction
I only skimmed the paper, so there may be misunderstandings; corrections are welcome (and please excuse my poor English).
Blind Super-Resolution With Iterative Kernel Correction
- Motivation
- Estimate the kernel
- Correct the kernel
- Method
- Network Architecture of SR model
- Network Architecture of $\mathcal{P}$ and $\mathcal{C}$
- Experiments
- Comprehension
Motivation
The point is that different kernels generate different artifacts and textures in the result, so the right kernel must be chosen. For example, with a Gaussian kernel of width $\sigma_{LR}$, the SR results show unnatural ringing artifacts when $\sigma_{SR} > \sigma_{LR}$ and over-smoothing when $\sigma_{SR} < \sigma_{LR}$.
Estimate the kernel
A straightforward method is to adopt a function that estimates the kernel from the LR image. Let

$$k' = \mathcal{P}(I^{LR})$$
Then we can optimize the function by minimizing the $L_2$ distance

$$\theta_{\mathcal{P}} = \argmin_{\theta_{\mathcal{P}}} \| k - \mathcal{P}(I^{LR}; \theta_{\mathcal{P}}) \|_2^2$$
But accurate estimation of the kernel is impossible because the problem is ill-posed, so they look for a way to correct the estimation.
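As a toy illustration of the objective above (not the paper's network: here the predictor is just a linear map from hypothetical LR-image features to a kernel code), the $L_2$ minimization can be sketched with plain gradient descent:

```python
import numpy as np

# Toy sketch: fit a linear predictor k' = W @ x mapping an LR-image
# feature vector x to a kernel code, by minimizing the mean L2 loss
# ||k - P(I_LR)||^2 with gradient descent. All shapes are assumptions.
rng = np.random.default_rng(0)
n_feat, n_code, n_samples = 16, 4, 256

W_true = rng.normal(size=(n_code, n_feat))   # hypothetical ground-truth map
X = rng.normal(size=(n_samples, n_feat))     # stand-in for LR features
K = X @ W_true.T                             # "true" kernel codes k

W = np.zeros((n_code, n_feat))               # predictor parameters theta_P
lr = 0.1
for _ in range(500):
    pred = X @ W.T                           # P(I_LR; theta_P)
    grad = 2.0 * (pred - K).T @ X / n_samples
    W -= lr * grad

loss = np.mean(np.sum((X @ W.T - K) ** 2, axis=1))
print(f"final L2 loss: {loss:.2e}")
```

In this idealized linear setting the loss goes to zero; the paper's point is that with real images the mapping is ill-posed, so a single feed-forward prediction cannot be exact.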
Correct the kernel
The idea is to adopt the intermediate SR results. Let $\mathcal{C}$ be the corrector function; then

$$\theta_{\mathcal{C}} = \argmin_{\theta_{\mathcal{C}}} \| k - (\mathcal{C}(I^{SR}; \theta_{\mathcal{C}}) + k') \|_2^2$$
To avoid over- or under-correcting, smaller correction steps are used to refine the kernel until it converges to the ground truth.
Method
Let $\mathcal{F}$ be an SR model, $\mathcal{P}$ a kernel predictor, and $\mathcal{C}$ a corrector. PCA can be used to reduce the dimensionality of the kernel space; the kernel after dimension reduction is denoted by $h$, where $h = Mk$ and $M$ is the dimension-reduction matrix. An initial estimate $h_0$ is given by the predictor, $h_0 = \mathcal{P}(I^{LR})$, and the first SR result is $I_0^{SR} = \mathcal{F}(I^{LR}, h_0)$. Then the iterative kernel correction algorithm can be written as
$$\begin{aligned} \Delta h_i &= \mathcal{C}(I^{SR}_{i-1}, h_{i-1}) \\ h_i &= h_{i-1} + \Delta h_i \\ I^{SR}_i &= \mathcal{F}(I^{LR}, h_i) \end{aligned}$$
After $t$ iterations, $I^{SR}_t$ is the final result of IKC.
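The iteration above can be sketched with stand-in components (the real $\mathcal{P}$, $\mathcal{F}$, $\mathcal{C}$ are neural networks; here they are hypothetical toy functions, with the corrector hard-wired to take damped steps toward a known code just to show the loop structure):

```python
import numpy as np

# Minimal sketch of the IKC loop. The kernel code h lives in a
# PCA-reduced space (h = M @ k); P, F and C are toy stand-ins.
rng = np.random.default_rng(0)
dim_h = 10
h_true = rng.normal(size=dim_h)        # ground-truth kernel code (toy)

def P(I_lr):                           # predictor: initial estimate h0
    return np.zeros(dim_h)

def F(I_lr, h):                        # SR model stand-in: here the "SR image"
    return h                           # simply carries the current kernel code

def C(I_sr, h):                        # corrector: predicts a residual dh;
    return 0.5 * (h_true - h)          # a damped step toward the truth (toy)

I_lr = None                            # placeholder LR input
h = P(I_lr)                            # h0 = P(I_LR)
I_sr = F(I_lr, h)                      # I0_SR = F(I_LR, h0)
for i in range(20):                    # t iterations of kernel correction
    dh = C(I_sr, h)                    # dh_i = C(I_SR, h_{i-1})
    h = h + dh                         # h_i = h_{i-1} + dh_i
    I_sr = F(I_lr, h)                  # I_SR_i = F(I_LR, h_i)

print(np.allclose(h, h_true, atol=1e-4))   # the code converges to h_true
```

Because the toy corrector moves only half the remaining distance per step, the loop mirrors the paper's small-step refinement: each iteration shrinks the kernel error instead of jumping straight to an estimate.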
Network Architecture of SR model
SRMD, the existing SR method for multiple blur kernels, has two problems.
- The kernel maps do not actually contain the information of the image.
- The influence of kernel information is only considered at the first layer.
So SFTMD is proposed, which uses spatial feature transform (SFT) layers.
SRResNet is used as the backbone (though of course it can be replaced), and SFT layers apply an affine transformation to the feature maps $F$ conditioned on the kernel maps $\mathcal{H}$ through a scaling and shifting operation:
$$\mathrm{SFT}(F, \mathcal{H}) = \gamma \odot F + \beta$$
The kernel maps $\mathcal{H}$ are stretched from $h$: all elements of the $i$-th map are equal to the $i$-th element of $h$.
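The stretching and the affine transform above can be sketched as follows (shapes are assumptions; in the paper $\gamma$ and $\beta$ are produced by small conv layers conditioned on $\mathcal{H}$, here they are fixed hypothetical values just to show the operation):

```python
import numpy as np

# Sketch of the SFT conditioning step on assumed shapes.
rng = np.random.default_rng(0)
c, hgt, wid = 8, 4, 4
feat = rng.normal(size=(c, hgt, wid))   # feature maps F (channels, H, W)
h = rng.normal(size=5)                  # PCA-reduced kernel code

# Stretch: the i-th map of H is constant, equal to the i-th element of h.
H = np.broadcast_to(h[:, None, None], (h.size, hgt, wid))

# Hypothetical fixed gamma/beta standing in for the conv layers that
# would compute them from [F, H] in the real network.
gamma = np.full((c, 1, 1), 0.5)
beta = np.zeros((c, 1, 1))
out = gamma * feat + beta               # SFT(F, H) = gamma ⊙ F + beta

print(out.shape)                        # same shape as the input features
```

Because $\gamma$ and $\beta$ depend on the kernel code, every residual block (not just the first layer) can modulate its features according to the blur kernel, which is exactly the SRMD weakness SFTMD addresses.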
Network Architecture of $\mathcal{P}$ and $\mathcal{C}$
Experiments
In the reported experiments, IKC consistently achieves the best results among the compared methods.
Comprehension
- How can I train the predictor $\mathcal{P}$?
  Although the paper says kernel estimation is an ill-posed problem, they still obtain a trained model. Does it over- or under-fit? What does the loss curve look like, and when should training stop?
- Spatially uniform?
  How? The paper says this usage differs from its application in semantic super-resolution, which exploits the spatial transformation characteristic of SFT layers. So I don't understand: can segmentation information provide spatial variability?
- Is IKC with PCA the best?
  Why? The paper says PCA provides a feature representation, and IKC learns the relationship between the SR images and these features rather than the Gaussian kernel itself. But why can't IKC learn features from the kernel directly?