AI TIME welcomes every AI enthusiast!

September 16, 15:00–21:00

AI TIME has specially invited several PhD speakers to bring you ICML Session 4!

Bilibili livestream channel

Scan the QR code to follow the official AITIME Bilibili account

Watch the livestream

Link: https://live.bilibili.com/21813994

15:00-17:00

★ Speaker Introductions ★

朱鑫祺

A third-year PhD student at the University of Sydney, working on disentangled representation learning and computer vision under the supervision of Prof. Dacheng Tao and Dr. Chang Xu.

Talk title:

Disentanglement Learning with a Commutative Lie Group VAE

Abstract:

We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space with data variations represented by translations along individual latent dimensions. We argue this simple structure is suboptimal since it requires the model to learn to discard the properties (e.g. different scales of changes, different levels of abstractness) of data variations, which is extra work beyond equivariance learning. Instead, we propose to encode the data variations with groups, a structure that can not only equivariantly represent variations but also be adaptively optimized to preserve the properties of data variations. Since it is hard to conduct training directly on group structures, we focus on Lie groups and adopt a parameterization using the Lie algebra. Based on this parameterization, some disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision and can achieve state-of-the-art performance without extra constraints.
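
To make the Lie-algebra parameterization concrete, here is a minimal numpy sketch (not the authors' code; the basis matrices, the group_action helper, and the diagonal choice are illustrative assumptions): a latent shift t is mapped to the group element exp(Σ_i t_i A_i), which then acts on a representation vector, and choosing commuting basis matrices makes composed variations order-independent, as the "commutative" model intends.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim, n_factors = 4, 3

# Hypothetical Lie-algebra basis matrices, one per latent factor; diagonal
# matrices commute, mirroring the commutativity the model enforces.
basis = np.stack([np.diag(rng.normal(size=dim)) for _ in range(n_factors)])

def group_action(z, t):
    """Apply the group element exp(sum_i t_i * A_i) to a representation z."""
    algebra_element = np.tensordot(np.asarray(t, dtype=float), basis, axes=1)
    return expm(algebra_element) @ z

z = rng.normal(size=dim)
# Composing a variation of factor 0 with a variation of factor 1 in either
# order gives the same result, because the basis matrices commute.
z_ab = group_action(group_action(z, [0.5, 0.0, 0.0]), [0.0, 0.3, 0.0])
z_ba = group_action(group_action(z, [0.0, 0.3, 0.0]), [0.5, 0.0, 0.0])
print(np.allclose(z_ab, z_ba))  # True for commuting bases

In a full model, basis matrices of this kind would presumably be learned jointly with the VAE encoder and decoder rather than fixed in advance.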

陈晓晖

A first-year PhD student at Tufts University, studying generative modeling and graph learning under the guidance of Prof. Liping Liu and Prof. Michael Hughes.

Talk title:

Modeling Node Generation Orderings in Autoregressive Graph Generative Models

Abstract:

A graph generative model defines a distribution over graphs. One type of generative model is constructed by autoregressive neural networks, which sequentially add nodes and edges to generate a graph. However, the likelihood of a graph under the autoregressive model is intractable, as there are numerous sequences leading to the given graph; this makes maximum likelihood estimation challenging. Instead, in this work we derive the exact joint probability over the graph and the node ordering of the sequential process. From the joint, we approximately marginalize out the node orderings and compute a lower bound on the log-likelihood using variational inference. We train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods. Our experiments show that the log-likelihood bound is significantly tighter than the bound of previous schemes. Moreover, the models fitted with the proposed algorithm can generate high-quality graphs that match the structures of target graphs not seen during training. We have made our code publicly available at https://github.com/tufts-ml/graph-generation-vi.
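
The bound described here has the familiar ELBO form over node orderings. The sketch below (the function names and toy inputs are placeholders, not the code released at the GitHub link) shows how such a bound on log p(G) could be estimated by Monte Carlo, given a model log p(G, π) and an inference distribution q(π | G):

import math
import random

def elbo_over_orderings(graph, log_p_joint, sample_ordering, log_q_ordering, n_samples=16):
    """Monte-Carlo estimate of log p(G) >= E_{pi~q(.|G)}[log p(G, pi) - log q(pi|G)]."""
    total = 0.0
    for _ in range(n_samples):
        pi = sample_ordering(graph)                       # draw a node ordering from q(pi|G)
        total += log_p_joint(graph, pi) - log_q_ordering(graph, pi)
    return total / n_samples

# Toy usage: a 3-node "graph" with a uniform q over its 3! orderings and a
# constant stand-in for the autoregressive model's log p(G, pi).
nodes = [0, 1, 2]
sample_pi = lambda g: random.sample(g, len(g))
uniform_log_q = lambda g, pi: -math.log(math.factorial(len(g)))
toy_log_p = lambda g, pi: -1.0
print(elbo_over_orderings(nodes, toy_log_p, sample_pi, uniform_log_q))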

张智杰

A fifth-year PhD student at the Institute of Computing Technology, Chinese Academy of Sciences, advised by Prof. 张家琳. Research interests include combinatorial optimization, approximation algorithms, and machine learning; recent topics include submodular optimization and influence maximization.

Talk title:

Network Inference and Data-Driven Influence Maximization

Abstract:

Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of the influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network as well as its diffusion parameters are given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
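
For context, the generic learn-then-optimize baseline that this work strengthens can be sketched as follows: assume diffusion probabilities have already been inferred from cascades, then pick seeds greedily under the independent cascade model. This is the textbook pipeline, not the authors' IMS algorithms; the toy graph and its probabilities are made up.

import random

def simulate_ic(graph, seeds, trials=200):
    """Estimate expected spread under the independent cascade model.
    graph[u] maps neighbor v -> activation probability p(u, v)."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in graph.get(u, {}).items():
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seed_selection(graph, k):
    """Greedy (1 - 1/e)-approximation for monotone submodular spread."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: simulate_ic(graph, seeds | {v}))
        seeds.add(best)
    return seeds

# Toy network with (assumed) diffusion probabilities learned from cascades.
graph = {0: {1: 0.4, 2: 0.3}, 1: {2: 0.2, 3: 0.5}, 2: {3: 0.3}, 3: {}}
print(greedy_seed_selection(graph, k=2))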

姚骅修

A postdoctoral researcher in the Computer Science Department at Stanford University, working with Chelsea Finn; received a PhD from Pennsylvania State University in 2021. Current research focuses on meta-learning, model robustness, and reinforcement learning.

Talk title:

Improving the Generalization of Meta-Learning via Task Augmentation

Abstract:

Meta-learning has proven to be a powerful paradigm for transferring the knowledge from previous tasks to facilitate the learning of a novel task. Current dominant algorithms train a well-generalized model initialization which is adapted to each task via the support set. The crux lies in optimizing the generalization capability of the initialization, which is measured by the performance of the adapted model on the query set of each task. Unfortunately, this generalization measure, evidenced by empirical results, pushes the initialization to overfit the meta-training tasks, which significantly impairs the generalization and adaptation to novel tasks. To address this issue, we actively augment a meta-training task with "more data" when evaluating the generalization. Concretely, we propose two task augmentation methods, including MetaMix and Channel Shuffle. MetaMix linearly combines features and labels of samples from both the support and query sets. For each class of samples, Channel Shuffle randomly replaces a subset of their channels with the corresponding ones from a different class. Theoretical studies show how task augmentation improves the generalization of meta-learning. Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.
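
The MetaMix idea of interpolating support and query samples is essentially mixup applied at the task level. A rough numpy sketch follows (the paper describes mixing features inside the meta-learner, while here raw inputs and one-hot labels stand in, and all names are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def metamix(support_x, support_y, query_x, query_y, alpha=2.0):
    """Mixup-style task augmentation: convex combinations of support and
    query features/labels (labels assumed one-hot). Illustrative sketch only."""
    lam = rng.beta(alpha, alpha)
    n = min(len(support_x), len(query_x))
    x = lam * support_x[:n] + (1 - lam) * query_x[:n]
    y = lam * support_y[:n] + (1 - lam) * query_y[:n]
    return x, y

# Toy task: 4 support and 4 query samples, 8 features, 3 classes, one-hot labels.
sx, qx = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
sy = np.eye(3)[rng.integers(0, 3, size=4)]
qy = np.eye(3)[rng.integers(0, 3, size=4)]
mixed_x, mixed_y = metamix(sx, sy, qx, qy)
print(mixed_x.shape, mixed_y.shape)

Channel Shuffle, per the abstract, would instead replace a random subset of the channels of each class's samples with the corresponding channels from a different class.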

19:30-21:00

杨智勇

Received a PhD from the Institute of Information Engineering, Chinese Academy of Sciences, and is currently a postdoctoral researcher at the University of Chinese Academy of Sciences. Main research directions are AUC optimization, multi-task learning, and machine learning theory, with 7 first-author papers published in CCF-A journals and conferences such as ICML, NeurIPS, and T-PAMI. Has served as a PC member for ICML, NeurIPS, ICLR, AAAI, and IJCAI, a senior PC member for IJCAI 2021, and a reviewer for international journals such as T-PAMI and T-IP. Selected for the Postdoctoral Innovative Talent Support Program ("博新计划") and the Baidu AI Chinese Rising Stars Top 100 list, and honors include a nomination for the global top 20 of the Baidu Scholarship, the CAS President's Special Award, and recognition as a NeurIPS top 10% reviewer.

Talk title:

An End-to-End Optimization Method for the TPAUC Metric

Abstract:

The Area Under the ROC Curve (AUC) is a crucial metric for machine learning, which evaluates the average performance over all possible True Positive Rates (TPRs) and False Positive Rates (FPRs). Based on the knowledge that a skillful classifier should simultaneously embrace a high TPR and a low FPR, we turn to study a more general variant called Two-way Partial AUC (TPAUC), where only the region with TPR ≥ α, FPR ≤ β is included in the area. Moreover, recent work shows that the TPAUC is essentially inconsistent with the existing Partial AUC metrics where only the FPR range is restricted, opening a new problem to seek solutions that leverage high TPAUC. Motivated by this, we present the first trial in this paper to optimize this new metric. The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss. To address this issue, we propose a generic framework to construct surrogate optimization problems, which supports efficient end-to-end training with deep learning. Moreover, our theoretical analyses show that: 1) the objective function of the surrogate problems will achieve an upper bound of the original problem under mild conditions, and 2) optimizing the surrogate problems leads to good generalization performance in terms of TPAUC with a high probability. Finally, empirical studies over several benchmark datasets speak to the efficacy of our framework.
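
To fix ideas, one common empirical form of the TPAUC metric itself can be computed as below. This is a plain numpy evaluation sketch, under the assumption that only the hardest (lowest-scored) (1 - α) fraction of positives and the hardest (highest-scored) β fraction of negatives enter the pairwise count; it is not the differentiable surrogate framework proposed in the paper.

import numpy as np

def empirical_tpauc(scores_pos, scores_neg, alpha=0.5, beta=0.5):
    """Empirical two-way partial AUC over the region TPR >= alpha, FPR <= beta."""
    n_pos = max(1, int(np.ceil((1 - alpha) * len(scores_pos))))
    n_neg = max(1, int(np.ceil(beta * len(scores_neg))))
    hard_pos = np.sort(scores_pos)[:n_pos]          # lowest-scored positives
    hard_neg = np.sort(scores_neg)[::-1][:n_neg]    # highest-scored negatives
    return np.mean(hard_pos[:, None] > hard_neg[None, :])

rng = np.random.default_rng(0)
pos = rng.normal(loc=1.0, size=100)
neg = rng.normal(loc=0.0, size=100)
print(empirical_tpauc(pos, neg, alpha=0.7, beta=0.3))

The indicator inside the mean is non-differentiable, which is exactly why an end-to-end trainable surrogate is needed.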

沈广宇

A second-year PhD student in the Department of Computer Science at Purdue University, conducting research on neural network security in Prof. Xiangyu Zhang's group, including adversarial attacks, backdoor attacks, and defenses.

Talk title:

Neural Network Backdoor Scanning via Multi-Armed Bandit Optimization

Abstract:

Backdoor attack poses a severe threat to deep learning systems. It injects hidden malicious behaviors into a model such that any input stamped with a special pattern can trigger such behaviors. Detecting backdoors is hence of pressing need. Many existing defense techniques use optimization to generate the smallest input pattern that forces the model to misclassify a set of benign inputs injected with the pattern to a target label. However, the complexity is quadratic in the number of class labels, so that they can hardly handle models with many classes. Inspired by Multi-Arm Bandit in Reinforcement Learning, we propose a K-Arm optimization method for backdoor detection. By iteratively and stochastically selecting the most promising labels for optimization with the guidance of an objective function, we substantially reduce the complexity, allowing us to handle models with many classes. Moreover, by iteratively refining the selection of labels to optimize, it substantially mitigates the uncertainty in choosing the right labels, improving detection accuracy. At the time of submission, the evaluation of our method on over 4000 models in the IARPA TrojAI competition from round 1 to the latest round 4 achieves top performance on the leaderboard. Our technique also supersedes five state-of-the-art techniques in terms of accuracy and the scanning time needed. The code of our work is available at https://github.com/PurduePAML/K-ARM_Backdoor_Optimization
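
The label-scheduling component can be pictured as a classic multi-armed bandit loop. The sketch below is illustrative rather than the released K-Arm implementation linked above: the UCB selection rule, the toy reward, and the optimize_step callback are all assumptions standing in for the real trigger-inversion objective.

import math
import random

def k_arm_schedule(num_labels, optimize_step, rounds=100, c=1.0):
    """Bandit-style scheduling of which target label to optimize next.
    optimize_step(label) is assumed to run a few trigger-inversion steps for
    that label and return a reward (e.g. reduction in trigger size)."""
    pulls = [0] * num_labels
    value = [0.0] * num_labels
    for t in range(1, rounds + 1):
        def ucb(a):
            # Unvisited arms first, then mean reward plus an exploration bonus.
            if pulls[a] == 0:
                return float("inf")
            return value[a] / pulls[a] + c * math.sqrt(math.log(t) / pulls[a])
        arm = max(range(num_labels), key=ucb)
        value[arm] += optimize_step(arm)
        pulls[arm] += 1
    return max(range(num_labels), key=lambda a: value[a] / max(pulls[a], 1))

# Toy reward: pretend label 3 is the (hypothetical) backdoor target label.
toy_reward = lambda label: random.gauss(1.0 if label == 3 else 0.1, 0.05)
print(k_arm_schedule(num_labels=10, optimize_step=toy_reward))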

闫雪

闫雪 is a first-year PhD student at the Institute of Automation, Chinese Academy of Sciences, with research interests in machine learning and multi-agent evaluation.

Talk title:

Efficient Multi-Agent Strategy Evaluation via Low-Rank Matrix Completion

Abstract:

Multi-agent evaluation aims at the assessment of an agent's strategy on the basis of interaction with others. Typically, existing methods such as α-rank and its approximation still require exhaustively comparing all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we intend to reduce the number of pairwise comparisons needed to recover a satisfactory ranking of the players. We exploit the fact that agents with similar skills may achieve similar performance payoffs against others, as evidenced by our experiments. Two situations are considered: the first one is when we can obtain the true payoffs (e.g., noise-free evaluation); the other one is when we can only access noisy payoff observations (e.g., noisy evaluation). Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for noise-free and noisy evaluations, respectively. For both settings, we derive the number of comparisons required to achieve sufficiently good evaluation performance, expressed in terms of the number of agents and the rank of the payoff matrix. Empirical results on evaluating the players in three synthetic games and twelve real-world games from OpenSpiel demonstrate that evaluating the payoffs of only a few strategy pairs can lead to performance comparable to algorithms that know the complete payoff matrix.
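
The low-rank structure exploited here is the standard one from matrix completion. The sketch below fills in a toy payoff matrix from a subset of observed pairwise payoffs using alternating least squares; it is a generic completion routine with made-up sizes, regularization, and observation rate, not the two algorithms or the sample bounds from the paper.

import numpy as np

rng = np.random.default_rng(0)

def complete_payoff(observed, mask, rank, iters=200, reg=1e-2):
    """Recover a low-rank payoff matrix from a few observed entries via
    alternating least squares. mask[i, j] = 1 marks observed entries."""
    n, m = observed.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    for _ in range(iters):
        for i in range(n):                      # update row factors
            idx = mask[i] > 0
            if idx.any():
                A = V[idx].T @ V[idx] + reg * np.eye(rank)
                U[i] = np.linalg.solve(A, V[idx].T @ observed[i, idx])
        for j in range(m):                      # update column factors
            idx = mask[:, j] > 0
            if idx.any():
                A = U[idx].T @ U[idx] + reg * np.eye(rank)
                V[j] = np.linalg.solve(A, U[idx].T @ observed[idx, j])
    return U @ V.T

# Toy rank-2 payoff matrix for 8 agents, with ~60% of pairwise payoffs observed.
true = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8))
mask = (rng.random((8, 8)) < 0.6).astype(float)
est = complete_payoff(true * mask, mask, rank=2)
print(np.abs(est - true).mean())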

After the livestream, we will invite the speakers to answer questions in our WeChat group. Please add the "AI TIME assistant" (WeChat ID: AITIME_HY) and reply "icml" to be added to the "AI TIME ICML discussion group"!

AI TIME WeChat assistant

Organizer: AI TIME

Media partners: 学术头条, AI 数据派

Partners: 智谱·AI, 中国工程院知领直播, 学堂在线, 学术头条, biendata, Ever链动

AI TIME welcomes contributions from scholars in the AI field; we look forward to analyses of the discipline's historical development and frontier technologies. For hot topics, we will invite experts to debate them together. We are also continuously recruiting high-quality contributors: a top platform needs a top you. Please send your resume and related information to yun.he@aminer.cn!

WeChat contact: AITIME_HY

AI TIME is a community founded by a group of young scholars in the Department of Computer Science at Tsinghua University who care about the development of artificial intelligence and hold intellectual aspirations. It aims to promote the spirit of scientific debate, inviting people from all fields to explore the fundamental questions of AI theory, algorithms, scenarios, and applications, encouraging the collision of ideas and building a hub for knowledge sharing.

Scan the QR code to follow us for more news

