By Natalie Grover

As the coronavirus pandemic endures, the socio-economic implications of race and gender in contracting Covid-19 and dying from it have been laid bare. Artificial intelligence (AI) is playing a key role in the response, but it could also be exacerbating inequalities within our health systems — a critical concern that is dragging the technology’s limitations back into the spotlight.

The response to the crisis has in many ways been mediated by data — an explosion of information being used by AI algorithms to better understand and address Covid-19, including tracking the virus’ spread and developing therapeutic interventions.

AI, like its human maker, is not immune to bias. The technology — generally designed to digest large volumes of data and make deductions to support decision making — reflects the prejudices of the humans who develop it and feed it the information it uses to spit out outcomes. For example, years ago when Amazon developed an AI tool to help rank job candidates by learning from its past hires, the system mimicked the gender bias of its makers by downgrading resumes from women.
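
To make that mechanism concrete, here is a minimal sketch, in Python on synthetic data, of how a screening model trained on historically biased hire/no-hire labels reproduces that bias. The feature names and numbers are illustrative assumptions, not a reconstruction of Amazon's actual system.

```python
# A hedged sketch: a model trained on biased historical labels learns
# to penalise a proxy feature for gender. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # genuinely job-relevant signal
proxy = rng.integers(0, 2, size=n)    # e.g. attended a women's college

# Historical labels: past recruiters hired on skill, but also
# systematically marked down candidates flagged by the proxy.
hired = (skill - 1.2 * proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the proxy's weight comes out strongly negative

# Two equally skilled candidates now receive different scores:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # lower score when proxy = 1
```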

‘We were seeing AI being used extensively before Covid-19, and during Covid-19 you’re seeing an increase in the use of some types of tools,’ noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of AI Now Institute, which carries out research examining the social implications of AI.

Monitoring tools to keep an eye on white-collar workers working from home, and educational tools that claim to detect whether students are cheating in exams, are growing increasingly common. But Whittaker says that most of this technology is untested — and some has been shown to be flawed. However, that hasn’t stopped companies from marketing their products as cure-alls for the collateral damage caused by the pandemic, she adds.

In the US for instance, a compact medical device called a pulse oximeter, designed to gauge the level of oxygen in the blood, had some coronavirus patients glued to its tiny screens to decide when to go to the hospital, in addition to its use by doctors to aid in clinical decision making within hospitals.

The way the device works, however, is prone to racial bias and was likely calibrated on light-skinned users. Back in 2005, a study definitively showed the device ‘mostly tended to overestimate (oxygen) saturation levels by several points’ for non-white people.
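
The clinical consequence of a few points of overestimation is easy to see with back-of-the-envelope arithmetic. The sketch below uses a hypothetical triage threshold and bias magnitude purely for illustration; neither number comes from the study or any device specification.

```python
# Illustrative arithmetic only: the 92% action threshold and the
# +3-point bias are assumptions for this sketch, not device specs.
TRIAGE_THRESHOLD = 92  # hypothetical "seek care below this SpO2" cut-off

def needs_hospital(reported_spo2: float) -> bool:
    return reported_spo2 < TRIAGE_THRESHOLD

true_spo2 = 89       # a genuinely hypoxemic patient
device_bias = 3      # 'several points' of overestimation
reported_spo2 = true_spo2 + device_bias

print(needs_hospital(true_spo2))      # True:  the patient should seek care
print(needs_hospital(reported_spo2))  # False: the biased reading hides it
```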

The problem with the pulse oximeter device has been known for decades and hasn’t been fixed by manufacturers, says Whittaker. ‘But, even so, these tools are being used, they’re producing data and that data is going on to shape diagnostic algorithms that are used in health care. And so, you see, even at the level of how our AI systems are constructed, they’re encoding the same biases and the same histories of racism and discrimination that are being shown so clearly in the context of Covid-19.’

Evidence

Meanwhile, as the body of evidence accumulates that people of colour are more likely to die from Covid-19 infections, that diversity has not necessarily been reflected in the swathe of clinical trials launched to develop drugs and vaccines — a troubling pattern that has long preceded the pandemic. When it comes to gender diversity, a recent review found that of 927 trials related to Covid-19, more than half explicitly excluded pregnancy, and pregnant women have been excluded altogether from vaccine trials.

The outcomes of products in these clinical trials will not necessarily be representative of the population, notes Catelijne Muller, a member of an EU high-level expert group on AI and co-founder of ALLAI, an organisation dedicated to fostering responsible AI.

‘And if you then use those outcomes to feed an AI algorithm for future predictions, those people will also have a disadvantage in these prediction models,’ she said.

The trouble with use of AI technology in the context of Covid-19 is not different from the issues of bias that plagued the technology before the pandemic: if you feed the technology biased data, it will spout biased outcomes. Indeed, existing large-scale AI systems also reflect the lack of diversity in the environments in which they are built and the people who have built them. These are almost exclusively a handful of technology companies and elite university laboratories — ‘spaces that in the West tend to be extremely white, affluent, technically oriented, and male,’ according to a 2019 report by the AI Now Institute.

But the technology isn’t simply a reflection of its makers — AI also amplifies their biases, says Whittaker.

‘One person may have biases, but they don’t scale those biases to millions and billions of decisions,’ she said. ‘Whereas an AI system can encode human biases and then can distribute those in ways that have a much greater impact.’

Complicating matters further, there are automation bias concerns, she adds. ‘There is a tendency for people to be more trusting of a decision that is made by a computer than they are of the same decision if it were made by a person. So, we need to watch out for the way in which AI systems launder these biases and make them seem rigorous and scientific and may lead to people being less willing to question decisions made by these systems.’

‘We need to watch out for the way in which AI systems launder these biases and make them seem rigorous and scientific.’

– Meredith Whittaker, New York University, US

Safe

There is no clear consensus on what will make AI technology responsible and safe across the board, experts say, though researchers are beginning to agree on useful principles such as fairness, interpretability and robustness.

The first step is to ask ‘question zero’, according to Muller: what is my problem and how can I solve it? Do I solve it with artificial intelligence or with something else? If with AI, is this application good enough? Does it harm fundamental rights?

‘What we see is that many people think that sometimes AI is sort of a magic wand…and it’ll kind of solve everything. But sometimes it doesn’t solve anything because it’s not fit for the problem. Sometimes it’s so invasive that it might solve one problem, but create a large, different problem.’

When it comes to using AI in the context of Covid-19, there is an eruption of data, but that data needs to be reliable and optimised, says Muller.

‘Data cannot just be thrown at another algorithm,’ she said, explaining that algorithms work by finding correlations. ‘They don’t understand what a virus is.’
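
A toy example makes the point about correlations. In the sketch below (synthetic data, illustrative setup), severe cases happen to come from one hospital site in the training set, so the model keys on the site marker rather than anything about the disease, and collapses the moment that coincidence breaks.

```python
# A hedged sketch of correlation-without-understanding, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
site = rng.integers(0, 2, size=n)    # which hospital produced the record
noise = rng.normal(size=n)           # stand-in for weak clinical signal
severe = site == 1                   # outcome spuriously aligned with site

X = np.column_stack([site, noise])
model = LogisticRegression().fit(X, severe)

# Near-perfect accuracy while the coincidence holds...
print(model.score(X, severe))          # ~1.0

# ...and near-zero once severe cases start appearing at the other site.
X_shifted = np.column_stack([1 - site, noise])
print(model.score(X_shifted, severe))  # ~0.0
```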

Fairness issues with AI showcase the biases in human decision making, according to Dr Adrian Weller, programme director for AI at the Alan Turing Institute in the UK. It’s wrong to assume that not using algorithms means everything will be just fine, he says.

There is hope and excitement about these systems because they operate more consistently and efficiently than humans, but they lack notions of common sense, reasoning and context, areas where humans are much better, Weller says.

Accountability

Involving humans more in the decision-making process is one way to bring accountability to AI applications. But figuring out who that person or persons should be is crucial.

‘Simply putting a human somewhere in the process does not guarantee a good decision,’ said Whittaker. Issues such as who that human works for and what incentives they are working under also need to be addressed, she says.

‘I think we need to really narrow down that broad category of “human” and look at who and to what end.’

Human oversight could be incorporated in multiple ways to ensure transparency and mitigate bias, suggest ALLAI’s Muller and colleagues in a report analysing a proposal EU regulators are working on to regulate ‘high-risk’ AI applications, such as those used in recruitment, biometric recognition or the deployment of health care.

These include auditing every decision cycle of the AI system, monitoring the operation of the system, having the discretion to decide when and how to use the system in any particular situation, and being able to override a decision made by the system.
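
As a rough illustration of how those measures might look in software, here is a hedged sketch of an oversight wrapper: every decision cycle is recorded for audit, and a human reviewer can override the system’s suggestion before it takes effect. All names are hypothetical; this is not drawn from the report or the EU proposal.

```python
# A hypothetical oversight wrapper: audit logging plus human override.
from datetime import datetime, timezone
from typing import Callable, Optional

class OverseenModel:
    def __init__(self, model: Callable[[dict], str]):
        self.model = model
        self.audit_log: list[dict] = []  # reviewable record of each cycle

    def decide(self, case: dict, human_override: Optional[str] = None) -> str:
        suggestion = self.model(case)
        final = human_override if human_override is not None else suggestion
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "case": case,
            "model_suggestion": suggestion,
            "human_override": human_override,
            "final_decision": final,
        })
        return final

# Usage: the reviewer disagrees with the model and overrides it; both
# the suggestion and the override survive in the audit trail.
triage = OverseenModel(lambda case: "low_priority")
print(triage.decide({"patient": "A"}))                           # low_priority
print(triage.decide({"patient": "B"}, human_override="urgent"))  # urgent
print(len(triage.audit_log))                                     # 2
```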

For Whittaker, recent developments such as EU regulators’ willingness to regulate ‘high-risk’ applications or community organising in the US leading to bans on facial recognition technology are encouraging.

‘I think we need more of the same…to ensure that these systems are auditable, that we can examine them to ensure that they are democratically controlled, and that people have a right to refuse the use of these systems.’

Meredith Whittaker and Catelijne Muller will be speaking on a panel discussing how to tackle gender and ethnicity biases in artificial intelligence at the European Research and Innovation Days conference, which will take place online from 22–24 September.

See also

  • New wave of medical ‘deep tech’ can help coronavirus response — but there’s resistance

  • Taking stock: Where next for research?

  • Teleworking is here to stay — here’s what it means for the future of work

Originally published at horizon-magazine.eu.
