Meeting the Challenge of Artificial Intelligence with Human Intelligence

A lot of big claims are made about the transformative power of artificial intelligence. But it is worth listening to some of the big warnings too. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that the increasing power of AI could result in a “fascist’s dream” if the technology were misused by authoritarian regimes.

“Just as we are seeing a step function increase in the speed of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” Ms Crawford told the SXSW tech conference.

The creation of vast data registries, the targeting of population groups, the abuse of predictive policing and the manipulation of political beliefs could all be enabled by AI, she said.

Ms Crawford is not alone in expressing concern about the misapplication of powerful new technologies, sometimes in unintentional ways. Sir Mark Walport, the British government’s chief scientific adviser, warned that the unthinking use of AI in areas such as medicine and the law, which involve nuanced human judgment, could produce damaging results and erode public trust in the technology.

Although AI had the potential to enhance human judgment, it also risked baking in harmful prejudices and giving them a spurious sense of objectivity. “Machine learning could internalise all the implicit biases contained within the history of sentencing or medical treatment — and externalise these through their algorithms,” he wrote in an article in Wired.
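
Sir Mark’s warning is easy to make concrete. The sketch below is purely illustrative and assumes an entirely synthetic “sentencing history” in which two groups with identical underlying behaviour were treated differently; nothing in it comes from the article itself. It shows how an off-the-shelf model trained on such a history internalises the disparity:

```python
# Illustrative sketch only: synthetic data, hypothetical scenario.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical offence profiles, but historical judges
# handed group B systematically harsher sentences.
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
severity = rng.normal(0, 1, n)             # offence severity, same distribution
harsh = (severity + 0.8 * group + rng.normal(0, 1, n)) > 0.5

# A naive deployment trains directly on the biased historical labels.
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, harsh)

# The positive coefficient on group membership is the internalised bias:
# identical cases now receive different predictions by group alone.
print("coefficient on group:", model.coef_[0][1])
identical_cases = np.array([[0.0, 0.0], [0.0, 1.0]])
print("P(harsh) for identical cases, A vs B:",
      model.predict_proba(identical_cases)[:, 1])
```

Trained this way, the model reproduces the historical disparity and presents it with the spurious objectivity of a probability score, which is the externalisation Sir Mark describes.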

As ever, the dangers are a lot easier to identify than they are to fix. Unscrupulous regimes are never going to observe regulations constraining the use of AI. But even in functioning law-based democracies it will be tricky to frame an appropriate response. Maximising the positive contributions that AI can make while minimising its harmful consequences will be one of the toughest public policy challenges of our times.

For starters, the technology is difficult to understand and its use is often surreptitious. It is also becoming increasingly hard to find independent experts who have not been captured by the industry or are not otherwise conflicted.

Driven by something approaching a commercial arms race in the field, the big tech companies have been snapping up many of the smartest academic experts in AI. Much cutting-edge research is therefore in the private rather than public domain.

To their credit, some leading tech companies have acknowledged the need for transparency, albeit belatedly. There has been a flurry of initiatives to encourage more policy research and public debate about AI.

Elon Musk, founder of Tesla Motors, has helped set up OpenAI, a non-profit research company pursuing safe ways to develop AI.

Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also come together in the Partnership on AI to initiate more public discussion about the real-world applications of the technology.

Mustafa Suleyman, co-founder of Google DeepMind and a co-chair of the Partnership, says AI can play a transformative role in addressing some of the biggest challenges of our age. But he accepts that the rate of progress in AI is outstripping our collective ability to understand and control these systems. Leading AI companies must therefore become far more innovative and proactive in holding themselves to account. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to scrutinise all the company’s activities.
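
The article does not say how DeepMind’s audits work. Purely as an illustration of the general idea, one standard way to make an audit log verifiable is a hash chain, in which each entry commits to its predecessor so that any retroactive edit breaks the chain. Everything in this sketch, from the function names to the log format, is a hypothetical toy rather than DeepMind’s design:

```python
# Hypothetical sketch of a tamper-evident audit log (hash chain).
import hashlib
import json

def append_entry(log, event):
    """Append an audit event, chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; any altered or deleted entry is detected."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        expected = hashlib.sha256(body.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "accessed record 12345 for model training")
append_entry(log, "exported aggregate statistics")
print(verify(log))                      # True
log[0]["event"] = "nothing happened"    # tamper with history
print(verify(log))                      # False
```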

But Mr Suleyman suggests our societies will also have to devise better frameworks for directing these technologies for the collective good. “We have to be able to control these systems so they do what we want when we want and they don’t run ahead of us,” he says in an interview for the FT Tech Tonic podcast.

Some observers say the best way to achieve that is to adapt our legal regimes to ensure that AI systems are “explainable” to the public. That sounds simple in principle, but may prove fiendishly complex in practice.

Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on “mindless minds” that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. “If you cannot meaningfully explain your system’s decisions then you cannot make them,” she says.
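
What would count as “meaningfully” explaining a decision is precisely what is contested. One modest illustration is a model whose individual predictions decompose into per-feature contributions that can be examined, and challenged, case by case; the feature names, data and model below are hypothetical, chosen only to make the idea concrete:

```python
# Hypothetical sketch: a decision that can be decomposed, and therefore
# contested, feature by feature. Data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["prior_convictions", "age", "months_employed"]
X = np.array([[2, 30, 12], [0, 45, 60], [5, 22, 0], [1, 35, 24]], dtype=float)
y = np.array([1, 0, 1, 0])              # toy historical outcomes

model = LogisticRegression().fit(X, y)

def explain(case):
    """Print each feature's additive contribution to the decision (log-odds)."""
    for name, contribution in zip(features, model.coef_[0] * case):
        print(f"  {name:>18}: {contribution:+.2f}")
    print(f"  {'intercept':>18}: {model.intercept_[0]:+.2f}")

case = X[0]
print("P(high risk):", model.predict_proba(case.reshape(1, -1))[0, 1])
explain(case)
```

A linear model is the bluntest instrument for this; the point is only that a decision stated as a sum of named contributions is something a courtroom can interrogate in the way Prof Hildebrandt demands, where an opaque score is not.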

We are going to need a lot more human intelligence to address the challenges of AI.