Robots or humans: who do we trust more? (Part 2)

“That’s the crux of why we think this happens,” she says. “People who talk to a virtual agent know their data is anonymous and safe and that no one is going to judge them.”

Millennials driving the market

At this point in the robo-advisor cycle the appeal isn’t the anonymity, said Kendra Thompson, a Toronto, Canada-based managing director at Accenture Wealth & Capital Markets. Companies don’t yet offer sophisticated advice through these sites. Convenience and cost are the attraction now: some charge as little as 0.15% annually on assets invested, while advisor fees range between 1% and 2% of assets.

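To make that cost gap concrete, here is a minimal Python sketch of the arithmetic, assuming a hypothetical $100,000 portfolio (the portfolio value is an illustrative assumption; only the percentage rates come from the article):

    # Illustrative fee arithmetic on a hypothetical $100,000 portfolio.
    portfolio = 100_000                       # assumed portfolio value in dollars

    robo_rate = 0.0015                        # 0.15% annual fee cited for some robo-advisors
    advisor_low, advisor_high = 0.01, 0.02    # 1%-2% range cited for human advisors

    robo_fee = portfolio * robo_rate
    low_fee = portfolio * advisor_low
    high_fee = portfolio * advisor_high

    print(f"Robo-advisor:  ${robo_fee:,.0f} per year")                  # $150
    print(f"Human advisor: ${low_fee:,.0f}-${high_fee:,.0f} per year")  # $1,000-$2,000

At those rates, the robo-advisor costs $150 a year where a human advisor would charge $1,000 to $2,000 on the same assets, which is the convenience-and-cost pitch Thompson describes.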

However, that is likely to change, she said. In Asia, the demand for digital investment tools is growing exponentially. Elsewhere, the demand for more unbiased automated long-term advice is expanding, but it’s mostly coming from younger savers.

A 2014 survey from Fidelity Investments found that one in four people born between 1980 and 1989 trust “no one” for money-related information, while a Bank of America report said that affluent millennials are more likely to place a “great deal” of faith in technology compared to other generations, “and this is no different in financial advisory services”.

People who have a good relationship with an advisor will open up, Thompson said, but it’s still hard for people not to feel judged.

“There are people who might say ‘I don’t get where the recommendations are coming from’ or ‘I don’t know why the advisor is asking me these questions’,” she said. “That’s the powerful thing about these tools – you can play around with them without feeling like you’re exposing yourself.”

A robot is still a robot

While automated devices may seem more trustworthy than humans, it’s important to keep in mind that robots are still machines and they can be manipulated by the end user.

Alan Wagner, a social robots researcher at Georgia Tech Research Institute in Atlanta, Georgia, ran a study where he simulated a fire in a building and asked people to follow a robot to safety. The robot, though, took them into wrong rooms, to a back door instead of the correct door, and (by design) it broke down in the middle of the emergency exit.

Yet, through all of that, people still followed the robot around the building hoping it would lead them outside. This study proved to Wagner that people have an “automation bias”, or a tendency to believe an automated system even when they shouldn’t.

“People think the system knows better than they do,” Wagner said. Why? Because robots have been presented as all-knowing. Previous interactions with automated systems have also worked properly, so we assume that every system will do the right thing.

As well, since robots don’t react or judge what someone says, our own biases get projected onto these automated beings and we assume they’re rooting for us no matter what, he said.

However, Wagner says it’s important to remember that someone – a mutual fund company, an advisor – is controlling the bot in the background and they want to achieve certain outcomes. That doesn’t mean people shouldn’t be truthful with a robot, but these systems are fallible.

“You have to be able to say that right now I shouldn’t trust you, but that’s extremely difficult,” Wagner said.
