Who Can You Trust?: How Technology Brought Us Together – and Why It Could Drive Us Apart
Is it up to Facebook to emphasize and provide information about the original news sources for news articles? Should they do more to contain hoax stories?
When it comes to trust in distributed systems, we need to know who will tell the truth about a product, service or piece of news, and who to blame if that trust is broken.
‘Trust has two enemies, not just one: bad character and poor information.’
‘I think rebuilding trust is a stupid aim. [Instead] I would aim to have more trust in the trustworthy but not in the untrustworthy. In fact, I aim positively to try not to trust the untrustworthy.’
Trust is not the same as trustworthiness.4 Encouraging generalized trust simply for the sake of creating a more ‘trusting society’ is not only meaningless, it’s dangerous.
Instead, all of us making decisions about trust should be looking at the who, where and why of trustworthiness.
trust signals. These are clues or symbols that we knowingly or unknowingly use to decide whether another person is trustworthy or not.
So the trick to appearing trustworthy? Look slightly happy.
So technology might hold people more accountable, but to what degree can technology enhance our assessment of who is trustworthy and who isn’t?
But Murnighan’s experiment also sheds light on the power of online social connections – the wisdom of ‘friends’ can automatically enhance our ability to trust people we do not know.
trust really lies within the group with the expertise (the babysitters) rather than the group with a similar need (the parents).
In other words, each person on Facebook is connected to every other person by an average of three and a half other people.
the three traits of trustworthiness are the same: Is this person competent? Is this person reliable? Is this person honest?
if I trust you, it’s because I believe that you are going to take my interests seriously – whether it be for friendship, love, money or reputation. Why? You won’t take advantage of me because it benefits you not to do so. ‘You value the continuation of our relationship, and you therefore have your own interests in taking my interest into account,’ Hardin writes in Trust and Trustworthiness.20 For instance, I trusted the estate agent who recently sold my house to get a good price not because she was nice or cared about me but because her commission was directly tied to the sale price.
We often mentally ask of someone, ‘Do I trust you?’ A better question is ‘Do I trust you to do x?’ We need to think of trust as trusting someone to do something.
it’s an indicator of reliability in some weird way. If this person is really slow to respond, will they show up on time? Are they really interested?’ Perkins is right; time is often used as an indicator of reliability.
‘Any trust mechanism, even asking a friend about someone, involves some level of intrusiveness. I think the dividing line is how you do it. Are you doing it ethically? Are you doing it transparently?
drug users and drug retailers – who have been stereotyped as “untrustworthy”, who don’t have a reputation for being the most reliable people. So how do they create highly functioning markets that are non-violent and self-regulate based on trust?’
the ratings we rely on to make assessments are often not an accurate reflection of the experience.
whether limiting people to a score or star rating is really that helpful. Many marketplaces are now asking people to rate against a particular trait that is more relevant to trust in a specific context.
On 16 October 2015, in Washington, DC, Amazon sued 1,114 individuals for selling positive five-star reviews to Amazon sellers and Kindle authors.
Orlando Figes, a critically acclaimed and prize-winning author, who has written eight books. In 2010, he was caught posting damaging critiques of his rivals’ books on Amazon.
The first is credit history. For example, does the citizen pay their electricity or phone bill on time? Do they repay their credit card in full? Next is fulfilment capacity, which it defines in its guidelines as ‘a user’s ability to fulfil his or her contract obligations’. The third factor is personal characteristics, which is verifying personal information such as someone’s mobile phone number and address. But it’s the fourth category, behaviour and preferences, where it gets interesting and, some might say, more sinister.
So if a citizen is buying socially approved items, like baby supplies or work shoes, their score rises. But if they’re buying Clash of Clans, Temple Run 2 or any video game, and thus looking like a lazy person, their score takes a negative hit.
Sharing what Sesame Credit refers to as ‘positive energy’ online, nice messages about the government or how well the country’s economy is doing, will make your score go up.
The social pressure to conform to the party line and avoid any form of dissent will be immense. Negative or even contrary opinions will have no place. It’s mind-blowing to imagine the sameness this system encourages, how it will stamp out individualism. Who will dare to speak out?
For instance, people with low ratings will have slower internet connectivity; restricted access to more desirable restaurants, nightclubs or golf courses; and the removal of the right to travel freely abroad with, I quote, ‘restrictive control on consumption within holiday areas or travel businesses’.
Uber ratings are nothing compared to Peeple, an app launched in March 2016, which is like a Yelp for humans.33 It allows you to assign ratings and reviews to everyone you know – your neighbour, your boss, your teacher, your spouse and even your ex.
In February 2017, China’s Supreme People’s Court announced that 6.15 million people in the country had been banned from taking flights over the past four years for social misdeeds.
63 per cent of all fake goods, from watches to handbags to baby food, originate from China.
Surprisingly, very few people ask the more pertinent question, ‘Could this happen in the Western world?’ Or rather, when can we expect it?
In November 2015, Victor Collins, a police officer from Arkansas, was found floating dead in the hot tub of his friend James Andrew Bates, who became a suspect. Two years later, attorney Nathan Smith, the lead prosecutor in the first-degree murder trial, ordered Amazon to hand over the audio recordings from Bates’s digital assistant, used in the Echo speakers in his home.
how can you know when your always-connected digital assistant is recording what you say?
Uber has a tool it rather ominously calls ‘God View’. Until recently, it allowed all employees to access and track where and when any Uber rider travels to or from, in real time and without obtaining any kind of permission.
three robotic assistants would help a group of participants (‘real’ humans) to make omelettes by passing eggs, oil and salt. Bert A was super-efficient and faultless but couldn’t talk. Bert B was also mute but not perfect, dropping some of the eggs. Bert C was the clumsy bot above but he had facial expressions and could apologize for his mistake.
People trust a robot that is more human-like over one that is mute but significantly more efficient and reliable.
we are no longer trusting machines just to do something but to decide what to do and when to do it.
the Turing test: can we create intelligent machines that exhibit behaviour indistinguishable from human behaviour? Turing said that when you were convinced you couldn’t tell a computer and human apart during a conversation, the computer would have passed the test.
The point that Gates, Hawking and Musk all make is that there will come a time when we will no longer be able to predict the machines’ next moves.
DoNotPay is a free legal bot that will challenge unfairly issued parking tickets.
How do we prepare ourselves for a future where our children might say ‘I have fallen in love with a bot’?
the very nature of AI means that the only way it can learn is through interactions with us – the good, the bad and the ugly.
When ‘thinking machines’ are smart enough to perform any intellectual feat a human can, or ultimately well beyond, AI becomes known as AGI (Artificial General Intelligence).
Tay is an illustration of how in a world of distributed trust, technological inventions like chatbots learn from all of us, but not in equal measure. Bots will learn from the people who are louder and more persistent in their interactions than everyone else.
The Tay debacle, however, raises serious questions about machine ethics and whose job it is to ensure that the behaviour of machines is acceptable.
‘I think all bots should be required to have an authenticated identity so we can trust them,’
‘All of us need to consider who manufactures that drug, why they are selling it to us, what are the benefits and detriments.’ In other words, it’s important to know something about the intentions of the bot creators.
‘The trust we have in technology is linked to the entity that produced that technology,’
‘Bots need licence plates that carry information about who built the bot, where it came from and who the party responsible for it is.’ In other words, if we have a way to look inside a program, see what is going on in the bot’s ‘brain’,
Would you trust a bot to replace a teacher’s mind when grading papers? Would you trust a robot to put out a fire? How about trusting a robot as a caretaker for your elderly parents? Would you trust the robot waiting for you when you get home from work to have done its chores and made dinner? How about representing you on a legal matter? Would you trust a bot to diagnose your illness correctly or even perform surgery where there might be complications? Or to drive you around in a car?