Yossi Hoffman > Yossi's Quotes

Showing 1-7 of 7

  • #1
    Sayash Kapoor
    “... for the longest time, we have used the form of a piece of content to determine whether it is legitimate and credible, but that proxy is no longer available to us.”
    Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

  • #2
    Margaret A. Boden
    “In a paper published in the philosophy journal Mind, Alan Turing described what's called the Turing Test. This asks whether someone could distinguish, 30% of the time, whether they were interacting (for up to five minutes) with a computer or a person. If not, he implied, there'd be no reason to deny that a computer could really think.

    That was tongue in cheek. Although it featured in the opening pages, the Turing Test was an adjunct within a paper primarily intended as a manifesto for a future AI. Indeed, Turing described it to his friend Robin Gandy as light-hearted "propaganda," inviting giggles rather than serious critique.”
    Margaret A. Boden, AI: Its Nature and Future

  • #3
    Margaret A. Boden
    “(Hofstadter adds that a dearly loved person can still exist after bodily death. The self of the "lost" person, previously fully instantiated in their brain, is now instantiated at a less fine-grained level in the brains of the loving survivor/s. He insists that this isn't merely a matter of "living on" in someone's memory, or of the survivor's having adopted some of the other's characteristics, e.g. a passion for opera. Rather, the two pre-death selves had interpenetrated each other's mental lives and personal ideals so deeply that each can literally live on in the other. Through her widower, a dead mother can even consciously experience her children's growing up. This counter-intuitive claim posits something similar to personal immortality—although when all the survivors themselves have died, the lost self is no longer instantiated. Lasting personal immortality, in computers, is foreseen by the "transhumanist" philosophers: see Chapter 7.)”
    Margaret A. Boden, AI: Its Nature and Future

  • #4
    Arvind Narayanan
    “While generative AI is modestly but meaningfully useful for a large number of people, it is more profoundly significant for some. An app called Be My Eyes connects blind people to volunteers who assist them in moments of need. The app records the user's surroundings through the phone camera, and the volunteer describes it to them. Be My Eyes has added a virtual assistant option that uses a version of ChatGPT that can describe images. Of course, ChatGPT isn't as helpful as a person, but it is always available, unlike human volunteers.”
    Arvind Narayanan, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

  • #5
    Arvind Narayanan
    “The surprising thing is not that chatbots sometimes generate nonsense but that they answer correctly so often.”
    Arvind Narayanan, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

  • #6
    Arvind Narayanan
    “Another myth is that tech regulation is hopeless because policymakers don't understand technology. In reality, policymakers aren't experts in any of the domains they legislate. They don't have degrees in civil engineering, yet we have construction codes that help ensure that our buildings are safe. The fact is that policymakers don't need domain expertise. They delegate all the details to experts who work at various levels of government and in various branches. The two of us have been fortunate enough to consult with many of these experts, and they tend to be extremely competent and dedicated. Unfortunately, there are too few of them, and the understaffing of tech experts in government is a real problem. But the idea that heads of state or legislators need to understand technology in order to do a good job is utterly without merit and reveals a basic misunderstanding of how governments work.”
    Arvind Narayanan, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

  • #7
    Arvind Narayanan
    “In one extreme case, U.S. health insurance company UnitedHealth forced employees to agree with AI decisions even when the decisions were incorrect, under the threat of being fired if they disagreed with the AI too many times. It was later found that over 90 percent of the decisions made by AI were incorrect.

    Even without such organizational failure, overreliance on automated decisions (also known as "automation bias") is pervasive. It affects people across industries, from airplane pilots to doctors. In a simulation, when airline pilots received an incorrect engine failure warning from an automated system, 75 percent of them followed the advice and shut down the wrong engine. In contrast, only 25 percent of pilots using a paper checklist made the same mistake. If pilots can do this when their own lives are at stake, so can bureaucrats.

    No matter the cause, the end result is the same: consequential decisions about people's lives are made using AI, and there is little or no recourse for flawed decisions.”
    Arvind Narayanan, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference


