AI: Maximizing the Potentials, Minimizing the Perils


It’s easy to see both the up- and downsides of artificial intelligence.
Just a few upsides: more accurate medical diagnoses; safe, fast automated vehicles; AI-driven instruction that continually adapts its style and pace to the student’s ongoing performance.
On the downside, luminaries such as Bill Gates and Elon Musk worry that self-teaching AI computers could get smart enough that humans won’t be able to stop them from pursuing nefarious ends.
It would seem, per Stuart Russell, author of Human Compatible, that we optimize the risk/reward ratio if we take two steps: 1. Don’t let the computer “know” the goal of the software. 2. Block the computer from making decisions beyond a certain magnitude: when the implications of a decision exceed a certain threshold, require human override. It’s kind of like the car salesperson who has discretion to give a 10 percent discount, but if more seems required, the boss must approve.
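To make that second step concrete, here is a minimal sketch in Python of such a human-override gate. Everything in it, the impact score, the threshold value, and the approval prompt, is an illustrative assumption rather than any real AI framework; it only shows the pattern of auto-approving small decisions and escalating big ones, like the salesperson’s 10 percent limit.

```python
# Minimal sketch of a human-override gate (step 2 above): the system
# acts on its own only when a decision's estimated impact falls below
# a set threshold. The impact score and threshold are illustrative
# assumptions, not part of any real AI framework.

APPROVAL_THRESHOLD = 0.10  # e.g., the salesperson's 10 percent discount limit


def request_human_approval(decision: str, impact: float) -> bool:
    """Stand-in for a real escalation channel (dashboard, pager, email)."""
    answer = input(f"Approve '{decision}' (impact {impact:.2f})? [y/N] ")
    return answer.strip().lower() == "y"


def execute(decision: str, estimated_impact: float) -> bool:
    """Carry out a decision only if it is low-stakes or a human signs off."""
    if estimated_impact <= APPROVAL_THRESHOLD:
        print(f"Auto-approved: {decision}")
        return True
    if request_human_approval(decision, estimated_impact):
        print(f"Human-approved: {decision}")
        return True
    print(f"Blocked pending review: {decision}")
    return False


if __name__ == "__main__":
    execute("give an 8% discount", estimated_impact=0.08)   # proceeds automatically
    execute("give a 25% discount", estimated_impact=0.25)   # escalates to the boss
```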
Another fear about AI is that an evil individual or entity could use it to nefarious ends. A few examples: release a murderous virus into the water supply, threaten to shut down the electric grid unless paid a zillion dollars, or develop an algorithm for manipulating people into voting for Candidate X. (Whoops, that already pretty much exists.) Of course, most powerful things, notably nuclear energy, could be used cataclysmically, yet most experts conclude that, rather than prohibit them, it’s wiser to install in-computer and human oversight.
A similarly moderate stance could apply to genomic research. On the upside, it could help address diseases such as cancer, diabetes, and cardiovascular disease, and create drought-resistant, high-protein, insect-repellent crops. Gene editing might eventually be used to create a super-intelligent human. Yes, that person’s brainpower could be used for social good, but what if the gene editing also left him or her living in physical pain? Or the person could use the hyper-intelligence for personal gain even at great cost to the world. Again, it would seem that regulation, both legal and built-in, might yield the risk/reward sweet spot.
What worries me is that such restrictions may push research to jurisdictions with looser ones. For example, the worldwide consensus has been that human gene editing be conducted only for research, not clinically. He Jiankui defied that by using CRISPR to edit the early embryos of two recently born twin girls in what he said was an effort to prevent them from contracting HIV, and Russian scientist Denis Rebrikov has announced plans to follow suit.
In toto, it’s probably wise to establish restrictions on AI: laws, professional standards, and, more difficult, built-in limitations to the software itself, forcing systems to switch off when the stakes are great or the implications unclear. Social norms and fear of punishment will facilitate research that has a positive risk/reward ratio while restraining less advisable research. Outright bans would likely yield far worse net results, as occurred when religious influences restricted scientific inquiry in the Dark Ages. It seems we must accept that the perfect is the enemy of the good. Despite the likely excesses, it seems wise to bet on humankind: that, on net, we’ll derive positive effects from AI. It certainly will be interesting to watch.