Kindle Notes & Highlights
by
Mariya Yao
Read between
September 7 - September 11, 2018
Approaches that work well for solving narrow problems do not generalize well to tasks such as abstract reasoning, concept formulation, and strategic planning—capabilities that even human toddlers possess but our computers do not.
There is endless debate on ANI (artificial narrow intelligence) vs. AGI (artificial general intelligence). If you watch AI movies, you will start believing that AGI is close. From everything I have read about AI and from trying it out myself, I think AGI is farther away than ever. There may be several reasons, but my main one is that we don't even fully understand human intelligence.
A common pattern observed in both academia and industry engineering teams is their propensity to optimize for tactical wins over strategic initiatives. While brilliant minds worry about achieving marginal improvements in competitive benchmarks, the nitty-gritty issues of productizing and operationalizing AI for real-world use cases are often ignored. Who cares if you can solve a problem with 99 percent accuracy if no one needs that problem solved?
This is a real problem we will face as more and more applications are built. We need to manage expectations regarding accuracy and continuously work to improve it in deployed applications.
the machine learning code is only a small fraction of any AI system. Critical components such as data management, front-end product interfaces, and security will still need to be handled by regular software.
There is another debate on whether AI will replace traditional software development. In my opinion, it is a little too early to have that debate.
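The highlight's point, that the model itself is a small fraction of an AI system, can be made concrete with a sketch. This is a hypothetical example, not from the book: all function names (`handle_request`, `validate_input`, `authorize`, `predict`) are made up for illustration. Notice how little of the code is the actual model call, and how much is the "regular software" around it.

```python
# Hypothetical sketch of an AI service endpoint. Only predict() is
# "machine learning code"; the rest is ordinary software engineering.

def authorize(api_key):
    # Security: reject unauthenticated callers (placeholder check).
    return api_key == "secret-demo-key"

def validate_input(record):
    # Data management: schema and type checks before inference.
    features = record.get("features")
    if not isinstance(features, list) or not features:
        raise ValueError("features must be a non-empty list")
    if any(not isinstance(x, (int, float)) for x in features):
        raise ValueError("features must be numeric")
    return record

def predict(features):
    # The ML component: a trivial stand-in scoring function.
    return sum(features) / len(features)

def handle_request(api_key, record):
    # Orchestration: auth -> validation -> inference -> response shaping.
    if not authorize(api_key):
        return {"status": 403, "error": "unauthorized"}
    try:
        record = validate_input(record)
    except ValueError as err:
        return {"status": 400, "error": str(err)}
    return {"status": 200, "score": predict(record["features"])}
```

Even in this toy version, roughly one line in twenty is the model; front-end interfaces, monitoring, and storage would add still more non-ML code in a real system.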

