Play Book Tag discussion

Supremacy: AI, ChatGPT, and the Race that Will Change the World
2024: Other Books > Supremacy by Parmy Olson - 4 stars (BWF)


Joy D | 10069 comments Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson - 4* - My Review

“The race to build the AGI (Artificial General Intelligence) had started with a question: What if you could build artificial intelligence systems that were smarter than humans? The two innovators at the forefront grappled with the answer as their quests turned into a heated rivalry. Demis Hassabis believed that AGI could help us better understand the universe and drive forward scientific discovery, while Sam Altman thought it could create an abundance of wealth that would raise everyone’s living standards.”

Parmy Olson, a journalist specializing in technology, examines the rise of artificial intelligence tools, particularly chatbots, and the people behind them. The author initially focuses on DeepMind and OpenAI, the two primary firms, and the later involvement of Google and Microsoft. She starts with the founding CEOs, Demis Hassabis of DeepMind and Sam Altman of OpenAI, both of whom planned to use their products for the greater good of humanity. The author then tracks the ways their ideals ended up intertwined with the tech industry giants, and the results.

Olson explores two main issues surrounding AI – the possibility that AI will get out of hand and harm humans (safety concerns) and the possibility that these systems will replicate existing social biases or be used to manipulate people (ethical concerns). To some extent, she also examines large language models and how they are trained and learn. One of the primary drivers fueling these concerns is the same one we have seen wreak havoc in recent years – namely, that profit-driven corporations will use technology, along with personal data and habits, to increase their wealth at the expense of social welfare. We have, of course, seen this occur repeatedly throughout history, where short-term profits are valued over long-term impact. One recent example is social media, which tech companies have used to gain huge advertising revenues while enabling misinformation to spread unchecked.

It is not really a surprise that one of the most salient points in Supremacy is that big money and power tend to win out over altruism. Entrepreneurial efforts repeatedly get swallowed up by the tech giants. The author points out that there is a fundamental lack of transparency among the corporate-owned AI tools in the current technology arena. This is a concern for all of us. Olson calls for the use of tools that are above board about their learning models and whose makers care about ensuring that their systems do not cause harm. Luckily, a few altruistic attempts are still being made. The example provided by the author is Anthropic and its chatbot, Claude, which is funded by philanthropists. Artificial intelligence is already here, and further developments are coming, so in my opinion, it is important to understand what is going on with this technology.

PBT September BWF Extra S - fits letter not tag


message 2: by Theresa (new)

Theresa | 15499 comments It is so important to learn what you can about the tech and its issues because they are already impacting our lives and will continue to do so. One of the CLE courses I took to renew my license last month was about AI and related issues from a lawyer's practice point of view. There has already been a case where a lawyer used ChatGPT to write a brief and all the cases cited in it turned out to be fake -- when opposing counsel went to review them, they could not be found. Appalling, and he's lucky his license was not suspended.

I'm already drafting memos to send to my board clients about not relying on AI, ChatGPT etc. when drafting communications.

Of course it's all good for lawyers, because the legal issues will frequently be of first impression and cutting edge (which is exciting and challenging), and our level of due diligence and fact checking escalates - and we bill for it all.


message 3: by Joy D (last edited Sep 29, 2024 04:40PM) (new) - rated it 4 stars

Joy D | 10069 comments From my experience using chatbots, they are not something I would ever rely upon to write anything for me. One thing that many people don't know is that not everything the chatbots write is true. I've actually been "training" the chatbots by letting them know when they give me a false answer! It's called "hallucinating," and machines do it when they don't know the answer. I can't imagine a lawyer actually turning in a brief without checking the content.

There are bound to be legal issues with it, especially in the field of ethics. Personally, I'm not so worried about machines destroying the world (that seems mostly the stuff of science fiction), but I am worried about ethics, especially where money and power are involved.

One thing I would have loved to see explored further in this book is how massive data and information models do not really replicate human thinking. They are just vast knowledge bases, and not particularly analytical. It was outside the scope of this particular book, but it is fascinating. If you are interested, this book gives some great insights: A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins.

