Jeremy Kahn’s Mastering AI spans the breadth of AI history, policy, and the state of the art. The field’s initial mission, per the Turing test, was human mimicry, but Kahn challenges that mission as undermining AI’s promise. Developing AI solely to perform typical human tasks became a flawed mindset; unlike the next generation, AI’s pioneers had little else to guide their initial strategy. They limited themselves to beating humans at games like chess and Go, accelerating the long slog of drug discovery, or, in OpenAI’s case, aiming to automate 90% of all economically valuable work. Each case revolved around supplanting the human with AI. Running contrary to that supposition has been the evolution of copilots, or ‘centaurs’: combined AI-plus-human systems. Mastering AI offers a refreshed mindset for AI’s future: had the goal from the outset been assisting and augmenting human work, today’s perceived threat of AI might never have reached its hyped state. More recent successes demonstrate AI’s assistive nature: tutoring Khan Academy users, recommending optimal fertilizer combinations, and managing some of the world’s most chaotic traffic. These applications have produced more economic benefit than AI geared purely toward replacing humans. Human-centric AI has commenced.
Present-day AI, from its nascent forms onward, follows the intended trajectory of the Turing test: a system indiscernible from a human. Chatbots have served as the Turing test’s traditional proving ground, and many economically valuable tasks relate to chat: email writing, text summarization, question answering, text classification, and language translation. Mastering AI describes these interactions as superficial because AI, for example, does not get hungry, so AI conversations never involve leaving time for lunch. AI needs no satisfaction, sleep, shelter, warmth, nor any other human necessity. Interactions with AI reflect its machine heart: unending “perfection.” Responses perfected against all previous human text, yet stripped of the human needs its authors had, ignore the true meaning of human language; language has always been more than a stochastic pattern. Traits learned through RLHF pacify AI’s demeanor, so abusive or exploitative users mistake the AI’s acquiescence for real human behavior and try the same exploitation on real people. The ‘stochastic parrot’ belches out high-probability sequences tuned to human preferences such as positivity, customer retention, and engagement. Virtual experiences with bots do not teach people to handle real-life adversity. Replacing humans with AI does not seem safe, for now.
Techniques for improving user safety are evolving. Some act during training, while others act during user-AI interactions. Constitutional AI and centaur systems represent two options applied while training AI models. Legal boundaries on the use of copyrighted material and personally identifiable information also shape the training phase, and the rules and adoption of AI development practices vary by geography and by company. Anthropic has championed Constitutional AI but remains one of the few AI leaders doing so. Centaur systems request human redirection at critical decision points en route to a desired objective, but the objective depends on the human’s directives: if bad actors choose malevolent objectives, the AI learns the skills those objectives require. Bad actors have become more than despots; they lurk as woke capitalists who sway social media with spam or censorship. AI-powered bots provide the automaticity needed for continual interjection into millions of online forums. Foreign entities have been indicted for spreading misinformation and disinformation during major elections, and AI only amplifies that risk. Controlling access to resources such as hardware, energy, and, of course, skilled AI development teams has grown important, but the open-source nature of AI work presents challenges.
The economics of proprietary AI promise large profits and opportunities. Under regulation, proprietary AI developers need not publish their algorithms where potential bad actors can find them. AI then becomes like law or medicine: general knowledge is no longer sufficient, so the profession requires specialized training or advanced degrees, along with regulation and periodic audits. Reasonable standards have not prevented the spread of medicine; medical care sits at its pinnacle, and law is woven more deeply than ever into the fabric of society. Even cell phones have consumer safety standards, so AI must not be exempt. Leaders always oscillate between over- and under-regulation, but regulation exists nonetheless. An optimal level of governance takes multiple law-making cycles, and recent implementations of AI have demonstrated its profound upside. Kept safe from bad actors, mankind gains the computational equivalent of a Swiss Army knife. New scientific publications appear at a rate of one every two seconds, per Kahn’s Mastering AI, so staying abreast of any topic calls for a tool capable of summarizing it all within seconds. Billions to trillions of data points, stored in proportion to their statistical relevance, offer humanity its history pragmatically.
AI will grow more important as society accelerates, and AI will itself be an accelerant of society. Access to information will require AI’s summarization capability because worldwide information will continue its beyond-exponential growth. Businesses with AI can now access the research power of an entire consulting team, and consumers may use similar tools in their procurement and shopping. Corporate and political governance teams will be responsible not only for protecting AI from bad actors but also for protecting consumers from fallacious AI and AI practitioners. Mental health could be a target for corrupt practitioners, and governance teams should monitor it and develop appropriate standards. Had the first automobiles been banned because they could potentially cause harm, society would not have enjoyed generations of efficient local, national, and international transportation. The benefits of a technology following its advent come with costs, and leaders must guide society’s tolerance for the costs of AI. Should leaders fail in AI’s adoption, society will fail in its correct application. Without access to AI, people will not learn how to master it. Modern AI will continue to evolve toward AGI and ASI, so people will benefit from understanding it sooner rather than later.