Artificial Intelligence - The Next Big Change Is Already Here…
Among the readers of this newsletter, the range of knowledge about Artificial Intelligence (AI) is likely as broad as on any other topic, or broader. For some, AI probably lives largely as Skynet (the digital intelligence run catastrophically amok in the Terminator movies). For others, it has begun to appear fairly regularly as a topic at the professional conferences you attend or in the publications to which you subscribe. For still others, AI has moved to the center of your data analytics screen, its current and future power clear and attractive: venture capitalists invested a record $115b in AI companies last year, and, as noted in an article that I co-authored early in the pandemic, AI could have fundamentally affected the course of Covid.
Triage in a Pandemic: Can AI Help Ration Access to Care?
So, why worry? Well, a Pew Research Center survey of AI experts from June 2021 found that over two-thirds did not believe that, by 2030, the use of AI would mainly support social good.
I’ve written or participated in writing/podcasting several pieces about AI over the last few years. I also cofounded the Mid-Atlantic AI Alliance (along with Dr Krzys Klaudinski and Cassie Solomon). Links to the articles and to a few other sources appear at the end of this newsletter. I bundle them here with a few comments, both for convenience of reference and, hopefully, to enhance your preparation for the ever more pressing (and important) consideration of this particularly significant technology.
First, a few words about what we’re talking about here. AI enables mining huge sets of data in the service of solving problems with heightened certainty. “Reinforcement learning” entails running many trials to build ‘machine’ expertise in whatever is being mined, e.g., theming customer feedback, shepherding customers along desired paths, sorting images, detecting mood, answering research questions, or even explaining jokes. The pattern-searching algorithms themselves carry the label “deep learning algorithms”. Ever greater computing horsepower (especially ‘the cloud’) has enabled ever larger and more powerful AI efforts: AI programs today can possess over a trillion parameters, with current design aspirations reaching 500 trillion parameters.
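To make the “many trials” idea concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not drawn from any real AI system; the outreach messages and response rates are invented). A simple program tries three hypothetical customer messages thousands of times and, from the observed rewards alone, learns which one works best:

```python
import random

# Hypothetical, invented response rates for three outreach messages.
TRUE_RESPONSE_RATES = {"message_A": 0.02, "message_B": 0.05, "message_C": 0.11}

estimates = {m: 0.0 for m in TRUE_RESPONSE_RATES}  # learned value of each message
counts = {m: 0 for m in TRUE_RESPONSE_RATES}       # how often each has been tried

def run_trial(message: str) -> float:
    """Simulate sending one message and observing a response (1.0) or not (0.0)."""
    return 1.0 if random.random() < TRUE_RESPONSE_RATES[message] else 0.0

random.seed(42)
for trial in range(10_000):
    # Mostly exploit the best-looking message so far; occasionally explore the others.
    if random.random() < 0.1:
        message = random.choice(list(TRUE_RESPONSE_RATES))
    else:
        message = max(estimates, key=estimates.get)
    reward = run_trial(message)
    counts[message] += 1
    # Update the running average reward for the chosen message.
    estimates[message] += (reward - estimates[message]) / counts[message]

print("Learned response-rate estimates:",
      {m: round(v, 3) for m, v in estimates.items()})
print("Best message after 10,000 trials:", max(estimates, key=estimates.get))
```

The point is not the code itself but the pattern: no one tells the program which message is best; it discovers that through sheer repetition, and real systems do the same thing across billions of trials and parameters.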
A few BIG, consequential, even worrying questions that you might ask as you consider AI and the change it can precipitate:
Do we build AI to mimic us or to augment us? The choice isn’t necessarily (or desirably) binary, but, however nuanced, it does matter. Mimicry, for example, leads more easily to replication and replacement. Curiosity, challenge, and hubris will draw us toward pursuit of passing the Turing Test: building a machine whose responses are indistinguishable from a human’s. Not just Luddites might recommend otherwise.
Who builds and maintains AI? Universities likely cannot: current estimates of the cost just of maintaining large AI programs run as high as $1b/year. That leaves large organizations such as Google and Microsoft, and nations, singly or in alliances. The current and increasing concentration of wealth, together with its organized and often predictably self-serving influence on government, thereby raises the specter of still more concentration of economic and political power. A formidable and scary possibility presents itself: a world combining Facebook marketing analytics and Orwell.
Who oversees the AI world? The ‘R’ word, “regulation”, fits here, because the answer to this question derives from attempting to identify and secure the ‘common good’. Challenges to such an attempt include understanding extremely complicated programming, programming potentially capable of changing its own programming, and programming already difficult to monitor because it can be buried deep (perhaps inaccessibly deep) within the cloud. Garbage In, Garbage Out (GIGO) arrived as a caution along with the first computers in the mid-20th century; it refers to the dangers of flawed data. GIGO’s fellow travelers include faulty programming, hacks, and even so-called machine ‘hallucinations’. To quote British entrepreneur Ian Hogarth, “We’re building a supercar before we have invented the steering wheel.”
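To see GIGO in miniature, here is a purely illustrative Python sketch (the records, scores, and screening rule are invented, not taken from any real clinical or commercial system). The same toy rule is “learned” twice, once from clean records and once from records containing a single mislabeled case, and the flawed data quietly changes who gets flagged:

```python
# Toy "Garbage In, Garbage Out" demonstration with invented numbers.

def learn_threshold(records):
    """Toy 'model': flag scores above the midpoint of the two class averages."""
    flagged = [score for score, needs_follow_up in records if needs_follow_up]
    cleared = [score for score, needs_follow_up in records if not needs_follow_up]
    return (sum(flagged) / len(flagged) + sum(cleared) / len(cleared)) / 2

clean_records = [(0.2, False), (0.3, False), (0.4, False),
                 (0.6, True), (0.7, True), (0.8, True)]
# The same records plus one mislabeled high-risk case: the garbage in.
garbage_records = clean_records + [(0.9, False)]

clean_threshold = learn_threshold(clean_records)      # 0.50
garbage_threshold = learn_threshold(garbage_records)  # about 0.58

patient_score = 0.55  # a borderline case
print(f"Clean data:  threshold {clean_threshold:.2f}, flagged: {patient_score > clean_threshold}")
print(f"Flawed data: threshold {garbage_threshold:.2f}, flagged: {patient_score > garbage_threshold}")
```

One mislabeled record in seven is enough to change the outcome for a borderline case; scale that up to millions of records buried inside an opaque model and the oversight challenge above comes into sharper focus.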
In summary, AI comes with
huge potential upside
loads of risk
big questions
Hence, AI also comes with a clear need for thoughtful, ethical change leadership. Please join in. Ideally, this brief piece will help you to do so.
For further reading:
By me:
Triage in a Pandemic: Can AI Help Ration Access to Care?
(article and podcast) Written near the beginning of the pandemic (March 2020), it laid out AI’s power to speed up clinical learning dramatically, to the marked benefit of patients and clinicians alike.
6 Steps to AI Governance in Healthcare
(4/20/22) Presents the importance of governance, or oversight, of AI and lays out 6 steps for setting it up in healthcare, steps highly relevant to AI governance elsewhere.
Great Summer Reads: Books AI Experts Recommend
An invited, annotated listing of key AI references by AI ‘experts’, including, appropriately or not, moi.
By others:
I found the following pieces of particular use in developing this newsletter and have included information from them in it:
Huge "Foundation Models" are Turbo Charging AI Progress
The Amazing Possibilities of Healthcare in the Metaverse
The Promises and Challenges of AI
Two Washington Post articles:
The Military Wants AI to Replace Human Decision-Making in Battle
Former Google Scientist says the computers that run our lives exploit us - and he has a way to stop them
Keep paddling.