AI of the Beholder
I submitted my dissertation to external examiners last week. It occurred to me only several days later that this was probably a meaningful milestone. I have written something that is (hopefully) only minimal edits away from being my second doctoral dissertation. Cool.
AI as a concept is central to the dissertation, and one of the threads I set up in the literature chapter is the idea that the meaning of emerging technologies is unstable and gets stabilized at the social level. AI can be so many things because people imagine it to be many things. So what is AI? Or perhaps more accurately, what do people imagine AI to be? I'd argue for systems characterized by learning, autonomy, and communicativeness.
I'm reflecting on a vignette I captured while attending a meeting between the developers and people from the hospital. It didn't make it into the dissertation, but it stuck with me. One of the developers described some cascading prioritization logic that was built into the OR scheduling AI system. A surgeon listening in got upset and said something like, "Then it's doing it wrong. It's learning wrong!" That exchange stayed with me because, in that moment, I realized the surgeon had never fully understood the structure of the system, or where the learning and the doing were actually happening.
But let me back up and tell you about the technology. The system consists of a prediction module for the duration of surgery and the risk of complications (and therefore time extensions). This is the bit with the machine learning algorithms, hoovering up lots of data, etc. The predicted time feeds into an optimization module that recommends OR schedules minimizing OR idle time, OR overtime, and unassigned cases. So basically, it recommends schedules where the cases plus cleanup time stack up as close to an 8-hour shift as possible, while accounting for all the prioritization rules for cases.
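To make that concrete, here is a minimal sketch of how I understand the two modules fitting together. All of the names, the greedy packing heuristic, and the specific numbers are my own illustrative assumptions, not the vendor's actual implementation.

```python
# Sketch of the two-module architecture: an ML predictor feeding a
# non-learning scheduling heuristic. Everything here is assumed for illustration.

from dataclasses import dataclass

SHIFT_MINUTES = 8 * 60     # target: fill an 8-hour OR shift
CLEANUP_MINUTES = 30       # assumed turnover time between cases

@dataclass
class Case:
    case_id: str
    priority: int              # lower number = higher priority (assumed rule)
    predicted_minutes: float = 0.0

def predict_minutes(case_features, model) -> float:
    """Prediction module: a fitted ML regressor estimates surgery duration
    (the real system also predicts complication risk / time extension)."""
    return float(model.predict([case_features])[0])

def recommend_schedule(cases: list[Case], n_rooms: int) -> dict[int, list[Case]]:
    """Optimization module: computational and heuristic, not a learning algorithm.
    Greedily packs cases into rooms so case + cleanup time stacks up as close
    to the 8-hour shift as possible."""
    rooms: dict[int, list[Case]] = {r: [] for r in range(n_rooms)}
    used = {r: 0.0 for r in range(n_rooms)}
    # Cascading prioritization: higher-priority (and longer) cases placed first.
    for case in sorted(cases, key=lambda c: (c.priority, -c.predicted_minutes)):
        block = case.predicted_minutes + CLEANUP_MINUTES
        candidates = [r for r in rooms if used[r] + block <= SHIFT_MINUTES]
        if not candidates:
            continue  # unassigned case; the real optimizer penalizes these
        # Best fit: pick the room that leaves the least idle time afterwards.
        best = min(candidates, key=lambda r: SHIFT_MINUTES - (used[r] + block))
        rooms[best].append(case)
        used[best] += block
    return rooms
```

The point of the sketch is the split: the prediction function is where the model lives, while the scheduling function is just deterministic packing over whatever predictions and rules it is handed.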
Continuing from my previous post: the idea of training or learning (past or ongoing) is central to what people imagine AI to be. The learning elements are in the algorithm used to build the model that predicts the duration of surgery, and in the ongoing retraining from actual OR times, which is built into the operational design. All the scheduling, though, happens through the optimizer, which is computational and heuristic but not a learning algorithm. But because it is bundled together with the predictor, the surgeon imagined that the optimizer would learn new prioritization rules from ongoing use. It's a cool imaginary. It's reasonably doable, too. But it's not what this system was.
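To spell out that boundary, here is a sketch of where the learning actually sits, using the same assumed names as the earlier snippet. Only the duration predictor is refit on actual OR times; the prioritization rules are fixed configuration.

```python
# Assumed illustration of the learning boundary, not the actual system.

PRIORITY_RULES = {"trauma": 1, "oncology": 2, "elective": 3}  # hand-set, never learned

def nightly_retrain(predictor, completed_cases):
    """Ongoing learning: refit the duration model on actual OR times.
    'completed_cases' are assumed to carry features and actual durations."""
    X = [c.features for c in completed_cases]
    y = [c.actual_minutes for c in completed_cases]
    predictor.fit(X, y)   # the model's weights update here, and only here
    return predictor

# Nothing in this loop ever touches PRIORITY_RULES or the packing heuristic:
# left to itself, the optimizer will never "learn" new prioritization.
```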
The other aspect of the vignette was the autonomy the surgeon imagined the tool would have in use: updating itself with learned behaviors. And to a degree the system does have autonomy, in that it updates its own weights in the predictive model. But, again, not in the optimizer. Autonomy also relates to the idea of AI for effectuation, in the sense of the system figuring out and executing the precise steps required to complete some task. Hey, someone published something about this a few years before ChatGPT ever launched. You should read that.
The third characteristic of AI, communicativeness, comes from my perception of people (including me) interacting with commercial, consumer-facing AI like ChatGPT and Claude, rather than the employee-facing, process-specialized enterprise AI systems like the one in my study. I'm making an educated guess that future enterprise AI architectures will include a communication module. We expect to be able to interact with "AI" in natural language, beyond fixed charts and KPIs. We want to be able to push back and argue. This is driven by the emerging "dominant design" of LLM modules in consumer AI.
It's important to note that with consumer AI, while the criteria of learning, autonomy, and communicativeness are all met at the user/account level, it's still a corporation-run product-as-a-service, run by people with agendas and expensive habits. So take the notion of autonomy with a grain of salt.