FINAL PROPOSAL
AI Scoring System for Patient Triage and Resource
Allocation
GRAD 50200 Fall 2024 – Team Project Group 7
A. Problem Statement and Project Overview
Modern hospitals are constantly challenged to manage patient inflow while ensuring that
their scarce critical resources, such as emergency beds, wards, ICUs, medical staff, and
equipment, are used to their full potential. The issue is amplified during emergencies and
pandemics, when many patients need immediate treatment and surgery. As a result, the
healthcare system experiences delays, inaccurate triage decisions, and inefficient resource
use (Esteva et al., 2019; Topol, 2019).
The domain of this proposal is the healthcare industry, specifically hospital administration
and emergency response. The principal in this concept is the administration department of
a hospital network or chain (e.g., HCA Healthcare, Universal Health). There are three main
goals for the prospective principal(s):
To develop a patient triage system that is accurate, timely, and resource efficient.
To account for patients' characteristics systematically when selecting them for
treatment, so that decisions are unbiased and consistent.
To enable the principal to monitor and optimally use their limited resources (Jiang et al.,
2017).
B. Proposed AI Application: AI Scoring System for Patient Triage
Main Goals: The proposed AI application is an electronic patient health record-based scoring
system. The main goals of this application are to:
Accurately assess patients' medical conditions upon their admission to the hospital.
Prioritize patients based on risk scores derived from their health conditions,
symptoms, and other data captured at admission.
Recommend the allocation of available resources, including facilities, medical
equipment, and medical staff, to ensure the patients with the most urgent health
problems are treated first (Jiang et al., 2017; Topol, 2019).
Key metrics to evaluate the effectiveness of the system include patient wait
times, accuracy of resource allocation (fit for purpose), and overall patient experience
(Jiang et al., 2017). In addition, medical personnel will have to undergo specialized
training to ensure effective use of the system's tools (Nguyen et al., 2015).
Key Metrics:
Patient Wait Times:
o Measure the time between a patient’s arrival and when they are seen by a
medical professional.
o Track reductions in wait times after implementing the AI system.
o Evaluate the system's ability to prioritize critical patients without negatively
affecting lower-priority cases.
Accuracy of Resource Allocation (Fit for Purpose):
o Assess whether the AI system allocates the appropriate medical resources
(e.g., ICU beds, specialized staff) based on patient needs.
o Monitor the match between the triage decisions made by the AI and the medical
interventions ultimately required.
o Measure the reduction in instances of over- or under-utilization of critical
resources due to misclassification of patient urgency.
Overall Patient Experience:
o Collect feedback from patients on their perceived quality of care, focusing on
whether they feel their needs were prioritized appropriately.
o Use surveys to assess patient satisfaction with the speed, transparency, and
perceived fairness of triage decisions.
o Track metrics such as length of stay and recovery outcomes to gauge the
overall effectiveness of the AI system in improving patient care.
Medical Personnel Training and Usage Proficiency:
o Track the number of staff trained on the AI triage system and assess their
proficiency through periodic testing or performance reviews.
o Measure the time taken by staff to adapt to using the AI system and how
effectively they integrate it into daily workflows.
o Survey trainees on their sentiment toward, and overall assessment of, the
training program and the AI tools they use.
o Evaluate how confident medical personnel feel in interpreting and acting on AI
recommendations, and address any gaps through additional training (Nguyen et
al., 2015).
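As an illustration of how the first metric could be tracked, the following minimal Python sketch computes mean and median wait times from hypothetical arrival and first-assessment timestamps. The record layout and the timestamps are illustrative assumptions, not part of the proposal:

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical visit records: arrival time and time first seen by a clinician.
visits = [
    {"arrival": datetime(2024, 10, 1, 8, 0),  "first_seen": datetime(2024, 10, 1, 8, 25)},
    {"arrival": datetime(2024, 10, 1, 8, 10), "first_seen": datetime(2024, 10, 1, 9, 5)},
    {"arrival": datetime(2024, 10, 1, 8, 30), "first_seen": datetime(2024, 10, 1, 8, 45)},
]

def wait_minutes(visit):
    """Minutes between a patient's arrival and first contact with a medical professional."""
    return (visit["first_seen"] - visit["arrival"]).total_seconds() / 60.0

waits = [wait_minutes(v) for v in visits]
print(f"mean wait:   {mean(waits):.1f} min")   # -> mean wait:   31.7 min
print(f"median wait: {median(waits):.1f} min") # -> median wait: 25.0 min
```

Tracking these two numbers before and after deployment gives a direct measure of the "reductions in wait times" goal listed above.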
Tasks and Outputs:
Data Gathering: Upon a patient's entry to the hospital, medical staff capture the
patient's information and electronically note complaints, physical condition, and symptoms.
Based on this information, the AI system also pulls data from the electronic
health records (EHR), including the patient's current and past vitals and medical history.
Data Transformation & Analytics: Using trained AI models, the system swiftly
generates a risk score for each patient and communicates it to the assigned medical
staff.
Assessment: The AI system prioritizes patients by their risk score for triage
purposes. It categorizes each patient into one of three pre-defined risk levels,
which it also displays to the medical staff.
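A minimal sketch of how a continuous risk score might map onto the three pre-defined levels. The numeric cut-offs here are purely illustrative assumptions; real thresholds would be set and validated clinically:

```python
def triage_level(risk_score: float) -> str:
    """Map a 0-1 risk score to one of three pre-defined triage levels.

    Thresholds are hypothetical placeholders, not clinically validated values.
    """
    if risk_score >= 0.75:
        return "Level 1 - Critical"
    if risk_score >= 0.40:
        return "Level 2 - Urgent"
    return "Level 3 - Routine"

print(triage_level(0.82))  # -> Level 1 - Critical
print(triage_level(0.15))  # -> Level 3 - Routine
```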
Patient Prioritization: The AI tool uses the assessed risk score to prioritize
patients. These scores allow the system to evaluate what critical attention, staff, and
resources each case requires, identifying critical patients as soon as possible.
Machine learning is used to build the risk scoring model, in which historical
healthcare data are employed to predict patient outcomes given current
symptoms (Jiang et al., 2017). In some cases, deliberate adjustments are introduced
in favor of high-risk groups (e.g., the elderly or chronically ill). This enables the AI
system to triage more efficiently and prioritize patients with an immediate need for
care, while ensuring equity. The AI system is further optimized to learn continuously
from new data, reinforcing its triage capabilities over time.
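To make the scoring idea concrete, the sketch below uses a fixed logistic model over a few binary patient features. In the actual system the weights would be learned from historical EHR outcome data; the feature names and weight values here are illustrative assumptions only. Note the elevated weight for age_over_65, mirroring the deliberate adjustment toward high-risk groups described above:

```python
import math

# Illustrative feature weights; in production these would be LEARNED from
# historical outcome data, not hand-set as they are here.
WEIGHTS = {
    "age_over_65": 0.9,       # deliberate adjustment toward a high-risk group
    "chronic_condition": 0.7,
    "abnormal_vitals": 1.5,
    "chest_pain": 1.2,
}
BIAS = -2.0

def risk_score(patient: dict) -> float:
    """Logistic risk score in [0, 1] computed from binary patient features."""
    z = BIAS + sum(WEIGHTS[feature] for feature, present in patient.items() if present)
    return 1.0 / (1.0 + math.exp(-z))

patient = {"age_over_65": True, "chronic_condition": True,
           "abnormal_vitals": True, "chest_pain": False}
print(f"risk score: {risk_score(patient):.2f}")  # -> risk score: 0.75
```

The same structure generalizes to any learned model: features in, calibrated probability out, which is then handed to the triage-level mapping.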
Resource Allocation: The triage system recommends and notifies the medical
staff of available resources, drawn from a set of beds, physicians/specialists with
specific skills, and medical equipment, according to the level of risk associated with the
patient, to which the medical staff must then respond and act.
In addition to local resource allocation, if the hospital is part of a strategic federation
or a regional chain of interconnected hospitals, the system will also be synchronized
with geo-referenced clusters and, when needed, will alert the nearest available
hospital in the chain if a patient must be diverted, expediting the processes and
preparations there. When local resources are exhausted, patients can be sent to the
next available medical facility in the network or federation via long-distance transport
such as ambulances or helicopters when warranted (Topol, 2019).
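A minimal sketch of the diversion logic: pick the nearest network facility that still has capacity. The hospital names, coordinates, and bed counts are hypothetical, and a real deployment would use live capacity feeds and road or flight travel times rather than straight-line distance:

```python
import math

# Hypothetical hospital network with remaining ICU beds and coordinates.
network = [
    {"name": "Central",   "icu_beds_free": 0, "lat": 41.88, "lon": -87.63},
    {"name": "Northside", "icu_beds_free": 2, "lat": 42.05, "lon": -87.68},
    {"name": "Westgate",  "icu_beds_free": 1, "lat": 41.90, "lon": -88.10},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_with_capacity(lat, lon, hospitals):
    """Closest facility in the federation that still has a free ICU bed."""
    candidates = [h for h in hospitals if h["icu_beds_free"] > 0]
    return min(candidates,
               key=lambda h: distance_km(lat, lon, h["lat"], h["lon"]),
               default=None)

# Central is full, so the patient is diverted to the nearest facility with a bed.
dest = nearest_with_capacity(41.88, -87.63, network)
print(dest["name"])  # -> Northside
```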
Notification and Reporting: An attending physician is then assigned and
notified. All the information captured, along with the assigned resources and the
analysis produced by the AI-based triage system, is handed over to the attending
physician, who recommends the next course of action (Nguyen et al., 2015).
Patient Medical History Summary: To improve the credibility and safety
of AI diagnosis, a tool that shows a patient's medical record will be
implemented. This is important because it gives the medical staff and the
attending physician a snapshot of the patient's historical medical information
and underlying conditions that may guide the outcome, diagnosis, and
treatment strategy for the current emergency (Topol, 2019).
C. Challenges in Achieving Direct Alignment
Several challenges may arise in aligning this AI tool directly with the principal's goals:
Complexity of Goals: The AI application must deliver prompt responses,
ensure safety, and maintain a high quality of care in the assignment of medical
resources, i.e., avoid both under- and over-estimation of risk and resource needs.
Aligning these with the principal's actual goals is tricky and requires intricate modelling
and algorithms; in many cases human judgement is still relied upon on a case-by-case
basis (Topol, 2019).
Data Quality: As with other AI systems, the quality of the generated assessments is
contingent on the volume and quality of the data processed. Incomplete or
insufficient data may lead to flawed judgments about a patient's condition, with the
AI system underestimating the risk associated with sociodemographic factors or
physiological characteristics, misaligning it with the principal's motives (Esteva et
al., 2019).
Transparency and Explainability: For physicians to trust the conclusions drawn and
recommendations made by the AI system, and to have the confidence to implement
them effectively, they need to know the context in which those outputs are generated.
With unstructured data and neural networks this may be infeasible, yet the principles
behind the training, the concepts behind the inner-layer functions, and the output
process must be communicated. The decision process behind the AI model must
therefore be relatively simple and logical for its audience to follow (Nguyen et al., 2015).
Handling Challenges during Mass Casualty Events: During mass casualty
incidents, time-critical decisions may need to be made. The manual effort of
applying the human-in-the-loop (HITL) approach may introduce delays in managing,
quantifying, and analyzing the impact of casualties, as one or more medical staff are
required to collect patient data. A possible remedy is the creation of
automated physical tools, sensors, or hardware components that perform preliminary
medical check-ups rapidly and autonomously, albeit with human supervision or
oversight. This would lighten the time pressure on staff and enhance mission-
readiness for AI systems (Nguyen et al., 2015).
Obtaining Patient Consent and Legal Ramifications: When possible, a disclaimer
and consent form may be distributed to patients to communicate that their care pathway
is managed by the AI system. Distributing the disclaimer can enhance patient trust by
fostering transparency and openness about the use of AI in their care. When patients
are informed that an AI system is involved, they become more aware of the risks and
feel a sense of inclusion in the process. This transparency demonstrates ethical
responsibility and reinforces the idea that the AI system is designed to improve their
treatment quality and outcomes (Obermeyer et al., 2019). In some cases requiring
immediate action, it may be impossible to obtain patient consent. Continual
governance oversight, monitoring, and testing of the AI system help address liability
concerns. With regular audits of triage decisions, and controls/provisions for iteration,
it will be possible to recognize disparities and improve system performance, accuracy,
and treatment quality (Esteva et al., 2019).
D. Externalities of the proposed AI Application
The implementation of this AI-based patient triage system may bring a few externalities:
Positive Externalities: The system is expected to improve patient response times,
which can lead to better condition outcomes, avoidance of medical risks, and
ultimately reduced costs for both patients and hospitals, while ensuring patient
information security and privacy. Furthermore, uniform and pragmatic decision-
making, rather than decisions based on opinion alone, may decrease the level of
healthcare disparities (Topol, 2019).
The AI patient triage and resource allocation system also improves patient
outcomes by quickly identifying critical conditions, such as sepsis or stroke, through
real-time data analysis. Faster diagnosis allows for earlier intervention, which is crucial
in life-threatening situations: each hour of delay in sepsis treatment increases
mortality by 7.6% (Singer et al., 2016). Additionally, the system optimizes resource
allocation, ensuring that the most urgent patients receive the necessary care and
improving survival rates during crises (Jiang et al., 2017). By reducing human error
through data-driven decision support, the system enhances the accuracy and speed
of care, leading to better overall patient outcomes (Saver, 2006).
Negative Externalities: The AI system might have the unwanted effect of
reinforcing pre-existing biases in the training data that unfairly favor certain patient
groups over others. There is also the issue of over-trust in AI systems, where
medical personnel could rely too heavily, or solely, on the AI system's
recommendations and potentially overlook important subtleties or tacit knowledge
in patient care (Nguyen et al., 2015).
E. Social Alignment and Recommendations for Developers
Social Alignment: For the proposed AI patient triage system, social alignment happens
when the system is used in a way that its goals match both what makes the hospital
successful and the social values the hospital supports. For example, we see social
alignment when the AI makes choices that not only help the hospital but also promote
fairness and equality and show respect for patients' rights.
F. Challenges in Achieving Social Alignment:
Bias in Training Data: The training data provided by the principal might itself
carry unfair societal biases against patients. In addition, a mandate
from a stakeholder, or a human developer's own strong bias, could get past the
company's efforts to stop it. To match the AI's goals with society's interests, bias
must be removed from the data through extensive exploratory analysis and
preprocessing (Obermeyer et al., 2019).
Equity vs. Efficiency Trade-Offs: Trade-offs between equity and efficiency may
also exist. It is only sensible that underserved patients need more attention, and this is
what the AI system will logically recommend, yet there will be instances where this
is impractical for the hospital, which might then need to halt use of the AI
triage system for other patients. Ensuring social alignment in these
situations is a major challenge (Nguyen et al., 2015).
Perception of AI- versus Human-Triaged Cases: One of the major worries about
AI-based triage systems is that patients and families would not like to be "triaged by a
robot rather than a doctor." Research suggests that roughly 75% of patients distrust AI
in healthcare, mostly because they worry about losing face time with doctors and
nurses, particularly in high-stakes scenarios like triage. Furthermore, the apparent
insensitivity of AI systems could heighten the anxiety patients feel during critical medical
situations. Emphasizing human control and transparency, positioning AI as a support
for rather than a replacement of doctors, would alleviate these concerns (Carta
Healthcare, 2023; Medical Economics, 2023; BMC Medical Ethics, 2023).
G. Recommendations for Developers:
Bias Mitigation: The data scientists or analysts assigned to this endeavor must
devise ways to spot bias in the training data and ensure it is not fed into training.
To keep the AI system aligned with societal values, hospital leadership and
stakeholders also need to monitor the AI triage system's behavior toward different
groups of people on a regular basis. Regular ethical oversight
and bias checks will maintain the system's alignment with both hospital goals and
societal values (Obermeyer et al., 2019).
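One simple form such a bias check could take is comparing the rate of critical triage classifications across patient groups and flagging large gaps for the ethics committee. The audit records, group labels, and tolerance below are illustrative assumptions:

```python
# Hypothetical audit records: (patient group, was triaged critical) pairs.
records = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def critical_rate(records, group):
    """Fraction of a group's cases the system triaged as critical."""
    in_group = [critical for g, critical in records if g == group]
    return sum(in_group) / len(in_group)

rate_a = critical_rate(records, "group_a")  # 0.75
rate_b = critical_rate(records, "group_b")  # 0.25

# Flag for ethical review if the gap exceeds a pre-agreed tolerance
# (the 0.2 tolerance here is a placeholder for a committee-set value).
FLAG_THRESHOLD = 0.2
needs_review = abs(rate_a - rate_b) > FLAG_THRESHOLD
print(needs_review)  # -> True
```

A gap like this does not by itself prove unfairness (base rates may genuinely differ), but it is exactly the kind of signal the regular audits above should surface for human review.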
Transparency: It is important for developers to ensure explainability as much as
possible. They must ensure that stakeholders (and, if necessary, patients) understand
how the AI makes decisions before acting on any of its recommendations.
Ethical Oversight: Set up a committee that regularly checks AI performance,
discusses concerns, reviews bias reports, and ultimately enforces AI system rules that
protect patients who need significant protection (Jiang et al., 2017).
H. Guidelines for Development of the AI System
To make sure the AI aligns, we should give these instructions to the development team:
Prioritize the hospital's goals: The hospital's most valuable objectives must be
embedded within the design and construction of the AI model.
The most important guiding principles should be keeping patients safe and treating
everyone fairly.
Learn from Real yet Fair Data: Rather than relying on rules alone, the AI model for
the system should learn from massive amounts of clean and unbiased data. This helps
the system produce accurate scores and fair recommendations.
Human-in-the-Loop: Human judgement must be incorporated to some degree in the
entire process, especially for complex decision-making or validations. This means
doctors, and if necessary the ethics committee, should be able to review, confirm, or
change any recommendations the system makes. For instance, in the initial diagnosis,
human involvement remains crucial: while the AI system will analyze data and
generate risk scores, a healthcare professional will review the AI's output and make
the final decision. As the AI improves, human oversight may focus on complex cases
flagged by the system, ensuring accuracy and accountability. The AI will support, not
replace, human judgment in diagnosis.
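The review-and-override flow described above could be modeled as simply as the sketch below, in which an AI recommendation only becomes final after clinician sign-off. The class, field names, and patient ID are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriageRecommendation:
    """An AI suggestion that only becomes final after clinician review."""
    patient_id: str
    ai_level: str
    flagged_complex: bool = False          # system flags ambiguous cases for review
    clinician_level: Optional[str] = None  # set during human review, if overridden

    def final_level(self) -> str:
        # The clinician's decision always takes precedence over the AI's.
        return self.clinician_level if self.clinician_level else self.ai_level

rec = TriageRecommendation("PT-1042", ai_level="Level 2 - Urgent", flagged_complex=True)
rec.clinician_level = "Level 1 - Critical"  # physician overrides after review
print(rec.final_level())  # -> Level 1 - Critical
```

Keeping the AI output and the human decision as separate fields also leaves an audit trail: every override is recorded, which supports the regular triage audits proposed earlier.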
Define the ethical rules: Hospital leadership, along with other domain experts,
technology experts, and ethics experts (e.g., the AI ethics committee), must define the
ethical guidelines to be followed in developing the AI patient scoring and
triage system. Developers need to build ethics into the AI model from the
start, not add them later. This helps protect patients' rights and reduces the
risk of the AI making socially unreasonable and inequitable recommendations (Esteva et al.,
2019; Obermeyer et al., 2019).
I. Conclusion:
The AI Scoring System for Patient Triage and Resource Allocation offers a forward-thinking
solution to address the ongoing issues within healthcare, particularly in times of crisis. By
leveraging patient data and sophisticated AI models, the system allows for more precise
triage, permitting hospitals to allocate limited resources strategically and prioritize
emergency cases immediately. Beyond just operational enhancements, it also cultivates
fairness and transparency, countering potential prejudices through continuous oversight and
ethical review. The system's ability to learn continuously enhances its adaptability, ensuring it
develops alongside evolving medical needs, ultimately delivering accelerated, more accurate
care to patients.
With integration into various hospital networks, this AI system can streamline processes
across multiple facilities, providing flexibility amid resource shortfalls or emergencies. By
decreasing human mistakes and improving decision-making, it contributes to improved
patient outcomes while optimizing the usage of scarce resources. If adopted prudently, the
AI triage system will substantially alter the manner in which healthcare is provided, balancing
effectiveness with moral responsibility and bettering the overall patient experience.
======================================================================
Appendix - Annotated Source List
Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., Cui, C.,
Corrado, G., Thrun, S., & Dean, J. (2019). A guide to deep learning in healthcare. Nature
Medicine, 25(1), 24–29. https://doi.org/10.1038/s41591-018-0316-z
This source helps us talk about the problems of using deep learning in healthcare, like
data quality and biases. These issues are key to the concept paper's talk on direct
alignment and possible risks. It looks at the problems in direct alignment (for example
issues with data quality) and the dangers linked to biases in AI systems.
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang,
Y. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and
Vascular Neurology, 2(4). https://doi.org/10.1136/svn-2017-000101
This review article looks at how AI has grown and where it's used in healthcare today. It
also talks about making AI clear and easy to understand, which are key parts of the
concept paper. The article dives into the problems with making AI open and
understandable as well as the tricky job of getting AI systems to work well with
healthcare goals.
Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled:
High confidence predictions for unrecognizable images. 2015 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 427–436.
https://doi.org/10.1109/CVPR.2015.7298640
This paper talks about how AI systems can be vulnerable to mistakes and unfair
judgments in neural networks. This backs up what the concept paper says about
possible downsides and dangers. It also looks at how AI systems tend to make errors
and the risks of them not lining up with what we want because of these mistakes.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in
an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
https://doi.org/10.1126/science.aax2342
This research examines racial bias in healthcare algorithms, which relates to the
concept paper's exploration of social alignment and bias reduction. It explores the
obstacles to achieving social alignment especially in reducing biases in AI systems.
Topol, E. J. (2019). High-performance medicine: The convergence of human and
artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-
0300-7
This article explores how AI blends with medicine, including its possible advantages
and the need to align it with medical aims, which backs up the main idea of the concept
paper. It also looks at how AI can boost patient results and make resource use more
effective.
Carta Healthcare. (2023). Most U.S. patients don't trust AI in healthcare settings, feel
transparency would help. Fierce Healthcare. https://www.fiercehealthcare.com/health-
tech/most-americans-dont-trust-ai-healthcare-setting-carta-survey-finds
This article explores the trust gap between patients and AI in healthcare settings, noting
that 75% of U.S. patients do not trust AI-driven systems. The report highlights the
importance of transparency in improving patient comfort and trust. It provides valuable
insights into how patients perceive AI technology and its potential impact on doctor-
patient interactions.
Medical Economics. (2023). Patients don’t understand use of AI in healthcare, and many
don’t trust it. https://www.medicaleconomics.com/view/patients-don-t-understand-use-of-ai-in-
health-care-and-many-don-t-trust-it
This source discusses the lack of trust and understanding that patients have regarding
AI in healthcare. It reports that while AI is increasingly used in clinical settings, most
patients remain skeptical, fearing that it might reduce personal interaction with
healthcare providers. This article is particularly relevant to concerns about AI’s
impersonal nature in healthcare.
BMC Medical Ethics. (2023). Public perceptions of artificial intelligence in healthcare:
ethical concerns and opportunities for patient-centered care.
https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-024-01066-4
This study provides an in-depth analysis of patient perceptions of AI in healthcare,
focusing on ethical concerns like AI's lack of humanity and empathy. It also discusses
opportunities for improving patient-centered care through better integration of AI with
human oversight. This source supports discussions on the social concerns of AI-driven
triage systems.
Singer, M., Deutschman, C. S., Seymour, C. W., et al. (2016). The Third International
Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA, 315(8), 801-810.
https://jamanetwork.com/journals/jama/fullarticle/2492881
This article discusses the importance of timely intervention in sepsis cases,
emphasizing that every hour of delayed treatment increases the risk of death by about
7.6%. This data supports the effectiveness of AI in early diagnosis, highlighting how
timely, data-driven decision-making can save lives in critical conditions like sepsis.
Saver, J. L. (2006). Time is brain—quantified. Stroke, 37(1), 263-266.
https://www.ahajournals.org/doi/10.1161/01.STR.0000196957.55928.ab
Saver’s work focuses on the importance of timely treatment in stroke cases, stating
that each minute saved in providing care can significantly reduce long-term brain
damage. This supports the value of AI in prioritizing and fast-tracking stroke patients
for immediate care, leading to better recovery outcomes.