Facial recognition technology (FRT) is rapidly changing how we interact with the digital world, turning our faces into powerful digital keys. Beyond simple identification, FRT is increasingly integrated into healthcare, offering tools from managing access to facilities to potentially diagnosing health conditions. As healthcare organizations globally adopt FRT, it’s crucial to examine both its benefits and the critical issues surrounding privacy, data security, and algorithmic bias. Understanding these aspects is vital to ensure responsible implementation of FRT in care environments.
FRT Applications in Healthcare and Carer Support
Patient safety and efficient care are paramount in healthcare. FRT offers innovative solutions in these areas, especially within long-term care settings. “For monitoring in places like long-term care homes, FRT can track the comings and goings of residents,” explains Nicole Martinez-Martin, a bioethics expert at Stanford University. This technology can accurately identify patients, quickly access their medical records, and enhance security by controlling and auditing access to sensitive areas within hospitals and care facilities. For example, Martin Luther King Jr. Community Hospital in Los Angeles has partnered with Alcatraz AI to use FRT to bolster security in server rooms, protecting crucial patient data and technological infrastructure.
Furthermore, FRT is showing promise in diagnostic applications, potentially acting as a valuable tool for carers and medical professionals. Martinez-Martin notes its utility in “genetic diagnostic purposes,” highlighting FRT’s ability to detect subtle facial patterns associated with rare genetic diseases more efficiently. One such app, Face2Gene, is used globally by healthcare providers; the AI-driven program analyzes facial features, comparing them against a database of syndromes to suggest possible genetic matches. Studies have reported strong results, including 85.7% accuracy in identifying congenital dysmorphic syndromes in a Japanese study. Beyond genetics, FRT can also aid in recognizing emotional and behavioral cues linked to conditions like autism, providing carers with additional insights into patient needs.
Pain management, particularly for vulnerable populations, is another significant area for FRT. PainChek, an Australian FRT application, is designed to detect pain in individuals with dementia by analyzing micro-expressions and facial muscle movements. This technology aims to provide objective pain assessments, overcoming the challenges of subjective reporting, especially in patients who struggle to communicate verbally. St. Michael’s Health Group (SMHG) in Alberta, Canada, piloted PainChek and observed considerable benefits.
Tatsiana Haidukevich, Director of Care at SMHG, shared an instance where a resident exhibiting behavioral issues was assessed using PainChek and found to be experiencing significant pain. By shifting from behavioral medication to pain management based on FRT insights, the resident’s symptoms improved, demonstrating the potential for FRT to inform more effective and person-centered care plans.
“I anticipate this technology expanding further,” Haidukevich states, “not only in nursing care homes but also in broader hospital settings,” highlighting the growing recognition of FRT’s value in diverse care environments.
Addressing Bias in FRT for Equitable Care
Despite the advancements, it’s essential to acknowledge the documented biases within FRT systems, particularly concerning racial and gender disparities. A 2018 study revealed a significant accuracy gap, with error rates for darker-skinned female faces up to 34 percentage points higher than for lighter-skinned male faces. This bias often stems from non-inclusive datasets used to train FRT algorithms, perpetuating and amplifying existing societal biases.
“I wouldn’t advise any agency to use it on darker-skinned faces in its current state,” cautions Gideon Christian, an AI and law expert at the University of Calgary. He emphasizes that bias must be addressed during the technology’s design phase, not as an afterthought in deployment.
The real-world consequences of FRT bias are stark. In Canada, two Somali women had their refugee status revoked due to misidentification by FRT. In another distressing case, a pregnant Black woman in Detroit was wrongly arrested based on a false FRT match. These incidents underscore the urgent need to rectify bias in FRT, especially in public-serving sectors like law enforcement and healthcare, where errors can have severe consequences.
[Image: A person wearing a face mask uses a facial recognition scanner to enter a building in Beijing, China, during the COVID-19 pandemic.]
The increased use of face masks, initially driven by the COVID-19 pandemic, further complicates FRT accuracy. A study on FaceVACS FRT software demonstrated a significant drop in accuracy, from 99.7% to 33.5%, when facial images were partially obscured by masks.
PainChek is actively working to mitigate bias. “We’re expanding our research to include African American and Latino cohorts,” says David Allsopp from PainChek, emphasizing their commitment to training algorithms on diverse datasets. He also noted that initial data indicates “the validity of the tool has been fairly even between Indigenous Australians and Anglo-Saxon people.”
To provide a more comprehensive understanding of an individual’s well-being, advanced FRT applications like PainChek are integrating the analysis of body movements and speech patterns alongside facial cues.
However, Martinez-Martin points out a fundamental challenge: “The training of machine learning relies on subjective judgments… how people express emotions is part of the process.” Research has shown that FRT can misinterpret emotions based on racial biases. For instance, negative emotions were more frequently attributed to Black men’s faces than white men’s, even when both groups were smiling.
Cultural and individual differences in emotional expression further complicate FRT interpretation. “People react differently… some are more stoic,” Haidukevich observes. SMHG staff noticed that some residents, due to cultural backgrounds or personal habits, might not express emotions facially in ways the technology expects.
To account for these limitations, SMHG uses PainChek as a supplementary tool within a broader pain assessment framework, rather than relying solely on FRT-derived data. This integrated approach ensures a more nuanced and culturally sensitive evaluation of patient needs.
Privacy and Ethical Considerations for Facial Scanning in Care
Recent controversies surrounding FRT misuse have heightened concerns about data privacy and security. The Clearview AI scandal, where the company illegally scraped social media for facial photos to create a massive database for government and police use, led to lawsuits and legal restrictions. Similarly, the U.S. pharmacy chain Rite Aid faced scrutiny for using FRT to surveil suspected shoplifters in predominantly minority, low-income neighborhoods. “A lack of awareness about the intrusive nature of these technologies has fostered societal acceptance,” argues Christian.
The transformation of facial images into biometric data presents significant security risks. “It’s analogous to leaving your house open to strangers… you have a right to autonomy within your personal space,” Christian emphasizes. Public education is crucial to inform individuals about how their biometric data is being used and protected. FRT goes beyond simply capturing a photograph; it creates a detailed mathematical map of the face, which is classified as sensitive biometric information. Christian stresses that biometric data capture requires explicit consent, which goes beyond the implied consent of simply being photographed or video recorded. A 2023 investigation into the Canadian Tire retail chain highlighted this issue, revealing that the company failed to obtain legal consent when using FRT to capture customer biometrics.
“Privacy was a major concern… we wanted to ensure transparency about our practices,” Haidukevich states, reflecting on SMHG’s initial implementation of PainChek. They addressed residents’ and families’ concerns through education, clarifying how the technology would be used and what data would be collected.
“No photos or videos are stored… we collect a very limited dataset focused on relevant personal information,” Allsopp clarifies regarding PainChek’s data handling practices. Their privacy policy outlines specific instances where data may be used, including identity verification and legal compliance, while emphasizing data protection measures.
FRT holds considerable potential to revolutionize healthcare delivery, offering improved communication, diagnostic capabilities, and enhanced safety for both patients and carers. However, a critical and ethical perspective is essential. Healthcare professionals, carers, and patients must be fully informed about FRT’s limitations and potential risks. Advocating for responsible development and implementation is crucial to ensure FRT truly benefits all members of society and enhances the quality of care for everyone.
[Image: A surveillance camera in Paris, France, being tested for AI-assisted crowd monitoring ahead of the Olympics, raising questions about public surveillance and privacy.]
EDITOR’S NOTE: This article is part of a series exploring the ways artificial intelligence is used to improve health outcomes across the globe. The rest of the series can be found here.
Nahid Widaatalla is a freelance journalist and current fellow in journalism and health impact at the University of Toronto’s Dalla Lana School of Public Health.