NEWS

Canon Medical Receives FDA Clearance on One-Beat Spectral Cardiac CT

Meeting the growing cardiovascular needs of health care providers, Canon Medical Systems USA, Inc. announced that its Deep Learning Spectral CT has received 510(k) clearance for expanded capabilities into cardiovascular applications.

Vote Shows Medical Right to Repair Surge

Across the country, hospitals and health care providers are joining a chorus of biomedical repair technicians (biomeds) demanding the right to repair medical equipment.

Philips Installs Digital Imaging Solutions at Westmead Hospital in Australia

Royal Philips has announced the successful installation of its most advanced digital diagnostic and interventional neurovascular imaging solutions in the brand new Central Acute Services Building at Westmead Hospital in Sydney, Australia.

Ziehm Imaging Americas and Carestream Announce Partnership

In partnership with Ziehm Imaging, Carestream Health has announced the addition of a mobile C-arm, the Ziehm Vision RFD C-arm, to its growing product portfolio.

Seeing Color and Diversity with Imaging AI

By Mark Watts

One of my medical imaging heroes, Herman Oosterwijk, recently posted a picture on LinkedIn. It was a Throwback Thursday post. The picture was of a “revolutionary piece of equipment … a view station that mimicked the Alternator.” The Alternator was an X-ray film-holding device with panels that rotated up and down to expedite the reading process for the radiologist. The monochrome X-ray film-holder format was mistakenly carried over into the early digital imaging transformation. This lack of awareness led the engineers to create a solution that was limited in application and growth potential. The monitors could not even display color on the screen.

In June 2020, a crisis erupted in the artificial intelligence world. Conversations on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other people of color being turned white. The conversations drew in well-known corporate AI researchers, including Facebook’s chief AI scientist Yann LeCun and Google’s co-lead of AI ethics Timnit Gebru, who expressed strongly divergent views about how to interpret the tool’s error. A heated, multi-day online debate seemed to divide the field into two distinct camps: some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, shortsighted) decisions about the algorithm itself, including what data it was to consider.

In 2017, a lung screening AI algorithm from Stanford was touted as outperforming radiologists at diagnosing pneumonia. An issue developed when positive findings for “bilateral densities” were noted on all female chest X-rays: the original training data set had been composed entirely of male students.

Stanford researcher Pranav Rajpurkar looked at the tendency of algorithms trained on proprietary or incomplete datasets to fail outside those friendly confines; that is, they do not generalize. As one example, he pointed to American-trained AI models for lung diseases that do not include tuberculosis (TB) in their labeling. TB is a noted problem in the developing world, but less so in America, so tuberculosis scans are not found in the training dataset. True democratization requires AI to work everywhere and for everyone, he said. Simply adding images of tuberculosis to American training datasets would help generalize, and therefore democratize, valuable AI to other parts of the world.
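As a loose illustration of that point (a hypothetical sketch, not anything from Rajpurkar’s actual work), the snippet below audits a training set’s label coverage before training, so a finding like TB that is entirely absent gets flagged rather than left silently unlearnable. All records and label names here are invented for the example.

```python
# Hypothetical sketch: audit a training set's label coverage before training.
# Records and label names are illustrative, not from any real dataset.
from collections import Counter

def audit_labels(records, expected_labels):
    """Report which expected findings are absent or rare in the training data."""
    counts = Counter(label for rec in records for label in rec["labels"])
    for label in sorted(expected_labels):
        n = counts.get(label, 0)
        status = "MISSING" if n == 0 else f"{n} example(s)"
        print(f"{label:>12}: {status}")

# Toy records mimicking a U.S.-centric dataset with no tuberculosis cases.
records = [
    {"labels": ["pneumonia"]},
    {"labels": ["effusion", "pneumonia"]},
    {"labels": ["normal"]},
]
# Deployment elsewhere requires labels the model was never shown ("tb" is MISSING).
audit_labels(records, expected_labels={"pneumonia", "effusion", "normal", "tb"})
```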

Bias has plagued the artificial intelligence field for years, so this particular AI tool’s black-to-white photo transformation isn’t completely unexpected. What the debate made obvious, however, is that not all AI researchers have embraced concerns about diversity, a fact that will fundamentally affect any organization that plays in the AI space. There is also a question here that many organizations should pay attention to: Why didn’t it occur to anyone to test the software on cases involving people of color in the first place?
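One concrete, and purely hypothetical, form such testing could take is a routine per-subgroup error report before release. The sketch below uses an invented stand-in model and toy records; the general pattern is simply breaking error rates out by group instead of reporting a single aggregate number.

```python
# Hypothetical sketch: report a model's error rate per demographic subgroup,
# so gaps like the one described above surface before release.
from collections import defaultdict

def error_by_group(examples, predict):
    """Return the error rate keyed by each example's group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["group"]] += 1
        if predict(ex["input"]) != ex["truth"]:
            errors[ex["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data and a deliberately biased stand-in "model" (no real system implied).
examples = [
    {"group": "A", "input": 0, "truth": 0},
    {"group": "A", "input": 1, "truth": 1},
    {"group": "B", "input": 0, "truth": 1},
    {"group": "B", "input": 1, "truth": 1},
]
biased_predict = lambda x: x  # only correct when input happens to equal truth
print(error_by_group(examples, biased_predict))  # {'A': 0.0, 'B': 0.5}
```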

I would argue that this is a case of invisibility. Sometimes people of color are present but not seen. Other times, a diverse population is missing, but that absence goes unnoticed.

Part of the problem is that few people of color are working in the field of AI. Black workers account for 2.5% of Google’s entire workforce and 4% of Facebook’s. Globally, only 22% of AI professionals are female.

Considering the growing role that AI plays in organizations’ business processes, in the development of products and in the products themselves, the lack of diversity in AI and the invisibility of people of color will grow into a cascade of crises. Issues will pile up, one upon another, if these biases are not addressed soon. I have seen companies pull their advertising dollars from Facebook because of its poor handling of hate speech. I have seen companies issue moratoriums on the sale of facial recognition software, which has long been recognized as having built-in racial and gender biases.

Due to the COVID-19 crisis, and the hasty adoption of AI to track COVID’s spread, we have a unique opportunity to institute change within the world of AI. We have an opportunity to rapidly improve the delivery of health care services with the assistance of AI. Success will be predicated on our mutual awareness that designing the solution without diversity will be like the doomed PACS systems without color: limited in application and growth potential.

Mark Watts is the enterprise imaging director at Fountain Hills Medical Center.
