IMPROVED WORKFLOW
REDUCED ERRORS
TIME- AND COST-EFFICIENCY
The MEDALLION project studies ways to give medical professionals better tools to analyze medical data. In Extended Reality (XR) environments, doctors can engage their natural senses to explore and interact with data more intuitively. This will improve their workflows, reduce mistakes, and save time and money.
Within the MEDALLION project, we are developing XR-based metrology tools for quality control, which is essential in the medical domain.
REMOTE CO-WORK
FLEXIBLE DATA ACCESS
Using collaborative XR technology, groups can work together on the same data, even if they are not physically in the same location. They can access the data from their desks or while on the go, and collaborate with others in real-time or at different times.
GLOBAL COLLABORATION
In the MEDALLION project, we develop multimedia annotations that can be added to the 3D patient data and linked to patient records. Together with powerful language tools, this enables global collaboration.
UNDERSTANDABILITY
TRANSPARENCY
ENABLES TRUST
Explainable AI (XAI) systems will improve data analysis by making it more understandable to humans. These systems should explain their results and reasoning, and provide information about the algorithms and data used. They should also tailor their output to the user’s knowledge and interests. This will create more transparent and trustworthy AI.
HOW EXPLAINABLE AI WORKS
Traditional Artificial Intelligence (AI) generates results from medical data without making the underlying reasoning transparent to the user. It can therefore produce accurate results that rest on flawed reasoning [1]. Because medical data impacts human lives, AI tools cannot be employed in the medical field if clinicians are unable to trust the AI-generated results.
Explainable AI, by definition [1], aims to provide the user with the reasoning needed to make its operation easy to understand. This makes it possible for clinicians to trust AI-generated results. Upon discovering flawed reasoning in the AI assistant’s output, the clinician can point out the mistake to the AI, which can then re-generate the results based on the clinician’s instructions.
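The explain-review-correct loop described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the MEDALLION system: the class name, the triage rules, and the thresholds are all invented for the example. The point is that every prediction carries its reasoning, and a clinician's correction updates the rules before the result is re-generated.

```python
from dataclasses import dataclass


@dataclass
class Explanation:
    """A prediction bundled with the reasoning that produced it."""
    prediction: str
    reasoning: list  # human-readable steps, one per rule checked
    inputs_used: dict  # the exact data the model looked at


class ExplainableTriageModel:
    """Toy rule-based model: transparent by construction."""

    def __init__(self, fever_threshold_c=38.0, tachycardia_threshold_bpm=100):
        self.fever_threshold_c = fever_threshold_c
        self.tachycardia_threshold_bpm = tachycardia_threshold_bpm

    def predict(self, temperature_c, resting_hr_bpm):
        reasoning = []
        flags = 0
        # Rule 1: fever check, with the comparison spelled out either way.
        if temperature_c >= self.fever_threshold_c:
            flags += 1
            reasoning.append(
                f"temperature {temperature_c} °C >= threshold {self.fever_threshold_c} °C")
        else:
            reasoning.append(
                f"temperature {temperature_c} °C < threshold {self.fever_threshold_c} °C")
        # Rule 2: elevated resting heart rate.
        if resting_hr_bpm > self.tachycardia_threshold_bpm:
            flags += 1
            reasoning.append(
                f"resting HR {resting_hr_bpm} bpm > {self.tachycardia_threshold_bpm} bpm")
        else:
            reasoning.append(
                f"resting HR {resting_hr_bpm} bpm <= {self.tachycardia_threshold_bpm} bpm")
        prediction = "refer" if flags >= 1 else "routine"
        return Explanation(prediction, reasoning,
                           {"temperature_c": temperature_c,
                            "resting_hr_bpm": resting_hr_bpm})

    def apply_feedback(self, new_fever_threshold_c):
        # Clinician spots a flawed rule and corrects it; the model
        # can then re-generate results under the corrected rule.
        self.fever_threshold_c = new_fever_threshold_c


model = ExplainableTriageModel()
first = model.predict(37.8, 72)       # both rules pass -> "routine"
model.apply_feedback(37.5)            # clinician lowers the fever threshold
revised = model.predict(37.8, 72)     # same inputs, corrected rule -> "refer"
```

Real XAI systems are of course far richer than a pair of threshold rules, but the contract is the same: the user can always inspect *why* a result was produced and feed corrections back into the loop.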
REFERENCES
[1] Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities, and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012