The Future of AI in Medicine: Aladdin’s Lamp, or Pandora’s Box?
In this episode, we talk a bit about the recent advances in large language models, such as GPT and ChatGPT. We have two wonderful guests:
Christoph U. Lehmann, M.D., is a Professor of Pediatrics, Population and Data Sciences, and Bioinformatics at UT Southwestern, where he directs the Clinical Informatics Center. In addition, Chris was the first chair of the Examination Committee of the American Board of Preventive Medicine, Subcommittee for Clinical Informatics. Dr. Lehmann’s research focuses on improving clinical information technology and clinical decision support.
Yaa Kumah-Crystal, MD, MPH, MS, is an Assistant Professor of Biomedical Informatics and Pediatric Endocrinology at Vanderbilt University Medical Center. Yaa’s research focuses on studying communication and documentation in healthcare and developing strategies to improve workflow and patient care delivery. Yaa works in the Innovations Portfolio at Vanderbilt HealthIT on the development of Voice Assistant Technologies to improve the usability of the EHR through natural language communication.
Chris and Yaa bring very complementary perspectives to the topic of our future. Yaa's research focuses on how we can innovate to improve the use of technology in medicine. Chris is also internationally known as the Editor in Chief of Applied Clinical Informatics,
as well as one of the leaders in our clinical informatics board certification work. He is intimately familiar with the potential uses of this technology beyond clinical care, but, as an actively practicing neonatologist, more than holds his own when it comes to how medicine can benefit from--or be harmed by--new technologies such as AI.
We leave it to you to decide both which direction we're heading, and how we can put up the guardrails to keep us on the preferred track. And I suspect this won't be our last discussion about AI in Medicine!
By the way, in case you want to learn more about topics we brought up in this episode:
Belmont principles:
- Beneficence: AI is designed explicitly to be helpful to the people who use it or on whom it is used, and to reflect the ideals of compassionate, kind, and considerate human behavior.
- Autonomy: In the AI context, autonomy means operating without human oversight; in the ethics context, it means “protecting the autonomy of all people and treating them with courtesy and respect and facilitating informed consent.”
- Nonmaleficence: “Do no harm.” Every reasonable effort shall be made to avoid, prevent, and minimize harm or damage to any stakeholder.
- Justice: Equity in representation in and access to AI, data, and the benefits of AI. Fair access to redress and remedy should be available in the event of harm resulting from the use of AI. Affirmative use of AI to support social justice.
Artists and AI:
- https://www.buzzfeednews.com/article/chrisstokelwalker/art-subreddit-illustrator-ai-art-controversy
- https://www.businessinsider.com/ai-image-generators-artists-copying-style-thousands-images-2022-10
TikTok voiceover person: https://www.theverge.com/2021/9/29/22701167/bev-standing-tiktok-lawsuit-settles-text-to-speech-voice
GPT and test performance:
- https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html
- https://www.medrxiv.org/content/10.1101/2023.03.05.23286533v1.full
Deepfake concerns:
- https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=292f03982241
- https://www.kaspersky.com/resource-center/threats/protect-yourself-from-deep-fake
- https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071
MidJourney and bias:
- https://uxdesign.cc/midjourney-is-incredible-but-you-can-see-there-are-definite-existing-biases-in-its-dataset-4b1131fb0533
- https://nftnow.com/features/the-objectification-of-women-in-ai-art/
- https://arxiv.org/abs/2212.11261
Amazon AI Tool Bias: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Apple credit biased against wives: https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/
AMIA document about ethical principles around AI: https://amia.org/news-publications/amia-position-paper-details-policy-framework-aiml-driven-decision-support
AI in Medicine JAMA Viewpoint: https://pubmed.ncbi.nlm.nih.gov/36972068/