Interactive Q&A Slides
School of Medical Sciences, Universiti Sains Malaysia
Question 1: What is the main difference between Machine Learning and Deep Learning?
Question 2: What were the main reasons for the “AI Winter” periods, and what brought AI back to life?
Question 3: In supervised learning, what role does “labeled data” play in training an AI model?
Answer 1: Machine Learning is a subset of AI that enables computers to recognize patterns from data. Deep Learning is a specialized subset of ML that uses multilayered neural networks (multiple hidden layers) to handle complex, high-dimensional data like medical images. Think of it as: ML learns patterns from data, while DL does so through a deeper, brain-inspired architecture.
Answer 2: The AI Winter periods happened because of:
Overhyped promises - early AI systems could not deliver what was claimed
Funding cuts - governments and industry withdrew support when results fell short
Technical limits - computing power and data were far too scarce
AI came back because of:
Big data - massive growth in digital data to learn from
Cheap compute - powerful, affordable hardware, especially GPUs
Better algorithms - breakthroughs in machine learning, notably deep learning
Answer 3: In supervised learning, labeled data acts like an answer key. The AI learns by comparing its predictions against known correct answers. For example, if we want AI to detect cancer, we show it thousands of images already labeled as “cancer” or “not cancer.” The AI learns the patterns that distinguish them, then can predict on new, unlabeled images.
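For a hands-on illustration, here is a minimal sketch of the "answer key" idea using scikit-learn. The bundled breast-cancer dataset stands in for labeled medical images; the dataset, model choice, and parameters are illustrative assumptions, not from the slides.

```python
# Minimal supervised-learning sketch: the labels y are the "answer key"
# the model learns against, then it predicts on new, unseen cases.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)   # y: 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)            # learn patterns from labeled examples
preds = model.predict(X_test)          # predict on new, "unlabeled" cases
accuracy = model.score(X_test, y_test) # compare predictions to the answer key
```

The held-out test set plays the role of the "new, unlabeled images" in the answer above: the model never sees those labels during training.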
Question 1: What is the main purpose of Predictive AI in healthcare?
Question 2: Can you name one advantage and one limitation of Decision Tree models?
Question 3: Which type of neural network (CNN or RNN) would you use to analyze chest X-rays, and why?
Answer 1: The main purpose of Predictive AI is to examine historical data and forecast future events or outcomes. In healthcare, this means:
Risk prediction - estimating a patient's likelihood of developing a disease
Early detection - flagging deterioration before it becomes critical
Resource planning - forecasting admissions, readmissions, and demand
It turns patterns in data into actionable insights for healthcare decisions.
Answer 2:
Advantage of Decision Trees: Interpretability - the model reads as simple if-then rules that clinicians can follow and explain.
Limitation of Decision Trees: Overfitting - a single tree can memorize the training data and generalize poorly to new patients.
This is why we often use Random Forest (many trees combined) to overcome this limitation.
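The single-tree-versus-forest trade-off can be sketched in a few lines. This is an illustrative comparison, assuming scikit-learn and its bundled breast-cancer dataset as a stand-in for clinical data; the models and parameters are not from the slides.

```python
# Sketch: one decision tree vs. a random forest (many trees combined)
# on the same train/test split of a clinical-style dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

tree_acc = tree.score(X_te, y_te)     # single tree: interpretable, overfits
forest_acc = forest.score(X_te, y_te) # ensemble: averages out the overfitting
```

On most runs the forest's held-out accuracy matches or beats the single tree's, which is the point of the ensemble.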
Answer 3: You would use a CNN (Convolutional Neural Network) for chest X-rays.
Why? CNNs are built for spatial data: convolutional filters slide across the image and detect local patterns such as edges, textures, and opacities, and deeper layers combine these into higher-level features. A chest X-ray is a 2D image, so this spatial pattern detection is exactly what is needed.
RNNs are better suited to sequential or time-series data, like ECG signals or clinical notes that unfold over time.
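To make the "why" concrete, here is a toy NumPy sketch of the convolution operation at the heart of a CNN (implemented, as in deep-learning libraries, as cross-correlation). The tiny synthetic "image" and the edge filter are made up for illustration.

```python
# A small filter slides over the image and responds only where a local
# spatial pattern (here, a vertical edge) appears - the core reason
# CNNs suit image data like X-rays.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1), as in CNN layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 6x6 "image": dark left half, bright right half (a vertical edge)
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Vertical-edge filter (Sobel-like)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

response = conv2d(image, kernel)
# response is large only at the columns where the edge sits
```

A real CNN learns many such filters from data instead of hand-coding them, and stacks layers so later filters detect combinations of earlier patterns.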
Question 1: What is the key difference between Predictive AI and Generative AI?
Question 2: How can Generative AI help reduce doctor burnout in clinical practice?
Question 3: What is “hallucination” in Generative AI and why is it dangerous in healthcare?
Answer 1:
| Aspect | Predictive AI | Generative AI |
|---|---|---|
| Purpose | Forecast outcomes from existing data | Create new content |
| Output | Predictions, risk scores | Text, images, synthetic data |
| Example | “This patient has 75% risk of diabetes” | “Here’s a draft clinical note based on the consultation” |
Simple way to remember: Predictive AI predicts, Generative AI creates.
Answer 2: Generative AI helps reduce doctor burnout through:
Clinical Note Documentation - 20% less time completing notes, 30% less after-hours work
Ambient AI Scribes - Listens to patient conversations and generates notes automatically
Patient Messaging - 72% of clinicians report reduced cognitive load
Chart Summarization - AI summaries equivalent or superior to physician summaries
This allows doctors to focus on patient care, not paperwork.
Answer 3: Hallucination is when AI generates information that sounds plausible and confident but is completely made up or incorrect.
Why it’s dangerous in healthcare:
Fabricated clinical facts - invented drug doses, interactions, or guidelines can directly harm patients
False citations - AI may confidently reference studies that do not exist
Convincing delivery - errors are presented fluently, making them easy to trust
Solution: Always verify AI-generated content. AI assists, but humans must validate and make final decisions.
Question 1: What does “precision medicine” mean, and how is it different from traditional medicine?
Question 2: What are the main types of data that AI uses to personalize treatment in precision medicine?
Question 3: Can you give one example of how AI improved outcomes in precision oncology (cancer treatment)?
Answer 1:
Traditional Medicine: “One-size-fits-all” approach. Same treatment given to all patients with the same diagnosis.
Precision Medicine: Treatment tailored to individual characteristics of each patient.
Think of it like clothing: traditional medicine gives everyone the same off-the-rack size, while precision medicine tailors the fit to your exact measurements.
Precision medicine considers your genetics, lifestyle, metabolism, and environment to find the best treatment for YOU.
Answer 2: AI uses multiple types of data for personalization:
Genomic data - DNA sequences and genetic variants
Clinical data - diagnoses, lab results, and medication history
Imaging data - X-rays, CT, and MRI scans
Lifestyle and environmental data - diet, activity, and wearable sensor readings
AI integrates all these to generate customized treatment strategies.
Answer 3: Examples of AI improving precision oncology outcomes:
Lung cancer: AI model achieved AUC of 0.82 for predicting N2 metastasis (cancer spread to lymph nodes)
Prostate cancer: AUC of 0.95 for predicting aggressive cancer tendencies
Brain tumors: MRI + genetic marker integration led to 25% increase in tumor control
Overall, AI-personalized treatments show up to 30% enhanced patient response rates compared to conventional approaches.
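As a side note on how an AUC figure like the 0.82 above is computed: it measures how well a model's risk scores separate positive from negative cases. The labels and scores below are made up for illustration, assuming scikit-learn's `roc_auc_score` is available.

```python
# AUC compares predicted risk scores against true outcomes:
# 1.0 = perfect separation, 0.5 = no better than chance.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                     # 1 = metastasis present
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.7]  # model risk scores

auc = roc_auc_score(y_true, y_score)  # here: 0.875
```

Intuitively, AUC is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one.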
Question 1: How does AI accelerate the literature review process for researchers?
Question 2: What is “synthetic data” and why is it useful for medical research?
Question 3: What are the key principles of responsible AI use in research?
Answer 1: AI dramatically accelerates literature review:
| Task | Traditional | With AI |
|---|---|---|
| Literature Review | Weeks to Months | Hours to Days |
| Finding relevant papers | Manual searching | Automated discovery |
| Evidence summarization | Reading everything | AI-generated summaries |
AI can search thousands of papers, identify relevant ones, extract key findings, and summarize evidence - tasks that would take researchers months to complete manually.
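As a simplified sketch of "automated discovery", the snippet below ranks made-up paper abstracts against a query using TF-IDF and cosine similarity, a classic retrieval technique; real AI literature tools use far more sophisticated language models. Assumes scikit-learn; all titles are invented.

```python
# Rank candidate abstracts by textual similarity to a research query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning for chest X-ray pneumonia detection",
    "Economic impact of hospital parking fees",
    "Convolutional networks improve radiograph diagnosis",
]
query = "neural networks for X-ray diagnosis"

vec = TfidfVectorizer()
matrix = vec.fit_transform(abstracts + [query])

# Similarity of the query (last row) to each abstract
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranking = scores.argsort()[::-1]  # indices of abstracts, most relevant first
```

The off-topic parking paper scores zero, while both imaging papers surface, which is the "finding relevant papers" step automated at scale.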
Answer 2: Synthetic data is artificial data generated by AI (for example, with Generative Adversarial Networks, or GANs) that looks like real patient data but doesn’t belong to any actual person.
Why it’s useful:
Privacy protection - no real patient can be re-identified from the data
Easier data sharing - datasets can move between institutions without exposing records
Rare diseases - augments small datasets where real cases are scarce
It enables research while protecting patient privacy.
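A real GAN is too large for a slide, so this sketch swaps in a much simpler generative idea: fit a multivariate Gaussian to made-up "real" patient measurements, then sample synthetic records that share the same statistics but correspond to no actual person. Assumes NumPy; every number is invented.

```python
# Toy "synthetic data" generator: learn the distribution of real data,
# then sample new records from it instead of copying real patients.
import numpy as np

rng = np.random.default_rng(42)

# Pretend "real" data: 200 patients x (systolic BP, glucose)
real = rng.multivariate_normal(mean=[120, 95],
                               cov=[[100, 30], [30, 64]], size=200)

# "Train" the generator: estimate mean and covariance from the real data
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Generate 200 synthetic patients with the same overall statistics
synthetic = rng.multivariate_normal(mu, cov, size=200)
```

A GAN does the same job for far more complex distributions (images, full records) by learning the generator with neural networks instead of closed-form statistics.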
Answer 3: Key principles of responsible AI use in research:
Transparency - Always disclose when AI tools were used
Validation - Verify AI-generated content for accuracy
Ethics - Maintain research integrity, don’t let AI “write” your entire paper
Documentation - Record AI methodology (which tool, how it was used)
Remember: AI is a tool to assist, not replace, the researcher’s expertise and judgment.
Question 1: What are the four ethical principles that should guide AI use in healthcare?
Question 2: What is “algorithmic bias” and why should we be concerned about it in healthcare AI?
Question 3: As a healthcare professional, what is one immediate action you can take to prepare for the AI revolution in medicine?
Answer 1: The four ethical principles for AI in healthcare:
Beneficence - AI should improve patient outcomes (do good)
Non-maleficence - Avoid algorithmic harm and bias (do no harm)
Autonomy - AI should support, not replace, clinical judgment (respect decisions)
Justice - Ensure equitable access to AI benefits (fair for all)
These mirror traditional medical ethics principles applied to AI.
Answer 2: Algorithmic bias occurs when AI systems produce unfair outcomes for certain groups because of biased training data.
Healthcare concerns:
Underrepresented groups - models trained on narrow populations perform worse for patients outside them
Unequal care - biased risk scores can steer attention and resources away from those who need them most
Example: A skin cancer detection AI trained mainly on light skin may miss cancers on darker skin tones.
We must actively test AI across diverse populations.
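Testing across populations can start with something as simple as comparing a model's accuracy per subgroup. The predictions, labels, and group assignments below are invented for illustration; assumes NumPy.

```python
# Minimal bias audit: break accuracy down by patient subgroup
# instead of reporting a single overall number.
import numpy as np

# Made-up ground truth, model predictions, and subgroup labels
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

accs = {g: float((y_true[group == g] == y_pred[group == g]).mean())
        for g in ["A", "B"]}
# A gap between accs["A"] and accs["B"] flags a potential fairness problem
```

Real audits use larger cohorts and richer metrics (sensitivity, specificity, calibration per group), but the principle is the same: never trust a single aggregate score.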
Answer 3: Immediate actions to prepare for AI in medicine:
Embrace AI literacy - Take a course, attend workshops, understand basics
Start experimenting - Try tools like ChatGPT for drafting, literature search
Stay skeptical but open - Validate AI outputs, but don’t dismiss the technology
Collaborate - Connect with data scientists and AI experts
Follow developments - The field evolves rapidly; stay current
“AI won’t replace doctors, but doctors who use AI may replace those who don’t.”
These questions can be used to engage your audience during or after each section of the main presentation.