Chapter 1 gave you literacy. Chapter 2 gave you skill. Chapter 3 keeps both of those safe — for you, for MPV, and for the patients downstream. Responsible AI at a medical-device manufacturer is not a "values" topic — it is an operational, legal, and competitive necessity. Skip this chapter and you will be the source of MPV's first AI incident.
5 Golden Rules. Do-Not-Paste list. The shortcut: "Would my manager mind?"
AI Law in force since 1 March 2026. Healthcare grace until 1 September 2027.
Risk tiers · 6 risk categories · governance roles · incident response · audit trail.
Every MPV employee must be able to recite these five rules before using any AI tool. They are deliberately short. Print them. Stick them on your monitor. They will save MPV from its first AI incident.
**Rule 1: Human Decides.** AI assists; humans approve. Quality release, customer commitments, regulatory submissions, hiring, discipline: every decision that affects a patient, customer, employee, or regulator stays with a named human. This is also Article 4 of Vietnam's AI Law.
**Rule 2: Protect Data.** No customer/hospital data, employee records, designs, formulae, quality records, or financial detail goes into public AI tools. Either use MPV's enterprise plan with a DPA, or desensitise first. (Detailed Do-Not-Paste list in 3.2.)
**Rule 3: Verify Facts.** LLMs hallucinate. Numbers, dates, names, ISO clauses, regulatory citations: all must be checked against source documents before any AI output is signed, sent, or filed. "AI drafts, human verifies" is the lifetime rule.
**Rule 4: No Prohibited Uses.** No final clinical decisions. No hiring/firing without human review. No surveillance without consent. Nothing on Vietnam AI Law's Article 8 prohibited list. When in doubt, escalate; never assume permission.
**Rule 5: Log Matters.** For any AI output that becomes a record (a CAPA, a customer reply, a supplier communication, a hiring decision): log the tool used, the date, who reviewed it, and what was changed. This is your audit trail under ISO 13485 and your defence under the AI Law's incident-reporting requirements.
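The audit-trail fields named above (tool, date, reviewer, changes) reduce to one row per reviewed output. A minimal sketch of such a log, assuming a simple CSV file; `ai_use_log.csv` and `log_ai_use` are hypothetical names, not an existing MPV system:

```python
import csv
import datetime
from pathlib import Path

# Hypothetical location; MPV's QMS would define the controlled one.
LOG_PATH = Path("ai_use_log.csv")
FIELDS = ["date", "tool", "record_type", "reviewer", "changes_made"]

def log_ai_use(tool: str, record_type: str, reviewer: str, changes_made: str) -> dict:
    """Append one audit-trail row: which tool, when, who reviewed, what changed."""
    row = {
        "date": datetime.date.today().isoformat(),
        "tool": tool,
        "record_type": record_type,
        "reviewer": reviewer,
        "changes_made": changes_made,
    }
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header once, on first use
        writer.writerow(row)
    return row

entry = log_ai_use("ChatGPT Enterprise", "CAPA draft", "QA Manager",
                   "corrected ISO clause citation")
```

Even a shared spreadsheet with these five columns satisfies the rule; what matters is that every AI-assisted record can answer "who reviewed this, and what did they change?"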
"Public AI" means the free tier of ChatGPT, Gemini, Claude, browser AI plugins, and any AI tool MPV has not signed a data-processing agreement (DPA) with. These tools may store, log, or train on what you type. The categories below are off-limits regardless of how convenient pasting would be.
Three approved patterns for working with sensitive content:
**Pattern 1: Desensitise.** Replace identifiers with placeholders before pasting. "Bach Mai Hospital" → "Customer A". Names → roles. Specific quantities → rounded ranges. The analysis still works without the identifying data.
**Pattern 2: Enterprise plan.** Use MPV's Microsoft 365 Copilot, ChatGPT Enterprise, or Claude Teams account, with a signed DPA and Decree 13/2023 compliance. Data is contractually protected.
**Pattern 3: Vietnamese vendors.** FPT.AI, FPT AI Factory, VinAI, Viettel AI: data residency in Vietnam, contracts in VND. Use these for the most-regulated workflows.
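The desensitisation pattern can be sketched in a few lines. This is illustrative only: the `REPLACEMENTS` table and the `[QUANTITY]` placeholder are assumptions, and MPV's real list would be maintained per department:

```python
import re

# Hypothetical mapping of real identifiers to neutral placeholders.
REPLACEMENTS = {
    r"\bBach Mai Hospital\b": "Customer A",
    r"\bNguyen Van An\b": "Purchasing Manager",  # names become roles
}

def desensitise(text: str) -> str:
    """Strip identifying strings before pasting into a public AI tool."""
    for pattern, placeholder in REPLACEMENTS.items():
        text = re.sub(pattern, placeholder, text)
    # Blank out specific quantities; the analysis rarely needs exact figures.
    text = re.sub(r"\b\d[\d,]*\b", "[QUANTITY]", text)
    return text

clean = desensitise("Bach Mai Hospital ordered 12,347 units via Nguyen Van An.")
# clean == "Customer A ordered [QUANTITY] units via Purchasing Manager."
```

The point is the discipline, not the tooling: strip first, paste second, and keep the mapping on MPV's side so answers can be re-identified locally.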
Vietnam now has the most comprehensive AI legal framework in Southeast Asia. The three documents below shape every AI decision MPV makes. Know them at this level of operational detail; you will need to quote them in any vendor or board conversation.
**Vietnam AI Law (No. 134/2025/QH15):**
| Attribute | Detail |
|---|---|
| Passed | 10 December 2025 (National Assembly) |
| Effective | 1 March 2026 — already in force |
| Structure | 35 articles · principle-based · risk-tiered (4 tiers) |
| Grace period — healthcare/education/finance | 18 months from effect → full compliance by 1 September 2027 |
| Grace period — other sectors | 12 months → full compliance by 1 March 2027 |
| Risk tiers | High-Risk · Medium-Risk · Low-Risk · (Prohibited — Article 8) |
| Core principle | Human-centric — "humans remain the final arbiter" (Article 4) |
| Enforcement | Suspension, recall, fines up to 2% of annual revenue for severe breaches (Article 29) |
| Lead authority | Ministry of Science and Technology (MOST) + centralised one-stop AI portal |
| Inspiration | EU AI Act (similar structure, similar terms — with Vietnamese-specific provisions) |
**Decree 13/2023 (Personal Data Protection):**
| Attribute | Detail |
|---|---|
| Effective | 1 July 2023 (already in force, applies now) |
| Scope | Any processing of Vietnamese personal data, including by AI systems |
| Lawful bases | Consent, legitimate interest, legal obligation (similar to GDPR) |
| Breach notification | 72 hours to Ministry of Public Security and affected individuals |
| Cross-border transfer | Requires Impact Assessment + government notification |
| Sensitive data | Health data is "sensitive" — extra consent + impact assessment required |
**ISO 13485 (medical-device quality management):**
| Attribute | Detail |
|---|---|
| What it is | The international standard for medical-device quality management. MPV holds certification; losing it would shut the company down. |
| AI implication | If AI is used in any QMS process (drafting CAPAs, classifying complaints, scoring suppliers), the use must be controlled, traceable, and validated. |
| Key clauses for AI | 4.1.6 (software validation) · 8.2.1 (feedback & complaint) · 8.5.2 (CAPA) · 7.5 (production controls) |
| Auditor expectation | "Show me how you control AI in this process — who approves outputs, what's logged, how you'd roll back a bad decision." |
Vietnam's AI Law classifies systems by risk tier. The tier determines the compliance burden. Mis-classifying high-risk as medium-risk is a fast path to that 2%-of-revenue penalty.
**Prohibited (Article 8).** Social-credit scoring · subliminal manipulation · biometric mass surveillance · exploitation of vulnerable groups. Cannot be deployed at all. MPV will not encounter these in normal operations.
**High-Risk.** Significant impact on life, health, safety, or fundamental rights. Examples: AI in clinical decisions, hiring decisions without human override, critical infrastructure. Requires: registration, conformity assessment, human oversight, transparency, post-market monitoring, incident reporting.
**Medium-Risk.** AI that interacts with humans in ways where the AI nature isn't obvious, or that could mislead. Examples: customer chatbots, deepfake-capable tools. Requires: transparency labelling, accountability documentation, sample audits.
**Low-Risk.** Everything else. Most productivity AI lives here. Requires: general accountability, monitoring of incidents and complaints, no pre-market burden.
| MPV use case | Likely tier | Notes |
|---|---|---|
| CAPA drafting (GenAI, QA reviews + approves) | Low-Risk | Human-in-the-loop on a regulated record — manageable |
| Email classification & routing | Low-Risk | Productivity, internal |
| Computer Vision defect inspection (production decisions) | Medium- to High-Risk | If AI rejects/releases product without human check → High-Risk. With human override on every reject → Medium-Risk. |
| Predictive maintenance ML | Low-Risk | Recommends maintenance window; humans schedule |
| HR helpdesk chatbot | Medium-Risk | Customer/employee-facing, must label as AI |
| Demand forecasting | Low-Risk | Informs planning; humans decide |
| CV-based recruiting screen | High-Risk | Hiring decision impact — heavy compliance burden if AI used without override |
- Incentives: pro-innovation framework; preferential 10% corporate tax for qualifying digital activities; R&D deductions become available.
- Today: Vietnam is one of the first countries in ASEAN with a standalone AI Law.
- Coming next: the government's guiding decree, sanctions decree, list of high-risk systems, and National AI Ethics Framework. Watch MOST's portal.
- 1 March 2027: all sectors except healthcare/education/finance must be fully compliant with the AI Law.
- 1 September 2027: MPV must have all AI systems classified, documented, and compliant.
The legal tiers tell you the compliance burden. The six categories below tell you what can actually go wrong. For every AI use case, walk through these six. If you have a mitigation for each, you're ready to deploy.
**1. Hallucination**
What it is: An LLM confidently invents plausible-sounding facts: ISO clause numbers, dates, statistics, names.
MPV mitigation: "AI drafts, human verifies." Every regulatory citation in an AI-drafted document is checked against the source. Critical numbers are spot-verified. Instruct the tool in every prompt to mark uncertainty, so reviewers know where to look.
**2. Bias**
What it is: AI reproduces patterns in its training data, including unwanted biases (gender in hiring, region in lending, shift bias in defect classification).
MPV mitigation: Don't use AI for hiring/firing decisions in Phase 1. For pattern-detection AI (defect, anomaly), monitor outputs across operators, shifts, and lines, and flag if one group is systematically singled out.
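The shift-monitoring mitigation can be sketched as a simple disparity check. The 2× ratio threshold and the simulated decision counts below are illustrative assumptions, not MPV policy:

```python
from collections import defaultdict

def reject_rates(decisions):
    """decisions: iterable of (shift, rejected: bool). Returns reject rate per shift."""
    totals, rejects = defaultdict(int), defaultdict(int)
    for shift, rejected in decisions:
        totals[shift] += 1
        if rejected:
            rejects[shift] += 1
    return {s: rejects[s] / totals[s] for s in totals}

def flag_disparity(rates, ratio_threshold=2.0):
    """Flag any shift whose reject rate exceeds `ratio_threshold` x the lowest rate."""
    lo = min(rates.values())
    if lo == 0:
        return [s for s, r in rates.items() if r > 0]
    return [s for s, r in rates.items() if r / lo > ratio_threshold]

# Simulated month of CV-inspection decisions (illustrative numbers, not MPV data):
decisions = ([("day", True)] * 5 + [("day", False)] * 95 +
             [("night", True)] * 15 + [("night", False)] * 85)
rates = reject_rates(decisions)   # day: 0.05, night: 0.15
flagged = flag_disparity(rates)   # ["night"] -> investigate lighting, camera, model
```

A flag does not prove bias; it proves the outputs need a human look, which is exactly the 8.2.1 feedback loop an auditor will ask for.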
**3. Data leakage**
What it is: Confidential MPV data pasted into a public AI tool ends up in model training or is exposed via prompt-injection attacks.
MPV mitigation: The Do-Not-Paste list (3.2). Enterprise plans with a DPA. Vietnamese vendors for the most-sensitive workflows. An annual employee refresher.
**4. IP / copyright**
What it is: AI-generated content may incorporate copyrighted material; AI-drafted code may carry incompatible open-source licenses; AI-generated images may not be MPV's to use.
MPV mitigation: Use enterprise plans with IP indemnification (Microsoft, Anthropic, and OpenAI offer this). Treat all AI text as a draft until reviewed. For any external publication, verify originality.
**5. Model drift**
What it is: An AI model that worked well six months ago drifts as data, prompts, or upstream systems change. Outputs degrade silently.
MPV mitigation: Periodic re-scoring of AI outputs (with the same rule-based rubric you used in Chapter 1). If quality drops, retrain, re-prompt, or re-vendor. Build the re-scoring into the QA Manager's monthly review.
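The monthly re-scoring could look like the sketch below. The three rubric checks are a hypothetical stand-in for the Chapter 1 rubric, and the baseline and tolerance values are assumptions:

```python
def rubric_score(output: str) -> int:
    """Hypothetical rule-based rubric: 1 point each for citing a source,
    naming an owner, and giving a due date."""
    text = output.lower()
    return sum(["source:" in text, "owner:" in text, "due:" in text])

def monthly_drift_check(outputs, baseline_avg, tolerance=0.5):
    """Re-score a sample of recent AI outputs against the baseline average.
    Returns (quality_held, current_average)."""
    avg = sum(rubric_score(o) for o in outputs) / len(outputs)
    return avg >= baseline_avg - tolerance, avg

# Sample of this month's AI-drafted records (illustrative text, not real CAPAs):
sample = [
    "Source: CAPA-102. Owner: QA. Due: 2026-04-01.",
    "Owner: Production. Due: next week.",
]
ok, avg = monthly_drift_check(sample, baseline_avg=2.5)  # avg == 2.5, ok is True
```

The score itself matters less than the trend: a falling monthly average is the early-warning signal that triggers retrain / re-prompt / re-vendor.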
**6. Vendor risk**
What it is: A vendor changes pricing, terms, or model behaviour, or goes out of business. Your workflow breaks.
MPV mitigation: The three exit questions from Chapter 2.7. A multi-vendor strategy (don't put all Phase 1 productivity AI in one tool). An annual vendor review.
| Risk | ISO 13485 clause(s) | What an auditor will check |
|---|---|---|
| Hallucination | 4.1.6 (software validation), 7.5 (production controls) | Records of AI output review before any record is signed |
| Bias | 8.2.1 (feedback) | Monitoring of AI outputs for systematic disparate impact |
| Data leakage | 4.1.6, 7.5.6 (validation) | DPA exists, training records, Do-Not-Paste awareness |
| IP/copyright | 4.2.4 (documentation) | Vendor IP indemnification in contract |
| Model drift | 8.2.5 (monitoring & measurement) | Periodic re-scoring records |
| Vendor risk | 7.4 (purchasing), 4.1.5 (process outsourcing) | Vendor evaluation records, exit plan documented |
Rules don't enforce themselves. Six principles + named roles + an incident-response protocol turn this chapter from a policy document into a daily practice.
| Role | Owner (typical) | Responsibilities |
|---|---|---|
| AI Governance Lead | CIO / Head of Operations | Owns AI use policy, vendor approvals, annual review |
| Data Protection Officer (DPO) | Compliance / Legal | Decree 13/2023 compliance, DPA approvals, breach response |
| QMS Owner | Quality Manager | AI in ISO 13485 processes, CAPA on AI incidents |
| Department Champions | One per department | Coach team on CRAFT, Do-Not-Paste, score outputs |
| Every Employee | You | 5 Golden Rules + Do-Not-Paste + verify + log |
1. **Within 1 hour:** Stop using the AI tool involved. Notify your manager. If personal data was exposed, also notify the DPO immediately.
2. **Within 24 hours:** Establish what happened, what data was involved, and what the impact is. If personal data was exposed, the DPO files a Decree 13 breach notice within 72 hours. If a quality or regulatory record is affected, a CAPA opens.
3. **Within 30 days:** Root cause, corrective action, preventive action, verification. Update the AI use policy if needed. Log the incident in the QMS for ISO audit.
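The response clock above is simple date arithmetic, and it is worth having it precomputed the moment an incident is logged. A minimal sketch; the function and field names are illustrative, not an existing MPV tool:

```python
from datetime import datetime, timedelta

def incident_deadlines(detected_at: datetime) -> dict:
    """Deadlines from the moment an AI incident is detected: 1 hour to stop
    and notify, 24 hours to assess, 72 hours for a Decree 13 breach notice
    (if personal data was exposed), 30 days to close the CAPA."""
    return {
        "stop_and_notify": detected_at + timedelta(hours=1),
        "assessment": detected_at + timedelta(hours=24),
        "decree13_breach_notice": detected_at + timedelta(hours=72),
        "capa_closure": detected_at + timedelta(days=30),
    }

d = incident_deadlines(datetime(2026, 3, 2, 9, 0))
# d["stop_and_notify"]        -> 2026-03-02 10:00
# d["decree13_breach_notice"] -> 2026-03-05 09:00
# d["capa_closure"]           -> 2026-04-01 09:00
```

Note that the 72-hour Decree 13 clock runs from detection, not from the end of the 24-hour assessment; starting it late is itself a breach.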
✅ 5 Golden Rules: Human Decides · Protect Data · Verify Facts · No Prohibited Uses · Log Matters
✅ Do-Not-Paste list: Customer · Employee · IP · Quality · Financial · Regulated — plus 3 safe patterns
✅ Vietnam AI Law (No. 134/2025/QH15) in force since 1 March 2026 · 35 articles · 2% revenue penalty
✅ Healthcare grace period: until 1 September 2027 — MPV's window to align all AI systems
✅ Decree 13/2023: 72-hour breach notification · health data is sensitive
✅ ISO 13485 + AI: clauses 4.1.6 · 7.5 · 8.2.1 · 8.5.2 — AI use must be controlled, traceable, validated
✅ 4 risk tiers: Prohibited · High · Medium · Low — most MPV AI is Low; CV inspection & chatbot are Medium
✅ 6 risk categories: Hallucination · Bias · Data leakage · IP · Model drift · Vendor risk — each with mitigation
✅ 6 principles in the AI policy + 5 named governance roles + 3-step incident response