Quanskill
📖 THEORY | 90 Minutes | Chapter 3 of 4

Chapter 3: Responsible AI at MPV

The 5 Golden Rules every MPV employee memorises. The Do-Not-Paste list. Vietnam's AI Law (in force since 1 March 2026), Decree 13/2023, and ISO 13485 alignment. Six risk categories and how MPV mitigates each. The governance roles, incident response, and audit-ready trail.

🎯 Chapter Objectives

Chapter 1 gave you literacy. Chapter 2 gave you skill. Chapter 3 keeps both of those safe — for you, for MPV, and for the patients downstream. Responsible AI at a medical-device manufacturer is not a "values" topic — it is an operational, legal, and competitive necessity. Skip this chapter and you will be the source of MPV's first AI incident.

📌 Chapter 3 Learning Snapshot

1 · Rules — 5 Golden Rules. Do-Not-Paste list. The shortcut: "Would my manager mind?"

2 · Law — AI Law in force since 1 March 2026. Healthcare grace until 1 September 2027.

3 · Operate — Risk tiers · 6 risk categories · governance roles · incident response · audit trail.

3.1 The Five Golden Rules Every MPV Employee Memorises

Every MPV employee should be able to recite these five rules before using AI for anything. They are deliberately short. Print them. Stick them on your monitor. They will save MPV from its first AI incident.

1. Human Always Decides

AI assists; humans approve. Quality release, customer commitments, regulatory submissions, hiring, discipline — every decision that affects a patient, customer, employee, or regulator stays with a named human. This is also Article 4 of Vietnam's AI Law.

2. Protect the Data

No customer/hospital data, no employee records, no designs, no formulae, no quality records, no financial detail into public AI tools. Either use MPV's enterprise plan with a DPA, or desensitise first. (Detailed Do-Not-Paste list in 3.2.)

3. Verify Every Fact

LLMs hallucinate. Numbers, dates, names, ISO clauses, regulatory citations — all must be checked against source documents before any AI output is signed, sent, or filed. "AI drafts, human verifies" is the lifetime rule.

4. No AI for Prohibited Uses

Final clinical decisions. Hiring/firing without human review. Surveillance without consent. Anything in Vietnam AI Law's Article 8 prohibited list. When in doubt, escalate — never assume permission.

5. Log What Matters

For any AI output that becomes a record (CAPA, customer reply, supplier comm, hire decision): log the tool used, the date, who reviewed it, what was changed. This is your audit trail under ISO 13485 and your defence under the AI Law's incident-reporting requirements.
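The audit trail in Rule 5 can be as lightweight as a shared CSV. A minimal sketch of the four fields above — the file name, field names, and example values are illustrative, not an MPV standard:

```python
import csv
from datetime import date
from pathlib import Path

# Field names mirror Rule 5: tool used, date, who reviewed, what was changed.
# File name and fields are illustrative, not an MPV standard.
LOG_FIELDS = ["date", "tool", "task", "reviewer", "changes_made"]

def log_ai_use(path, tool, task, reviewer, changes_made):
    """Append one audit-trail row to a shared CSV log."""
    log_file = Path(path)
    write_header = not log_file.exists()
    with log_file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "task": task,
            "reviewer": reviewer,
            "changes_made": changes_made,
        })

log_ai_use("ai_use_log.csv", "ChatGPT Enterprise", "CAPA draft",
           "QA Manager", "Corrected one ISO clause citation; tightened root cause")
```

Anything this simple is already audit-ready: it answers "which tool, when, who reviewed, what changed" for every record an auditor pulls.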

Bonus rule — When in Doubt, Ask: Before using AI on a task you haven't done before, ask your manager OR the AI Governance lead. 30 seconds of "is this okay?" beats 6 months of incident remediation.

Quick Recap — The 5 Golden Rules

1. Human Always Decides · 2. Protect the Data · 3. Verify Every Fact · 4. No Prohibited Uses · 5. Log What Matters

3.2 What Never Goes Into Public AI Tools — The Do-Not-Paste List

"Public AI" means the free tier of ChatGPT, Gemini, Claude, browser AI plugins, and any AI tool MPV has not signed a data-processing agreement (DPA) with. These tools may store, log, or train on what you type. The categories below are off-limits regardless of how convenient pasting would be.

The Do-Not-Paste List

Customer & Hospital Data
Patient identifiers, hospital PO numbers and quantities, distributor contracts, complaint records, tender responses tied to a hospital name.
Employee Records
Names + salaries, performance scores, ID numbers, medical notes, disciplinary records, CV/résumés with identifiers.
Designs & IP
Product drawings, mold specs, material formulas, supplier pricing, R&D protocols, FDA / MOH technical files.
Quality & Batch Records
CAPA reports with full detail, deviation investigations, ISO 13485 audit findings, FDA correspondence, batch genealogy.
Financial Detail
Margins by SKU, unreleased forecasts, M&A discussions, banking detail, tax filings, customer-by-customer revenue.
Anything Regulated
MOH DMEC submissions, FDA correspondence, ISO 13485 records, EU MDR/CE technical files, audit trails.

Safe Patterns Instead

Three approved patterns for working with sensitive content:

🟢 Pattern A — Desensitise

Replace identifiers with placeholders before pasting. "Bach Mai Hospital" → "Customer A". Names → roles. Specific quantities → rounded ranges. The analysis still works without the identifying data.

🟢 Pattern B — Enterprise plan

Use MPV's Microsoft 365 Copilot, ChatGPT Enterprise, or Claude Teams account — with a signed DPA and Decree 13/2023 compliance. Data is contractually protected.

🟢 Pattern C — VN-resident vendor

FPT.AI, FPT AI Factory, VinAI, Viettel AI — data residency in Vietnam, contracts in VND. Use for the most-regulated workflows.
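Pattern A can even be partly scripted. A sketch of rule-based desensitisation — the redaction rules here are illustrative examples only; a real list would be built from MPV's own customer, employee, and product identifiers:

```python
import re

# Illustrative redaction rules — a real list would come from MPV's own identifiers.
REPLACEMENTS = {
    r"Bach Mai Hospital": "Customer A",          # customer names -> placeholders
    r"\bNguyen Van [A-Z]\w*\b": "the operator",  # person names -> roles
    r"\b\d{4,6} units\b": "a mid-size order",    # exact quantities -> ranges
}

def desensitise(text: str) -> str:
    """Replace identifying details with placeholders before pasting into public AI."""
    for pattern, placeholder in REPLACEMENTS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(desensitise("Bach Mai Hospital ordered 12000 units via Nguyen Van Binh."))
# → Customer A ordered a mid-size order via the operator.
```

The analysis you ask the AI for still works on the desensitised text; only the identifying data is gone. A script like this catches the routine cases — a human still scans the text before pasting.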

The shortcut rule: If you would not paste this into a public website's contact form, do not paste it into a public AI chat.

Quick Recap — Do-Not-Paste

No customer/hospital data · No employee records · No designs/IP · No quality/batch records · Three safe patterns: Desensitise · Enterprise · VN vendor

3.3 Vietnam's AI & Data Framework — Operational Detail

Vietnam now has the most comprehensive AI legal framework in Southeast Asia. The three documents below shape every AI decision MPV makes. Know them at this level of operational detail — you will hear them quoted in any vendor or board conversation.

Law on Artificial Intelligence (Law No. 134/2025/QH15)

Passed: 10 December 2025 (National Assembly)
Effective: 1 March 2026 — already in force
Structure: 35 articles · principle-based · risk-tiered (4 tiers)
Grace period (healthcare/education/finance): 18 months from effect → full compliance by 1 September 2027
Grace period (other sectors): 12 months → full compliance by 1 March 2027
Risk tiers: High-Risk · Medium-Risk · Low-Risk · (Prohibited — Article 8)
Core principle: Human-centric — "humans remain the final arbiter" (Article 4)
Enforcement: Suspension, recall, fines up to 2% of annual revenue for severe breaches (Article 29)
Lead authority: Ministry of Science and Technology (MOST) + centralised one-stop AI portal
Inspiration: EU AI Act (similar structure, similar terms — with Vietnamese-specific provisions)

Decree 13/2023 on Personal Data Protection

Effective: 1 July 2023 — already in force, applies now
Scope: any processing of Vietnamese personal data, including by AI systems
Lawful bases: consent, legitimate interest, legal obligation (similar to GDPR)
Breach notification: 72 hours to the Ministry of Public Security and affected individuals
Cross-border transfer: requires an impact assessment + government notification
Sensitive data: health data is "sensitive" — extra consent + impact assessment required

ISO 13485:2016 — Medical Device QMS

What it is: the international standard for medical-device quality management. MPV holds certification — losing it shuts the company down.
AI implication: if AI is used in any QMS process (drafting CAPAs, classifying complaints, scoring suppliers), the use must be controlled, traceable, and validated.
Key clauses for AI: 4.1.6 (software validation) · 8.2.1 (feedback & complaints) · 8.5.2 (CAPA) · 7.5 (production controls)
Auditor expectation: "Show me how you control AI in this process — who approves outputs, what's logged, how you'd roll back a bad decision."
The practical rule for MPV: AI is allowed and encouraged for productivity (writing, summarising, planning, analysis). AI is not allowed to make autonomous decisions in regulated processes. Every AI output that touches quality, customer commitments, employee personal data, or regulatory filings needs a named human reviewer with a logged sign-off.

Quick Recap — Legal Framework

AI Law in force since 1 Mar 2026 · Healthcare grace ends 1 Sep 2027 · Fines up to 2% of revenue · Decree 13: 72-hr breach notification · ISO 13485: AI must be controlled in QMS

3.4 The Risk-Based Framework — Which Tier Are MPV's AI Systems In?

Vietnam's AI Law classifies systems by risk tier. The tier determines the compliance burden. Mis-classifying high-risk as medium-risk is a fast path to that 2%-of-revenue penalty.

Three Tiers + Prohibited

Prohibited (Article 8)

Social-credit scoring · subliminal manipulation · biometric mass surveillance · exploitation of vulnerable groups. Cannot be deployed at all. MPV will not encounter these in normal operations.

High-Risk

Significant impact on life, health, safety, or fundamental rights. Examples: AI in clinical decisions, hiring decisions without human override, critical infrastructure. Requires: registration, conformity assessment, human oversight, transparency, post-market monitoring, incident reporting.

Medium-Risk

AI that interacts with humans in ways where its AI nature isn't obvious, or that could mislead. Examples: customer chatbots, deepfake-capable tools. Requires: transparency labelling, accountability documentation, sample audits.

Low-Risk

Everything else. Most productivity AI lives here. Requires: general accountability, incident/complaint monitoring, no pre-market burden.

Which Tier Are MPV's AI Systems In?

CAPA drafting (GenAI, QA reviews + approves) — Low-Risk. Human-in-the-loop on a regulated record — manageable.
Email classification & routing — Low-Risk. Productivity, internal.
Computer-vision defect inspection (production decisions) — Medium- to High-Risk. If AI rejects/releases product without a human check → High-Risk; with human override on every reject → Medium-Risk.
Predictive-maintenance ML — Low-Risk. Recommends a maintenance window; humans schedule.
HR helpdesk chatbot — Medium-Risk. Customer/employee-facing; must be labelled as AI.
Demand forecasting — Low-Risk. Informs planning; humans decide.
CV-based recruiting screen — High-Risk. Impacts hiring decisions — heavy compliance burden if used without human override.
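The pattern in the classifications above boils down to two questions: does the system affect health, safety, or fundamental rights, and is there a human override? A rough first-pass triage sketch — illustrative only, not legal advice; the official high-risk list will come from MOST's implementing decrees:

```python
def likely_tier(affects_health_or_rights: bool,
                human_override: bool,
                public_facing: bool) -> str:
    """Rough first-pass triage of Vietnam AI Law tiers — illustrative only.
    The authoritative high-risk list comes from the implementing decrees."""
    if affects_health_or_rights and not human_override:
        return "High-Risk"        # e.g. CV inspection auto-rejecting product
    if affects_health_or_rights or public_facing:
        return "Medium-Risk"      # e.g. chatbot; CV inspection with override
    return "Low-Risk"             # e.g. internal productivity AI

print(likely_tier(True, False, False))   # → High-Risk
print(likely_tier(True, True, False))    # → Medium-Risk
print(likely_tier(False, False, False))  # → Low-Risk
```

Use a sketch like this only to decide what to escalate — the AI Governance Lead makes the final classification call.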

Transition Timeline — What MPV Must Do by When

1 January 2026 — DTI Law in force

Pro-innovation framework. Preferential 10% corporate tax for qualifying digital activities. R&D deductions become available.

1 March 2026 — AI Law in force ✓ (now)

Today's status. Vietnam is one of the first countries in ASEAN with a standalone AI Law.

Through 2026 — Implementing decrees published

The government's guiding decree, sanctions decree, list of high-risk systems, and National AI Ethics Framework. Watch MOST's portal.

1 March 2027 — General compliance deadline

All sectors except healthcare/education/finance must be fully compliant with the AI Law.

1 September 2027 — Healthcare grace ends

MPV must have all AI systems classified, documented, and compliant by this date.

Quick Recap — Risk Tiers

Prohibited · High · Medium · Low · Most MPV productivity AI = Low · CV inspection & chatbot = Medium · Healthcare deadline: 1 Sep 2027

3.5 Six Risk Categories — and How MPV Mitigates Each

The legal tiers tell you the compliance burden. The six categories below tell you what can actually go wrong. For every AI use case, walk through these six. If you have a mitigation for each, you're ready to deploy.

Risk 1 — Hallucination

What it is: LLMs confidently invent plausible-sounding facts — ISO clause numbers, dates, statistics, names.
MPV mitigation: "AI drafts, human verifies." Every regulatory citation in an AI-drafted document is checked against the source. Critical numbers are spot-verified. Instruct the AI to mark uncertain claims so the reviewer knows where to look hardest.

Risk 2 — Bias

What it is: AI reproduces patterns in training data, including unwanted biases (gender in hiring, region in lending, shift bias in defect classification).
MPV mitigation: Don't use AI for hiring/firing decisions in Phase 1. For pattern-detection AI (defect, anomaly), monitor outputs across operators/shifts/lines and flag if one group is systematically singled out.
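The shift/line monitoring described above can be a few lines of arithmetic: compute the AI's flag rate per group and alert when any group deviates sharply from the mean. A sketch with hypothetical data — the groups, tolerance, and numbers are illustrative:

```python
from collections import Counter

def flag_rate_by_group(records):
    """records: (group, flagged) pairs, e.g. ('shift_A', True).
    Returns each group's AI flag rate."""
    totals, flags = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        if flagged:
            flags[group] += 1
    return {g: flags[g] / totals[g] for g in totals}

def disparity_alerts(rates, tolerance=0.5):
    """Flag groups whose rate deviates from the mean by more than
    `tolerance` (relative) — the threshold is an illustrative choice."""
    mean = sum(rates.values()) / len(rates)
    return [g for g, r in rates.items() if abs(r - mean) > tolerance * mean]

# Hypothetical monthly sample: shift C is rejected far more often than A or B.
sample = ([("shift_A", True)] * 6 + [("shift_A", False)] * 94
        + [("shift_B", True)] * 6 + [("shift_B", False)] * 94
        + [("shift_C", True)] * 20 + [("shift_C", False)] * 80)

rates = flag_rate_by_group(sample)
print(disparity_alerts(rates))  # → ['shift_C']
```

An alert is not proof of bias — it is the trigger for a human to investigate whether the disparity is real (different machines, materials) or a model artefact.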

Risk 3 — Data Leakage

What it is: Confidential MPV data pasted into public AI ends up in model training or exposed via prompt-injection attacks.
MPV mitigation: Do-Not-Paste list (3.2). Enterprise plans with DPA. Vietnamese vendors for the most-sensitive workflows. Annual employee refresher.

Risk 4 — IP & Copyright

What it is: AI-generated content may incorporate copyrighted material; AI-drafted code may carry incompatible open-source licences; AI-generated images may not be MPV's to use.
MPV mitigation: Use enterprise plans with IP indemnification (Microsoft, Anthropic, OpenAI offer this). Treat all AI text as draft until reviewed. For any external publication, verify originality.

Risk 5 — Model Drift

What it is: An AI model that worked well 6 months ago drifts as data, prompts, or upstream systems change. Outputs degrade silently.
MPV mitigation: Periodic re-scoring of AI outputs (same rule-based rubric you used in Chapter 1). If quality drops, retrain / re-prompt / re-vendor. Build the re-scoring into the QA Manager's monthly review.
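The monthly re-scoring check is simple arithmetic: compare this month's mean rubric score to the baseline and trigger a review if the drop exceeds a threshold. A sketch — the 1-to-5 scale, the 0.5 threshold, and the sample scores are illustrative assumptions:

```python
def drift_check(baseline_scores, current_scores, max_drop=0.5):
    """Compare this month's rubric scores to the baseline mean.
    Returns (current_mean, drifted); drifted=True triggers a
    re-prompt / retrain / re-vendor review. Threshold is illustrative."""
    baseline_mean = sum(baseline_scores) / len(baseline_scores)
    current_mean = sum(current_scores) / len(current_scores)
    return current_mean, (baseline_mean - current_mean) > max_drop

# Hypothetical monthly samples scored with the Chapter 1 rubric (1-5 scale).
baseline = [4, 5, 4, 4, 5, 4]
this_month = [3, 4, 3, 3, 4, 3]
mean, drifted = drift_check(baseline, this_month)
print(round(mean, 2), drifted)  # → 3.33 True
```

Keeping the same rubric month to month is the point: drift only shows up when the measuring stick stays fixed.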

Risk 6 — Vendor Risk

What it is: Vendor changes pricing, terms, model behaviour, or goes out of business. Your workflow breaks.
MPV mitigation: The three exit questions from Chapter 2.7. Multi-vendor strategy (don't put all Phase 1 productivity AI in one tool). Annual vendor review.

ISO 13485 Alignment in One Page

Hallucination — 4.1.6 (software validation), 7.5 (production controls). Auditor checks: records of AI output review before any record is signed.
Bias — 8.2.1 (feedback). Auditor checks: monitoring of AI outputs for systematic disparate impact.
Data leakage — 4.1.6, 7.5.6 (validation). Auditor checks: DPA exists, training records, Do-Not-Paste awareness.
IP/copyright — 4.2.4 (documentation). Auditor checks: vendor IP indemnification in the contract.
Model drift — 8.2.5 (monitoring & measurement). Auditor checks: periodic re-scoring records.
Vendor risk — 7.4 (purchasing), 4.1.5 (process outsourcing). Auditor checks: vendor evaluation records, documented exit plan.

Quick Recap — 6 Risk Categories

Hallucination — verify · Bias — monitor · Data leakage — DPA + Do-Not-Paste · IP — vendor indemnification · Model drift — periodic re-score · Vendor risk — multi-vendor + exit plan

3.6 How MPV Operationalises Responsible AI

Rules don't enforce themselves. Six principles + named roles + an incident-response protocol turn this chapter from a policy document into a daily practice.

Six Principles in MPV's AI Use Policy

1. Human-Centered
Humans decide, always. AI assists. From Article 4 of the AI Law.
2. Fair & Non-Biased
Monitor outputs for systematic disparate impact across operators/shifts/groups.
3. Transparent
Customer-facing AI labelled as AI. Employees can ask which decisions used AI.
4. Accountable
Every AI output that becomes a record has a named human reviewer and approval timestamp.
5. Secure & Private
Decree 13/2023 + DPA + Do-Not-Paste list. Annual training refresher.
6. Contestable
Any employee or customer affected by an AI-informed decision can request human re-review.

Governance Roles at MPV

AI Governance Lead (typically CIO / Head of Operations) — owns the AI use policy, vendor approvals, annual review.
Data Protection Officer, DPO (Compliance / Legal) — Decree 13/2023 compliance, DPA approvals, breach response.
QMS Owner (Quality Manager) — AI in ISO 13485 processes, CAPA on AI incidents.
Department Champions (one per department) — coach the team on CRAFT, Do-Not-Paste, scoring outputs.
Every Employee (you) — 5 Golden Rules + Do-Not-Paste + verify + log.

Incident Response in 3 Steps

1. Contain

Within 1 hour: Stop using the AI tool involved. Notify your manager. If personal data was exposed, also notify the DPO immediately.

2. Assess

Within 24 hours: What happened, what data, what impact? If personal data → DPO files Decree 13 breach notice within 72 hours. If quality/regulatory record → CAPA opens.

3. CAPA

Within 30 days: Root-cause, corrective action, preventive action, verification. Update the AI use policy if needed. Log in QMS for ISO audit.
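The deadlines in the three steps are fixed offsets from the moment of detection, so they can be computed the instant an incident is logged. A sketch using the timelines stated above (1 hour, 24 hours, 72 hours, 30 days); the function and field names are illustrative:

```python
from datetime import datetime, timedelta

def incident_deadlines(detected_at: datetime) -> dict:
    """Deadlines for the 3-step protocol, offset from detection time.
    Offsets follow the text: contain 1h, assess 24h,
    Decree 13 breach notice 72h, CAPA close 30 days."""
    return {
        "contain_by": detected_at + timedelta(hours=1),
        "assess_by": detected_at + timedelta(hours=24),
        "decree13_notice_by": detected_at + timedelta(hours=72),
        "capa_close_by": detected_at + timedelta(days=30),
    }

d = incident_deadlines(datetime(2026, 4, 1, 9, 0))
print(d["decree13_notice_by"])  # → 2026-04-04 09:00:00
```

The 72-hour Decree 13 clock is the one that bites: it starts at detection, not at the end of your internal assessment.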

Quick Recap — Governance

6 principles in policy · Named roles: Lead · DPO · QMS · Champions · Everyone · Incident: Contain → Assess → CAPA · 72-hr breach notification (Decree 13)

📝 Chapter 3 Summary — What You Now Own

5 Golden Rules: Human Decides · Protect Data · Verify Facts · No Prohibited Uses · Log What Matters

Do-Not-Paste list: Customer · Employee · IP · Quality · Financial · Regulated — plus 3 safe patterns

Vietnam AI Law (No. 134/2025/QH15) in force since 1 March 2026 · 35 articles · 2% revenue penalty

Healthcare grace period: until 1 September 2027 — MPV's window to align all AI systems

Decree 13/2023: 72-hour breach notification · health data is sensitive

ISO 13485 + AI: clauses 4.1.6 · 7.5 · 8.2.1 · 8.5.2 — AI use must be controlled, traceable, validated

4 risk tiers: Prohibited · High · Medium · Low — most MPV AI is Low; CV inspection & chatbot are Medium

6 risk categories: Hallucination · Bias · Data leakage · IP · Model drift · Vendor risk — each with mitigation

6 principles in the AI policy + 5 named governance roles + 3-step incident response

🔬 Continue to Chapter 3 Lab →