Artificial Intelligence: Model Personal Data Protection Framework
Table of Contents
Foreword
Preface
Introduction
Foreword
Inevitably, the new risks arising from the innovative applications of AI present
regulatory challenges. In response to the rapid development of AI, regulators
around the world have rolled out various laws and regulations, including
the Artificial Intelligence Act adopted by the European Parliament in March
2024, which aims to regulate AI systems according to their risk level, and the
Interim Measures for the Management of Generative Artificial Intelligence
Services issued by our Motherland in July 2023 with a view to promoting the
healthy development of generative AI and regulating its application.
I am pleased that the Office of the Privacy Commissioner for Personal Data
has published the Artificial Intelligence: Model Personal Data Protection
Framework and taken the initiative to provide guidance for Hong Kong
enterprises, enabling them to reap the benefits of AI technology while
strengthening personal data privacy protection. This publication will
significantly enhance the level of AI governance within enterprises and ensure
the proper use of the technology.
Adopting a risk-based approach, the Framework provides a set of practical
and detailed recommendations for local enterprises intending to procure,
implement and use AI systems. It covers the entire business process and
provides pragmatic recommendations for enterprises, whether they are
procuring existing AI solutions or customising AI solutions based on their
needs. To ensure the protection of personal data privacy and the safe, ethical
and responsible use of innovative technology, I encourage enterprises to refer
to the Framework and implement the measures suggested within it when
procuring and using AI systems.
June 2024
Preface
The development of this Model Framework would not have been possible
without the unwavering support of the two supporting organisations, the
Office of the Government Chief Information Officer and the Hong Kong
Applied Science and Technology Research Institute. I am truly indebted to
our stakeholders, including members of my Office's Standing Committee on
Technological Developments and industry experts, for their invaluable inputs
and views. My heartfelt gratitude also goes to my team, particularly Ms Cecilia
SIU Wing-sze, Ms Joyce LIU Nga-yan, Ms CHAN Gwen-long, and Mr Jackey
CHEUNG Wai-yu, for their great dedication, meticulous research, and hard
work in the drafting process, particularly in considering and consolidating the
views of stakeholders and relevant best practices from other jurisdictions.
June 2024
Introduction
The Trend of AI Adoption
1 "Foundation model" generally refers to a machine learning model that is trained on broad data at scale, is designed for generality
of output, and can be adapted to a wide range of downstream distinctive tasks or applications, including simple task completion,
natural language understanding, translation, and content generation.
2 According to the US National Institute of Standards and Technology, natural language processing (NLP) is a powerful
computational approach that allows machines to meaningfully understand human spoken and written languages. Powering
activities such as algorithmic searches, speech translation and even conversational text generation, NLP is able to help us
communicate with computer systems to direct them to carry out a variety of tasks.
Figure: Vendors / model developers (e.g., AI software / hardware companies, services integrators)
Compliance with the Personal Data (Privacy) Ordinance
11. To ensure that the Data Stewardship Values and the Ethical Principles
for AI (see paragraph 3 above) are implemented, organisations should
formulate appropriate policies, practices and procedures when they
procure, implement and use AI solutions by taking into consideration
the recommended measures in the following areas:
• AI Strategy and Governance (Part I);
• Risk Assessment and Human Oversight (Part II);
• Customisation of AI Models and Implementation and Management
of AI Systems (Part III); and
• Communication and Engagement with Stakeholders (Part IV).
13. Buy-in from and active participation by top management (such as executive or board level) are essential ingredients of success in the ethical and responsible procurement, implementation and use of AI systems. Organisations should have an internal AI governance strategy, which generally comprises (i) an AI strategy, (ii) governance considerations for procuring AI solutions, and (iii) an AI governance committee (or similar body) to steer the process.
1.1 AI Strategy
3 Organisations should identify use cases of AI where the potential risks are so high that they should not be allowed. The list of use
cases should remain open to allow for the addition, removal or adjustment of use cases as AI technology evolves, as new risks
come to light and / or as new risk-mitigating measures are adopted.
PART I AI Strategy and Governance
(vi) Testing and auditing the system and its components for security
and privacy risks; and
(vii) Integrating the AI solution into the organisation's systems.
1. Sourcing AI Solutions
(i) The purposes of using AI and the intended use cases for AI
deployment;
(ii) The key privacy and security obligations and ethical requirements4 to be conveyed to potential AI suppliers;
(iii) International technical and governance standards that potential AI suppliers should follow5;
4 Among other things, these obligations and requirements should be aligned with the organisation's privacy policy (which should
comply with the PDPO) and the Ethical Principles for AI. For example, depending on the use cases and circumstances, the
obligations and requirements may address dataset fairness, the kinds of machine learning algorithms and types of learning
suitable for addressing the organisation's purposes and how ethical expectations will be met (e.g., the transparency and
explainability of different types of AI models; see section 2.3).
5 Organisations may refer to standards developed and published by professional associations such as the International Organization
for Standardization (ISO) and Institute of Electrical and Electronics Engineers (IEEE). For example, ISO/IEC 27001:2022 and ISO/
IEC 27002:2022 cover information security, ISO/IEC 27701:2019 covers personal data protection, ISO/IEC 23894:2023 covers risk
management in AI and ISO/IEC 42001:2023 covers the establishment, implementation, maintenance and continual improvement of
an AI management system within organisations.
6 Organisations are encouraged to read the PCPD's Information Leaflet on Outsourcing the Processing of Personal Data to Data
Processors for more information: https://www.pcpd.org.hk/english/publications/files/dataprocessors_e.pdf
Figure: Purpose(s) of using AI; evaluation of AI suppliers
19. The procurement team should work with the project team to select AI
solutions, determine the degree of organisational involvement that is
suitable for the purposes of the organisation7, and work with the legal
and compliance teams to address any potential data protection
compliance questions.
7 For example, the desired levels of accuracy and interpretability of the output of the AI system, as well as barriers to the
implementation of the system in the organisation's IT infrastructure, may be considered.
AI Governance Committee
Participation by senior management and interdisciplinary
collaboration should be the most significant attributes of an AI
governance committee. A cross-functional team with a mix of
skills and perspectives should be established, including business
and operational personnel, procurement teams, system analysts,
system architects, data scientists, cybersecurity professionals,
legal and compliance professionals (including data protection
officer(s)), internal audit personnel, human resources personnel
and customer service personnel.
A C-level executive (such as a chief executive officer, chief
information officer / chief technology officer, chief privacy officer or
similar senior management position) should be designated to lead
the cross-functional team.
(Optional) Independent AI and ethics advice may be sought from
external experts. An additional ethical AI committee may be
established to conduct an independent review when a project is sufficiently large, has a considerable impact and / or a high profile, or where its ethical value may be challenged.
22. As part of the PMP, any personal data privacy protection training
covering the requirements of the PDPO and the organisation's privacy
policies should also cover the collection and use of personal data in the
procurement, implementation and use of AI systems.
Figure: Training and awareness raising for employees using AI
Part II Risk Assessment and Human Oversight
8 The AI governance committee may consult frameworks such as the ISO/IEC 23894:2023 (Information technology - Artificial
intelligence - Guidance on risk management) and the US National Institute of Standards and Technology's AI Risk Management
Framework in integrating risk management into the life cycle of AI systems.
Figure: The risk assessment process
1. Conduct risk assessment by a cross-functional team during the procurement processes or when significant updates are made to an existing AI system
2. Identify and evaluate the risks of the AI system
3. Adopt appropriate risk management measures that are commensurate with the risks
9 DPP 3 stipulates that personal data must not be used for new purposes without the prescribed consent of the data subjects.
10 DPP 1 stipulates that the amount of personal data to be collected shall be adequate but not excessive in relation to the purpose of
collection.
11 Personal data that are generally considered to be more sensitive include biometric data, health data, financial data, location data,
personal data about protected characteristics (e.g., gender, ethnicity, sexual orientation, religious beliefs, political affiliations),
and the personal data of vulnerable groups, such as children.
12 DPP 4(1)(a) stipulates that all practicable steps shall be taken to ensure that any personal data (including data in a form in which
access to or processing of the data is not practicable) held by a data user is protected against unauthorized or accidental access,
processing, erasure, loss or use having particular regard to the kind of data and the harm that could result if any of those things
should occur.
(iv) The quality of the data involved, taking into account the source,
reliability, integrity, accuracy (having regard to DPP 2 of the
PDPO), consistency, completeness, relevance and usability of the
data13;
(v) The security14 of personal data used in an AI system, taking into
account how personal data may be transferred in and out of the
AI systems across the organisation's technological ecosystem15,
and whether guardrails on AI-generated output are in place
to mitigate the risk of personal data leakage, having regard to
DPP 4 of the PDPO16; and
(vi) The probability that privacy risks (e.g., the excessive collection,
misuse or leakage of personal data) will materialise and the
potential severity of the harm that might result.
28. From a wider ethical perspective, and insofar as the use of AI systems
may have an impact on the rights, freedom or interests of stakeholders,
especially individuals, the risk assessment should also take into
account:
13 DPP 2 requires a data user to take all practicable steps to ensure that personal data is accurate having regard to the purpose for
which the personal data is used.
14 Using third-party-built or maintained AI solutions requires cautious assessment of the security risks, as the AI solution may
rely simultaneously on numerous forms of software and hardware developed in-house and / or based on open-source codes and
frameworks (see section 3.2).
15 DPP 4 requires a data user to take all practicable steps to safeguard the security of personal data held by the data user.
16 DPP 4(1)(e) stipulates that all practicable steps shall be taken to ensure that any personal data (including data in a form in which
access to or processing of the data is not practicable) held by a data user is protected against unauthorized or accidental access,
processing, erasure, loss or use having particular regard to any measures taken for ensuring the secure transmission of the data.
17 For example, the AI's degree of autonomy, its capability of interacting with the environment directly, the complexity of that environment and the complexity of the decisions to be made by the AI should be considered.
Figure: Risk factors — security of data; potential impact on individuals, the organisation and the community
18 For example, financial harm, bodily harm, discrimination, loss of control of personal data, lack of autonomy, psychological harm,
and other adverse effects on rights and freedoms should be considered.
31. Human oversight is a key measure for mitigating the risks of using
AI. The risk assessment would indicate the appropriate level of human
oversight required in the use of the AI system. Ultimately, human actors
should be held accountable for the decisions and output made by AI.
32. In general, an AI system with a higher risk profile, i.e., one likely to have
a significant impact on individuals, requires a higher level of human
oversight than an AI system with a lower risk profile. Therefore:
Figure 12: Examples of AI Use Cases that May Incur Higher Risk
34. When seeking to mitigate AI risks to comply with the Ethical Principles
for AI, organisations may need to strike a balance when conflicting
criteria emerge (see Figure 13) and make trade-offs between the
criteria.
36. In any event, organisations are reminded that any applicable legal
requirements, including the requirements of the PDPO, must be
complied with.
Figure 13: Balancing conflicting criteria

1. Predictive accuracy / performance vs. output explainability
Certain AI models, such as decision trees, are easier to interpret but have less predictive accuracy. Deep learning neural network models are generally more accurate in their predictive output but are often referred to as "black boxes" that are difficult to interpret.

2. Statistical accuracy of data vs. data minimisation
To improve the accuracy and fairness of AI models, more data (including personal data) may be required for training, customisation, and / or testing. Organisations should ensure that only adequate but not excessive personal data are used for their purposes.

4. Privacy enhancing technologies vs. output accuracy
PETs such as synthetic data19 or differential privacy20 can be deployed to minimise the amount of personal data used. Organisations should be mindful of the potential implication on the output accuracy of the AI.
19 Synthetic data refers to a dataset that has been generated artificially and is not related to real people.
20 Differential privacy is an approach to privacy protection in the release of datasets, usually by adding noises (i.e., making minor
alterations) to the datasets before release. Unlike de-identification, differential privacy is not a specific process, but a quality
or condition of datasets that a process can achieve. A released dataset achieves differential privacy if it is uncertain whether a
particular individual's data is included in it. Differential privacy is generally considered to have stronger protection of privacy than
de-identification.
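Footnote 20's description of differential privacy can be made concrete with a short example. Below is a minimal sketch of the Laplace mechanism, one common way to achieve differential privacy for a counting query; the dataset, query and epsilon value are illustrative assumptions, not part of the Framework.

import numpy as np

def laplace_count(data: list[bool], epsilon: float) -> float:
    """Release a noisy count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one
    individual's record changes the true count by at most 1, so
    Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative use: a noisy count of customers who opted in to marketing.
opted_in = [True, False, True, True, False]
print(laplace_count(opted_in, epsilon=1.0))  # prints a value near 3

A smaller epsilon adds more noise and hence stronger privacy at the cost of output accuracy, which is exactly the trade-off shown in Figure 13.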
38. AI models may continue to learn and evolve and the environment in
which an AI system operates may also change. Therefore, continuous
monitoring, review and user support are required after the adoption of
an AI model to ensure that the AI systems remain effective, relevant
and reliable.
Figure: The three stages — Data Preparation and Management; Customisation and Implementation; Management and Continuous Monitoring
21 Fine-tuning is the process of taking AI models trained on large and general datasets and updating / adapting them for using other
specific data for a specific purpose or need.
22 Grounding is the process of linking AI models to verifiable real-world knowledge and examples from external sources. One of the
most popular methods of grounding for generative AI models is Retrieval-Augmented Generation, which augments the capabilities
of an LLM by adding an information retrieval system that provides grounding data, to improve the performance of the LLM in
specific use cases or domains.
PART III Customisation of AI Models and Implementation and Management of AI Systems
39. Internal proprietary data, often involving personal data, may be used
in both the customisation and decision-making or output stages.
Good data governance in the customisation and operation of AI not
only protects individuals' personal data privacy but also ensures
data quality, which is critical to the robustness and fairness of AI
systems. Poorly managed data may result in the "garbage in, garbage
out" problem and may have an adverse effect on the results that an AI
system produces (e.g., unfair output of predictive AI and "hallucinations"
by generative AI23).
23 Poor data governance may not be the sole cause of "hallucinations" by generative AI. "Hallucinations" tend to be inherent in
generative AI models which use the transformer architecture, but can be minimised effectively through mechanisms such as
grounding and prompt-engineering.
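To make the grounding mechanisms in footnotes 22 and 23 concrete, the following is a minimal sketch of Retrieval-Augmented Generation under stated assumptions: embed, knowledge_base and call_llm are hypothetical placeholders for whichever embedding model, vector store and LLM an organisation actually deploys.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, knowledge_base: list[dict], top_k: int = 3) -> list[str]:
    """Rank stored passages by similarity to the query embedding."""
    ranked = sorted(knowledge_base,
                    key=lambda doc: cosine_similarity(query_vec, doc["vector"]),
                    reverse=True)
    return [doc["text"] for doc in ranked[:top_k]]

def answer_with_grounding(query, embed, knowledge_base, call_llm):
    """Augment the prompt with retrieved passages so the LLM's answer is
    anchored to verifiable sources, reducing the risk of hallucination."""
    passages = retrieve(embed(query), knowledge_base)
    prompt = ("Answer using ONLY the context below.\n"
              "Context:\n" + "\n".join(passages) + "\n"
              "Question: " + query)
    return call_llm(prompt)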
24 Anonymised data refers to a dataset that has been processed in such a manner that no individual can be identified from it. As
anonymised data cannot be used to identify individuals, they are not personal data.
25 Pseudonymised data refers to a dataset that has had all personally identifiable information removed from it and replaced with
other values, preventing the direct identification of individuals without additional information. Pseudonymised data are personal
data because individuals can still be identified indirectly with the aid of additional information.
26 These three data minimisation techniques may not apply to certain types of non-text data, such as images.
27 For example, if personal data are loaded into a generative AI system during the grounding process in response to an individual's
queries, the personal data should be discarded after fulfilling the request.
28 An expert system is "a form of AI that draws inferences from a knowledge base to replicate the decision-making abilities of a
human expert within a specific field." (Source: IAPP AI Glossary) Expert systems may be built by creating a set of rules according
to expert knowledge of the field, without relying on data and machine learning.
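As a minimal sketch of the pseudonymisation described in footnote 25, the snippet below replaces direct identifiers with keyed tokens; the field names, sample record and key-handling arrangement are illustrative assumptions, and in practice the key must be held separately under strict access control so that individuals can only be re-identified with that additional information.

import hmac
import hashlib

SECRET_KEY = b"hold-separately-under-strict-access-control"  # assumption

def pseudonymise(record: dict, identifier_fields=("name", "hkid")) -> dict:
    """Replace direct identifiers with keyed tokens; other fields remain."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            token = hmac.new(SECRET_KEY, str(out[field]).encode(),
                             hashlib.sha256).hexdigest()[:16]
            out[field] = "pseud_" + token
    return out

record = {"name": "Chan Tai-man", "hkid": "A123456(7)", "age": 43}
print(pseudonymise(record))  # name and hkid become tokens; age is retained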
(iii) The quality of the data used to customise and use an AI model
should be managed (DPP 2), especially for high-risk AI models.
The data should be accurate, reliable, complete, relevant,
lawfully obtained29 and representative of the target population,
and the data should not be discriminatory or contain unjust bias
in relation to the purposes for which customisation is being
conducted. In this regard, organisations should consider the
following:
• Understanding the source, accuracy, reliability, integrity,
consistency, completeness, relevance and usability of the
data used for model customisation;
• Conducting relevant data preparation processes, such
as annotation, labelling, cleaning, enrichment and
aggregation;
• Identifying outliers and anomalies in datasets and removing or replacing these values as necessary while maintaining a record of such actions (a minimal sketch follows this list);
• Testing the customisation data for fairness before using it
to customise AI models;
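The following is a minimal sketch of the outlier-handling bullet above, using the common interquartile-range (IQR) rule; the rule, threshold and audit-log format are illustrative choices rather than requirements of the Framework.

import numpy as np

def flag_outliers(values: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Return a boolean mask of values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

ages = np.array([23, 31, 27, 45, 38, 240])  # 240 is likely a data-entry error
mask = flag_outliers(ages)
print(ages[mask])  # [240]

# Maintain a record of such actions, as the bullet above recommends.
audit_log = [{"value": int(v), "action": "removed"} for v in ages[mask]]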
Figure: Data preparation — management of data for customising and using AI
32 For example, to process internal documents and data, to assist with the drafting of documents of a particular domain of expertise
or to generate content in a particular corporate style.
33 For example, regression testing (i.e., testing performed to confirm that recent code / programme changes do not negatively affect the existing AI application's performance).
34 Fairness can be defined mathematically by different metrics (demographic parity, equality of opportunity, etc.) in a classification
model. Certain metrics of fairness are mutually incompatible and cannot be satisfied simultaneously. The organisation should
select the suitable metrics to use in a given context.
35 Accuracy can be defined mathematically by different metrics (e.g., accuracy, precision, recall, F1 score, specificity) which test
different types of errors in a classification model. Certain metrics of accuracy are mutually incompatible and cannot be satisfied
simultaneously. The organisation should select the suitable accuracy metric(s) to use in the given context to know what to optimise
for.
36 Overfitting is where "an [AI] model becomes too specific to the training data and cannot generalise to unseen data, which means it
can fail to make accurate predictions on new datasets" (IAPP AI Glossary). Overfitting generally makes an AI system more prone to
attacks which may compromise personal data contained in the training / customisation dataset.
37 Reproducibility refers to whether an AI system produces the same results when the same datasets or methods of prediction are
used. Reproducibility is important in assessing the reliability of an AI system.
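To illustrate the metric choices in footnotes 34 and 35, the sketch below computes precision, recall and F1 for a toy binary classifier, together with a demographic-parity gap; the labels and group attribute are illustrative assumptions.

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion counts for a binary classifier
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)                          # flagged cases that were real
recall = tp / (tp + fn)                             # real cases that were caught
f1 = 2 * precision * recall / (precision + recall)

# Demographic parity (footnote 34): positive-prediction rate per group
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
print(f"parity gap={abs(rate('a') - rate('b')):.2f}")  # 0 means equal rates

Optimising recall trades off against precision, and equalising group rates may trade off against either, which is why the text asks organisations to select metrics suited to the given context.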
44. Organisations may need to take into account other considerations for
compliance with the PDPO, depending on how the AI solution is to be
integrated, i.e., whether it will be hosted on an on-premises server or
on a cloud server provided by a third party. Hosting an AI system within
the organisation's own premises naturally gives the organisation more
control over data security than hosting on a third-party cloud. However,
the organisation should determine whether it has the expertise to
securely run and protect the on-premises system. If the organisation
deploys the AI solution on a third-party cloud38, and personal data are
processed in its use, the organisation should, by way of contractual
agreement, address issues including:
(i) Compliance with the PDPO (and any other applicable laws) in
cross-border data transfers (where applicable);
(ii) Each party's roles and responsibilities as a data user or data
processor (as the case may be) as defined under the PDPO; and
38 Organisations are encouraged to read the PCPD's Information Leaflet on Cloud Computing for more information: https://www.pcpd.org.hk/english/resources_centre/publications/files/IL_cloud_e.pdf.
42 Traceability refers to the ability to keep track, typically by means of documentation, of the development and use of an AI system,
including the training and decision-making processes and the data used. Ensuring traceability can help enable auditability.
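One minimal way to support the traceability described in footnote 42 is to log every AI decision with its inputs, model version and output so that the system can be audited later; the field names and log sink below are illustrative assumptions, and the guidance in footnote 43 below on anonymising and erasing logs would apply to whatever is stored.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("ai_audit")

def log_decision(model_version: str, input_summary: dict, output: str) -> None:
    """Write one structured audit record per AI decision."""
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # avoid raw personal data here
        "output": output,
    }))

log_decision("credit-model-v2.3", {"features_hash": "a1b2c3"}, "approve")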
Figure: Key measures — user acceptance tests; mechanisms to ensure transparency, output traceability and system auditability; legal obligations and security considerations in relation to hosting of the AI system; security measures to prevent adversarial attacks
43 For example, organisations are recommended to handle, anonymise and appropriately erase these logs in accordance with a
robust data management process.
44 Simple security patches and bug-fixing usually do not trigger the need for re-assessing the risks of an AI system.
45 "Model drift" or "model decay" is where the accuracy or performance of a model degrades over time due to either changes in the
environment or target variable on which the AI model produces output ("concept drift") or changes in the input data that the AI
model is using to produce output ("data drift").
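As one way to operationalise monitoring for the "data drift" described in footnote 45, the sketch below computes the Population Stability Index (PSI), a common drift statistic; the bins, alert threshold and distributions are illustrative assumptions.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the live input distribution against the training baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # a feature as seen at training time
live = rng.normal(0.5, 1.2, 10_000)      # the same feature in production
score = psi(baseline, live)
if score > 0.2:  # a common rule-of-thumb alert threshold
    print(f"PSI={score:.2f}: significant drift, re-assess the AI system")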
46 If a data breach incident occurs as part of an AI incident, the organisation should simultaneously engage its data breach response
plan.
47 OECD (2023), "Stocktaking for the development of an AI incident definition", OECD Artificial Intelligence Papers, No. 4, OECD Publishing, Paris, https://doi.org/10.1787/c323ac71-en; https://oecd.ai/en/wonk/incidents-monitor-aim
48 https://incidentdatabase.ai/
Part IV Communication and Engagement with Stakeholders
52. Where personal data are involved in the customisation and use of AI,
organisations must communicate the required information to the data
subjects concerned in accordance with DPP 1(3) and DPP 5 of the
PDPO, including, but not limited to:
(i) The purpose for which the personal data are used, e.g., for
AI training and / or customisation, or facilitating automated
decision-making and so on;
(ii) The classes of persons to whom the data may be transferred, e.g.,
the AI supplier; and
(iii) The organisation's policies and practices in relation to personal
data in the context of customisation and use of AI.
54. In cases where the AI supplier may be better placed than the
organisation to provide the above information, especially information
about the technical aspects of an AI system, the organisation is
recommended to coordinate closely with the AI supplier throughout
procurement and beyond and, where necessary, leverage their
expertise to address any concerns raised by stakeholders.
56. For an AI system that produces decisions / output that may have a
significant impact on individuals, organisations should, to the extent
possible, provide channels for individuals to provide feedback, seek
explanation, and / or request human intervention. Organisations should
also carefully consider whether to provide individuals with the option to
opt out from using the AI system.
49 Organisations may consider disclosing relevant information about AI systems using AI model cards, which are "short documents provided with machine learning models that explain the context in which the models are intended to be used, details of the performance evaluation procedures and other relevant information" (https://iapp.org/news/a/5-things-to-know-about-ai-model-cards/).
50 Subject to whether the disclosure would compromise commercially sensitive or proprietary information.
51 Subject to whether the disclosure would compromise commercially sensitive or proprietary information.
4.3 Explainable AI
58. Making the decisions and output of AI explainable is the key to building
trust with stakeholders. Explanations, where feasible, may include the
following information especially when the use of the AI system may
have a significant impact on individuals52:
(i) How and to what extent AI has been involved in the decision-
making process, including a high-level overview of the key tasks
for which the AI system is deployed and the involvement of
human actors (if any);
(ii) How personal data has been used in the automated or AI-
assisted decision-making or content generation processes and
why those data are considered relevant and necessary; and
(iii) The major factors leading to the automated decisions / output
by the AI system (global explainability), and the major factors
leading to the individual decisions / output (local explainability).
If it is not feasible to provide an explanation, then that should be
made explicit.
52 Organisations may consider referencing the guidance on Explaining Decisions Made with AI published by the Information
Commissioner's Office, UK and The Alan Turing Institute in 2020 for more advice on how automated decisions made by AI may be
meaningfully explained.
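To illustrate the global / local distinction in paragraph 58(iii), the sketch below uses a logistic regression model, whose weights provide a global view of the major factors and whose per-feature contributions explain an individual decision; the feature names and data are illustrative assumptions, and more complex "black box" models would need model-agnostic explanation tools instead.

import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "tenure_years", "late_payments"]
X = np.array([[50, 2, 0], [20, 1, 3], [80, 10, 0], [30, 4, 2],
              [60, 6, 1], [25, 2, 4], [90, 12, 0], [35, 3, 3]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # e.g., whether a loan was approved

model = LogisticRegression().fit(X, y)

# Global explainability: which factors matter most across all decisions
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.3f}")

# Local explainability: per-feature contribution to ONE decision
applicant = np.array([40, 3, 2])
for name, c in zip(features, model.coef_[0] * applicant):
    print(f"{name}: contribution {c:+.2f}")  # sign shows push toward approve / deny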
53 For example, where Retrieval-Augmented Generation was involved in the customisation process.
Acknowledgement
The Office of the Privacy Commissioner for Personal Data (PCPD) would like to thank the following individuals and organisations, as well as major AI suppliers, for giving us invaluable feedback during our consultation, in alphabetical order:
Supporting Organisations
Hong Kong Applied Science and Technology Research Institute
Office of the Government Chief Information Officer
Organisations
AI & Humanity Lab, The University of Hong Kong
Asia Securities Industry & Financial Markets (ASIFMA)
Centre for Information Policy Leadership
Deloitte
Ernst & Young
Hong Kong Association of Banks
Hong Kong Computer Society
Hong Kong Monetary Authority
Hong Kong Productivity Council
Hong Kong Science and Technology Parks Corporation
Research Centre for Sustainable Hong Kong, City University of Hong Kong
DPP 4 - DATA SECURITY
DPP 4 requires data users to take all practicable steps to protect the personal
data they hold against unauthorized or accidental access, processing,
erasure, loss or use.
If a data user engages a data processor in processing the personal data held,
the data user must adopt contractual or other means to ensure that the data
processor complies with the aforesaid data security requirement.
54 https://aiverifyfoundation.sg/downloads/Discussion_Paper.pdf
55 https://aiverifyfoundation.sg/downloads/Proposed_MGF_Gen_AI_2024.pdf
56 https://store.isaca.org/s/store#/store/browse/detail/a2S4w000008Kn59EAC
57 https://oecd.ai/en/ai-principles
58 https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/advisory-guidelines/advisory-guidelines-on-the-use-of-personal-data-in-ai-recommendation-and-decision-systems.pdf
59 https://www.cnil.fr/en/ai-how-sheets
60 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1185508/Full_report_.pdf
61 https://www.mfa.gov.cn/eng/wjdt_665385/2649_665393/202310/t20231020_11164834.html
62 https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection-2-0.pdf
63 https://www.isaca.org/resources/white-papers/2023/the-promise-and-peril-of-the-ai-revolution
64 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:23894:ed-1:v1:en
65 https://www.iso.org/standard/81230.html
• Meta, Llama 2 - Responsible Use Guide (2023)66
• National Cyber Security Centre, UK, and Cybersecurity and Infrastructure Security Agency, US, Guidelines for secure AI system development (2023)67
• National Institute of Standards and Technology, US Department of
Commerce, Artificial Intelligence Risk Management Framework (AI
RMF 1.0) (2023)68
• National Technical Committee 260 on Cybersecurity of Standardization
Administration, the People’s Republic of China, Practical Guidance of
Cybersecurity Standards - Labelling Methods for Content Generated by
Generative Artificial Intelligence Services (2023)69
• Organisation for Economic Cooperation and Development, Advancing
Accountability in AI: Governing and Managing Risks throughout the
Lifecycle for Trustworthy AI (2023)70
• Office of the Government Chief Information Officer, Hong Kong SAR,
China, Ethical Artificial Intelligence Framework (Customised version for
general reference by public) (2023 revised edition)71
• Office of the Privacy Commissioner, Canada, Principles for responsible,
trustworthy and privacy-protective generative AI technologies (2023)72
• United Nations AI Advisory Body, Interim Report: Governing AI for
Humanity (2023)73
• United Nations Educational, Scientific and Cultural Organization,
Recommendation on the Ethics of Artificial Intelligence (2023)74
• World Economic Forum, Adopting AI Responsibly: Guidelines for
Procurement of AI Solutions by the Private Sector (2023)75
• Information Commissioner’s Office, UK, AI and data protection risk
toolkit (2022)76
66 https://llama.meta.com/responsible-use-guide/
67 https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf
68 https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
69 https://www.tc260.org.cn/upload/2023-08-25/1692961404507050376.pdf
70 https://www.oecd.org/sti/advancing-accountability-in-ai-2448f04b-en.htm
71 https://www.ogcio.gov.hk/en/our_work/infrastructure/methodology/ethical_ai_framework/doc/Ethical_AI_Framework.pdf
72 https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai/
73 https://www.un.org/en/ai-advisory-body.
74 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
75 https://www.weforum.org/publications/adopting-ai-responsibly-guidelines-for-procurement-of-ai-solutions-by-the-private-sector/
76 https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/
77 https://www.isaca.org/resources/news-and-trends/newsletters/atisaca/2022/volume-38/developing-an-artificial-intelligence-governance-framework
78 https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf
79 https://www.wiley.com/en-us/Trustworthy+AI%3A+A+Business+Guide+for+Navigating+Trust+and+Ethics+in+AI-p-9781119867951
80 https://www.most.gov.cn/kjbgz/202109/t20210926_177063.html
81 https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement
82 https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf
83 https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf
84 https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/
85 https://ai.google/responsibility/responsible-ai-practices/
86 https://iapp.org/resources/article/key-terms-for-ai-governance/
Tel: 2827 2827
Fax: 2877 7026
PCPD Website: pcpd.org.hk
Address: Unit 1303, 13/F., Dah Sing Financial Centre, 248 Queen's Road East, Wanchai, Hong Kong
E-mail: [email protected]
This publication is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence. In essence, you are
free to share and adapt this publication, as long as you attribute the work to the Office of the Privacy Commissioner for
Personal Data, Hong Kong. For details, please visit creativecommons.org/licenses/by/4.0.
Disclaimer
The information and suggestions provided in this publication are for general reference only. They do not serve as an exhaustive guide to the application of
the law and do not constitute legal or other professional advice. The Privacy Commissioner makes no express or implied warranties of accuracy or fitness
for a particular purpose or use with respect to the information and suggestions set out in this publication. The information and suggestions provided will not
affect the functions and powers conferred upon the Privacy Commissioner under the Personal Data (Privacy) Ordinance.
June 2024