Keynote Speech by Deputy Commissioner, Mr Yeong Zee Kin, at AI and Commercial Law: Re-imagining Trust Governance and Private Law Rules, on Thursday, 5 December 2019, at Singapore Management University

05 Dec 2019

1. 2019 is a significant year for data protection in Singapore. Not only did we mark the fifth year of data protection law since the PDPA came into force in 2014, but we also passed a landmark in the maturity of our data protection practice with our pivot from compliance to accountability. The development of data protection law and our accountability pivot will form the subject matter of an upcoming talk. The focus of today’s address is how an engaged data protection regulator can collaborate with industry to co-create policies and supporting frameworks.

2. I have spoken about how the PDPC is an engaged data protection regulator. We involve industry stakeholders in deliberations on emerging trends in technology and business models, and we collaborate with them in a process that we refer to as policy prototyping. This process has allowed us to identify areas where policy interventions are needed, as well as to refine the contours of extant policies to better respond to advancements in technology and innovation in business models. I will first speak of the policy prototyping process, which we also refer to as our regulatory sandbox. I will then talk about a couple of initiatives to support industry’s use of data and data-driven technologies through the introduction of trust-building frameworks.

Regulatory sandbox

3. The PDPC introduced a regulatory sandbox through our guide to data sharing in 2017. Since then, the regulatory sandbox has evolved into a three-stage process. In the engagement stage, organisations may approach the PDPC when they have plans to make use of data in novel ways, particularly where these new methods are enabled by recent technological advances. The PDPC then engages them in a period of in-depth discussions, where we get to understand the commercial objectives and the possibilities enabled by the technology. In many cases, these in-depth discussions lead to a realisation that extant policies are quite sufficient; what may be called for are adjustments in the process of deploying the technology. For example, consent can be obtained as and when required by maximising the existing touchpoints with consumers. We refer to this as dynamic or just-in-time consent, and it has progressed from concept to reality. Dynamic consent is an approach to user experience design that focuses on the user’s experience when consent is obtained: consent is sought in bite-sizes and in context. This design approach also gamifies the notification-consent dynamics, so that organisations appreciate that gaining users’ consent is equivalent to gaining users’ trust. One of the fruits of our collaboration with Facebook in the Startup Station accelerator programme is a set of innovative UX designs for dynamic consent, developed with participating organisations. An electronic handbook collecting these prototypes will soon be published by TTC Labs and made available on our website.
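To make the design pattern concrete, here is a minimal sketch of how a just-in-time consent flow might be structured, assuming a simple in-app setting. The class and method names are illustrative and are not drawn from any PDPC publication or from the TTC Labs prototypes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class ConsentRecord:
    """One bite-sized consent decision, captured in context."""
    purpose: str     # e.g. "share delivery address with courier"
    touchpoint: str  # the screen or step where consent was sought
    granted: bool
    timestamp: datetime


@dataclass
class ConsentManager:
    """Asks for consent only at the moment a feature first needs it."""
    records: list[ConsentRecord] = field(default_factory=list)

    def has_consent(self, purpose: str) -> bool:
        return any(r.purpose == purpose and r.granted for r in self.records)

    def request_if_needed(self, purpose: str, touchpoint: str,
                          prompt: Callable[[str], bool]) -> bool:
        # Skip the prompt entirely if consent was already given earlier,
        # so the user is never asked twice for the same purpose.
        if self.has_consent(purpose):
            return True
        granted = prompt(purpose)
        self.records.append(ConsentRecord(purpose, touchpoint, granted,
                                          datetime.now(timezone.utc)))
        return granted


# Consent is sought at the checkout touchpoint, in context, rather than
# buried in a lengthy sign-up form. In a real app, `prompt` would render
# a short in-context notice; here we simulate a user who agrees.
manager = ConsentManager()
ok = manager.request_if_needed(
    purpose="share delivery address with courier",
    touchpoint="checkout",
    prompt=lambda purpose: True,
)
```

The point of keeping the touchpoint and timestamp on each record is that bite-sized consent then remains auditable: an organisation can show when, where and for what purpose each consent was obtained.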

4. In certain situations, novelty requires something more than engagement. In the second, or guidance, stage, the discussions are more extensive, with further details on data flows, data use, and the specifics of technology deployment. Compared with the engagement stage, the business plans are frequently firmer and the new product or service much closer to market launch. Where necessary, particularly in situations where extant policies are silent or ambiguous, the Commission can issue written guidance to the organisations to provide clarity on how the PDPA applies. To benefit the industry, practical guidance issued at this stage can be, and has been, published. Practical guidance is redacted before publication to remove commercially sensitive information and edited to be of more general application. Published practical guidance stands alongside our advisory guidelines, adding to the anthology of written policies that industry may refer to. A recent example is the practical guidance on data sharing that was issued to support a datathon, where datasets from participating organisations were first pseudonymised and used to identify common customers across these organisations. The merged dataset was then further anonymised, and the anonymised dataset formed the base for creating a synthetic dataset, which was used to conduct the datathon. This data sharing arrangement also adopted our trusted data sharing framework.
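The pseudonymisation and matching step described above can be illustrated with a short sketch. This shows one common approach, keyed hashing with a secret shared by the partners, and is an assumption for illustration rather than a description of the actual datathon’s mechanics; the later anonymisation and synthetic-data steps are separate processes not shown here.

```python
import hashlib
import hmac

# Each participant hashes its customer identifiers with a shared secret
# key, so records can be matched across organisations without exposing
# the raw identifiers to the other party.

SHARED_KEY = b"agreed-out-of-band-by-the-partners"  # assumption for the demo


def pseudonymise(identifier: str) -> str:
    """Keyed hash (HMAC-SHA256): the mapping cannot be recomputed
    or reversed without the shared key."""
    return hmac.new(SHARED_KEY, identifier.lower().encode(),
                    hashlib.sha256).hexdigest()


org_a_customers = {"alice@example.com", "bob@example.com"}
org_b_customers = {"bob@example.com", "carol@example.com"}

pseudo_a = {pseudonymise(c) for c in org_a_customers}
pseudo_b = {pseudonymise(c) for c in org_b_customers}

# Common customers are found by intersecting the pseudonymised sets.
common = pseudo_a & pseudo_b
print(f"{len(common)} common customer(s) identified")  # -> 1
```

Because the hash is keyed, a third party who obtains the pseudonymised sets cannot rebuild them from a list of known email addresses without also holding the shared key.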

5. Our regulatory sandbox has a third stage, in which we collaborate with industry to co-create new policies to address gaps that have been identified. For policies that the Commission is planning to introduce, we engage the industry closely to work through the details. For example, we are looking to enhance the consent regime under the PDPA by introducing legitimate interest and deemed consent by notification, through legislative amendments. While these were broadly supported by industry, we feel that it is important to engage industry on the details of implementation and to co-create the advisory guidelines and risk assessment guides. This is to ensure that when we introduce these proposals, they meet their objectives of benefitting consumers and supporting organisations. We have also engaged industry to develop new policies from the ground up. An example of such co-creation is the policy around the use of data for business innovation. From our engagement with industry, we realised that there was a need for greater clarity on the extent to which data can be used for product development, operational improvements, and understanding customers better. So, together with the PDPC, a private sector-led committee spent about four months formulating a new policy proposal. This proposal was eventually included in our public consultation earlier this year, where it received broad industry support.

6. The sandboxing approach to policy prototyping that I have just outlined benefits both organisations and the Commission. For organisations, there is greater certainty moving forward with business plans for technology deployment. For the Commission, we have greater confidence to adjust our data protection policies so that they remain relevant to developing industry practices while ensuring a high level of data protection practice.

7. As an engaged data protection regulator, we believe that we should not shy away from difficult areas. Where there is uncertainty, we see a role in contributing towards building clarity. In this spirit, we have introduced a couple of frameworks this year: a trusted data sharing framework and a model AI governance framework. These frameworks apply to all types of data and facilitate the inculcation of accountability principles. In the context of personal data, the former provides guidance on disclosure of personal data between willing partners, while the latter provides guidance on use of personal data with new technologies like artificial intelligence.

Data sharing

8. Let me now turn our attention to the trusted data sharing framework. Since the introduction of the regulatory sandbox, we have engaged close to 30 companies. Separately, the PDPC has also engaged other companies in the IMDA data collaborative programme. The Trusted Data Sharing Framework distils the experience from these two programmes into guidance for organisations intending to share data, including personal data, to do so in a responsible manner.

9. There are four components to the trusted data sharing framework. The first is the formulation of a data sharing strategy that articulates the objectives of the sharing arrangement, addresses data sharing requisites such as having the right dataset and the value expected from the arrangement, and selects an appropriate data sharing model. A data sharing model could, for example, be a bilateral data sharing arrangement or sharing amongst multiple organisations through a data service provider. To ease discussions about the commercial value of the data sharing arrangement, the framework incorporates a data sharing valuation guide.
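As a minimal sketch, the elements of such a strategy could be captured as a structured record. The field names below are my own illustrative assumptions; the framework itself does not prescribe any data format.

```python
from dataclasses import dataclass
from enum import Enum


class SharingModel(Enum):
    BILATERAL = "bilateral exchange between two organisations"
    DATA_SERVICE_PROVIDER = "multi-party sharing via a data service provider"


@dataclass
class DataSharingStrategy:
    """Records the elements named in the first component of the framework."""
    objective: str       # why the data is being shared
    datasets: list[str]  # the datasets each party brings
    expected_value: str  # the value each party expects from the arrangement
    model: SharingModel


strategy = DataSharingStrategy(
    objective="identify overlapping customer segments for a joint promotion",
    datasets=["retailer_loyalty_members", "bank_card_spend_summary"],
    expected_value="better-targeted offers and reduced marketing spend",
    model=SharingModel.BILATERAL,
)
```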

10. Second, the data sharing framework guides data sharing partners to think through key areas for legal and regulatory compliance. The framework provides a set of reference baseline legal templates for non-disclosure agreements, data sharing agreements and consent clauses. Our work in promoting dynamic consent becomes relevant at this stage, if consent has to be obtained. If there is sufficient novelty in the data sharing arrangement or technology that will be deployed, data sharing partners can consider obtaining regulatory guidance or participating in a regulatory sandbox.

11. The third component provides guidance to organisations on selecting the technical delivery mode for the chosen data sharing model, as well as administrative and technical considerations as they design and implement safeguards and controls. The fourth component focuses on monitoring and reporting processes to ensure transparency and accountability during and after the data exchange, and on managing secondary use of data, in order to maintain vigilance and build consumer trust.
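As a sketch of the kind of monitoring the fourth component calls for, an append-only usage log lets partners report on how shared data was used during and after the exchange. The file format and field names here are illustrative assumptions, not part of the framework.

```python
import json
from datetime import datetime, timezone


def log_data_access(logfile: str, partner: str, dataset: str,
                    purpose: str, records_accessed: int) -> None:
    """Append one JSON-lines entry per use of the shared data, so that
    partners can monitor and report on secondary use."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "partner": partner,
        "dataset": dataset,
        "purpose": purpose,
        "records_accessed": records_accessed,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_data_access("shared_data_audit.jsonl",
                partner="Org B",
                dataset="merged_customer_segments",
                purpose="campaign performance analysis",
                records_accessed=1250)
```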

Artificial Intelligence

12. While the data sharing framework builds trust in the disclosure of data between willing partners, the other framework that I now wish to turn attention to encourages accountable practices when an organisation uses data-driven technologies like artificial intelligence. The Model AI Governance Framework provides guidance to businesses deploying AI at scale in their operations, products or services, to do so responsibly. It is an accountability-based governance framework around the use of data for autonomous decision-making. We want to help organisations translate ethical principles into implementable practices. The Model Framework helps achieve the core AI principles that decisions made by AI should be explainable, transparent and fair, and that AI systems should be human-centric.

13. The Model Framework organises the issues into four areas. First, businesses must realise that deploying AI involves making a number of important decisions, ranging from product or service design to how data is collected, selected and prepared for model training. The first area the Model Framework deals with is therefore ensuring that companies put in place the correct management oversight structure, so that these decisions are made by the right internal owners.

14. Second, the Model Framework recognises that AI is often used to assist in decision-making, whether to augment a human decision-maker or to make decisions autonomously. These decisions can have legal consequences for companies that use AI. The Model Framework provides guidance on identifying the level of human involvement in AI-augmented decision-making that is proportionate to the decisions’ impact on the individuals affected. For example, fully autonomous approval of travel insurance applications is probably harmless, but full autonomy may be unsuitable when evaluating claims from policy holders injured on vacation.
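The Model Framework discusses human-in-the-loop, human-over-the-loop and human-out-of-the-loop approaches, assessed against the probability and severity of harm to the affected individual. The routing rule below is a toy sketch of that idea; the scoring scale and thresholds are my own illustrative assumptions.

```python
from enum import Enum


class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "a human approves every decision"
    HUMAN_OVER_THE_LOOP = "a human monitors and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "the system decides autonomously"


def required_oversight(severity_of_harm: int, probability_of_harm: int) -> Oversight:
    """Toy routing rule: the graver and likelier the harm to the affected
    individual, the more human involvement is required. Both scores are
    on an illustrative 1 (low) to 3 (high) scale."""
    risk = severity_of_harm * probability_of_harm
    if risk >= 6:
        return Oversight.HUMAN_IN_THE_LOOP
    if risk >= 3:
        return Oversight.HUMAN_OVER_THE_LOOP
    return Oversight.HUMAN_OUT_OF_THE_LOOP


# Approving a travel insurance application: low severity, low probability.
print(required_oversight(1, 1))  # Oversight.HUMAN_OUT_OF_THE_LOOP
# Assessing an injury claim: high severity for the policy holder.
print(required_oversight(3, 2))  # Oversight.HUMAN_IN_THE_LOOP
```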

15. The third area that the Model Framework shines a spotlight on is the management of the data used to train the machine learning model, and how the fitted model is eventually tested and monitored after deployment. It provides guidance on practical tools like provenance records and audit logs that enable the organisation to document significant decisions made in model training and selection. Such tools will help when the organisation has to explain how an AI feature is expected to behave, what will influence its behaviour, and why a particular recommendation or decision was made.
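As a sketch, a provenance record for each training run might capture the dataset fingerprint, the hyperparameters, the evaluation metrics, and the rationale for selecting the model. The JSON-lines format and field names are illustrative assumptions; the Model Framework does not mandate any particular format.

```python
import hashlib
import json
from datetime import datetime, timezone


def dataset_fingerprint(path: str) -> str:
    """Content hash of the training data file, so the exact dataset
    behind a model version can be identified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def record_training_run(logfile: str, model_name: str, data_path: str,
                        hyperparams: dict, metrics: dict, rationale: str) -> None:
    """Append one provenance entry per training run: what data and
    settings were used, how the model performed, and why it was chosen."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "dataset_sha256": dataset_fingerprint(data_path),
        "hyperparameters": hyperparams,
        "evaluation_metrics": metrics,
        "selection_rationale": rationale,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Usage with a toy dataset file (all values are illustrative).
with open("train.csv", "w", encoding="utf-8") as f:
    f.write("age,claim_amount\n34,1200\n")

record_training_run("model_provenance.jsonl", "claims-triage-v2", "train.csv",
                    hyperparams={"max_depth": 4, "n_estimators": 100},
                    metrics={"auc": 0.87},
                    rationale="best AUC among candidates passing fairness checks")
```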

16. The final area on which the Model Framework provides guidance is stakeholder communications. Placing the consumer at the centre, it asks how organisations can provide information to her so that she understands the AI-powered feature, is prepared to use it, is able to control it, and understands how a recommendation or decision was made. It also provides suggestions on how to manage communications with consumers who may have queries or wish to contest a decision. This melds transparency and explainability into a communication strategy throughout the consumer journey, for the purpose of building trust.

17. These are voluntary frameworks that organisations are encouraged to adopt. While adherence to these frameworks is not mandatory, they are nevertheless important steps that industry and regulators must take as we strive to establish what a reasonable standard of conduct looks like. In new areas like data sharing and data-driven technologies such as artificial intelligence, an engaged regulator does a service to both consumers and industry by stepping forward to provide a strawman to start the conversation. This enables constructive conversations that can lead to future policy interventions which uphold high standards of data protection practice and keep data protection laws and policies relevant in the face of developments in technology and business models, all for the ultimate goal of building consumer trust and confidence.
