Keynote Speech by Deputy Commissioner, Mr Yeong Zee Kin, at the 2nd Singapore-Nanjing International AI Forum on Smart Cities on Friday, 10 May 2019, in Nanjing, China

10 May 2019

Executive Deputy Mayor Yang Xue Peng

Senior Minister of State for Ministry of Trade and Industry Dr Koh Poh Koon

Ladies and Gentlemen

1. On behalf of Singapore, I would like to first express my sincerest appreciation for the invitation to speak at the 2nd Singapore-Nanjing International AI Forum on Smart Cities today. The hallmark of Smart Cities is their deployment of technology to improve the daily lives and experiences of their residents. Some of these are cutting-edge technologies, while others are more mature technologies deployed in novel ways. Artificial Intelligence is not a new technology; the term was coined in the 1950s. But recent advances in processing power and sensor technologies have allowed us to collect and process vast amounts of data generated by human activities and the operation of machines. AI is a key technology that has enabled the processing of this data to extract insights, and the deployment of these insights, in the form of machine learning models, through intelligent systems.

2. Products and services can become more personalised as they are increasingly infused with AI-powered features. Sometimes, these can become too personal and make consumers feel uneasy. As we encourage businesses to explore the use of AI to power their operations, products and services, we need to ensure that consumers also feel empowered and comfortable in making use of these products and services. Singapore has articulated a strong AI adoption strategy for our industries and we have developed a Model Framework for AI and data governance to complement this strategy.

3. Our efforts in building an ecosystem of trust balance the twin goals of innovation and consumer protection. There are three focal areas: first, bringing relevant stakeholders together to build the trusted ecosystem; second, providing guidance on the ethical and responsible use of AI that organisations can voluntarily adopt; and third, funding research to anticipate and create solutions for legal and regulatory issues as AI makes its way into different sectors. I will elaborate on each in turn.

Bringing stakeholders together

4. Last year, Singapore established an Advisory Council on the Ethical Use of AI and Data (“Advisory Council”), bringing together international leaders in AI technology, advocates of social and consumer interests, and leaders of companies that deploy AI. The Advisory Council advises the Government on ethical issues arising from deployments of AI that may require regulatory or policy attention. Members of the Advisory Council engage different stakeholders, such as the ethics boards of commercial enterprises, consumers and academia. Feedback from industry and consumer groups is brought to the Council, which gives its advice and guidance. In this way, the Council will help the Government develop ethical standards and reference governance frameworks, and publish advisory guidelines, practical guides and codes of practice for voluntary adoption by industry.

Model AI Governance Framework

5. One key development in the past year was the Discussion Paper on AI and Personal Data, released in June last year. This has since been developed into a Model AI Governance Framework (“Model Framework”), released in January this year. It is a living document developed with the input of local and international stakeholders, and serves as a reference for businesses deploying AI at scale in their operations, products or services. We concluded from our global survey that there were already a number of efforts by various governments, think tanks and international organisations to develop ethical principles. While ethical principles vary across cultures, as influenced by history, there is a common core that is particularly pertinent to AI. By implementing the Model Framework, organisations put this core set of principles into practice: decisions made by AI should be explainable, transparent and fair, and AI systems should be human-centric.

6. We also wanted to contribute to the global discourse by developing a governance framework that helps organisations translate these principles into practices that businesses can implement. From its inception, the Model Framework was meant to be a document that can easily be picked up and used for guidance when making decisions on AI deployments.

7. The Model Framework organises the issues into four areas. First, businesses must realise that deploying AI involves making a number of important decisions, ranging from product or service design to how data is collected, selected and prepared for model training. The first area of the Model Framework therefore provides guidance to companies on adopting the appropriate management oversight structure, so that each of these decisions is made by the right internal owner.

8. Second, the Model Framework recognises that AI is often used to assist in decision-making, whether to augment a human decision-maker or to make decisions autonomously. These decisions can have legal consequences for companies that use AI. The Model Framework provides guidance on selecting a decision-making model that is proportionate to the impact on the individuals affected by the decision. For example, in the highly competitive travel insurance market, a human-out-of-the-loop model for approving travel insurance applications is probably harmless. But this decision-making model is probably unsuitable when the insurance company is evaluating a claim from a policy holder who suffered an injury while on vacation.
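
To make this proportionality idea concrete, the following is a minimal illustrative sketch in Python. It is not part of the Model Framework itself; the two-level severity and probability categories, the thresholds and the travel insurance example are assumptions chosen only for illustration.

```python
# Illustrative sketch only: choosing a human-oversight model based on the
# assessed severity and probability of harm of an AI-assisted decision.
# The categories and thresholds below are assumptions for illustration,
# not definitions taken from the Model Framework.

def choose_oversight_model(severity_of_harm: str, probability_of_harm: str) -> str:
    """Suggest a human-oversight model; inputs are "low" or "high"."""
    if severity_of_harm == "high" and probability_of_harm == "high":
        return "human-in-the-loop"       # a human makes the final decision
    if severity_of_harm == "high" or probability_of_harm == "high":
        return "human-over-the-loop"     # a human supervises and can intervene
    return "human-out-of-the-loop"       # the system decides autonomously


# Example: approving a low-value travel insurance application versus
# assessing an injury claim from a policy holder.
print(choose_oversight_model("low", "low"))    # -> human-out-of-the-loop
print(choose_oversight_model("high", "high"))  # -> human-in-the-loop
```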

9. The third area that the Model Framework shines a spotlight on is the management of the data used to train the machine learning model, and how the fitted model is tested and monitored after it is deployed. The Model Framework provides guidance on practical tools such as provenance records and audit logs, which enable an organisation to explain how an AI feature is expected to behave, what will influence its behaviour, and why a particular recommendation or decision was made. This is where data governance comes to the fore. Biased algorithms can result in unintended discriminatory decisions, and such mistakes can lead to public outcry.
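
As a rough sketch of the kind of record such tools might keep, the Python snippet below logs one model decision together with its data provenance. The field names, file format and example values are assumptions for illustration only; the Model Framework does not prescribe a particular schema.

```python
# Illustrative sketch only: a minimal provenance/audit record for a single
# model output, capturing which model and training data were involved.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    model_version: str           # which trained model produced the output
    training_data_snapshot: str  # identifier of the dataset used for training
    input_features: dict         # the inputs the model saw
    output: str                  # the recommendation or decision made
    timestamp: str               # when the decision was made


def log_decision(record: AuditRecord, path: str = "audit_log.jsonl") -> None:
    """Append one decision record to a JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example usage with hypothetical values.
log_decision(AuditRecord(
    model_version="claims-model-1.3",
    training_data_snapshot="claims-2018Q4-v2",
    input_features={"claim_amount": 1200, "policy_type": "travel"},
    output="refer to human assessor",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```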

10. Last but not least, the Model Framework provides guidance on transparency. Placing the consumer at the centre, it asks how organisations can provide information to her so that she understands the AI feature and is prepared to use it, is able to control it, and understands how a recommendation or decision was made. It also provides suggestions on how to manage communications with consumers who may have queries or wish to have a decision reviewed.

Anticipating and creating solutions

11. There is a third component to our ecosystem. We believe that businesses are only just beginning to explore how to make use of AI, and the technology is still evolving. Both businesses and the technology need space to grow and mature. We do not believe that it is necessary to introduce laws and regulations around AI. But we are fully aware that as AI enters different sectors, sectoral laws and regulations will need to deal with the unique issues that are raised. For example, who is responsible when an autonomous vehicle collides with a stationary bus? We have therefore established a Research Programme on Governance of AI and Data Use (“Research Programme”) to identify and anticipate these issues and explore viable solutions for them.

12. The Research Programme is hosted at the Centre for AI and Data Governance (CAIDG) at the Singapore Management University School of Law. It aims to build up a body of knowledge on the various issues concerning AI and data use, so that when these issues do arise, we have a base of materials to work from. There is a second benefit: this public discourse helps us identify and groom expertise that we can call on when the time comes. The Centre will plug into international networks, and we aim to establish Singapore as a centre for knowledge exchange in this area. We look forward to establishing links with the universities represented at this forum.

The role of data in AI governance

13. Data has been variously described as the new oil of the Fourth Industrial Revolution, and some say it is as important as air. For many multi-national operations, data is generated in the various markets where customers are located and in the regions where products are manufactured or where the supply and distribution chain touches. Data from these places has to be brought together in regional and global centres within the multi-national corporation, where corporate decisions are made.

14. Data is crucial for machine learning model training. As I draw my address to a close, I wish to return to the topic of data governance and its role in the Model Framework. In the context of machine learning model training, good data governance entails making sure that the correct dataset is selected for model training. This means that the data should be up to date and accurate, and that data elements are correctly interpreted. Even with the best quality data, there is the risk of bias. Fundamentally, the training dataset has to be representative of the intended target audience. There are tools and techniques for de-biasing, but these merely mitigate the risks. Hidden biases in the training dataset can lead to models that do not perform effectively. For companies developing products and services, this can affect their commercial reputation and upset customers. Some product features need to be localised to better serve a particular market, but other features have to be uniform across all markets. For example, recommendation engines should probably be localised so that shopping recommendations are customised for each market. But the same is not true of other AI models, for example those that identify fraudulent online transactions or those that improve operational efficiencies for just-in-time manufacturing, warehousing and distribution. Training for these types of AI models will benefit from aggregating data across all markets.
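
One simple way to picture the representativeness check described above is sketched below in Python: it compares the share of each group in a training dataset against the share expected in the target population. The attribute name, the reference shares and the example records are assumptions for illustration, not prescribed by the Model Framework.

```python
# Illustrative sketch only: a basic representativeness check comparing the
# observed distribution of an attribute in training data against the
# distribution expected in the intended target population.

from collections import Counter


def representation_gap(records, attribute, target_shares):
    """Return observed share minus target share for each group."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, target in target_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - target
    return gaps


# Example: a training set skewed towards one age band (hypothetical data).
training_records = [{"age_band": "18-35"}] * 80 + [{"age_band": "36-60"}] * 20
target = {"18-35": 0.5, "36-60": 0.5}
print(representation_gap(training_records, "age_band", target))
# {'18-35': 0.3, '36-60': -0.3}  -> the younger band is over-represented
```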

15. It is therefore crucial that we devise practical methods to facilitate the cross-border flow of data. In contrast to the analogy with air, data should not flow like the wind. We believe that the commercial value in data requires it to be properly safeguarded and, when data is shared, the right practices to be adopted. If not properly handled, the loss or misuse of customers’ personal data can lead to disastrous consequences for commercial reputation and sometimes even legal consequences. The preferred approach is to work with like-minded countries to establish good standards and to build stable and trusted bridges upon which data can travel. This is why we have joined platforms like the APEC Cross-Border Privacy Rules and Privacy Recognition for Processors systems, and are actively developing the ASEAN Framework for Personal Data Protection and the Framework for Digital Data Governance. These systems are the figurative trusted bridges.

Conclusion

16. Both China and Singapore place a premium on protecting personal data and have robust laws establishing high standards. We are in a good position to build on our present achievements so that businesses and customers in our markets can benefit. By co-developing standards for data and AI governance, we create opportunities for companies in both countries while ensuring that their AI-powered products and services earn a reputation for trustworthiness, so that our consumers are keen to use them.

Thank you.