Artificial Intelligence in Southeast Asia
Overview
The maturation of Southeast Asia’s digital ecosystems has continued to support remarkable digital economy growth, with the region growing 20 percent year-on-year and projected to reach US$200 billion in gross merchandise value by 2023. As artificial intelligence (AI) technologies continue to be adopted by enterprises, AI has the potential to boost productivity, enhance resource allocation and economic competitiveness, and transform industries. As Southeast Asian nations continue to advance on their digital transformation journeys, AI could potentially add 10 to 18 percent to GDP across the region, equivalent to nearly US$1 trillion, by 2030. However, aside from Singapore, Southeast Asian countries invested less than US$1 per capita in AI in 2019, indicating a wealth of opportunity for further productive investment in AI and other emerging technologies.
Singapore has emerged as both a regional and global frontrunner in AI readiness and experimentation. In 2019, Singapore released its National AI Strategy to achieve its goal of becoming a leader in AI solutions by 2030 by focusing on key sectors with high economic and/or social impact and relevance. In 2021, Singapore launched two new national AI programs, focusing on further advancing the government’s digital transformation efforts and developing Singapore as a global hub for financial institutions to research, develop, and deploy AI solutions. In the 2021 Government AI Readiness Index by Oxford Insights, Singapore ranked second overall and first in the index’s government pillar, reflecting the government’s proactive approach to digitizing public services and its continued promotion of AI technologies.
Following Singapore’s announcement of its National AI Strategy, other countries in the region have also adopted national frameworks and roadmaps to facilitate further AI experimentation and innovation.
- In July 2022, Thailand’s Cabinet approved its National AI Strategy and Action Plan (2022 – 2027), which seeks to increase Thailand’s AI readiness through improving human capital and infrastructure development as well as promoting AI innovation and utilization by public and private actors.
- In March 2021, Vietnam released its National Strategy on R&D and Application of Artificial Intelligence (2021 – 2030), outlining ambitious objectives to enable Vietnam to achieve its goal of becoming a regional AI hub by 2030.
- In August 2020, Indonesia announced its National Strategy for Artificial Intelligence (2020 – 2045), outlining five national priorities where AI is expected to have the most impact.
- Malaysia’s National Artificial Intelligence (AI) Roadmap (2021 – 2025), which was released in March 2021, and the Philippines’ National Artificial Intelligence (AI) Strategy Roadmap, which was released in May 2021, both seek to guide government and private sector stakeholders in developing and using AI technologies, viewing AI as an opportunity to modernize their economies to make them more innovative and productive.
- Cambodia, Lao PDR, and Myanmar all scored below the global average in the 2021 Government AI Readiness Index due to the absence of national AI policies and their smaller tech sectors.
As more countries seek to tap into the growth opportunities presented by AI, the pace of adoption may also be hampered by AI talent shortages and potential labor disruptions as many low-skill jobs risk being replaced by AI. Although increased utilization of AI is expected to create new jobs, there will be more widespread demand for digital literacy and higher-order skills, requiring increased investment in retraining and upskilling initiatives by both the private and public sectors. Skills training initiatives, such as Singapore’s AI for Industry Program, will be critical to addressing AI talent shortages and facilitating labor market adaptation to potentially disruptive technologies and industry transformation.
Global Developments and Outlook
With the rapid adoption of AI, public and private stakeholders have begun to grapple with ethical challenges posed by AI, including potential biases and privacy concerns. AI systems have sparked controversies when trained on data that underrepresents certain groups or reflects historical and social inequities, producing harmful results and eroding public trust in AI. Furthermore, concerns have emerged regarding the potential unethical use of AI, particularly in high-risk areas such as criminal justice and the military. To encourage the development of trustworthy AI, stakeholders will need to assess high-risk AI use cases and take steps to mitigate potential biases and privacy concerns as well as ensure AI systems are transparent and explainable.
The EU has sought to establish the first regional AI framework through its AI Act, which was first proposed in 2021. As one of the first movers on major AI legislation, the EU has the potential to influence international standards through the AI Act. The AI Act adopts a horizontal, risk-based approach that primarily focuses on regulating high-risk AI that may threaten people’s safety, livelihood, or rights. Applications deemed to pose an unacceptable risk will be banned, whereas high-risk applications will be subject to specific legal requirements. Low-risk applications are not regulated under the current draft. Although the EU’s risk-based approach has received the support of many industry leaders, the current draft has faced some pushback for its broad definition of AI as well as the level of proposed fines, which may disproportionately impact small and medium-sized enterprises. The European Parliament is aiming to vote on the EU AI Act during the first quarter of 2023.
The U.S. is undertaking a more sector-specific approach to AI regulation, focusing primarily on AI applications in human services, including health, education, and labor. In October 2022, the U.S. White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, seeking to guide the design, development, and deployment of AI to protect the public’s civil rights, civil liberties, and privacy. The Blueprint consists of five legally non-binding principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation of automated systems and their impact on users; and human alternatives, consideration, and fallback where appropriate. The U.S. simultaneously announced actions that will be undertaken by federal agencies to protect people’s rights, including curbing commercial surveillance, advancing health equity, and protecting workers’ rights. The Blueprint has been met with mixed reactions, with critics variously calling it “toothless” or warning it could stifle AI innovation. However, proponents of the Blueprint have praised it as a good starting point from which the U.S. can encourage the development of human-centric, trustworthy AI.
In December 2022, the U.S.-EU Trade and Technology Council (TTC) released its first Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (AI Roadmap) following the conclusion of the Third TTC Ministerial. The roadmap seeks to align EU and U.S. approaches to trustworthy AI by advancing: 1) shared terminologies and taxonomies; 2) leadership and cooperation in international technical standards development activities, along with the analysis and collection of tools for trustworthy AI and risk management; and 3) the monitoring and measurement of existing and emerging AI risks. It is intended to inform U.S. and EU approaches to AI risk management and trustworthy AI, as well as efforts to advance collaborative approaches in international standards bodies based on a shared dedication to democratic values and human rights.
In May 2022, Singapore launched the pilot stage of AI Verify, the world’s first AI governance testing framework and toolkit minimum viable product (MVP), which enables companies to measure and demonstrate the safety and reliability of their AI products and services. AI Verify incorporates technical tests and process checks to promote transparency between companies and their stakeholders. Singapore seeks to work with AI system owners and developers at a global level to enhance the applicability of AI Verify and positively contribute to the development of international standards on AI governance.
However, diverse data governance models in ASEAN create potential bottlenecks for cross-border data flows, limiting AI development and deployment in Southeast Asia. Data transfers are integral throughout the AI life cycle, with data often originating from geographically dispersed locations. Data localization requirements in select Southeast Asian countries restrict cross-border data flows, limiting the accuracy and insights of AI models. Stringent regulatory requirements and a lack of regional regulatory harmonization impose additional costs that will present challenges in developing and using AI-based products and services that depend on data from multiple jurisdictions.
Advancements in AI technology and policy present various opportunities and challenges for Southeast Asia. Through the development and deployment of new use cases, public and private actors are utilizing AI to help spur growth, increase operational efficiency, and improve public services. To enable greater innovation, experimentation, and entrepreneurship in AI, policymakers must consider how to improve regional regulatory harmonization while setting standards that best suit their country’s needs and objectives. In 2023, Singapore plans to propose working towards a roadmap to develop a regional AI governance framework in ASEAN.