AI: Opportunities and Threats for the Pacific

On the second day of the Pacific Regional and National Security Conference (PRNSC), a dedicated Artificial Intelligence (AI) session highlighted AI’s transformative potential and the need for precise governance. Speakers discussed the vast opportunities and significant risks of AI, particularly for the Pacific Islands and other regions. They emphasized the importance of developing AI in ways that are sustainable, culturally sensitive, and inclusive.

Professor Jeannie Paterson, Co-Director of the University of Melbourne’s Centre for AI and Digital Ethics, described AI as a “powerful tool” whose effects depend on user intentions and actions. She said, “AI’s current challenge is our limited understanding of it, coupled with its rapid operation and scalability. This can quickly escalate security issues, leading to widespread harms that may be addressed inadequately or sluggishly.”

The session also examined the double-edged nature of AI’s rapid advancement and scalability: the same attributes that could drive significant progress also pose governance and response challenges. Mr. Semi Tukana, founder of SOLE FinTech, drawing on his extensive experience in software design and development, highlighted AI’s productivity benefits in streamlining software development.

However, the energy consumption required by AI technologies, especially large language models (LLMs), was also a point of concern. Professor Paterson added, “A critical issue discussed today relates to the energy demands of building data centers and LLMs, pivotal in modern AI applications. The continuous expansion of AI raises significant concerns about its energy use.”

Panelists broadly agreed on the need for more sustainable AI practices, perhaps through smaller, less energy-intensive models. The session repeatedly returned to the importance of robust governance frameworks, with Mr. Tukana cautioning leaders against adopting new technologies uncritically and urging vigilance against undue hype.

Professor Paterson discussed the OECD’s high-level AI guidelines, advocating for principles that uphold human dignity and oversight but emphasized the need to customize these frameworks to local contexts. She pointed out, “Every country currently handles AI governance differently, which leads to a variety of approaches in fostering productivity and mitigating AI risks.”

Deepfakes and misinformation were identified as significant threats, with AI-generated falsehoods intensifying risks and contributing to a growing ‘truth deficit.’ The session also addressed the cultural and ethical implications of AI, such as biases in AI-generated content and concerns over cultural misappropriation.

Professor Paterson expressed concerns about the potential decline in critical thinking and essential skills due to increased automation, underscoring the need for interdisciplinary teams and critical technology engagement.

The panelists stressed the importance of educating communities, particularly in the Pacific, through hands-on demonstrations and direct interactions to effectively communicate AI’s capabilities and risks. From a national security standpoint, they advocated for regional cooperation and local expertise development to tackle AI-enabled threats, emphasizing that AI governance should honor human rights and cultural values.

Concluding the session, Professor Paterson reiterated the importance of adapting international frameworks like those of the OECD to meet local needs: “While the OECD provides guidelines, it is crucial that countries tailor and operationalize these principles according to their unique values and conditions.”