Rethinking Trade and Regulatory Paradigms for the Digital Ecosystem through Development and Indigenous Lenses

Professor Jane Kelsey

University of Auckland

Western conceptualisations of the digital ecosystem are generally techno-centric, disembodied and decontextualised, and treat data as an abstract and fetishised commodity that is amenable to private ownership, proprietary rights, commercial trading in markets, and exploitation for profit. The non-interventionist paradigm established in US telecommunications law fostered a regulatory void, with national regulatory agencies playing an often-futile game of catch-up as they grapple with actual and potential applications and implications.

This paradigm has been embedded in a growing web of binding and enforceable international trade and investment agreements, labelled e-commerce or digital trade agreements, starting with the US-driven Trans-Pacific Partnership Agreement. Moves to claw back some policy space within those agreements have again reflected Western priorities: consumer rights, individual privacy, abuse of power and anti-competitive practices, tax evasion, regulatory arbitrage, and the like.

Against that backdrop, this presentation reflects on growing concern that developing countries, which are technologically dependent, donor-influenced rule-takers in the digital 21st century, risk being trapped within a model designed by and for Big Tech. That tension is reflected in a recent report on the Pacific Island Countries’ e-commerce strategy. There are similar concerns that the dominant paradigm is incompatible with indigenous conceptualisations of the digital ecosystem and data, and with rights of indigenous data sovereignty and governance, the subject of a recent Waitangi Tribunal report on the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), the successor to the Trans-Pacific Partnership Agreement (TPPA).

NFTs and the Law of the Horse (and other oddities)

Associate Professor Alex Sims

University of Auckland

Non-Fungible Tokens (NFTs) have generated considerable interest and hype, as well as confusion and misunderstanding. For example, while the high-profile uses of NFTs so far have been for digital art, NFTs have an almost infinite range of uses. While no one (yet) is suggesting a course on NFTs (those familiar with Easterbrook’s “Law of the Horse” will understand this reference), some people believe NFTs pose unique challenges for the law and our ideas of value. For example, some are incredulous that people willingly pay eye-watering sums for something that can be copied with a right mouse click. Others are concerned that NFTs will facilitate money laundering and other criminal activities; that unscrupulous creators will sell “unique” NFTs to more than one purchaser; and/or that creators normally retain ownership of copyright in the NFT digital artworks they are selling. While all these activities (and more) are possible with NFTs, they also occur with NFTs’ analogue counterparts. In truth, therefore, NFTs simply shine a light on existing laws and practices of which most people are unaware.

Significance of trust to technology law and policy

Dr Michael Dizon

University of Waikato

Trust is considered a crucial element in cybersecurity. It is one of the guiding principles of New Zealand’s Cyber Security Strategy 2019. While trust is commonly raised in law and policy debates about cybersecurity, there is a lack of clarity and consensus on how to achieve it in practice. This is understandable, given that trust is such an expansive, complex and multidimensional concept, whose meaning and presence morph and dissipate depending on the actors, relationships and contexts involved. There has also been deep-seated distrust among technology users, developers and regulators since the Edward Snowden revelations of widespread government surveillance. The main aim of this presentation is to examine the significance and implications of trust for technology law and policy, specifically with regard to cybersecurity. The presentation will explain the various meanings, types and constructs of trust, including distrust. It will then discuss how these various conceptualisations and experiences of trust apply in the case of cybersecurity. The presentation will thereafter recommend general principles and policies on how to incorporate and develop trust in cybersecurity law and policy. It will conclude with a brief summary and reflection on the importance of trust to the regulation of technology.

They are using my data for what?

Dr Andelka M. Phillips

University of Queensland

Have you ordered a genetic test online or purchased a wearable fitness monitor?

Much of my recent research has considered the regulation of the direct-to-consumer genetic testing industry (also known as personal genomics). I am continuing to explore issues raised by this industry together with the Internet of Things (IoT), and am currently working on papers relating to the merging of the cyber and physical worlds and the merging of the home and work environments. As people purchase genetic tests and a variety of consumer products, they expose themselves to significant privacy and security risks. These include the reuse of their data for a variety of purposes, which could even lead to them becoming suspects in criminal investigations.

This talk will introduce you to the worlds of personal genomics and wearable technology, the latter being a prominent example of the IoT. It will also provide an introduction to the risks these technologies raise for society and the law. This will include discussion of some of the challenges that businesses’ reliance on electronic wrap contracts and privacy policies raises in a world where many consumers fail to read or even notice such documents. It is hoped that this talk will cause you to reflect upon your own experiences in the digital world and how we might improve business practices, so that they afford better protection to privacy and consumer rights.

All your data will be held against you: secondary use of data from personal genomics and wearable tech

Andelka M. Phillips

University of Queensland

Personal data is everywhere. Individuals leave data trails through their use of various technologies, often without their knowledge or consent. This data can be repurposed by businesses for the purposes of marketing or other secondary research. However, it is also being used by other entities, such as the pharmaceutical and insurance industries, and it is beginning to be used by law enforcement in the context of legal proceedings and criminal investigations. This is an exploratory chapter, which discusses the impact of electronic contracts and privacy policies on individuals’ rights in their personal data. This discussion is centred on two examples: direct-to-consumer genetic tests (DTC or personal genomics) and wearable devices. Both industries are examples of new technologies that have created new markets, and both involve the collection and processing of consumers’ data that has the potential to be reused for a wide range of secondary purposes. Businesses often rely on contracts and privacy policies to govern their relationships with consumers and to allow for the reuse of such data; currently, business practices may often not be in compliance with the EU’s General Data Protection Regulation (GDPR).

Invisible Crime, Smart City, Smart Crime

Associate Professor Wayne Rumbles

University of Waikato

As we connect more and more of the everyday objects in our homes and cities to the internet, we open up portals into our lives: it is not only our privacy that is vulnerable, but also our financial resources, our data, our bodies and even our physical safety. This presentation explores the risk of unseen criminal activity created by the Internet of Things and what we can do about it.


Exploring the concept of jurisdiction over online hate speech

Rachel Tan

University of Waikato

There has been a proliferation of social media usage over the past decade. In 2019, it was reported that approximately 2.95 billion people were using social media worldwide.[1] While social media behemoths like Facebook and Twitter make virtual social connection uncomplicated for the majority of people, there has been a recurring issue of harassment or bullying in the form of harsh speech that causes societal harm.[2] Online hate speech is essentially a cybercrime with transnational criminal law aspects, observed when individual states enforce their criminal laws, thus expressing their sovereignty.[3] This paper will explore the intricacies of the concept of jurisdiction in relation to the cybercrime of online hate speech. It will also cover the components of private international law (or conflict of laws, as it is known in common law systems) and explain how they work in the context of the Internet.

[1] “Number of social media users worldwide 2010-2021” <>.

[2] “Online harassment: the insidious face of an inescapable harm” <>.

[3] Neil Boister and Robert J Currie (eds) Routledge Handbook of Transnational Criminal Law (Routledge Taylor & Francis Group, Abingdon, Oxon, 2015); Ram, above n 1.

Debriefing the Inaugural Law and Technology Moot at UoA 

Matt Bartlett

University of Auckland

The University of Auckland Mooting Society launched the inaugural Law and Technology Moot earlier this year. The moot problem involved some novel technological issues to challenge students, including self-driving cars and passive-aggressive subtweets. The successful student-led competition provided some helpful insights as to how to build engagement around issues in law and technology.

Unaware and Uninformed: The Eroding Moral Foundations of Online User Agreements

Matt Bartlett and Briony Blackmore

University of Auckland

New digital contexts place strain on the application of some areas of law. One example is how online user agreements (used by all major technology platforms) are routinely enforced despite a glaring lack of informed consent on the part of the user. It is well-established that online user agreements are opaque, complicated and rarely read. Our argument is that legal formalism is failing to protect users in the digital sphere, and that philosophical theory offers a flexible and morally sound mechanism to better anchor the application of law online.

Visual Contracts: Would pictures in consumer credit contracts help vulnerable consumers?

Michelle MacManus

Massey University

From traffic lights and road signs to COVID public health messages: visual imagery and design are commonly used to convey information and encourage behaviour. In this context, the websites of finance companies use carefully chosen imagery and interactive tools to encourage vulnerable consumers to apply for personal loans. In contrast, the terms and conditions of these loans remain in fine print, meaning vulnerable consumers often do not understand the costs of borrowing or their rights and obligations. This paper asks whether visual contracts could be used for consumer credit contracts under the Credit Contracts and Consumer Finance Act (CCCFA). Visual contracts are related to the Nordic proactive law movement, the emerging discipline of legal design, and advances in legal technology. They use visual elements, including icons, illustrations, comic storyboards and charts, to supplement, and sometimes replace, words. Examples abound internationally, ranging from comic-strip employment contracts for migrant workers to business-to-business contracts. Drawing from these examples, would the use of visual elements better highlight key terms and improve the accessibility and readability of consumer credit contracts, especially when read on a phone screen?

Quantum Computing

Ella Shepherd

TeLENZ Summer Scholar

We are living through another industrial revolution. Just as most people have gained access to a conventional desktop computer, a new kind of computer is on the rise: the quantum computer. This brief presentation will explore quantum computing and its potential impact on the law.

AI and Privacy

Sneha Kant

TeLENZ Summer Scholar

This brief presentation will focus on the impact of artificial intelligence on privacy law by providing a broad overview of artificial intelligence and its technological definitions, and subsequently exploring the legal issues that arise.

Back to the Future? Teaching law in a time of rapid technological change

Professor W. John Hopkins

University of Canterbury

This paper explores the challenges of teaching law in Aotearoa New Zealand in a time of rapid and existential change in the legal profession. Driven by technological advancements, particularly in relation to big data and computational capacity, the traditional role of lawyers is being challenged as never before. The conclusion of this process is hard to predict and raises fundamental questions about how law should be taught as a subject. This paper argues that, given the uncertainty that surrounds the role and development of technology in the legal system, teaching with reference to current technology comes with significant risks. Instead, such developments, perhaps counter-intuitively, should drive university legal teaching down a more theoretical and conceptual path, capable of equipping law graduates to understand whatever legal future they eventually face.

A Sufficient Basis for the Moral Considerateness of AI Would Be?

Gay Morgan

University of Waikato

Abstract coming soon…

Harnessing AI: A change in paradigm for attaining sustainability in climate change governmentality. Using AI for steering corporations towards legal accountability.

Eliana Herrera-Vega

Paris VIII University, France

A change of paradigm for attaining sustainability must assess three sets of relationships: (1) the interaction between the corporation and its natural environment; (2) the interplay between the corporation and the regulatory environment; and (3) the relationship between the regulatory environment and the natural environment. Analysing each relationship allows us to identify externalities and integrate them into the standard for decision-making within the economic calculus, including ethical, environmental and safety considerations. Artificial intelligence offers the sheer power to compute such a paradigm shift while enhancing the speed of reasoning, thereby integrating all the factors and scenarios needed to attain sustainability. One way to integrate externalities within the operations of corporations would be the use of analogue computers and analogue thinking rather than digital reasoning. To that end, artificial intelligence could monitor the activities of the corporation and give reasons for its behaviour before irreparable harm is done. Such an approach can only work within the frame of enhanced regulatory agencies or strict industry self-regulatory standards, such as the transnational, voluntary disclosure regimes for producing environmental, social and governance (ESG) information that developed the Global Reporting Initiative (GRI) and the Carbon Disclosure Project (CDP) into a transnational legal order. This presentation will also explore the opposite end of the spectrum: the danger of using AI without a clear and efficient regulatory framework. In the absence of such a framework, AI can be used by the offending corporate agent to extend the capacities of its binary reasoning, further enhancing the ecological crisis that its actions entail.

My presentation reveals the need for a strong regulatory framework to hold corporations accountable as actors in global warming. The presentation exposes how corporate behaviour fosters carbon lock-in.

AI can reveal the set of crucial relations initially discarded by corporate behaviour, allowing corporations to reconsider the natural environment, the regulatory environment and the need for sustainability. AI takes an idealised form prescribed by the regulatory framework, encompassing human rights as developed in rule-of-law countries and the UN’s Sustainable Development Goals. AI introduces possible futures for the corporation, facilitating the reassessment of externalities. By encompassing analogue reasoning, the rationality of AI is also enhanced: it integrates a wider and more varied range of data necessary for climate change governmentality.

In conclusion, without a strong regulatory framework, corporations can integrate AI as part of their carbon lock-in, increasing their anthropogenic emissions and diminishing the governmentality of climate change.