New Zealand issues long-awaited guidance for businesses on Artificial Intelligence

Article | 30 Jul 2025

New Zealand’s Ministry of Business, Innovation and Employment (MBIE) has released long-awaited guidance for the use of Artificial Intelligence (AI) by businesses. While there will inevitably still be areas of uncertainty, and there is no indication of how the guidance will stand up to rapid developments in the AI space, businesses will welcome the New Zealand Government taking the initiative in this area.

In this summary, we cover our key takeaways from the Responsible AI Guidance for Businesses (“the guidance”).

Significantly, the guidance covers the broad spectrum of AI and not just the subset of Generative Artificial Intelligence. The guidance uses the OECD definitions for these terms:

“Artificial Intelligence refers to a machine-based system’s ability to infer from inputs and generate outputs for explicit or implicit objectives. Different types of AI systems vary in their levels of autonomy and adaptiveness.”

“Generative AI is a type of AI system that can create or generate new content such as text, images, video and music based on models and patterns detected in existing datasets.”

These definitions are helpful, as we have found that many businesses have only considered the use of Generative AI, with a lot of the public discourse being about Generative AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot.

The guidance also contains links to a large number of external resources, including existing frameworks and international standards. It is clear that the Government is trying to provide guidance which is in line with international standards.

The guidance helpfully provides scenarios showing how AI products can interact with the law outside of a purely intellectual property (IP) space – for instance, how improper due diligence when selecting AI products could lead to issues with the Commerce Act.

It is worth keeping in mind that while the guidance provides definitions, scenarios, and other tools for risk mitigation, it does not provide answers for every case. Businesses will need to continue to keep themselves up to date with legal developments.

Ways to mitigate risks

The guidance suggests that businesses using AI create a risk inventory, itemising potential risks so that they can be managed. It identifies creating policies and standards as key ways to mitigate the risks involved with AI use.

The guidance also identifies record keeping as a useful tool for accountability, noting the importance of businesses keeping a clear record of not only how but where AI has been used within the business. This will be particularly important in the design and copyright space, where keeping track of how and where creative works were created is an important step in protecting them.

Businesses need to have clear policies, procedures and strategies for the use of AI. However, creating these is only a starting point: to protect your IP and comply with the law, it is essential that your staff are trained in the use of AI and in your policies on its use.

The guidance notes that it is good practice to identify when AI systems are being used. In particular, it notes the increased use of watermarks or disclaimers when Generative AI has been used in the creation of a work.

Feeding confidential information into Generative AI presents not only a security risk but also risks releasing trade secrets, destroying the novelty in an invention, or creating ownership or other liability issues. For example, as the protection granted by a trade secret ends once that knowledge becomes publicly available, it is important that businesses are aware of what information they are feeding into AI products and how that information is stored and used by the product.

Data & modelling considerations

Be aware of the data used in the creation of AI models and the data that has been used to create the AI tool you are using.

AI tools operate on data. The data sets used to create and run AI tools should be accurate, ‘clean’, complete, relevant to the environment in which the tool is intended to operate, and lawfully obtained.

When using AI, you should be aware of the bias that AI systems inevitably contain, which can amplify unfairness, discrimination or inaccuracies in the underlying data. This is particularly relevant where the data is about people.

The use of sensitive or personal information can not only breach privacy laws but also damage a business’s reputation.

Disclosing where the data comes from, and what (if any) licensing agreements you entered into to obtain that data, is critical. While some data may be ‘open source’, it could still be protected by IP rights and require certain conditions to be met before it can be copied and/or used – for instance, attribution requirements or restrictions on commercial use.

AI systems can enable misrepresentation, misappropriation or misuse of data and mātauranga Māori and other indigenous knowledge. This risk can be mitigated by using appropriate safeguards, seeking consultation, cross-checking outputs, and taking into account all relevant legal and cultural considerations for your scenario.

Use and outputs

While AI is a useful tool, users need to remember that it is only a tool and should not be the decision maker. This is particularly relevant to Generative AI, whose outputs should be carefully considered. These outputs carry a number of clear risks:

  • Large language models (LLMs) like ChatGPT or Copilot are built to produce statistically probable language patterns, i.e., the same prompt is likely to produce different (but similar) answers. Businesses therefore should not rely on an LLM giving the same answer each time, whether to customers or internally.
  • There is always a risk that an LLM will provide answers that include errors or ‘hallucinations’.
  • LLM outputs reflect the datasets used to create them, including any bias within them.
  • The creation of realistic images or videos of a person or voice could be used maliciously.
  • There is a risk that content generated by AI could infringe existing copyright in works. That also has the potential to undermine any claim to ownership of the work that has been created.

The guidance also provides helpful advice on the integration of AI products, their use and what to watch out for in procurement, IT, cybersecurity and privacy.

Does the approach differ from others globally?

As noted above, the guidance clearly draws on a wide range of international resources and standards, keeping it in line with those standards.

However, there are some notable differences – most significantly, that this is purely guidance and not legislation. Legislation arguably provides more certainty, though it comes with its own issues.

On 1 August 2024, the European Artificial Intelligence Act entered into force, though its application is staggered, with the main provisions taking effect in August 2026. This introduced a legal framework for AI in all EU countries, setting clear risk-based rules for AI developers and deployers regarding the use of AI. There have, however, been issues with the legislation, and in 2025 industry backlash prompted calls for the EU to pause the rollout of the Act[1].

The United States has a myriad of state and federal laws governing specific uses of AI. However, in January this year President Trump signed an Executive Order titled Removing Barriers to American Leadership in Artificial Intelligence. This required all federal agencies to develop an AI action plan within 180 days, with a focus on promoting innovation, reducing regulatory burdens, and advancing US global competitiveness in AI. The agencies were instructed to identify and repeal any policies that hindered AI development or deployment.

What businesses need to keep in mind when using the guidance

As noted above, the guidance provides clear considerations for businesses, and it is positive to see the Government focusing on this area. However, AI technology itself, the amount of its use, and the legal concerns surrounding it, are still expanding. The guidance is a great starting point, but it is not a complete solution. Unless businesses and the Government commit to updating their policies, the guidance may soon become outdated.

The guidance is also just that – guidance – and not a complete checklist for businesses. It is a starting point for businesses who will need to invest in creating policies and understanding how their use of AI interacts with legislation.

This guidance is not legislation and does not provide concrete law which businesses can rely on.

Much of the legislation in New Zealand which is referenced in the guidance has not been updated since the AI boom. This is particularly evident in the IP space, for instance:

  • The Copyright Act 1994: in November 2018, MBIE released an Issues Paper on the need for a review of the Copyright Act. This identified that, due to significant technological changes since the last review in 2004, changes were needed to ensure the regime remained fit for purpose. The need for those changes is even greater now.
  • The Trade Marks Act 2002: the use and creation of trade marks by AI has not been tested. This could become increasingly important as AI capabilities grow, particularly with Agentic AI, which requires little human involvement in its pursuit of goals.

AJ Park has a team of specialists happy to help businesses create these policies, and to help them understand their rights in regard to AI.

[1] https://www.reuters.com/business/media-telecom/will-eu-delay-enforcing-its-ai-act-2025-07-03/