The final step of rule 1 is assessing the proportionality of the collection.
You must not collect biometric information unless you believe, on reasonable grounds, that the biometric processing is proportionate to the likely impacts on individuals. To assess whether the biometric processing is proportionate, you need to assess:
- the degree of privacy risk posed by your biometric processing
- the benefit you expect to achieve, and whether it outweighs that risk
- the cultural impacts and effects of the processing on Māori.
A key part of the proportionality assessment is determining the degree of privacy risk presented by your use of biometrics.
Under the Code, privacy risk is any reasonable likelihood that the privacy of individuals may be infringed by the biometric processing. A privacy infringement is any impact or effect of the biometric system that may limit, undermine or encroach on an individual’s privacy or deter individuals from exercising their rights.
The concept of infringement is broader than an interference with privacy (see section 69 of the Privacy Act) in order to take account of the subtler systemic or aggregate impacts of using biometric systems that erode people’s privacy, like the impacts of monitoring public spaces, as well as distinct harms to individuals.
When considering privacy risk, think about both how likely it is that an event will occur and what the consequences would be if it did.
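A conventional way to combine these two dimensions is a likelihood and consequence matrix. The sketch below is illustrative only and is not prescribed by the Code or by OPC; the labels and thresholds are assumptions.

```python
# Illustrative likelihood x consequence matrix (a common risk-assessment
# aid; not prescribed by the Code). Labels and thresholds are assumptions.
LIKELIHOOD = ["rare", "possible", "likely"]
CONSEQUENCE = ["minor", "moderate", "severe"]

def risk_level(likelihood: str, consequence: str) -> str:
    # Higher likelihood and more severe consequences both raise the risk.
    score = LIKELIHOOD.index(likelihood) + CONSEQUENCE.index(consequence)
    return ["low", "low", "medium", "high", "high"][score]

print(risk_level("possible", "severe"))  # -> "high"
```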
The Code lists examples of possible privacy infringements that can result from using a biometric system or from biometric processing.
Although the Code lists certain privacy risks that you must consider, the context of your biometric processing is key to understanding the privacy risk, and you may need to take into account risks that aren’t listed in the Code.
Note: requiring agencies to take into account the risk of privacy infringements in the proportionality assessment does not change the threshold for a successful privacy complaint under the Code, i.e. a finding that an agency has interfered with an individual’s privacy through its use of a biometric system.
All biometric processing has some risk, but some forms of biometric processing are higher risk than others.
To assess the privacy risk posed by your biometric processing, you could use the following framework based around what information you are collecting, whose information it is, why you are collecting it, and when, where and how you are collecting it.
- What – the volume and nature of the information you are collecting.
- Who – who the information is about, and who is collecting it.
- Why – why you are collecting the biometric information.
- How – the context and design of the system, including where and when it is operating. For this dimension, think through the personal information lifecycle and your operational choices.
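If it helps to structure this exercise, the framework could be recorded as a simple checklist. The sketch below is illustrative only; all field names and example values are hypothetical rather than drawn from the Code.

```python
from dataclasses import dataclass, field

# Hypothetical checklist record for the what/who/why/how framework.
# Field names and example values are illustrative assumptions only.
@dataclass
class ProportionalityRiskNotes:
    what: str   # volume and nature of the information collected
    who: str    # whose information it is, and who collects it
    why: str    # purpose of collecting the biometric information
    how: str    # context and design of the system (where and when it runs)
    mitigations: list[str] = field(default_factory=list)

notes = ProportionalityRiskNotes(
    what="Facial templates only; raw images deleted after matching",
    who="Adult account holders; collected by the organisation itself",
    why="1:1 verification to authenticate users at login",
    how="Opt-in kiosk; templates stored encrypted for 24 hours",
    mitigations=["offer a non-biometric alternative", "short retention"],
)
```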
Some privacy risks in biometric systems are inherent and can’t be changed, for example, why the information is collected (collecting for public surveillance will pose more risk than collecting for 1:1 verification) or who the information is about (collecting from children or other vulnerable groups raises the inherent risk). Other risks can be managed or mitigated through design choices, including how much information you collect, how you collect, store and protect it, and how long you retain it.
Remember that privacy risks also arise in the collection and use of non-biometric information. Assessing the relative risk of biometric processing compared to processing of other personal information can form part of your assessment of whether there is a reasonably effective alternative with less privacy risk. See the section on alternatives for more detail.
This table provides a high-level summary of factors that OPC may consider as increasing or lowering privacy risk. Each assessment still turns on its own facts, and you need to consider how your own context and specific processing affect the degree of risk.
| Factors that tend to lower risk | Factors that tend to increase risk |
| --- | --- |
| The system’s scope is limited or targeted (e.g. 1:1 verification); use impacts a small number of individuals or a select group. | The system’s scope is wide or indiscriminate (e.g. 1:N identification); use impacts many individuals or society more generally. |
| The likelihood of errors is lower. | The likelihood of errors is higher. |
| Established uses of biometrics with a robust scientific basis and high accuracy. | Emerging or novel uses of biometrics with uncertain effectiveness and a limited research or scientific basis. |
| Use in places where individuals would reasonably expect to be asked to confirm their identity. | Use in public spaces, or in private or semi-private spaces where people don’t expect to be monitored. |
| Little to no power imbalance between individuals and the agency (e.g. a provider of an optional commercial service with many competitors, where individuals have a high level of consumer power). | Significant power imbalance between individuals and the agency (e.g. an agency with law enforcement powers, a provider of a critical service with few or no competitors, an employment relationship, or a significant age difference). |
| Low impact on the individual if a privacy risk eventuates (e.g. an inaccurate system may inconvenience individuals or produce delays). | High impact on the individual if a privacy risk eventuates (e.g. inability to access a social or essential service, information or facility the individual is entitled to access; humiliation, distress or stress; or financial loss if the system doesn’t work properly). |
| The individuals who are the subject of processing are less vulnerable to privacy harms (e.g. they have a high capacity or ability to exercise their privacy rights, are not part of a vulnerable group in society, and are unlikely to experience bias). | The individuals who are the subject of processing are more vulnerable to privacy harms (e.g. they have a lower capacity or ability to exercise their privacy rights, are more likely to experience bias, or are part of a vulnerable group in society). |
| Examples: verification to authenticate a user or facilitate access. | Examples: identification for surveillance, monitoring or profiling. |
Assessing the overall risk requires you to consider the biometric system as a whole and the context in which your biometric processing will take place, taking into account all of the factors that increase or decrease your risk. Modifiable risk factors (such as what information is collected) give you a way to reduce the overall risk by changing how the system operates. Read more about protecting biometric information to lower the overall privacy risk.
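As a purely illustrative aid, you could tally the factors from the table above into a rough qualitative rating before making the contextual judgement the Code requires. The thresholds and labels in this sketch are assumptions, not an OPC scoring method.

```python
# Illustrative only: a rough tally of risk factors from the table above.
# The thresholds and labels are assumptions, not an OPC scoring method;
# the Code calls for a contextual judgement, not a numeric score.
def overall_risk_rating(increasing_factors: list[str],
                        lowering_factors: list[str]) -> str:
    net = len(increasing_factors) - len(lowering_factors)
    if net >= 3:
        return "high"
    if net >= 1:
        return "medium"
    return "low"

print(overall_risk_rating(
    increasing_factors=["1:N identification", "public space",
                        "significant power imbalance"],
    lowering_factors=["established, accurate technology"],
))  # -> "medium"
```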
In some cases, there may be factors that make the risk unacceptable, for example, if you do not have sufficient security safeguards to meet the requirement in rule 5 to keep the information secure, or if the accuracy of the system is not high enough to meet the requirement in rule 8 to ensure information is accurate before use. If the risk is unacceptable, you cannot continue with collecting biometric information unless you can sufficiently decrease the risk.
Adopting a biometric system should be driven by an analysis of the benefits and risks, rather than by the availability or appeal of the technology. To support this, the proportionality assessment requires organisations to weigh up the benefit they expect to achieve from the biometric processing against the privacy risk it creates.
This section discusses how to assess the benefit and how to do the weighing exercise; more guidance on risk is included in the privacy risk section.
An organisation’s purpose for implementing a biometric system is the reason the system is needed (the “why”). The benefit of a biometric system is the positive effect or value that results from its implementation (the “consequences from the why”, or “why does the why matter?”). The achievement of your purpose may also be your benefit if your purpose has been expressed in sufficiently specific terms.
Because a biometric system is an automated way of recognising someone or inferring certain traits, the benefits of using one will typically reflect the fact that it’s an automated process: for example, increased efficiency, reliability or consistency; reduced manual errors (if the system is sufficiently accurate); faster turnaround times; quicker decision-making; user convenience; and streamlined processes.
When assessing the benefit of achieving your purpose, you need to be clear on the specific benefit you expect to achieve and what kind of benefit it is (public benefit, benefit to the relevant individual/s, or private benefit to the organisation).
You should clearly document the benefit. Like your purpose, the benefit must be specific and directly linked to the biometric processing. For example, the benefit needs to be more specific than a generic “improved customer experience” or “improved safety” – be clear on the actual improvement and how it will be achieved through biometric processing. You should be able to explain the problem you are trying to solve, or what would happen without the biometric processing.
Examples of specific benefits:
The benefit is related to how effective your biometric processing is in achieving the intended purpose: the more effective a biometric system is at doing what it was set up to do, the greater the benefits produced. The reverse is also true – less effective or unfit systems will provide fewer benefits, and it will be harder to determine that they are proportionate. (See also the section on effectiveness.)
You also need to have reasonable grounds for assessing the scale of the benefit. It is not necessary to have an exact percentage improvement, but you should have a general idea of how much benefit comes from the biometric processing – e.g. a moderate improvement in customer safety or a small increase in the security of information access.
Once you have clearly established the expected benefit of your biometric processing, you need to consider whether that benefit outweighs the privacy risk, taking into account the different standards that apply to the categories of benefit. What you are asking is whether the benefit (e.g. additional security, efficiency, time savings, convenience, user experience or reduced cost) justifies the risk you identified when assessing your privacy risk.
As outlined above, a biometric system can benefit the public, the individual whose biometric information you are collecting, and/or the organisation collecting the biometric information. Depending on who is accruing the benefit of the system, the Code requires a slightly different assessment when considering whether the benefit outweighs the privacy risk:
Public or customer opinion (e.g. whether the public is supportive of the biometric processing) can be relevant to both the benefit and the privacy risk, but it is not in itself determinative. That is, the fact that a majority of your customers support or do not oppose the processing does not mean that the benefit will outweigh the risk.
If your biometric system is high risk, you will need a correspondingly significant benefit for the processing to be proportionate overall. If your system is low risk, it could be proportionate even if you have only identified minor advantages from using the system, such as modest efficiency gains or an improved user experience. If your system presents a high level of risk but you only achieve modest or limited benefits, you will need to modify the risk to be lower (see the section on privacy risk) or the processing will not be proportionate.
Read the rule 1 example scenarios to see how the weighing exercise could work in practice.
If you intend to use biometric processing for multiple lawful purposes and the purposes have benefits in different categories, then you should consider the proportionality for each purpose separately, according to the relevant benefit for each purpose.
Your biometric processing needs to be proportionate when considering only one of the benefit categories. For example, if your purpose has advantages for multiple groups (e.g. there is both a public benefit and a clear benefit to the people subject to the processing), your system still needs to be proportionate based on just one of those categories (i.e. you can’t tally multiple small benefits to claim the system is proportionate).
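To make the weighing exercise concrete, the sketch below encodes the two rules described above – the benefit must match the level of risk, and each benefit category is weighed on its own – as a hypothetical decision helper. The level names and pass/fail rule are illustrative assumptions, not a formula from the Code.

```python
# Illustrative sketch of the weighing exercise described above. The level
# ordering and the pass/fail rule are assumptions drawn from this guidance
# (high risk needs a significant benefit; small benefits cannot be tallied
# across categories); this is not a formula from the Code.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def benefit_outweighs_risk(benefit_scale: str, risk_level: str) -> bool:
    # Assumed rule: the benefit must at least match the level of risk.
    return LEVELS[benefit_scale] >= LEVELS[risk_level]

def proportionate(benefits_by_category: dict[str, str], risk_level: str) -> bool:
    # Each benefit category is weighed on its own: at least one category
    # must outweigh the risk by itself; small benefits in several
    # categories are not added together.
    return any(benefit_outweighs_risk(scale, risk_level)
               for scale in benefits_by_category.values())

# A high-risk system with only modest benefits is not proportionate:
print(proportionate({"public": "low", "organisation": "medium"}, "high"))  # False
```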
Example: a retail store intends to use FRT for the purposes of improving staff and customer safety and preventing stock losses by generating alerts to guide an appropriate staff response when individuals on a watchlist enter the store. The store considers the proportionality of each purpose separately:
If you are running a trial under rule 1 to assess how well the biometric system works, then you may not know in advance exactly how beneficial using biometrics in your context is, until after the trial is complete.
The Code requires that you have reasonable grounds to believe that your biometric processing is proportionate. This threshold (belief on reasonable grounds) is flexible depending on the circumstances. Therefore, if you are thinking about conducting a trial of a biometric system, you need to believe, with good reason, that the trial is a proportionate course of action given the privacy risks and the likely or anticipated benefits.
We would expect there to be a reasonable and objective basis for your belief that at the end of the trial, there will be a benefit that outweighs the risk, assuming that the trial demonstrates that the system is sufficiently effective. Read our trial guidance.
Cultural impacts and effects on Māori

Part of determining whether your proposed use of biometrics is proportionate is working through the cultural impacts and effects of the biometric processing on Māori.
The Code requires you to have reasonable grounds to believe that the biometric processing is proportionate to the likely risks and impacts on individuals, after specifically taking into account the cultural impacts and effects on Māori. Identifying and addressing cultural impacts and effects is a necessary part of the proportionality assessment.
Cultural impacts and effects could result from:
Personal characteristics such as a person’s face or fingerprints are so inherent to the identity of a person that Māori treat them with special sensitivity. They are imbued with the tapu of that individual which restricts the way in which biometric information is engaged with. From a Māori perspective, tikanga (values and practices) such as tapu, whakapapa, mauri, noa, mana, hau and utu should influence how you collect, store, access, maintain and disclose biometric information.
A failure to observe Māori perspectives on privacy and biometric information may result in a hara or violation. In addition to any other harm, a hara creates a disparity between the parties involved. Such violations can impact the whakapapa, tapu, mana, mauri and hau of the affected party and must be corrected by the offending party, for example through an apology, karakia, reparation, rectification of the technology or finding alternatives for the individual to use.
Māori data sovereignty gives effect to the inherent sovereign rights and interests Māori have over the collection, ownership and application of their data as a taonga under te Tiriti o Waitangi. The principles of Māori data sovereignty are a self-determining framework that influences the way that Māori control their information, including biometric information. Te Mana Raraunga (the Māori Data Sovereignty Network) have outlined how key principles translate to concrete ways to protect Māori data and can help all organisations consider how the use of biometrics could impact and affect Māori.
Government agencies will need to consider any use of biometric information in the context of te Tiriti obligations and while considering the power imbalance between the Crown and Māori. For instance, how do principles such as tino rangatiratanga and partnership impact the use of Māori biometric information?
The definitions below come from Kukutai, T., Campbell-Kamariera, K., Mead, A., Mikaere, K., Moses, C., Whitehead, J. & Cormack, D. (2023). Māori data governance model. Te Kāhui Raraunga.
First, organisations must make a reasonable effort to assess what the cultural impacts and effects on Māori could be. Then, consider whether and how to address them.
What this requires in practice will change depending on your specific use case and context.
In general, we expect agencies to consider:
Once you have identified the potential cultural impacts and effects on Māori, if there are any negative impacts or effects, you need to consider whether and how to address those impacts. Some impacts or effects may not be able to be addressed. Failure to address those impacts or effects is a factor to be considered and may make the processing less proportionate.
If you do not have the internal expertise to make these assessments, you should consider whether it is appropriate to engage external advisers to provide cultural advice. The “further resources” section has links to other guidance which could assist you.
You need to address cultural impacts whether or not you know specific ethnicity or cultural information about each impacted individual. In most cases, if you are undertaking biometric processing in New Zealand, it is very likely you will be collecting Māori biometric information and so it is important that you consider what cultural impacts there may be.
In general, you do not need to collect ethnicity or cultural information to be able to consider potential cultural impacts. But, if you think you need to know how many Māori people may be impacted by your processing, you could consider using other metrics such as general population information to help you in your assessment.
Collecting, storing and using biometric information in accordance with tikanga is one way of addressing cultural impacts and effects (but it is not the only way). Some starting points include:
An example of a specific cultural concern for Māori is capturing images of moko and moko kauae (traditional facial tattoos), e.g. through a facial recognition system. Moko contain deeply sensitive and tapu information about an individual’s identity such as whakapapa, whānau/hapū/iwi, whenua, ancestors and origins. Even if the biometric system does not specifically analyse the moko itself, the use or misuse of images that include moko can affect the tapu, mana and mauri of the individual, and their whānau, hapū and iwi.
Free, prior and informed consent is an important principle underlining Māori data sovereignty (principle: manaakitanga | reciprocity). Free, prior and informed consent involves agreement from the individual to the collection and use of their biometric information based on adequate information, appropriate timing and an absence of coercion. It gives people a genuine choice and autonomy over their information.
For consent to be considered free, prior and informed, there must be a genuine alternative available to individuals. If there is no alternative, then individuals cannot freely consent.
In many cases, if you can gain the free, prior and informed consent of individuals before collecting their biometric information, this will be a valuable way to mitigate any negative cultural impacts.
Consent that doesn’t meet the standard of free, prior and informed should be balanced by strong governance arrangements (partnership, oversight, and/or accountability mechanisms).
While free, prior and informed consent is not a mandatory part of the Code (and will not be appropriate or feasible for all circumstances), if it is an option for your specific use case, it is a good way to address cultural impacts.
It is critical to identify and address bias in a biometric system to avoid negative impacts on Māori. Bias can enter the biometric system in different ways, from technical bias (i.e. repeatable errors produced by the algorithm making a comparison or inference) to human bias (e.g. assumptions or judgements by the people acting on the results). Mitigations include:
Take particular care when using a biometric system that may put the privacy rights of tamariki or rangatahi at risk e.g. adding them to a watchlist. In Mātauranga Māori, children are highly valued and considered taonga with inherent mana. They are a significant whakapapa link to the past and future. There is a strong imperative to minimise collection of their biometric information as they are more vulnerable to harms. In addition, breaches of privacy can affect whānau trust in government, business and systems.
Read our example scenarios of how an organisation might apply rule 1 in context.