In border control and management, technology is a double-edged sword. On one hand, the increased efficiency that comes from digitising daily functions is a welcome change for travellers. On the other hand, black-box algorithms, increased surveillance and excessive migration management techniques are shifting priorities, promoting exclusion, and entrenching largely unregulated or under-regulated data practices.
A traveller arriving at a border checkpoint is expected to submit (if they have not already) various forms of personal data, including biometric data, which is considered ‘sensitive’ information. Depending on their citizenship category (EU citizen or third-country national), the traveller’s data is passed through various systems to determine whether they may cross the border or whether they face detention or deportation. While deploying technology at the checkpoint improves efficiency and creates greater opportunities to identify travellers who have been flagged as security risks at other borders, there are several pitfalls as well. These arise particularly with novel technologies, or with algorithms that are assumed to be objective without adequate scrutiny of the data used to train them or of whether they will ease access or exacerbate inequality.
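The routing described here can be pictured as a simple dispatch on citizenship category. The following is a minimal sketch under assumed names (route_traveller and the system labels are hypothetical); it illustrates only the idea that different categories of traveller are checked against different systems, not any real deployment.

```python
# Illustrative sketch of checkpoint routing by citizenship category.
# Function and system names are assumptions made for this example;
# the real systems are discussed later in this piece.

def route_traveller(citizenship: str) -> list[str]:
    """Return the (hypothetical) list of systems a record is checked against."""
    systems = ["document_authentication", "SIS_II_alert_check"]
    if citizenship == "third_country_national":
        # Third-country nationals face additional registration and screening.
        systems += ["EES_entry_exit_record", "visa_status_check"]
    return systems

print(route_traveller("eu_citizen"))              # document + alert checks only
print(route_traveller("third_country_national"))  # plus EES and visa checks
```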
The use of Automated Border Control (ABC) systems, which authenticate travellers’ travel documents or tokens and establish their eligibility to cross the border, is not without its challenges. Automated systems pose a myriad of risks related to the protection of personal data and the harmful impact of false positives and false negatives. Additionally, in the absence of human personnel, biometric recognition systems can be successfully deceived, for instance through the use of silicone fingers and synthetic irises.
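At the core of an ABC gate’s biometric check is a thresholded match decision, and the threshold is exactly where false positives and false negatives trade off. The sketch below is illustrative only; verify_traveller and the threshold value are assumptions, not any real gate’s implementation.

```python
# Minimal sketch of the accept/reject decision inside an ABC gate's
# biometric verification step. Name and threshold are illustrative.

def verify_traveller(match_score: float, threshold: float = 0.80) -> bool:
    """True if the live biometric sample matches the document's stored
    template closely enough to open the gate."""
    return match_score >= threshold

# The threshold trades the two failure modes off against each other:
# raising it cuts false positives (impostors or spoofed biometrics
# accepted) but produces more false negatives (genuine travellers
# rejected and sent to manual inspection), and vice versa.
```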
Datafication of persons
Properly designed ABC systems need to account for the ethical interplay of human actors, technology and social groups. Without this consideration, mobility is treated as a privilege rather than a right, and groups can be subject to discrimination. SIS II in particular, a database used to manage information sharing on alerts about specific travellers for security purposes, poses a challenge in terms of the datafication of persons. The system is used to control the mobility of third-country nationals (TCNs), and it stores discrete pieces of information about persons which can be used to search for individuals thought to present risks, such as current or former convicts of a Member State. Despite the myriad benefits of SIS II, several problems have been noted, including the danger of intuitively constructed data association rules: rules which indicate who should be treated as a risk and which are subject to racial, ethnic or geographic bias against innocent persons (see the sketch below).

Additionally, under the proposed Entry/Exit System (EES), citizens of third countries will have to provide more detailed information, particularly the biometric data needed to verify their identity. The EES is a database which records the entries and exits of TCNs. It is relied on by border authorities, immigration authorities, visa authorities and other designated authorities such as law enforcement officers, and it is intended to be interoperable with several similar border security databases. Because higher amounts of data are required from TCNs, the potential for further stigmatisation and discrimination based on country of origin is present.
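To make the concern about ‘intuitively constructed’ rules concrete, the sketch below contrasts an alert tied to an individual with a rule keyed to nationality. All field names and the rule logic are hypothetical; the point is only that a hand-written rule can encode group-level bias rather than evidence about the person.

```python
# Hypothetical illustration of a hand-written risk rule of the kind
# criticised above. Field names and logic are invented for clarity.

from dataclasses import dataclass

@dataclass
class TravellerRecord:
    nationality: str
    has_alert: bool  # an alert on this individual, e.g. a prior conviction

def flag_as_risk(record: TravellerRecord, watched_countries: set[str]) -> bool:
    # Rule 1: evidence-based - an alert attached to this person.
    if record.has_alert:
        return True
    # Rule 2: "intuitive" - flags everyone from certain countries,
    # which is precisely the geographic bias described above.
    return record.nationality in watched_countries
```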
The potential for harm with new technologies
The deployment of ‘smart borders’ has led to proposals for technologies to support border control officers, such as automated deception detection tools which analyse second-generation biometrics associated with markers of deception such as stress and lying. The iBorderCtrl project pursued this aim and received significant funding under the Horizon 2020 scheme. Launched in 2016 and trialled in Greece, Latvia and Hungary, the project faced scrutiny over its human rights implications, questions surrounding the objectification of human beings, the problem of false positives, the stigmatisation of the data subject, and the burden of proof placed on misidentified persons to show that the system had misidentified or miscategorised them.
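The false-positive problem is largely a matter of base rates: when almost everyone screened is honest, even a fairly accurate detector flags mostly innocent people. The figures below are invented for illustration and do not describe iBorderCtrl’s actual performance.

```python
# Back-of-the-envelope base-rate illustration with invented numbers.

travellers  = 100_000   # assumed screened population
deceptive   = 100       # assumed truly deceptive travellers (0.1%)
sensitivity = 0.75      # assumed true-positive rate of the detector
specificity = 0.75      # assumed true-negative rate of the detector

true_positives  = deceptive * sensitivity                       # 75
false_positives = (travellers - deceptive) * (1 - specificity)  # 24,975

precision = true_positives / (true_positives + false_positives)
print(f"Flagged travellers who are actually deceptive: {precision:.1%}")
# ~0.3% - nearly every flagged traveller is honest, yet carries the
# burden of proving the misidentification.
```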
Surveillance creep
Migration control practices have become more pervasive as personal data collected at borders is fed into surveillance technology. Often the rationale given is linked to the prevention of criminal activity and the need for cross-border collaboration between police and judicial actors. The term “surveillance creep” refers to the steadily widening scope of a technology or system, used in ways that encroach on people’s privacy. Biometric data can easily be used to monitor the mobility of persons, among other things, infringing on their privacy rights as a result.
The datafication of mobility and migration management has recontextualised the functionality of borders. Border control is now a large-scale data-collection and data-sorting exercise which governs movement and facilitates different types of mobility, accelerating some and inhibiting others (tourism and business travel versus asylum seeking and ‘irregular migration’).
Ethical intervention is at the crux of good border control, as it mediates the rights and interests of persons and States, allowing for a constant check on bias and unfairly restrictive practices and technologies. Strict limits on data usage are crucial to minimise ‘surveillance creep’ and to prevent scare tactics from being used to justify further bias and discrimination that infringe on safety, privacy and human rights. The positive use of technology to make border crossing efficient and safer must be weighed against the risks of bias, discrimination and inadequate data protection. Ensuring that data rights form the crux of developing these technologies is the only way to have border control and management that is both fair and efficient.