  • Springer Nature - PMC COVID-19 Collection


The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of US, EU, and UK regulatory frameworks

Denise Almeida

1 Department of Information Studies, UCL, London, UK

Konstantin Shmarko

2 Department of Economics, UCL, London, UK

Elizabeth Lomas

The rapid development of facial recognition technologies (FRT) has led to complex ethical choices in terms of balancing individual privacy rights against delivering societal safety. Within this space, the increasingly commonplace use of these technologies by law enforcement agencies offers a particular lens for probing this complex landscape, its application, and the acceptable extent of citizen surveillance. This analysis focuses on the regulatory contexts and recent case law in the United States (USA), United Kingdom (UK), and European Union (EU) in terms of the use and misuse of FRT by law enforcement agencies. The USA is one of the main global regions in which the technology is being rapidly developed, and yet it has only a patchwork of legislation, with less emphasis on data protection and privacy. Within the EU and the UK, there has been a critical focus on the development of accountability requirements, particularly in the context of the EU's General Data Protection Regulation (GDPR) and the legal focus on Privacy by Design (PbD). Globally, however, there is no standardised human rights framework or set of regulatory requirements that can be easily applied to FRT rollout. This article offers a discursive analysis of the complexity of the ethical and regulatory dimensions at play in these spaces, including data protection and human rights frameworks. It concludes that data protection impact assessments (DPIA) and human rights impact assessments, together with greater transparency, regulation, audit, and explanation of FRT use and application in individual contexts, would improve FRT deployments. In addition, it sets out ten critical questions which it suggests need to be answered for the successful development and deployment of FRT and AI more broadly. It is suggested that these should be answered by lawmakers, policy makers, AI developers, and adopters.

Introduction

Law enforcement agencies globally are constantly seeking new technologies to better ensure the successful detection and prosecution of crimes and to keep citizens and society safe. In addition, there is a public expectation to deliver value for money and, where possible, to provide economic efficiencies and reduced labour costs, which new technologies can potentially help deliver. Over the last decade, many new technologies have been harnessed by law enforcement agencies, including, but not limited to, surveillance cameras, automated license plate readers, body cameras, drones, and now facial recognition technologies (FRT). Law enforcement agencies have been at the forefront of FRT adoption due to the benefits that can be derived and justified in this space. However, each of these technologies changes the relationship between law enforcement operatives and citizens and requires the negotiation of new boundaries and revised accountability requirements. It is important to recognise that each technology has encroached on citizens' privacy and their relationship with the state. As such, what is deemed acceptable in terms of reshaping boundaries is under scrutiny and debate. However, the decisions being made in regard to technology adoption are not currently uniform. There are distinct differences in technology adoption and rollout from nation to nation and, in some national contexts, from state to state. These largely depend on the legal landscape in terms of privacy/data protection legislation and on citizens' acceptance and expectations of surveillance. Within this context, COVID-19 has further pushed the boundaries of privacy, with nations introducing new measures to track citizens' movements and connections to contain the spread of the virus. However, this shift in enhanced monitoring, surveillance, privacy disclosures, and accountability is being questioned globally, drawing attention to changes and challenges [ 1 , 2 ]. This latter question of accountability and acceptable privacy limits is critical in terms of balancing rights and responsibilities for FRT.

Accountability provides for the obligation to explain, justify, and take responsibility for actions. In the context of the state and law enforcement, the state is obligated to be responsible and answerable for the choices it makes in terms of the technologies it rolls out and how these impact in particular case contexts. Many questions about the use of FRT and Artificial Intelligence (AI) have yet to be fully resolved. FRT usage by law enforcement agencies provides a strong case study for considering aspects of FRT and AI ethics more generally, as it involves a very understandable use of personal data with clear impacts on individuals' rights.

This article considers the complexity of the ethical and regulatory dimensions at play in the space of FRT and law enforcement. The paper starts by providing a brief explanation of FRT, followed by an analysis of the use of FRT by law enforcement and legal approaches to the regulation of FRT in the US, EU, and UK. We conclude by recommending that there must be better checks and balances for individuals and societal needs. There needs to be accountability through greater transparency, regulation, audit and explanation of FRT use and application in individual contexts. One critical tool for this is the impact assessment, which can be used to undertake data protection impact assessments (DPIA) and human rights impact assessments. Ten critical ethical questions are framed that need to be considered for the ethical development, procurement, rollout, and use of FRT for law enforcement purposes. It is worth stating these from the outset:

  • Who should control the development, purchase, and testing of FRT systems, ensuring proper management and processes to challenge bias?
  • For what purposes and in what contexts is it acceptable to use FRT to capture individuals’ images?
  • What specific consents, notices and checks and balances should be in place for fairness and transparency for these purposes?
  • On what basis should facial data banks be built and used in relation to which purposes?
  • What specific consents, notices and checks and balances should be in place for fairness and transparency for data bank accrual and use and what should not be allowable in terms of data scraping, etc.?
  • What are the limitations of FRT performance capabilities for different purposes taking into consideration the design context?
  • What accountability should be in place for different usages?
  • How can this accountability be explicitly exercised, explained and audited for a range of stakeholder needs?
  • How are complaint and challenge processes enabled and afforded to all?
  • Can counter-AI initiatives be conducted to challenge and test law enforcement and audit systems?

Finally, it should be noted that while law enforcement agencies are at the forefront of FRT adoption, others can learn valuable ethical lessons from the frameworks put in place to safeguard citizens' rights and ensure accountability through time. Many of these same questions are applicable to AI development more broadly and should be considered by lawmakers in legislating and mandating robust AI frameworks.

Facial recognition technologies (FRT)

Facial recognition in essence works by capturing an individual's image and then identifying that person by analysing and mapping the captured features and comparing them to identified likenesses. Facial images, and their careful analysis, have been a critical toolkit of law enforcement agencies since the nineteenth century. In the twenty-first century, however, the move from manual techniques to facial recognition technologies (FRT), which apply artificial intelligence (AI) and algorithms to automatically extract and compare features and every nuance of their measurement, has significantly enhanced this basic tool [ 3 ]. As such, the face can be mapped and compared to other data, offering a more formal match and identification of an individual. This can sometimes involve the introduction of other biometric data, such as eye recognition data. One-to-one matching provides for certain identification of an individual in a specific context. However, using an identified image in connection with other data banks or data lakes enables one-to-many possibilities and connotations of usage. Matching that can process data at scale presents new possibilities and complexities when considering machine learning, algorithms, and AI.
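The one-to-one versus one-to-many distinction can be sketched in code. Modern systems map each face image to a numerical embedding vector via a trained neural network and compare embeddings by similarity; the toy vectors, threshold value, and identity names below are purely illustrative, not taken from any real system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """One-to-one matching: is the probe image the enrolled person?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many matching: search a data bank (gallery) for the
    best-scoring identity, or return None if no score clears the
    threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The ethical weight of the two modes differs sharply: `verify` compares a willing user against a single template they enrolled themselves, whereas `identify` searches every face in a data bank, which is what makes indiscriminate public-space deployment contentious.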

The context of FRT rollout and data gathering is potentially all-important in terms of how it aligns with citizens' security versus privacy concerns in differing situations. In 2008, Lenovo launched a new series of laptops that, instead of requiring a password, could recognise the face of their authorised user [ 4 ]. This functionality was seen as a marketing benefit for Lenovo, and users clearly consented to and engaged with the capture and use of their images for their own personal computing needs and one-to-one matching. However, there will be distinctions between expectations of one-to-one matching in a more private, controlled space for transparent individual benefits and the use of a verification process in broader and potentially big data contexts. As the proposed EU regulation on AI suggests, the use of FRT in public spaces is ethically (and legally) significantly different from its use for device unlocking. Citizens will have different expectations about the spaces in which surveillance and FRT should be in place. For example, when crossing national border jurisdictions, there has always been an exchange of data and careful identification of individuals, and as such FRT may be deemed more acceptable in this space than when moving around public spaces more generally, functioning in working spaces, or residing within private home dwellings. In each of these spaces, the expectations for active law enforcement and surveillance clearly differ, and there are a number of ethical questions to be answered for a successful rollout in different contexts and for different law enforcement purposes. In addition, there are differences between expectations for localised enforcement agencies, such as police services, and national intelligence agencies undertaking more covert security operations. In each citizen space, and dependent upon the form of law enforcement, there will be different perspectives and concerns from individuals and groups of stakeholders. As such, reaching a consensus on technological rollouts will be a journey. Even in the example of border controls, where ID data have always been exchanged, studies have shown that the views of travellers on acceptable technologies differ from the views of border control guards [ 5 ].

In regard to law enforcement, some scholars have advanced the theory that the monitoring of social media by law enforcement could be perceived as a 'digital stop and frisk', potentially delivering "everyday racism in social media policing as an emerging framework for conceptualizing how various forms of racism affect social media policing strategies" [ 6 ]. This statement evidences concerns about the bias and credibility of law enforcement agencies. Applying this same conceptual framework to sometimes flawed facial recognition algorithms, without taking accountability for the consequences of this usage, could lead not only to further discrimination and victimisation of specific communities, but also to an even greater loss of trust between the general population and law enforcement agencies. In recent years, we have seen an exponential increase in research focused on issues of algorithmic accountability, 1 with the overarching message being that algorithms tend to reflect the biases of those who build them and the data used to train them. The extent to which they can be relied on without human checks is a matter of constant concern, particularly as the use of these technologies is extending beyond identifying individuals to making further judgements about them, including in regard to their behaviours, motivations, emotions, and protected characteristics such as gender or sexuality [ 7 ].

In the specific case of FRT, it is important to understand some aspects of the design and rollout that have led to concerns over biases and unbalanced power structures. The majority of technology workers in the West are claimed to be white men, which, it is argued, unintentionally influences the development of technologies such as FRT [ 8 ]. Input bias has been known about for decades, but has not been fully surfaced in an FRT context. If FRT are trained on white male faces, there will be implications when they are used to process data relating to non-white and female faces, and studies have indicated that identification and bias failings do occur [ 9 ]. Even where inputs are adjusted, systems can be biased by attempting to meet the anticipated needs of purchasers and users, which may skew the system, particularly as algorithms are applied and developed through time. In each of these instances, a high proportion of the stakeholders with power and influence are likely to be male and white [ 10 ]. These biases can lead to severe consequences, particularly when carried into uses by law enforcement. This brings to the surface issues of power dynamics and citizens' trust in their law enforcement.
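One practical consequence of the input-bias concern is that FRT error rates need to be audited per demographic group rather than only in aggregate, since a system can look accurate overall while failing badly for one group. A minimal sketch of such an audit, using a hypothetical labelled-trial format (the tuple layout and group labels are illustrative, not any real evaluation protocol):

```python
from collections import defaultdict

def error_rates_by_group(trials):
    """Compute false-match and false-non-match rates per group.

    `trials` is a list of (group, predicted_match, true_match) tuples
    from a labelled evaluation run: `predicted_match` is the system's
    decision, `true_match` the ground truth.
    """
    stats = defaultdict(lambda: {"fm": 0, "fnm": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in trials:
        s = stats[group]
        if actual:
            s["pos"] += 1          # genuine pair
            if not predicted:
                s["fnm"] += 1      # missed a true match
        else:
            s["neg"] += 1          # impostor pair
            if predicted:
                s["fm"] += 1       # wrongly matched a non-match
    return {
        g: {
            "false_match_rate": s["fm"] / s["neg"] if s["neg"] else 0.0,
            "false_non_match_rate": s["fnm"] / s["pos"] if s["pos"] else 0.0,
        }
        for g, s in stats.items()
    }
```

A large gap in false-match rates between groups on such an evaluation set is precisely the kind of identification failing the studies cited above have reported, and it is invisible if only a single aggregate accuracy figure is published.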

However, it should equally be noted that AI has the potential to challenge biases and to be used in innovative ways that can alter existing power dynamics. A significant example of this is the recent use of FRT by human rights activists and protesters as a way to identify, and hold accountable, law enforcement officers who might be abusing their power [ 11 ]. This 'turn of the tables' adds a further layer of complexity to discussions of accountability and power. However, while a group of people who typically do not hold power may in limited circumstances use FRT to hold law enforcement accountable, that does not make the technology ethically viable. Nevertheless, this power shift, if more formally supported, might provide part of the solution to FRT deployment and its impacts. For example, as images are captured and become significant in legal case contexts, AI has the potential to assist with identifying deep fakes and calling out adaptations to footage and photographs. As such, it is important to drill down into the use of FRT and the frameworks which sit around it.

The EU and UK legislative landscape for FRT in a law enforcement context

There are currently no FRT-specific pieces of legislation in the EU and UK domains, but there are other pieces of legislation that dictate the management and rollout of FRT. In terms of personal data management, the EU's GDPR, which came into force in 2018 covering all the Member States of the EU, has been seen as setting the bar at the highest level for the management of personal data. As such, for many tech companies operating at a global level, it has been treated as the de facto standard to roll out across all global operations. Since the GDPR came into force while the UK was part of the EU, it was enshrined in UK domestic legislation and continues to apply in a UK context. The UK's ongoing adequacy, in terms of alignment with the EU GDPR, will continue to be judged by the EU.

The GDPR requires systems to be implemented with 'privacy by design' (PbD) and 'privacy by default' inbuilt for any personal data processing. Processing covers any activity involving personal data, including creating, receiving, sharing, and even destroying/deleting it. There must be a clear lawful basis for personal data processing, and the data must additionally be processed fairly and transparently. Within this context, it is important to understand that this does not prevent personal data collection, but it does require carefully documented processes and active personal data management through time. In addition, what is considered fair and lawful is potentially open to interpretation and to legal debate and contest. In certain instances, consent for processing is required. There are also specific data subject rights, such as the right to know what is held on/about you (subject to certain exemptions) and the right, in certain circumstances, to ask for data to be rectified or deleted (the right to be forgotten).

Where special category personal data are processed, stricter controls are required. Of note in this regard is biometric data, which is categorised as physical or behavioural characteristics that uniquely identify an individual, including but not limited to DNA, fingerprints, faces, and voice patterns. FRT are therefore caught under this definition, and Article 9 of the GDPR clarifies that biometric data should not be used to identify a person unless the individual has provided explicit consent or other exemptions exist. One such exempted area across the EU and UK is law enforcement. In the GDPR, personal data management for law enforcement purposes was derogated in Article 23 for determination at Member State level. There is therefore some divergence in how the checks and balances between personal data rights and law enforcement rights operate. Within most EU Member States, there is an expectation that, for the purposes of identifying and tracking offenders, certain exemptions would exist and consent would not be required. Within this space, the technological landscape is continuing to evolve, and as such its rollout and use by law enforcement agencies is not consistent across the EU.

Regardless of certain consent exemptions, other GDPR requirements still apply, such as PbD, which provides a framework of accountability for law enforcement. For FRT purposes, a DPIA must be undertaken as a way of demonstrating and achieving PbD. The DPIA is a process of identifying the risks that arise from data processing and is mandatory for high-risk applications, such as facial recognition in law enforcement use. 2 It requires that all aspects of a process are reviewed and considered to ensure that the process is justified, 'fair and lawful', appropriately targeted, and properly implemented and managed through time. This procedure is not only useful for FRT operators, forcing them to scrutinise their algorithms, focus, and security, but can also benefit the general public: if published, a DPIA can explain data processing in terms that are accessible to any individual, not just an IT specialist. There is no mandatory publication requirement for the DPIA, but there is a requirement to be transparent about data processing and to have privacy notices in place for this reason.

Another important GDPR requirement is the need to have a Data Protection Officer (DPO) within any public authority or private entities where the core activities require large scale, regular, and systematic monitoring of individuals or large-scale processing of special category data or data relating to criminal convictions or offences. As such, this does mean that law enforcement agencies and businesses providing processing services will be required to have a DPO. The DPO is required to advise an organisation on its data protection compliance. In addition, were an organisation to fail to fully comply with the GDPR, the DPO would act as a whistle-blower reporting to the relevant national ombudsman on data protection.

Each EU Member State and the UK is required to have an oversight, complaint, and investigatory regime in place in the form of a data protection ombudsman/regulator. There are currently 27 data protection authorities in the EU, one for each country, plus the European Data Protection Supervisor, which oversees EU institutions and bodies. The UK also has a data protection supervisor. The exact responsibilities of these organisations differ, but all of them are tasked with monitoring and ensuring data protection and privacy compliance regionally on behalf of their citizens. In accordance with this mandate, it is not uncommon to see these authorities actively intervening in relevant disputes, sometimes even before any citizen complaints are filed. The benefit of these organisations to accountability is obvious: the data protection regulators have bigger budgets and better legal teams than most individuals, meaning that they are more effective in holding FRT operators accountable. Authorities with enforcement powers can bypass litigation entirely, issuing fines and orders faster than a court would be able to. These factors ensure that FRT providers and operators should never become complacent.

Separately, citizens may bring lawsuits for data protection failings, but the ability to complain to a regulator provides citizens with a cheaper alternative, and one which should actively investigate and oversee any organisational data protection failings. The regulators are publicly funded, and their resources across the EU and UK vary significantly. The extent of investigations and the timeliness of dealing with complaints have both been areas of criticism. For example, in 2020, a group of cross-party Members of the UK Parliament wrote complaining about the performance of the UK's Information Commissioner. 3 Such complaints are not limited to the UK. In July 2020, the Irish High Court gave permission for a judicial review of the Data Protection Commissioner in respect of delays in dealing with complaints. It is to be noted that Ireland is home to many tech companies' European headquarters, and thus these delays can impact more broadly upon EU citizens. Equally, however, there are many examples of active engagement and investigation.

In terms of covering new developments, the GDPR is not a prescriptive piece of legislation and, as such, its 'vagueness by default' is intended to ensure that the regulation maintains its relevance, allowing for application to new technologies, including FRT. Even more importantly, the GDPR holds some sway outside the EU as well, since any business dealing with the bloc has to adhere to its rules when managing Europeans' data, even if those same rules do not apply in its own domestic jurisdiction. This is generally known as 'the Brussels Effect' [ 12 , 13 ]. In practice, where FRT are rolled out in the EU, this means that it is much easier to hold FRT operators accountable, as there is no need to navigate a complex web of regional laws, and the operators themselves are more consistent in their behaviour, unable to use the splintering of regulation to their advantage. In addition, companies will often roll out the same systems globally, meaning that those outside the EU may benefit from some read-over of standards. However, this is not to say that the systems will then be operated and managed in the same ways globally.

In terms of AI more specifically, this has become a focus for the EU and UK regulators and governments. The UK Information Commissioner's Office (ICO) has recently published guidance on AI auditing, supported by impact assessments [ 14 ]. Although this guidance marks an important start towards specific guidance tailored to the compliance of AI systems, we are still lacking case studies and dedicated frameworks to address this problem in a standardised way [ 15 ]. Recently, the EU has engaged with the need to actively manage the ethics and legislation that sit around AI innovation. A 2019 press release by the European Data Protection Supervisor, Wiewiórowski, called out the accountability and transparency concerns of facial recognition, particularly around the input data for facial recognition systems, stating: "the deployment of this technology so far has been marked by obscurity. We basically do not know how data are used by those who collect it, who has access and to whom it is sent, how long do they keep it, how a profile is formed and who is responsible at the end for the automated decision-making." [ 16 ]. As such, the European Commission began publishing a roadmap for dealing with AI. In April 2021, the European Commission released documentation on its approach to AI, which includes an aspiration to harmonise all legislation and bring in a specific Artificial Intelligence Act. FRT more specifically have yet to be dealt with in detail but, within the proposals for harmonisation, law enforcement systems are categorised as high risk. It is stated that AI systems used by law enforcement must ensure "accuracy, reliability and transparency… to avoid adverse impacts, retain public trust and ensure accountability and effective redress" [ 17 ]. The documentation draws out areas of greater concern, focusing on vulnerable people and those contexts where AI system failures will have greater consequences. Examples include managing asylum seekers and ensuring individuals have a right to a fair trial. The importance of data quality and documentation is highlighted [ 17 ]. The Commission states that there must be oversight regarding:

“the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights…”

The place of the human in system review is an important part of the process. In addition, the need for transparency is highlighted. However, what is not yet in place is a prescribed system for transparency and accountability. As the publications are currently pitched at a high level, drilling down into case examples will be necessary for delivery. There are some limitations to these publications, and the EU's recent publications have been criticised for not bringing in a moratorium on biometric technologies such as FRT [ 18 ].

In an EU context, in addition to the GDPR, which dictates rules around managing personal data, privacy is further legislated for through the European Convention on Human Rights. As with the GDPR, this is enshrined in UK law as well as across all 27 EU Member States. Human rights legislation is potentially more holistic in terms of offering frameworks for weighing law enforcement against individual rights in rollout considerations for FRT. It enshrines principles of equality and inclusion as well as privacy and rights to fair legal processes. The checks and balances between different, and sometimes competing, human rights are well established and tested through the courts. Under the terms of the law, individuals can bring legal cases and, in the EU Member States (although not the UK), cases can progress to the European Court of Human Rights. However, there is not the same active regulatory framework sitting around the legislation to provide quicker and cheaper routes to justice and to take action without an individual needing to bring a case. Justice through the European courts is normally expensive, uncertain, and takes years. In addition, requirements for accountability and design documentation for human rights compliance are not explicitly enshrined in the law. In terms of transparency, aspects of accountability for policy more generally fall under freedom of information legislation, which is enacted at Member State level and differs very widely from nation to nation in terms of public accountability requirements for administration. There are also certain law enforcement and national security exemptions from freedom of information requirements. Finally, it is important to note that human rights legislation does not bind private entities, which do not have the same accountability requirements.

In terms of actual FRT legal accountability, cases have been brought under both the GDPR and the Human Rights Act in respect of FRT. One such instance is the 2019 UK case of Bridges v. South Wales Police. Bridges, a civil rights campaigner, argued that the FRT actively deployed by the police at public gatherings infringed the right to respect for private life under the Human Rights Act 1998 and his privacy rights under the Data Protection Act 2018 (DPA 2018), the UK implementation of the GDPR. Relevant to this discussion, Bridges also claimed that, since the police had failed to account for this infringement, their DPIA was not performed correctly [ 19 ]. After a lengthy litigation process, the court ruled in favour of Bridges, agreeing with the points above and additionally finding that the police had too broad a discretion regarding the use of FRT.

This example highlights the value of the GDPR (or similar legislative frameworks) and, in particular, the importance of the DPIA. Here, the impact assessment not only provided the basis for a large portion of the claimant’s argument, but it was also released to the public, making it easy for anyone with internet access to learn the details of the FRT data processing employed by the South Wales Police. 4 In addition, the case shows that the DPIA is not a checkbox exercise but, instead, requires that the FRT operator possesses substantial knowledge about the inner workings of the algorithm and its wider repercussions.

The lawsuit also draws attention to the holistic understanding of privacy under the GDPR. In a country with less-developed data protection laws, it may be sufficient for an FRT operator to encrypt and anonymise faceprints, and, regardless of how they are collected, this will constitute sufficient protection; the GDPR goes to great lengths to ensure that this is never the case. Of particular importance are the concepts of PbD and privacy by default, as mentioned above and defined in Article 25 of the regulation. In this example, the South Wales Police ensured privacy by design, meaning that its facial recognition algorithms were built around data protection. That, however, was not enough, since the FRT were then deployed indiscriminately, which violated privacy by default: the amount of personal data collected was disproportionate to the intended goal of identifying individuals on watchlists. As such, the police use of FRT for these processes had to be stopped. This "one strike and you're out" approach to personal data collection goes a long way towards ensuring accountability in facial recognition, since it makes it much harder for an FRT operator to get away with negligent data processing, for which there can be significant consequences. However, while human rights legislation was deployed as part of the case, the lack of a published Human Rights Impact Assessment diminishes accountability in this regard. A requirement similar to the provision of a DPIA, in regard to Human Rights Impact Assessments and human rights by design and default, could further improve citizen rights more generally.

Even with data protection legislation in place, authorities and corporate entities may fall short in their duties, which is why a proactive regulator is a significant attribute of the GDPR regime. In August 2018, at the request of the London Mayor, the UK ICO began to investigate whether a private property company (Kings Cross Estate Services), which managed the area around Kings Cross, a critical London transport hub, was using FRT in its CCTV. It emerged that for a number of years this company had been using FRT for ‘public safety’ reasons, but had not properly disclosed or made people aware that the scheme was in operation. The investigation also revealed that the company had not only been using FRT to capture the images of everyone passing through the transport hub, but had been working with the Metropolitan Police in London to check for and match certain people entering the area. A data sharing agreement was in place with the intention of enabling the identification of wanted individuals, known offenders, and missing persons. Over the 2-year period from 2016 to 2018, the Police passed images of seven people to the property company. These people had been arrested and charged, reprimanded, cautioned, or given a formal warning for offences. However, it was clear that the Police had failed to disclose that the scheme existed [ 20 ]. That said, the ICO has found more generally that it is acceptable for the Police to use FRT, and that there is a great deal of public support for its use, but that it must nevertheless be deployed in a carefully targeted way, taking into account individuals’ Article 8 human rights to privacy [ 21 ].

Reflecting on the position of the Regulators and their investigatory powers, one of the most active national data protection bodies in the EU is the Swedish Authority for Privacy Protection (IMY), formerly known as the Swedish Data Protection Authority. In recent years, it has been involved in two FRT cases of note: a school using FRT to monitor class attendance [ 22 ], and the police using facial recognition software [ 23 ].

The first case, while not related to law enforcement, showcases how a data protection authority’s independence and legal expertise can ensure accountability where an individual or a civil organisation would not have been able to do so for various reasons. In this instance, the IMY “became aware through information in the media” that the school was trialing FRT on its students and decided to intervene. In the ensuing process, the authority found that the school’s use of facial recognition did not satisfy proportionality and necessity, which also led to the DPIA being conducted incorrectly. Most importantly, the IMY ruled that the consent that was given by the children’s parents to the school was invalid, as the students were in a position of dependence (school attendance is compulsory). The school’s board was subsequently fined approximately €20,000.

There are several important aspects to this example. First, note that the IMY intervened in the case of its own volition, without receiving any complaints or being asked to take action. This autonomy is important, as individuals may not always be able or willing to alert the authorities when their data are being collected and/or processed unlawfully. The reason why none of the parents came forward could be that they did not possess enough legal expertise to notice the problems in the FRT deployment, or did not feel able to challenge the school given their own and their children’s relationship with it. The IMY had the independence, knowledge, and position of power to hold the school accountable. Finally, note the “one strike and you’re out” approach mentioned above. While the school made reasonable efforts to comply with the legal requirements (the faceprints were recorded on a hard drive connected to an offline computer locked away in a cupboard, and a DPIA was conducted), it failed to ensure complete compliance, and so was penalised.

The second example concerns the use of FRT by the Swedish police. The IMY found that the police had failed to conduct a DPIA and had been negligent enough to let unauthorised employees access the software, after which it imposed a fine of €250,000. Here, law enforcement was ignorant of the negative consequences of FRT use and did not take appropriate, active PbD steps; as a result, it was held accountable for its failings.

Exact data on how widespread FRT are across the EU are difficult to find, but the technologies are not yet ubiquitous. In 2019, 12 national police forces had already deployed facial recognition, with 7 more planning or testing deployment at that date. Deployment has been deemed much slower than in the USA [ 24 ]. This may in part be because FRT use in Europe is surrounded by more suitable, uniform legislation, greater transparency, and active data protection authorities; all of these components will play a large role in making Europe a better model for facial recognition accountability. However, it is important to note that much FRT development has happened outside the boundaries of the EU and UK. As such, while the EU may have set a high bar in terms of requiring PbD, much FRT application happens within a USA context.

The USA ethical and legislative landscape for FRT in a law enforcement context

Having considered the European regulatory framework, strongly positioned to ensure some forms of ethical consideration before the deployment of FRT, we now turn to a much more fragmented legislative territory: the United States of America (USA). Within the USA, FRT are heavily used by law enforcement, affecting over 117 million adults [ 25 ], over a third of the country’s total population. FRT rollouts are widespread, yet an average citizen has very limited means of holding their operators accountable should they be misused. The USA was an early adopter of freedom of information laws, passing the federal Freedom of Information Act in 1966, with individual state laws following. This legislation requires state authorities to answer for their policies and actions on receipt of a freedom of information request. It does not extend to private companies, which are not held accountable in the same way, and there are certain exemptions for law enforcement and national security purposes. There are some sector-specific privacy laws, covering, for instance, children online, but no overarching data protection law akin to the GDPR. These federal laws are enforced by the Federal Trade Commission, which has an extremely broad mandate of protecting consumers against deceptive practices; it is not, however, comparable to the data protection authorities in European countries [ 26 ]. Such a massive rollout of FRT without a regulator or ombudsman to investigate is a cause for concern, as it relies on individual legal action to call out wrongdoing. In addition, there are very considerable state-by-state differences, and a notable lack of requirements for transparency, or calls for such transparency.

This reliance on individual action stems from the USA lacking any federal (or state) data protection authority. There is thus no body that actively represents and protects citizens’ interests while possessing the legal and regulatory powers of the state. Moreover, as we have seen, data protection authorities can intervene on behalf of the citizen and enforce decisions without initiating court proceedings; in the USA, this is not an option. Any conflict regarding FRT and related personal data has to be heard in court, necessitating lengthy and costly court battles (which is why citizen representation is so important). As a result, individuals often have to seek legal support from non-profit organisations; those who fail to secure it may not be able to hold FRT operators or providers accountable at all.

The second issue centres on state-by-state differences, which arise from the absence of general federal privacy legislation, with state law often providing only very basic rights for holding FRT operators accountable. The extent of privacy laws in most states is limited to notifying individuals if their data have been stolen in a security breach [ 27 ], hardly a consolation for someone affected by unintentionally biased or malicious use of FRT. Relevant to our discussion, at the time of writing, only one state (Illinois) has legislation allowing private individuals to sue and recover damages for improper use of and/or access to their biometric data, including faceprints [ 26 ]. However, even for those lucky enough to live in Illinois, holding a malicious FRT provider or operator, private or public, accountable is likely to be difficult. Accountability relies on transparency: if, for instance, an individual would like to sue an FRT provider on the basis of a privacy violation, they will need some knowledge of how their data are processed. This is where the USA falls short; not only are law enforcement and federal agencies notoriously secretive, but they often do not understand how their own FRT works in the first place. Without PbD and the requirement for a DPIA, there is less transparency around FRT processes, it is harder to know exactly how processing occurs, and it is harder to hold operators to account. In addition, operators may often not have duly considered and weighed the implications of their FRT usage.

In a USA context, the law on privacy and the use of FRT for localised law enforcement operates very much at a state-by-state level. Within this context, California is often held to be the state with the strongest privacy laws; in 2020, it strengthened its existing privacy laws with the California Privacy Rights Act (CPRA), which established the California Privacy Protection Agency and extended residents’ rights in terms of how businesses could collect and use their data. Notably, however, it did not touch on any privacy powers in respect of law enforcement, and, in tandem with the CPRA, the state attempted to introduce a Facial Recognition Bill to govern the use of FRT for law enforcement purposes. It is to be noted that some cities in California (e.g., Berkeley and San Francisco) have banned FRT usage. Interestingly, the Bill received lobbying support from Microsoft, but was fiercely campaigned against by civil rights groups, and it was not passed in June 2020. This period marked a growing sense of unease with the ethics around FRT. In the same month, IBM stated that it would cease all export sales of FRT, describing FRT as akin to other innovations, such as nuclear arms, on which the USA has had to seize a lead for the protection of its citizens [ 28 ]. It also highlighted the flaws in the technology, for example its failure to deal with Black and Asian faces with sufficient accuracy. At the same time, another big tech entity, Amazon, stated that it would cease to sell FRT to the police for 1 year, to give Congress time to put in place new regulations governing its ethical usage. Microsoft followed suit, stating, “we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology” [ 29 ].
Each of these entities clearly became concerned about the potential misuse of the technology by law enforcement agencies, which IBM said had caused concern since the 2013 revelations by Edward Snowden [ 29 ]. Clearly, there were valid ethical concerns about the development of FRT. However, when beneficial influences leave the marketplace, the field may be opened up to less ethical developers. Each of these entities has a process for reviewing the ethics of technology rollouts; IBM, for example, has an Ethics AI Board led by a Chief Privacy Officer. It is difficult to know how ethical or effective these private entities are where there is such limited transparency, although these large global corporations clearly worry about their image. This was evidenced in the case of Google, which received international press attention and criticism when it fired Timnit Gebru, co-lead of its Ethical AI Research Team, for refusing to edit out certain statements from a research article on AI [ 30 ]; as a result of the controversy, it has since had to change its publication approach.

The concerns of private enterprise and the relationship with law enforcement and national security have been recognised at a national level. For example in the context of the Federal Bureau of Investigation (FBI), there have been hearings in Washington on the acceptable use of FRT. 5 At this hearing, it was stated that the “FBI has limited information on the accuracy of its face recognition technology capabilities.” These hearings called for greater accountability and transparency in the use of the technologies, although definitive outcomes from the hearings are still awaited.

A recent illustration of the opacity of the USA system is the case of Willie Allen Lynch, a black man convicted in 2016 by a Florida court of selling cocaine; the Police Department made the decision to arrest him based, among other factors, on a facial recognition match. In an attempt to appeal the decision, Lynch argued that the facial recognition system had made an erroneous match (a reasonable claim, given FRT’s known inaccuracy with black faceprints [ 9 ]). Proving this, however, required the police to turn over the photo in question and the list of possible faceprint matches offered by the system, which they refused to do. Strikingly, the detectives involved in the case admitted that, while the FRT rated Lynch’s faceprint as the closest match, they did not actually know how the rating system worked or even which scale the rating was assigned on. Ultimately, the court ruled in favour of the Police Department, and Lynch was never given access to the photo and potential matches [ 31 ].

On a federal level, the issues of a lack of transparency and accountability persist; an attempt by the American Civil Liberties Union (ACLU) to gather information about the use of FRT by the Department of Justice, the FBI and the Drug Enforcement Administration failed, since none of the agencies responded to a Freedom of Information Act request. Undeterred, the ACLU pursued legal action, with results yet to be seen—there has been no information about the case since October 2019, when the initial complaint was filed [ 32 ]. In addition, the ACLU has called out the Government’s and private enterprises’ surveillance operations at airports and customs boundaries across the USA [ 33 ].

As previously noted, private companies are not caught by freedom of information laws, and can often afford legal firepower beyond the reach of even the wealthiest individuals. Clearview AI, one of the leading providers of FRT to USA law enforcement agencies, supplies the technologies to more than 600 police departments across the USA [ 34 ]; the ACLU filed a lawsuit against the company in the state of Illinois, arguing that it collected faceprints without consent, as required by the state’s Biometric Information Privacy Act [ 35 ]. Filed in May 2020, the case remains active at the time of writing, accumulating a seemingly endless stream of motions, memoranda, and briefs from both sides. The amount and complexity of the legal paperwork on a case that has not even been heard yet illustrates how fiercely the company opposes any effort to hold it accountable, and it is problematic for ordinary citizens to follow such a lawsuit through on their own, although crowdsourcing and group action have become a reality for legal cases, as seen in the actions brought by the Austrian Max Schrems in the EU. In addition, a class action has been brought against the department store Macy’s in Illinois for its use of FRT [ 36 ], so such legal action may become more common. Nevertheless, a mature democratic nation should have other solutions in place.

For most citizens, this absence of any realistic threat of litigation removes the proverbial sword hanging above FRT providers’ heads, allowing them a free-for-all feast on user information. For instance, Clearview AI openly discloses that it scrapes Facebook user profiles for images to build up its reference database [ 34 ], even though this action is explicitly prohibited by the website’s terms of service. IBM, in similar fashion, collected individuals’ Flickr photos without consent; the affected users were not given a feasible way of deleting their information from the database [ 37 ]. Such an absence of data protection and privacy rights is hugely problematic.

Conclusion and recommendations

FRT is no longer a topic of science fiction or a concern for the future. It is here now, impacting people’s lives on a daily basis, from wrongful arrests to privacy invasions and human rights infringements. The widespread adoption of this technology without appropriate considerations could have catastrophic outcomes, and ultimately may jeopardise its development if some jurisdictions decide to ban the use of the technology for an indefinite amount of time [ 38 ]. However, critical in the success of FRT is the transparency and accountability in each stage of its development and usage and the ability to audit and challenge as required. The idea of power is particularly linked to the intended, and actual, outcomes of FRT, which should not be dissociated from discussions around accountability.

The discussion in this article makes the case that at every stage of the FRT process, in all aspects of design and use, including specific contexts of deployment, there is a requirement to document and account for usage, ensuring mechanisms for transparency and challenge. The GDPR provides a good regulatory starting point for addressing some of these concerns. However, the ethical considerations of this technology go far beyond issues of privacy and transparency alone; they require broader consideration of equality, diversity, and inclusion, as well as human rights issues more generally. As such, other forms of assessment, such as Human Rights Impact Assessments, should be part of the development and rollout of FRT in addition to DPIA; a DPIA alone is insufficient. These assessments should automatically be required to be put into the public domain. The requirements must be enacted equally upon public and private enterprises, with transparency and accountability obligations. In conjunction with these steps, global regulators are needed with powers to actively investigate each aspect of the development and deployment of FRT in particular contexts, and with powers to step in, stop, and fine inappropriate FRT development and deployment. In addition, routine audit processes should be required for FRT deployment, just as they are for financial oversight. The societal impacts of FRT misconduct are not to be underestimated.

We conclude this paper by recommending ten critical ethical questions that need to be considered, researched, and answered in granular detail for law enforcement purposes, and which also carry over to other AI development. It is suggested that these need to be dealt with and regulated for. The questions are:

  • How can this accountability be explicitly exercised, explained, and audited, for a range of stakeholder needs?

We are at a tipping point in the relationships and power structures in place between citizens and law enforcers. We cannot wait to step in and act, and in fact, there are many potential solutions to better ensure ethical FRT deployment. However, this is currently an ethical emergency requiring urgent global attention.

This work received partial funding from the UCL AI Centre.

Declarations

The authors confirm there are no conflicts of interest.

1 For example, see McGregor, L. (2018) ‘Accountability for Governance Choices in Artificial Intelligence: Afterword to Eyal Benvenisti’s Foreword’, European Journal of International Law , 29(4), pp. 1079–1085; Shah, H. (2018) ‘Algorithmic accountability’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 376(2128), p. 20170362. https://doi.org/10.1098/rsta.2017.0362 ; Buhmann, A., Paßmann, J. and Fieseler, C. (2020) ‘Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse’, Journal of Business Ethics, 163(2), pp. 265–280. https://doi.org/10.1007/s10551-019-04226-4 .

2 For the formal definition of the DPIA, see GDPR Article 35.

3 See https://www.openrightsgroup.org/app/uploads/2020/08/Letter-for-MPs-Final-sigs-1.pdf .

4 This particular assessment is available here: https://afr.south-wales.police.uk/wp-content/uploads/2019/10/DPIA-V5.4-Live.pdf .

5 See for example the 2019 report at https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-ii-ensuring-transparency-in-government-use .

All authors contributed equally to the writing, research, and ideas within this article. The initial concept was conceived by Denise Almeida with Konstantin Shmarko initiating the research work.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Past, Present, and Future of Face Recognition: A Review


1. Introduction

  • Natural character: The face is the biometric feature humans naturally use to recognize one another, making it arguably the most intuitive biometric for authentication and identification purposes [ 4 ]. For example, in access control, it is simple for administrators to monitor and evaluate approved persons after authentication using their facial characteristics. This support from ordinary staff (e.g., administrators) can boost the efficiency and applicability of recognition systems. By contrast, identifying fingerprints or irises requires an expert with professional competencies to provide accurate confirmation.
  • Nonintrusive: In contrast to fingerprint or iris capture, facial images can be obtained quickly and without physical contact; people feel more relaxed using the face as a biometric identifier. Moreover, a face recognition device can collect data in a friendly manner that people commonly accept [ 5 ].
  • Less cooperation: Face recognition requires less assistance from the user than iris or fingerprint recognition. For some applications, such as surveillance, a face recognition device may recognize an individual without active subject involvement [ 5 ].
  • We provide an updated review of automated face recognition systems: their history, present state, and future challenges.
  • We present 23 well-known face recognition datasets together with their assessment protocols.
  • We review and summarize nearly 180 scientific publications on facial recognition and its attendant problems of data acquisition and pre-processing, from 1990 to 2020. These publications are classified according to various approaches: holistic, geometric, local-texture, and deep learning, for both 2D and 3D facial recognition. We pay particular attention to deep learning-based methods, currently considered the state of the art in 2D face recognition.
  • We analyze and compare several deep learning methods according to the architecture implemented and their performance assessment metrics.
  • We study the performance of deep learning methods on the most commonly used datasets: (i) the Labeled Faces in the Wild (LFW) dataset [ 10 ] for 2D face recognition, and (ii) Bosphorus and BU-3DFE for 3D face recognition.
  • We discuss new directions and future challenges for facial recognition technology, paying particular attention to 3D recognition.

2. Face Recognition History

  • 1964: The American researchers Bledsoe et al. [ 11 ] studied computer programming for facial recognition. They devised a semi-automatic method in which operators entered twenty measurements, such as the size of the mouth or the eyes.
  • 1977: The system was improved by adding 21 additional markers (e.g., lip width, hair color).
  • 1988: Artificial intelligence was introduced to develop the previously used theoretical tools, which had shown many weaknesses. Linear algebra was used to interpret images differently and to find ways to simplify and manipulate them independently of human markers.
  • 1991: Alex Pentland and Matthew Turk of the Massachusetts Institute of Technology (MIT) presented the first successful example of facial recognition technology, Eigenfaces [ 12 ], which uses the statistical method of principal component analysis (PCA).
  • 1998: To encourage industry and academia to move forward on this topic, the Defense Advanced Research Projects Agency (DARPA) developed the Face Recognition Technology (FERET) [ 13 ] program, which provided a sizable, challenging database composed of 2400 images of 850 persons.
  • 2005: The Face Recognition Grand Challenge (FRGC) [ 14 ] competition was launched to encourage and develop face recognition technology designed to support existing facial recognition initiatives.
  • 2011: Everything accelerated due to deep learning, a machine learning method based on artificial neural networks [ 9 ]. The computer selects the points to be compared, and it learns better as it is supplied with more images.
  • 2014: Facebook learned to recognize faces with its internal algorithm, DeepFace [ 15 ]. The social network claims that its method approaches the performance of the human eye, with an accuracy of about 97%.
  • Apple introduced a facial recognition application in its recent updates, and the technology’s implementation has extended to retail and banking.
  • Mastercard developed Selfie Pay, a facial recognition framework for online transactions.
  • From 2019, people in China who want to buy a new phone must consent to having their faces scanned by the operator.
  • Chinese police have used a smart monitoring system based on live facial recognition; using this system, they arrested a suspect of “economic crime” in 2018 at a concert, where his face, listed in a national database, was identified in a crowd of 50,000 persons.
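The 1991 Eigenfaces approach mentioned in this timeline reduces face images to a handful of principal components and matches faces in that low-dimensional space. The following is a minimal, self-contained sketch with synthetic data; the image size, component count, and variable names are illustrative assumptions, not details of the original system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "face" images: 20 training faces of 8x8 pixels, flattened to vectors.
train_faces = rng.random((20, 64))

# 1. Centre the data around the mean face.
mean_face = train_faces.mean(axis=0)
centred = train_faces - mean_face

# 2. PCA via SVD: rows of vt are the principal components ("eigenfaces").
u, s, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]          # keep the 10 strongest components

# 3. Project any face into the low-dimensional eigenface space.
def project(face):
    return eigenfaces @ (face - mean_face)

# 4. Recognise a probe face by nearest neighbour in that space.
gallery = np.array([project(f) for f in train_faces])
probe = train_faces[3] + rng.normal(0, 0.01, 64)   # noisy copy of face 3
distances = np.linalg.norm(gallery - project(probe), axis=1)
print(distances.argmin())   # nearest gallery face: index 3
```

The key saving is dimensionality: matching happens in 10 dimensions rather than 64 (or, for real images, tens of thousands), which is what made the method tractable on 1990s hardware.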

3. Face Recognition Systems

3.1. Main Steps in Face Recognition Systems

3.2. Assessment Protocols in Face Recognition

4. Available Datasets and Protocols for 2D Face Recognition

4.1. ORL Dataset

4.2. FERET Dataset

4.3. AR Dataset

4.4. XM2VTS Database

4.5. BANCA Dataset

4.6. FRGC Dataset

  • In Experiment 1, two controlled still images of an individual are used: one for the gallery and the other for the probe.
  • In Experiment 2, the four controlled images of a person are distributed between the gallery and the probe.
  • In Experiment 4, a single controlled still image forms the gallery, and a single uncontrolled still image forms the probe.
  • Experiments 3, 5, and 6 are designed for 3D images.
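Gallery/probe verification experiments such as these are typically scored by sweeping a similarity threshold and reporting the true accept rate (TAR) at a fixed false accept rate (FAR). The sketch below illustrates that calculation on synthetic score distributions; the distributions, sample sizes, and target FAR are illustrative assumptions, not values from any FRGC protocol:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic similarity scores: genuine pairs score higher on average.
genuine = rng.normal(0.7, 0.1, 1000)    # same-person comparisons
impostor = rng.normal(0.3, 0.1, 1000)   # different-person comparisons

def verification_rate_at_far(genuine, impostor, target_far=0.001):
    # Choose the threshold so that only `target_far` of impostor scores pass,
    # then measure what fraction of genuine scores clear that bar.
    threshold = np.quantile(impostor, 1.0 - target_far)
    tar = (genuine >= threshold).mean()
    return tar, threshold

tar, thr = verification_rate_at_far(genuine, impostor)
print(f"TAR@FAR=0.1%: {tar:.3f} (threshold {thr:.3f})")
```

Reporting TAR at a fixed FAR (rather than raw accuracy) is what makes results comparable across systems whose scores live on different scales.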

4.7. LFW Database

4.8. CMU Multi-PIE Dataset

4.9. CASIA-WebFace Dataset

4.10. IARPA Janus Benchmark-A

4.11. MegaFace Database

4.12. CFP Dataset

4.13. MS-Celeb-1M Benchmark

4.14. DMFD Database

4.15. VGGFace Database

4.16. VGGFace2 Database

4.17. IARPA Janus Benchmark-B

4.18. MF2 Dataset

4.19. DFW Dataset

  • Impersonation protocol: used only to evaluate performance against impersonation attempts.
  • Obfuscation protocol: used in cases of disguise.
  • Overall performance protocol: used to evaluate any algorithm on the complete dataset.

4.20. IARPA Janus Benchmark-C

4.21. LFR Dataset

4.22. RMFRD and SMFRD: Masked Face Recognition Datasets

  • Masked face detection dataset (MFDD): can be utilized to train an accurate masked face detection model.
  • Real-world masked face recognition dataset (RMFRD): contains 5000 images of 525 persons wearing masks, and 90,000 pictures of the same 525 individuals without masks, collected from the Internet ( Figure 17 ).
  • Simulated masked face recognition dataset (SMFRD): the proposers placed simulated masks on faces in standard large-scale facial datasets, such as the LFW [ 10 ] and CASIA-WebFace [ 30 ] datasets, thus expanding the volume and variety of masked facial recognition data. The SMFRD dataset covers 500,000 facial images of 10,000 persons and can be employed in practice alongside the original unmasked counterparts ( Figure 18 ).

5. Two-Dimensional Face Recognition Approaches

5.1. Holistic Methods

5.2. Geometric Approach

5.3. Local-Texture Approach

5.4. Deep Learning Approach

5.4.1. Introduction to Deep Learning

  • Unsupervised or generative (auto encoder (AE) [ 99 ], Boltzman machine (BM) [ 100 ], recurrent neural network (RNN) [ 101 ], and sum-product network (SPN) [ 102 ]);
  • Supervised or discriminative (convolutional neural network (CNN));
  • Hybrid (deep neural network (DNN) [ 97 , 103 ]).

5.4.2. Convolutional Neural Networks (CNNs)

  • Convolutional layer: This is the CNN’s core building block that aims at extracting features from the input data. Each layer uses a convolution operation to obtain a feature map. After that, the activation or feature maps are fed to the next layer as input data [ 9 ].
  • Pooling layer: This is a non-linear down-sampling [ 104 , 105 ] form that reduces the dimensionality of the feature map while retaining the crucial information. Among the various non-linear pooling functions, max-pooling is the most efficient and superior to sub-sampling [ 106 ].
  • Rectified linear unit (ReLU) layer: This is a non-linear operation that applies the rectifier activation f(x) = max(0, x) element-wise.
  • Fully connected layer (FC): The high-level reasoning in the neural network is done via fully connected layers after applying various convolutional layers and max-pooling layers [ 107 ].
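The four layer types above can be illustrated with a minimal forward pass. The following is a toy, framework-free sketch in plain Python (single channel, hand-picked weights), meant only to show how convolution, ReLU, max-pooling, and a fully connected layer compose — not an implementation of any particular architecture:

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(img) - kh + 1
    w = len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

def relu(fmap):
    """Element-wise rectifier f(x) = max(0, x)."""
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool2x2(fmap):
    """Non-overlapping 2x2 max-pooling: halves each spatial dimension."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

def dense(x, weights, bias):
    """Fully connected layer: one weighted sum per output unit."""
    return [sum(w * v for w, v in zip(ws, x)) + b for ws, b in zip(weights, bias)]

# 5x5 ramp image, 2x2 horizontal-gradient kernel
img = [[float(i + j) for j in range(5)] for i in range(5)]
kernel = [[-1.0, 1.0], [-1.0, 1.0]]
fmap = max_pool2x2(relu(conv2d(img, kernel)))   # 4x4 feature map -> 2x2
flat = [v for row in fmap for v in row]         # flatten before the FC layer
scores = dense(flat, [[0.25] * len(flat)], [0.0])
```

In practice the kernel and dense weights are learned by backpropagation; here they are fixed so the data flow through the layers is easy to trace.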

5.4.3. Popular CNN Architectures

5.4.4. Deep CNN-Based Methods for Face Recognition

Investigations Based on AlexNet Architecture

Investigations Based on VGGNet Architecture

Investigations Based on GoogLeNet Architecture

Investigations Based on LeNet Architecture

Investigations Based on ResNet Architecture

6. Three-Dimensional Face Recognition

6.1. Factual Background and Acquisition Systems

6.1.1. Introduction to 3D Face Recognition

6.1.2. Microsoft Kinect Technology

6.2. Methods and Datasets

6.2.1. Challenges of 3D Facial Recognition

6.2.2. Traditional Methods of Machine Learning

  • Traditional methods of machine learning
  • Deep learning-based methods.

6.2.3. Deep Learning-Based Methods

6.2.4. Three-Dimensional Face Recognition Databases

7. Open Challenges

7.1. Face Recognition and Occlusion

7.2. Heterogeneous Face Recognition

7.3. Face Recognition and Ageing

7.4. Single Sample Face Recognition

  • In real-world applications (e.g., passports, immigration systems), only one sample of each individual is registered in the database and available for the recognition task [ 174 ].
  • Pattern recognition systems require vast training data to ensure the generalization of the learning systems.
  • Deep learning-based approaches are considered powerful techniques in face recognition. Nonetheless, they need a significant amount of training data to perform well [ 9 ].
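One common way to handle the single-sample setting is to compare a probe embedding against a single stored template per identity and reject matches below a threshold (the open-set case). The sketch below illustrates this with cosine similarity; the three-dimensional embeddings, identity names, and the 0.5 threshold are illustrative assumptions, not values from any cited system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.5):
    """Match a probe embedding against one template per enrolled identity.

    Returns the best-matching identity, or None when no similarity clears
    the threshold (open-set rejection of unknown faces).
    """
    best_id, best_sim = None, -1.0
    for pid, template in gallery.items():
        sim = cosine(probe, template)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id if best_sim >= threshold else None

# One enrolled template per identity, as in a passport-style database
gallery = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
match = identify([0.9, 0.1, 0.0], gallery)   # close to alice's template
unknown = identify([0.0, 0.0, 1.0], gallery)  # dissimilar to every template
```

In a deployed system the embeddings would come from a trained network (e.g., a CNN feature extractor), and the threshold would be calibrated on validation data.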

7.5. Face Recognition in Video Surveillance

7.6. Face Recognition and Soft Biometrics

7.7. Face Recognition and Smartphones

7.8. Face Recognition and Internet of Things (IoT)

8. Conclusions

Author Contributions

Conflicts of Interest

  • Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. A Review of Face Recognition Methods. Sensors 2020 , 20 , 342. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • O’Toole, A.J.; Roark, D.A.; Abdi, H. Recognizing moving faces: A psychological and neural synthesis. Trends Cogn. Sci. 2002 , 6 , 261–266. [ Google Scholar ] [ CrossRef ]
  • Dantcheva, A.; Chen, C.; Ross, A. Can facial cosmetics affect the matching accuracy of face recognition systems? In Proceedings of the 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; pp. 391–398. [ Google Scholar ]
  • Sinha, P.; Balas, B.; Ostrovsky, Y.; Russell, R. Face recognition by humans: Nineteen results all computer vision researchers should know about. Proc. IEEE 2006 , 94 , 1948–1962. [ Google Scholar ] [ CrossRef ]
  • Ouamane, A.; Benakcha, A.; Belahcene, M.; Taleb-Ahmed, A. Multimodal depth and intensity face verification approach using LBP, SLF, BSIF, and LPQ local features fusion. Pattern Recognit. Image Anal. 2015 , 25 , 603–620. [ Google Scholar ] [ CrossRef ]
  • Porter, G.; Doran, G. An anatomical and photographic technique for forensic facial identification. Forensic Sci. Int. 2000 , 114 , 97–105. [ Google Scholar ] [ CrossRef ]
  • Li, S.Z.; Jain, A.K. Handbook of Face Recognition , 2nd ed.; Springer Publishing Company: New York, NY, USA, 2011. [ Google Scholar ]
  • Morder-Intelligence. Available online: https://www.mordorintelligence.com/industry-reports/facial-recognition-market (accessed on 21 July 2020).
  • Guo, G.; Zhang, N. A survey on deep learning based face recognition. Comput. Vis. Image Underst. 2019 , 189 , 10285. [ Google Scholar ] [ CrossRef ]
  • Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments ; Technical Report; University of Massachusetts: Amherst, MA, USA, 2007; pp. 7–49. [ Google Scholar ]
  • Bledsoe, W.W. The Model Method in Facial Recognition ; Technical Report; Panoramic Research, Inc.: Palo Alto, CA, USA, 1964. [ Google Scholar ]
  • Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991 , 3 , 71–86. [ Google Scholar ] [ CrossRef ]
  • Phillips, P.J.; Wechsler, H.; Huang, J.; Rauss, P. The FERET database and evaluation procedure for face recognition algorithms. Image Vis. Comput. 1998 , 16 , 295–306. [ Google Scholar ] [ CrossRef ]
  • Phillips, P.J.; Flynn, P.J.; Scruggs, T.; Bowyer, K.W.; Chang, J.; Hoffman, K.; Marques, J.; Min, J.; Worek, W. Overview of the face recognition grand challenge. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; pp. 947–954. [ Google Scholar ]
  • Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1701–1708. [ Google Scholar ]
  • Chihaoui, M.; Elkefi, A.; Bellil, W.; Ben Amar, C. A Survey of 2D Face Recognition Techniques. Computers 2016 , 5 , 21. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Benzaoui, A.; Bourouba, H.; Boukrouche, A. System for automatic faces detection. In Proceedings of the 2012 3rd International Conference on Image Processing, Theory, Tools and Applications (IPTA), Istanbul, Turkey, 15–18 October 2012; pp. 354–358. [ Google Scholar ]
  • Martinez, A.M. Recognizing imprecisely localized, partially occluded and expression variant faces from a single sample per class. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 2002 , 24 , 748–763. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Sidahmed, S.; Messali, Z.; Ouahabi, A.; Trépout, S.; Messaoudi, C.; Marco, S. Nonparametric denoising methods based on contourlet transform with sharp frequency localization: Application to electron microscopy images with low exposure time. Entropy 2015 , 17 , 2781–2799. [ Google Scholar ]
  • Ouahabi, A. Image Denoising using Wavelets: Application in Medical Imaging. In Advances in Heuristic Signal Processing and Applications ; Chatterjee, A., Nobahari, H., Siarry, P., Eds.; Springer: Basel, Switzerland, 2013; pp. 287–313. [ Google Scholar ]
  • Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the International Workshop on Systems, Signal Processing and Their Applications (IEEE/WOSSPA’13), Algiers, Algeria, 12–15 May 2013; pp. 19–26. [ Google Scholar ]
  • Nakanishi, A.Y.J.; Western, B.J. Advancing the State-of-the-Art in Transportation Security Identification and Verification Technologies: Biometric and Multibiometric Systems. In Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference, Seattle, WA, USA, 30 September–3 October 2007; pp. 1004–1009. [ Google Scholar ]
  • Samaria, F.S.; Harter, A.C. Parameterization of a Stochastic Model for Human Face Identification. In Proceedings of the 1994 IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994; pp. 138–142. [ Google Scholar ]
  • Martinez, A.M.; Benavente, R. The AR face database. CVC Tech. Rep. 1998 , 24 , 1–10. [ Google Scholar ]
  • Messer, K.; Matas, J.; Kittler, J.; Jonsson, K. XM2VTSDB: The extended M2VTS database. In Proceedings of the 1999 2nd International Conference on Audio and Video-based Biometric Person Authentication (AVBPA), Washington, DC, USA, 22–24 March 1999; pp. 72–77. [ Google Scholar ]
  • Bailliére, E.A.; Bengio, S.; Bimbot, F.; Hamouz, M.; Kittler, J.; Mariéthoz, J.; Matas, J.; Messer, K.; Popovici, V.; Porée, F.; et al. The BANCA Database and Evaluation Protocol. In Proceedings of the 2003 International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), Guildford, UK, 9–11 June 2003; pp. 625–638. [ Google Scholar ]
  • Huang, G.B.; Jain, V.; Miller, E.L. Unsupervised joint alignment of complex images. In Proceedings of the 2007 IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8. [ Google Scholar ]
  • Huang, G.; Mattar, M.; Lee, H.; Miller, E.G.L. Learning to align from scratch. Adv. Neural Inf. Process. Syst. 2012 , 25 , 764–772. [ Google Scholar ]
  • Gross, R.; Matthews, L.; Cohn, J.; Kanade, T.; Baker, S. Multi-PIE. Image Vis. Comput. 2010 , 28 , 807–813. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • CASIA Web Face. Available online: http://www.cbsr.ia.ac.cn/english/CASIA-WebFace-Database.html (accessed on 21 July 2019).
  • Klare, B.F.; Klein, B.; Taborsky, E.; Blanton, A.; Cheney, J.; Allen, K.; Grother, P.; Mah, A.; Burge, M.; Jain, A.K. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1931–1939. [ Google Scholar ]
  • Shlizerman, I.K.; Seitz, S.M.; Miller, D.; Brossard, E. The MegaFace benchmark: 1 million faces for recognition at scale. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 4873–4882. [ Google Scholar ]
  • Shlizerman, I.K.; Suwajanakorn, S.; Seitz, S.M. Illumination-aware age progression. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3334–3341. [ Google Scholar ]
  • Ng, H.W.; Winkler, S. A data-driven approach to cleaning large face datasets. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 343–347. [ Google Scholar ]
  • Sengupta, S.; Cheng, J.; Castillo, C.; Patel, V.M.; Chellappa, R.; Jacobs, D.W. Frontal to Profile Face Verification in the Wild. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–9. [ Google Scholar ]
  • Guo, Y.; Zhang, L.; Hu, Y.; He, X.; Gao, J. Ms-Celeb-1m: A dataset and benchmark for large-scale face recognition. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016. [ Google Scholar ]
  • Wang, T.Y.; Kumar, A. Recognizing Human Faces under Disguise and Makeup. In Proceedings of the 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, Japan, 29 February–2 March 2016; pp. 1–7. [ Google Scholar ]
  • Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep Face Recognition. In Proceedings of the 2015 British Machine Vision Conference, Swansea, UK, 7–10 September 2015; pp. 41.1–41.12. [ Google Scholar ]
  • Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A dataset for recognizing faces across pose and age. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG), Xi’an, China, 15–19 May 2018; pp. 67–74. [ Google Scholar ]
  • Whitelam, C.; Taborsky, E.; Blanton, A.; Maze, B.; Adams, J.; Miller, T.; Kalka, N.; Jain, A.K.; Duncan, J.A.; Allen, K. IARPA Janus Benchmark-B face dataset. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 592–600. [ Google Scholar ]
  • Nech, A.; Shlizerman, I.K. Level playing field for million scale face recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3406–3415. [ Google Scholar ]
  • Kushwaha, V.; Singh, M.; Singh, R.; Vatsa, M. Disguised Faces in the Wild. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1–18. [ Google Scholar ]
  • Maze, B.; Adams, J.; Duncan, J.A.; Kalka, N.; Miller, T.; Otto, C.; Jain, A.K.; Niggel, W.T.; Anderson, J.; Cheney, J.; et al. IARPA Janus benchmark-C: Face dataset and protocol. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 158–165. [ Google Scholar ]
  • Elharrouss, O.; Almaadeed, N.; Al-Maadeed, S. LFR face dataset: Left-Front-Right dataset for pose-invariant face recognition in the wild. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 124–130. [ Google Scholar ]
  • Wang, Z.; Wang, G.; Huang, B.; Xiong, Z.; Hong, Q.; Wu, H.; Yi, P.; Jiang, K.; Wang, N.; Pei, Y.; et al. Masked Face Recognition Dataset and Application. arXiv 2020 , arXiv:2003.09093v2. [ Google Scholar ]
  • Belhumeur, P.N.; Hespanha, J.P.; Kriegman, D.J. Eigenfaces vs Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1997 , 19 , 711–720. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Stone, J.V. Independent component analysis: An introduction. Trends Cogn. Sci. 2002 , 6 , 59–64. [ Google Scholar ] [ CrossRef ]
  • Sirovich, L.; Kirby, M. Low-Dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. 1987 , 4 , 519–524. [ Google Scholar ] [ CrossRef ]
  • Kirby, M.; Sirovich, L. Application of the Karhunen-Loève procedure for the characterization of human faces. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1990 , 12 , 831–835. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Femmam, S.; M’Sirdi, N.K.; Ouahabi, A. Perception and characterization of materials using signal processing techniques. IEEE Trans. Instrum. Meas. 2001 , 50 , 1203–1211. [ Google Scholar ] [ CrossRef ]
  • Zhao, L.; Yang, Y.H. Theoretical analysis of illumination in PCA-based vision systems. Pattern Recognit. 1999 , 32 , 547–564. [ Google Scholar ] [ CrossRef ]
  • Pentland, A.; Moghaddam, B.; Starner, T. View-Based and modular eigenspaces for face recognition. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 21–23 June 1994; pp. 84–91. [ Google Scholar ]
  • Bartlett, M.; Movellan, J.; Sejnowski, T. Face Recognition by Independent Component Analysis. IEEE Trans. Neural Netw. 2002 , 13 , 1450–1464. [ Google Scholar ] [ CrossRef ]
  • Abhishree, T.M.; Latha, J.; Manikantan, K.; Ramachandran, S. Face recognition using Gabor Filter based feature extraction with anisotropic diffusion as a pre-processing technique. Procedia Comput. Sci. 2015 , 45 , 312–321. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Zehani, S.; Ouahabi, A.; Oussalah, M.; Mimi, M.; Taleb-Ahmed, A. Trabecular bone microarchitecture characterization based on fractal model in spatial frequency domain imaging. Int. J. Imaging Syst. Technol. accepted.
  • Ouahabi, A. Signal and Image Multiresolution Analysis , 1st ed.; ISTE-Wiley: London, UK, 2012. [ Google Scholar ]
  • Guetbi, C.; Kouame, D.; Ouahabi, A.; Chemla, J.P. Methods based on wavelets for time delay estimation of ultrasound signals. In Proceedings of the 1998 IEEE International Conference on Electronics, Circuits and Systems, Lisbon, Portugal, 7–10 September 1998; pp. 113–116. [ Google Scholar ]
  • Ferroukhi, M.; Ouahabi, A.; Attari, M.; Habchi, Y.; Taleb-Ahmed, A. Medical video coding based on 2nd-generation wavelets: Performance evaluation. Electronics 2019 , 8 , 88. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Wang, M.; Jiang, H.; Li, Y. Face recognition based on DWT/DCT and SVM. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM), Taiyuan, China, 22–24 October 2010; pp. 507–510. [ Google Scholar ]
  • Bookstein, F.L. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1989 , 11 , 567–585. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Shih, F.Y.; Chuang, C. Automatic extraction of head and face boundaries and facial features. Inf. Sci. 2004 , 158 , 117–130. [ Google Scholar ] [ CrossRef ]
  • Zobel, M.; Gebhard, A.; Paulus, D.; Denzler, J.; Niemann, H. Robust facial feature localization by coupled features. In Proceedings of the 2000 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG), Grenoble, France, 26–30 March 2000; pp. 2–7. [ Google Scholar ]
  • Wiskott, L.; Fellous, J.M.; Malsburg, C.V.D. Face recognition by elastic bunch graph matching. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1997 , 19 , 775–779. [ Google Scholar ] [ CrossRef ]
  • Xue, Z.; Li, S.Z.; Teoh, E.K. Bayesian shape model for facial feature extraction and recognition. Pattern Recognit. 2003 , 36 , 2819–2833. [ Google Scholar ] [ CrossRef ]
  • Tistarelli, M. Active/space-variant object recognition. Image Vis. Comput. 1995 , 13 , 215–226. [ Google Scholar ] [ CrossRef ]
  • Lades, M.; Vorbuggen, J.C.; Buhmann, J.; Lange, J.; Malsburg, C.V.D.; Wurtz, R.P.; Konen, W. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Comput. 1993 , 42 , 300–311. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Wiskott, L. Phantom faces for face analysis. Pattern Recognit. 1997 , 30 , 837–846. [ Google Scholar ] [ CrossRef ]
  • Duc, B.; Fischer, S.; Bigun, J. Face authentication with Gabor information on deformable graphs. IEEE Trans. Image Process. 1999 , 8 , 504–516. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Kotropoulos, C.; Tefas, A.; Pitas, I. Frontal face authentication using morphological elastic graph matching. IEEE Trans. Image Process. 2000 , 9 , 555–560. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Jackway, P.T.; Deriche, M. Scale-space properties of the multiscale morphological dilation-erosion. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1996 , 18 , 38–51. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Tefas, A.; Kotropoulos, C.; Pitas, I. Face verification using elastic graph matching based on morphological signal decomposition. Signal Process. 2002 , 82 , 833–851. [ Google Scholar ] [ CrossRef ]
  • Kumar, D.; Garaina, J.; Kisku, D.R.; Sing, J.K.; Gupta, P. Unconstrained and Constrained Face Recognition Using Dense Local Descriptor with Ensemble Framework. Neurocomputing 2020 . [ Google Scholar ] [ CrossRef ]
  • Zehani, S.; Ouahabi, A.; Mimi, M.; Taleb-Ahmed, A. Statistical features extraction in wavelet domain for texture classification. In Proceedings of the 2019 6th International Conference on Image and Signal Processing and their Applications (IEEE/ISPA), Mostaganem, Algeria, 24–25 November 2019; pp. 1–5. [ Google Scholar ]
  • Ait Aouit, D.; Ouahabi, A. Nonlinear Fracture Signal Analysis Using Multifractal Approach Combined with Wavelet. Fractals Complex Geom. Patterns Scaling Nat. Soc. 2011 , 19 , 175–183. [ Google Scholar ] [ CrossRef ]
  • Girault, J.M.; Kouame, D.; Ouahabi, A. Analytical formulation of the fractal dimension of filtered stochastic signal. Signal Process. 2010 , 90 , 2690–2697. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Djeddi, M.; Ouahabi, A.; Batatia, H.; Basarab, A.; Kouamé, D. Discrete wavelet transform for multifractal texture classification: Application to ultrasound imaging. In Proceedings of the IEEE International Conference on Image Processing (IEEE ICIP2010), Hong Kong, China, 26–29 September 2010; pp. 637–640. [ Google Scholar ]
  • Ouahabi, A. Multifractal analysis for texture characterization: A new approach based on DWT. In Proceedings of the 10th International Conference on Information Science, Signal Processing and Their Applications (IEEE/ISSPA), Kuala Lumpur, Malaysia, 10–13 May 2010; pp. 698–703. [ Google Scholar ]
  • Davies, E.R. Introduction to texture analysis. In Handbook of Texture Analysis ; Mirmehdi, M., Xie, X., Suri, J., Eds.; Imperial College Press: London, UK, 2008; pp. 1–31. [ Google Scholar ]
  • Benzaoui, A.; Hadid, A.; Boukrouche, A. Ear biometric recognition using local texture descriptors. J. Electron. Imaging 2014 , 23 , 053008. [ Google Scholar ] [ CrossRef ]
  • Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, 11–14 May 2004; pp. 469–481. [ Google Scholar ]
  • Ahonen, T.; Hadid, A.; Pietikäinen, M. Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006 , 28 , 2037–2041. [ Google Scholar ] [ CrossRef ]
  • Beveridge, J.R.; Bolme, D.; Draper, B.A.; Teixeira, M. The CSU face identification evaluation system: Its purpose, features, and structure. Mach. Vis. Appl. 2005 , 16 , 128–138. [ Google Scholar ] [ CrossRef ]
  • Moghaddam, B.; Nastar, C.; Pentland, A. A bayesian similarity measure for direct image matching. In Proceedings of the 13th International Conference on Pattern Recognition (ICPR), Vienna, Austria, 25–29 August 1996; pp. 350–358. [ Google Scholar ]
  • Rodriguez, Y.; Marcel, S. Face authentication using adapted local binary pattern histograms. In Proceedings of the 9th European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006; pp. 321–332. [ Google Scholar ]
  • Sadeghi, M.; Kittler, J.; Kostin, A.; Messer, K. A comparative study of automatic face verification algorithms on the banca database. In Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), Guilford, UK, 9–11 June 2003; pp. 35–43. [ Google Scholar ]
  • Huang, X.; Li, S.Z.; Wang, Y. Jensen-shannon boosting learning for object recognition. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; pp. 144–149. [ Google Scholar ]
  • Boutella, E.; Harizi, F.; Bengherabi, M.; Ait-Aoudia, S.; Hadid, A. Face verification using local binary patterns and generic model adaptation. Int. J. Biomed. 2015 , 7 , 31–44. [ Google Scholar ] [ CrossRef ]
  • Benzaoui, A.; Boukrouche, A. 1DLBP and PCA for face recognition. In Proceedings of the 2013 11th International Symposium on Programming and Systems (ISPS), Algiers, Algeria, 22–24 April 2013; pp. 7–11. [ Google Scholar ]
  • Benzaoui, A.; Boukrouche, A. Face Recognition using 1DLBP Texture Analysis. In Proceedings of the 5th International Conference of Future Computational Technologies and Applications, Valencia, Spain, 27 May–1 June 2013; pp. 14–19. [ Google Scholar ]
  • Benzaoui, A.; Boukrouche, A. Face Analysis, Description, and Recognition using Improved Local Binary Patterns in One Dimensional Space. J. Control Eng. Appl. Inform. (CEAI) 2014 , 16 , 52–60. [ Google Scholar ]
  • Ahonen, T.; Rathu, E.; Ojansivu, V.; Heikkilä, J. Recognition of Blurred Faces Using Local Phase Quantization. In Proceedings of the 19th International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [ Google Scholar ]
  • Ojansivu, V.; Heikkil, J. Blur insensitive texture classification using local phase quantization. In Proceedings of the 3rd International Conference on Image and Signal Processing (ICSIP), Cherbourg-Octeville, France, 1–3 July 2008; pp. 236–243. [ Google Scholar ]
  • Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. In Proceedings of the 3rd International Workshop on Analysis and Modeling of Faces and Gestures (AMFG), Rio de Janeiro, Brazil, 20 October 2007; pp. 168–182. [ Google Scholar ]
  • Lei, Z.; Ahonen, T.; Pietikainen, M.; Li, S.Z. Local Frequency Descriptor for Low-Resolution Face Recognition. In Proceedings of the 9th Conference on Automatic Face and Gesture Recognition (FG), Santa Barbara, CA, USA, 21–25 March 2011; pp. 161–166. [ Google Scholar ]
  • Kannala, J.; Rahtu, E. BSIF: Binarized statistical image features. In Proceedings of the 21th International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1363–1366. [ Google Scholar ]
  • Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015 , 61 , 85–117. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Deng, L. A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Trans. Signal Inf. Process. 2014 , 3 , 1–29. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends Signal Process. 2014 , 7 , 197–387. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010 , 11 , 3371–3408. [ Google Scholar ]
  • Salakhutdinov, R.; Hinton, G. Deep Boltzmann machines. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, Clearwater, FL, USA, 16–19 April 2009; pp. 448–455. [ Google Scholar ]
  • Sutskever, I.; Martens, J.; Hinton, G. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, WA, USA, 28 June–2 July 2011; pp. 1017–1024. [ Google Scholar ]
  • Poon, H.; Domingos, P. Sum-product networks: A new deep architecture. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 689–690. [ Google Scholar ]
  • Kim, K.; Aminanto, M.E. Deep Learning in Intrusion Detection Perspective: Overview and further Challenges. In Proceedings of the International Workshop on Big Data and Information Security (IWBIS), Jakarta, Indonesia, 23–24 September 2017; pp. 5–10. [ Google Scholar ]
  • Ouahabi, A. Analyse spectrale paramétrique de signaux lacunaires. Traitement Signal 1992 , 9 , 181–191. [ Google Scholar ]
  • Ouahabi, A.; Lacoume, J.-L. New results in spectral estimation of decimated processes. IEEE Electron. Lett. 1991 , 27 , 1430–1432. [ Google Scholar ] [ CrossRef ]
  • Scherer, D.; Müller, A.; Behnke, S. Evaluation of pooling operations in convolutional architectures for object recognition. In Proceedings of the 2010 International Conference on Artificial Neural Networks, Thessaloniki, Greece, 15–18 September 2010; pp. 92–101. [ Google Scholar ]
  • Coşkun, M.; Uçar, A.; Yildirim, Ö.; Demir, Y. Face recognition based on convolutional neural network. In Proceedings of the 2017 International Conference on Modern Electrical and Energy Systems (MEES), Kremenchuk, Ukraine, 15–17 November 2017; pp. 376–379. [ Google Scholar ]
  • Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998 , 86 , 2278–2324. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015 , 115 , 211–252. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [ Google Scholar ]
  • Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014. [ Google Scholar ]
  • Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [ Google Scholar ]
  • He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [ Google Scholar ]
  • Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 2019 , 42 , 7132–7141. [ Google Scholar ]
  • Chopra, S.; Hadsell, R.; LeCun, Y. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 539–546. [ Google Scholar ]
  • Sun, Y.; Wang, X.; Tang, X. Deep learning face representation from predicting 10,000 classes. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1891–1898. [ Google Scholar ]
  • Sun, Y.; Chen, Y.; Wang, X.; Tang, X. Deep learning face representation by joint identification-verification. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 1988–1996. [ Google Scholar ]
  • Sun, Y.; Wang, X.; Tang, X. Deeply learned face representations are sparse, selective, and robust. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 2892–2900. [ Google Scholar ]
  • Sun, Y.; Liang, D.; Wang, X.; Tang, X. DeepID3: Face Recognition with Very Deep Neural Networks. arXiv 2015 , arXiv:1502.00873v1. [ Google Scholar ]
  • Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. Web-Scale training for face identification. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 2746–2754. [ Google Scholar ]
  • Ouahabi, A.; Depollier, C.; Simon, L.; Kouame, D. Spectrum estimation from randomly sampled velocity data [LDV]. IEEE Trans. Instrum. Meas. 1998 , 47 , 1005–1012. [ Google Scholar ] [ CrossRef ]
  • Liu, J.; Deng, Y.; Bai, T.; Huang, C. Targeting ultimate accuracy: Face recognition via deep embedding. arXiv 2015, arXiv:1506.07310v4.
  • Masi, I.; Tran, A.T.; Hassner, T.; Leksut, J.T.; Medioni, G. Do we really need to collect millions of faces for effective face recognition? In Proceedings of the 2016 European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 579–596.
  • Zhang, X.; Fang, Z.; Wen, Y.; Li, Z.; Qiao, Y. Range loss for deep face recognition with Long-Tailed Training Data. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5419–5428.
  • Liu, W.; Wen, Y.; Yu, Z.; Yang, M. Large-margin softmax loss for convolutional neural networks. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 507–516.
  • Chen, B.; Deng, W.; Du, J. Noisy Softmax: Improving the Generalization Ability of DCNN via Postponing the Early Softmax Saturation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4021–4030.
  • Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
  • Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. arXiv 2013, arXiv:1311.2901v3.
  • Ben Fredj, H.; Bouguezzi, S.; Souani, C. Face recognition in unconstrained environment with CNN. Vis. Comput. 2020, 1–10.
  • Wen, Y.; Zhang, K.; Li, Z.; Qiao, Y. A discriminative feature learning approach for deep face recognition. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 499–515.
  • Wu, Y.; Liu, H.; Li, J.; Fu, Y. Deep Face Recognition with Center Invariant Loss. In Proceedings of the Thematic Workshop of ACM Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 408–414.
  • Yin, X.; Yu, X.; Sohn, K.; Liu, X.; Chandraker, M. Feature Transfer Learning for Face Recognition with Under-Represented Data. In Proceedings of the 2019 International Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019.
  • Ranjan, R.; Castillo, C.D.; Chellappa, R. L2-constrained softmax loss for discriminative face verification. arXiv 2017, arXiv:1703.09507v3.
  • Deng, J.; Zhou, Y.; Zafeiriou, S. Marginal Loss for Deep Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 2006–2014.
  • Wang, F.; Xiang, X.; Cheng, J.; Yuille, A.L. NormFace: L2 Hypersphere Embedding for Face Verification. In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 1041–1049.
  • Liu, Y.; Li, H.; Wang, X. Rethinking Feature Discrimination and Polymerization for Large-Scale Recognition. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS), Deep Learning Workshop, Long Beach, CA, USA, 4–9 December 2017.
  • Hasnat, M.; Bohné, J.; Milgram, J.; Gentric, S.; Chen, L. Von Mises-Fisher Mixture Model-based Deep Learning: Application to Face Verification. arXiv 2017, arXiv:1706.04264v2.
  • Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; Song, L. SphereFace: Deep Hypersphere Embedding for Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6738–6746.
  • Zheng, Y.; Pal, D.K.; Savvides, M. Ring Loss: Convex Feature Normalization for Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5089–5097.
  • Guo, Y.; Zhang, L. One-Shot Face Recognition by Promoting Underrepresented Classes. arXiv 2018, arXiv:1707.05574v2.
  • Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Liu, W. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5265–5274.
  • Wang, F.; Cheng, J.; Liu, W.; Liu, H. Additive Margin Softmax for Face Verification. IEEE Signal Process. Lett. 2018, 25, 926–930.
  • Wu, X.; He, R.; Sun, Z.; Tan, T. A Light CNN for Deep Face Representation with Noisy Labels. IEEE Trans. Inf. Forensics Secur. 2018, 13, 2884–2896.
  • Hayat, M.; Khan, S.H.; Zamir, W.; Shen, J.; Shao, L. Gaussian Affinity for Max-margin Class Imbalanced Learning. In Proceedings of the 2019 International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019.
  • Deng, J.; Guo, J.; Zafeiriou, S. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In Proceedings of the 2019 International Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 4690–4699.
  • Huang, C.; Li, Y.; Loy, C.C.; Tang, X. Deep Imbalanced Learning for Face Recognition and Attribute Prediction. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 2019. Available online: https://ieeexplore.ieee.org/document/8708977 (accessed on 21 July 2020).
  • Song, L.; Gong, D.; Li, Z.; Liu, C.; Liu, W. Occlusion Robust Face Recognition Based on Mask Learning with Pairwise Differential Siamese Network. In Proceedings of the 2019 International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019.
  • Wei, X.; Wang, H.; Scotney, B.; Wan, H. Minimum margin loss for deep face recognition. Pattern Recognit. 2020, 97, 107012.
  • Sun, J.; Yang, W.; Gao, R.; Xue, J.H.; Liao, Q. Inter-class angular margin loss for face recognition. Signal Process. Image Commun. 2020, 80, 115636.
  • Wu, Y.; Wu, Y.; Wu, R.; Gong, Y.; Lv, K.; Chen, K.; Liang, D.; Hu, X.; Liu, X.; Yan, J. Rotation consistent margin loss for efficient low-bit face recognition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–18 June 2020; pp. 6866–6876.
  • Ling, H.; Wu, J.; Huang, J.; Li, P. Attention-based convolutional neural network for deep face recognition. Multimed. Tools Appl. 2020, 79, 5595–5616.
  • Wu, B.; Wu, H. Angular Discriminative Deep Feature Learning for Face Verification. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2133–2137.
  • Chen, D.; Cao, X.; Wang, L.; Wen, F.; Sun, J. Bayesian face revisited: A joint formulation. In Proceedings of the European Conference on Computer Vision (ECCV), Firenze, Italy, 7–13 October 2012; pp. 566–579.
  • Chen, B.C.; Chen, C.S.; Hsu, W.H. Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset. IEEE Trans. Multimed. 2015, 17, 804–815.
  • Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep learning face attributes in the wild. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 11–18 December 2015; pp. 3730–3738.
  • Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284.
  • Oumane, A.; Belahcene, M.; Benakcha, A.; Bourennane, S.; Taleb-Ahmed, A. Robust Multimodal 2D and 3D Face Authentication using Local Feature Fusion. Signal Image Video Process. 2016, 10, 12–137.
  • Oumane, A.; Boutella, E.; Benghherabi, M.; Taleb-Ahmed, A.; Hadid, A. A Novel Statistical and Multiscale Local Binary Feature for 2D and 3D Face Verification. Comput. Electr. Eng. 2017, 62, 68–80.
  • Soltanpour, S.; Boufama, B.; Wu, Q.M.J. A survey of local feature methods for 3D face recognition. Pattern Recognit. 2017, 72, 391–406.
  • Zhou, S.; Xiao, S. 3D Face Recognition: A Survey. Hum. Cent. Comput. Inf. Sci. 2018, 8, 8–35.
  • Min, R.; Kose, N.; Dugelay, J. KinectFaceDB: A Kinect Database for Face Recognition. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 1534–1548.
  • Drira, H.; Ben Amor, B.; Srivastava, A.; Daoudi, M.; Slama, R. 3D Face Recognition under Expressions, Occlusions, and Pose Variations. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2270–2283.
  • Ribeiro Alexandre, G.; Marques Soares, J.; Pereira Thé, G.A. Systematic review of 3D facial expression recognition methods. Pattern Recognit. 2020, 100, 107108.
  • Ríos-Sánchez, B.; Costa-da-Silva, D.; Martín-Yuste, N.; Sánchez-Ávila, C. Deep Learning for Facial Recognition on Single Sample per Person Scenarios with Varied Capturing Conditions. Appl. Sci. 2019, 9, 5474.
  • Kim, D.; Hernandez, M.; Choi, J.; Medioni, G. Deep 3D face identification. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 133–142.
  • Gilani, S.Z.; Mian, A.; Eastwood, P. Deep, dense and accurate 3D face correspondence for generating population specific deformable models. Pattern Recognit. 2017, 69, 238–250.
  • Gilani, S.Z.; Mian, A.; Shafait, F.; Reid, I. Dense 3D face correspondence. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 2018, 40, 1584–1598.
  • Gilani, S.Z.; Mian, A. Learning from Millions of 3D Scans for Large-scale 3D Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1896–1905.
  • Mimouna, A.; Alouani, I.; Ben Khalifa, A.; El Hillali, Y.; Taleb-Ahmed, A.; Menhaj, A.; Ouahabi, A.; Ben Amara, N.E. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics 2020, 9, 560.
  • Benzaoui, A.; Boukrouche, A.; Doghmane, H.; Bourouba, H. Face recognition using 1DLBP, DWT, and SVM. In Proceedings of the 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT), Tlemcen, Algeria, 25–27 May 2015; pp. 1–6.
  • Ait Aouit, D.; Ouahabi, A. Monitoring crack growth using thermography (Suivi de fissuration de matériaux par thermographie). C. R. Mécanique 2008, 336, 677–683.
  • Arya, S.; Pratap, N.; Bhatia, K. Future of Face Recognition: A Review. Procedia Comput. Sci. 2015, 58, 578–585.
  • Zafeiriou, S.; Zhang, C.; Zhang, Z. A survey on face detection in the wild: Past, present and future. Comput. Vis. Image Underst. 2015, 138, 1–24.
  • Min, R.; Xu, S.; Cui, Z. Single-Sample Face Recognition Based on Feature Expansion. IEEE Access 2019, 7, 45219–45229.
  • Zhang, D.; An, P.; Zhang, H. Application of robust face recognition in video surveillance systems. Optoelectron. Lett. 2018, 14, 152–155.
  • Tome, P.; Vera-Rodriguez, R.; Fierrez, J.; Ortega-Garcia, J. Facial soft biometric features for forensic face recognition. Forensic Sci. Int. 2015, 257, 271–284.
  • Fathy, M.E.; Patel, V.M.; Chellappa, R. Face-based Active Authentication on mobile devices. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, QLD, Australia, 19–24 April 2015; pp. 1687–1691.
  • Medapati, P.K.; Murthy, P.H.S.T.; Sridhar, K.P. LAMSTAR: For IoT-based face recognition system to manage the safety factor in smart cities. Trans. Emerg. Telecommun. Technol. 2019, 1–15. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1002/ett.3843?af=R (accessed on 10 July 2020).


| Database | Release Year | Images | Subjects | Images/Subject |
|---|---|---|---|---|
| ORL [ ] | 1994 | 400 | 40 | 10 |
| FERET [ ] | 1996 | 14,126 | 1199 | – |
| AR [ ] | 1998 | 3016 | 116 | 26 |
| XM2VTS [ ] | 1999 | – | 295 | – |
| BANCA [ ] | 2003 | – | 208 | – |
| FRGC [ ] | 2006 | 50,000 | – | 7 |
| LFW [ ] | 2007 | 13,233 | 5749 | ≈2.3 |
| CMU Multi-PIE [ ] | 2009 | >750,000 | 337 | N/A |
| IJB-A [ ] | 2015 | 5712 | 500 | ≈11.4 |
| CFP [ ] | 2016 | 7000 | 500 | >14 |
| DMFD [ ] | 2016 | 2460 | 410 | 6 |
| IJB-B [ ] | 2017 | 21,798 | 1845 | ≈36.2 |
| MF2 [ ] | 2017 | 4.7 M | 672,057 | ≈7 |
| DFW [ ] | 2018 | 11,157 | 1000 | ≈5.26 |
| IJB-C [ ] | 2018 | 31,334 | 3531 | ≈6 |
| LFR [ ] | 2020 | 30,000 | 542 | 10–260 |
| RMFRD [ ] | 2020 | 95,000 | 525 | – |
| SMFRD [ ] | 2020 | 500,000 | 10,000 | – |
| Database | Release Year | Images | Subjects | Images/Subject |
|---|---|---|---|---|
| CASIA WebFace [ ] | 2014 | 494,414 | 10,575 | ≈46.8 |
| MegaFace [ ] | 2016 | 1,027,060 | 690,572 | ≈1.4 |
| MS-Celeb-1M [ ] | 2016 | 10 M | 100,000 | 100 |
| VGGFACE [ ] | 2016 | 2.6 M | 2622 | 1000 |
| VGGFACE2 [ ] | 2017 | 3.31 M | 9131 | ≈362.6 |
| # | Method | Authors | Year | Architecture | Networks | Verif. Metric | Training Set | Accuracy (%) ± SE |
|---|---|---|---|---|---|---|---|---|
| 1 | DeepFace | Taigman et al. [ ] | 2014 | CNN-9 | 3 | Softmax | Facebook (4.4 M, 4 K) * | 97.35 ± 0.25 |
| 2 | DeepID | Sun et al. [ ] | 2014 | CNN-9 | 60 | Softmax + JB | CelebFaces+ [ ] (202 k, 10 k) * | 97.45 ± 0.26 |
| 3 | DeepID2 | Sun et al. [ ] | 2014 | CNN-9 | 25 | Contrastive Softmax + JB | CelebFaces+ (202 k, 10 k) * | 99.15 ± 0.13 |
| 4 | DeepID2+ | Sun et al. [ ] | 2014 | CNN-9 | 25 | Contrastive Softmax + JB | WDRef [ ] + CelebFaces+ (290 k, 12 k) * | 99.47 ± 0.12 |
| 5 | DeepID3 | Sun et al. [ ] | 2015 | VGGNet | 25 | Contrastive Softmax + JB | WDRef + CelebFaces+ (290 k, 12 k) | 99.53 ± 0.10 |
| 6 | FaceNet | Schroff et al. [ ] | 2015 | GoogleNet | 1 | Triplet Loss | Google (200 M, 8 M) * | 99.63 ± 0.09 |
| 7 | Web-Scale | Taigman et al. [ ] | 2015 | CNN-9 | 4 | Contrastive Softmax | Private database (4.5 M, 55 K) * | 98.37 |
| 8 | BAIDU | Liu et al. [ ] | 2015 | CNN-9 | 10 | Triplet Loss | Private database (1.2 M, 18 K) * | 99.77 |
| 9 | VGGFace | Parkhi et al. [ ] | 2015 | VGGNet | 1 | Triplet Loss | VGGFace (2.6 M, 2.6 K) | 98.95 |
| 10 | Augmentation | Masi et al. [ ] | 2016 | VGGNet-19 | 1 | Softmax | CASIA WebFace (494 k, 10 k) | 98.06 |
| 11 | Range Loss | Zhang et al. [ ] | 2016 | VGGNet-16 | 1 | Range Loss | CASIA WebFace + MS-Celeb-1M (5 M, 100 k) | 99.52 |
| 12 | Center Loss | Wen et al. [ ] | 2016 | LeNet | 1 | Center Loss | CASIA WebFace + CACD2000 [ ] + Celebrity+ [ ] (0.7 M, 17 k) | 99.28 |
| 13 | L-Softmax | Liu et al. [ ] | 2016 | VGGNet-18 | 1 | L-Softmax | CASIA WebFace (490 k, 10 K) | 98.71 |
| 14 | L2-Softmax | Ranjan et al. [ ] | 2017 | ResNet-101 | 1 | L2-Softmax | MS-Celeb-1M (3.7 M, 58 k) | 99.78 |
| 15 | Marginal Loss | Deng et al. [ ] | 2017 | ResNet-27 | 1 | Marginal Loss | MS-Celeb-1M (4 M, 82 k) | 99.48 |
| 16 | NormFace | Wang et al. [ ] | 2017 | ResNet-28 | 1 | Contrastive Loss | CASIA WebFace (494 k, 10 k) | 99.19 ± 0.008 |
| 17 | Noisy Softmax | Chen et al. [ ] | 2017 | VGGNet | 1 | Noisy Softmax | CASIA WebFace (400 K, 14 k) | 99.18 |
| 18 | COCO Loss | Liu et al. [ ] | 2017 | ResNet-128 | 1 | COCO Loss | MS-Celeb-1M (3 M, 80 k) | – |
| 19 | Center Invariant Loss | Wu et al. [ ] | 2017 | LeNet | 1 | Center Invariant Loss | CASIA WebFace (0.45 M, 10 k) | 99.12 |
| 20 | Von Mises-Fisher | Hasnat et al. [ ] | 2017 | ResNet-27 | 1 | vMF Loss | MS-Celeb-1M (4.61 M, 61.24 K) | 99.63 |
| 21 | SphereFace | Liu et al. [ ] | 2018 | ResNet-64 | 1 | A-Softmax | CASIA WebFace (494 k, 10 k) | 99.42 |
| 22 | Ring Loss | Zheng et al. [ ] | 2018 | ResNet-64 | 1 | Ring Loss | MS-Celeb-1M (3.5 M, 31 K) | 99.50 |
| 23 | MLR | Guo and Zhang [ ] | 2018 | ResNet-34 | 1 | CCS Loss | MS-Celeb-1M (10 M, 100 K) | 99.71 |
| 24 | CosFace | Wang et al. [ ] | 2018 | ResNet-64 | 1 | Large Margin Cosine Loss | CASIA WebFace (494 k, 10 k) | 99.73 |
| 25 | AM-Softmax | Wang et al. [ ] | 2018 | ResNet-20 | 1 | AM-Softmax Loss | CASIA WebFace (494 k, 10 k) | 99.12 |
| 26 | Light-CNN | Wu et al. [ ] | 2018 | ResNet-29 | 1 | Softmax | MS-Celeb-1M (5 M, 79 K) | 99.33 |
| 27 | Affinity Loss | Hayat et al. [ ] | 2019 | ResNet-50 | 1 | Affinity Loss | VGGFace2 (3.31 M, 8 K) | 99.65 |
| 28 | ArcFace | Deng et al. [ ] | 2019 | ResNet-100 | 1 | ArcFace | MS-Celeb-1M (5.8 M, 85 k) | 99.83 |
| 29 | CLMLE | Huang et al. [ ] | 2019 | ResNet-64 | 1 | CLMLE Loss | CASIA WebFace (494 k, 10 k) | 99.62 |
| 30 | PDSN | Song et al. [ ] | 2019 | ResNet-50 | 1 | Pairwise Contrastive Loss | CASIA WebFace (494 k, 10 k) | 99.20 |
| 31 | Feature Transfer | Yin et al. [ ] | 2019 | LeNet | 1 | Softmax | MS-Celeb-1M (4.8 M, 76.5 K) | 99.55 |
| 32 | Ben Fredj work | Ben Fredj et al. [ ] | 2020 | GoogleNet | 1 | Softmax with center loss | CASIA WebFace (494 k, 10 k) | 99.2 ± 0.04 |
| 33 | MML | Wei et al. [ ] | 2020 | Inception ResNet-V1 [ ] | 1 | MML Loss | VGGFace2 (3.05 M, 8 K) | 99.63 |
| 34 | IAM | Sun et al. [ ] | 2020 | Inception ResNet-V1 | 1 | IAM Loss | CASIA WebFace (494 k, 10 k) | 99.12 |
| 35 | RCM Loss | Wu et al. [ ] | 2020 | ResNet-18 | 1 | Rotation Consistent Margin Loss | CASIA WebFace (494 k, 10 k) | 98.91 |
| 36 | ACNN | Ling et al. [ ] | 2020 | ResNet-100 | 1 | ArcFace Loss | DeepGlint-MS1M (3.9 M, 86 K) | 99.83 |
| 37 | LMC / SDLMC / DLMC | Wu and Wu [ ] | 2020 | ResNet-32 | 1 | LMC / SDLMC / DLMC loss | CASIA WebFace (494 k, 10 k) | 98.13 / 99.03 / 99.07 |
| Database | Release Year | Images | Subjects | Data Type |
|---|---|---|---|---|
| BU-3DFE | 2006 | 2500 | 100 | Mesh |
| FRGC v1.0 [ ] | 2006 | 943 | 273 | Depth image |
| FRGC v2.0 [ ] | 2006 | 4007 | 466 | Depth image |
| CASIA | 2006 | 4623 | 123 | Depth image |
| ND2006 | 2007 | 888 | 13,450 | Depth image |
| Bosphorus | 2008 | 4666 | 105 | Point Cloud |
| BJUT-3D | 2009 | 1200 | 500 | Mesh |
| Texas 3DFRD | 2010 | 1140 | 118 | Depth image |
| UMB-DB | 2011 | 1473 | 143 | Depth image |
| BU-4DFE | 2008 | 606 sequences = 60,600 (frames) | 101 | 3D video |
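Most of the margin-based verification losses compared in the method table (AM-Softmax, CosFace, ArcFace and relatives) share a single idea: L2-normalise the features and class weights so that logits become cosine similarities, subtract a margin from the true-class logit, then rescale before the usual cross-entropy. A minimal NumPy sketch of the additive-margin variant follows; the scale `s` and margin `m` values are illustrative, not the papers' tuned settings.

```python
import numpy as np

def am_softmax_loss(embedding, weights, label, s=30.0, m=0.35):
    """Additive-margin softmax loss for a single sample.

    embedding: (d,) feature vector from the network.
    weights:   (num_classes, d) matrix of class-centre vectors.
    label:     index of the true class.
    """
    # Normalise both sides so each logit is the cosine of the angle
    # between the embedding and a class centre.
    e = embedding / np.linalg.norm(embedding)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ e
    # Penalise the true class by a fixed additive margin, which forces
    # training to pull same-identity features closer together.
    cos[label] -= m
    logits = s * cos  # rescale so gradients stay usable after normalisation
    # Numerically stable cross-entropy on the margin-adjusted logits.
    logits -= logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

rng = np.random.default_rng(0)
loss = am_softmax_loss(rng.normal(size=8), rng.normal(size=(5, 8)), label=2)
print(round(float(loss), 4))
```

The margin makes the loss strictly harder than plain normalised softmax for the same inputs, which is the mechanism behind the accuracy gains reported in the table.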

Share and Cite

Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020 , 9 , 1188. https://doi.org/10.3390/electronics9081188



How repressive regimes use facial recognition technology

Published March 27 2024

By Jasper Jackson

Forms of facial recognition have been around since the middle of the last century, starting in the 1960s when American mathematician Woody Bledsoe got a computer to analyse the distances between facial features in photos and then try to match them to other images.
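Bledsoe's 1960s approach amounts to a nearest-neighbour search over a handful of hand-measured distances between facial landmarks. The sketch below illustrates that idea only; the measurements and identities are invented for the example.

```python
import math

# Each face reduced to a few landmark distances (e.g. eye-to-eye,
# nose-to-mouth), as in Bledsoe's experiments. Values are hypothetical.
gallery = {
    "person_a": [63.0, 41.5, 78.2],
    "person_b": [58.4, 45.1, 80.9],
}

def euclidean(u, v):
    """Straight-line distance between two measurement vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_match(probe, gallery):
    """Return the gallery identity whose measurements are closest to the probe."""
    return min(gallery, key=lambda name: euclidean(probe, gallery[name]))

probe = [62.5, 41.9, 78.0]
print(best_match(probe, gallery))  # → person_a
```

Modern systems replace the hand-measured distances with learned embeddings, but the matching step is recognisably the same.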

But it is only in recent years that the technology has become sophisticated enough to rapidly identify people going about their lives with high accuracy. Coupled with the rise in surveillance via CCTV and the increasing numbers of cameras trained on the streets, it has become theoretically possible to identify almost anyone caught on video.

The rollout of facial recognition has proved controversial in many democracies. In the US, the ACLU has campaigned “against the growing dangers of this unregulated surveillance technology”. In 2023 it was revealed that US police had run nearly 1m searches against a database compiled by Clearview AI, which had scraped billions of images from social media and other sources without user permission.

In the UK, privacy campaigners have raised concerns about government plans to extend the use of live facial recognition technology, as well as to allow police to run searches against 50m driving licence holders.

And it’s not just governments that can use facial recognition. Increased computing power and declining costs have seen facial recognition tools become available to the public, though they lack access to the networks of surveillance cameras.

But the use of facial recognition by authoritarian regimes is particularly worrying.

Perhaps the most extreme example comes from China, which not only has the most extensive domestic facial recognition system, but is also the biggest exporter of similar tech. It uses its huge network of cameras and systems that connect to them to do everything from shaming citizens for wearing sleepwear out in the streets to tracking members of the oppressed Uyghur minority.

Russia, too, has embraced facial recognition. The technology, in some cases trained with the help of gig workers around the world, as our latest investigation reveals, has been used extensively to target those opposing Vladimir Putin’s harsh rule. Many activists and protestors have been summoned by police after being identified on camera.

Leaks from the Kremlin Leaks project show that Putin’s own office is working on a secret project to extend and link up surveillance across Russia, using facial recognition to ensure it can watch and identify anyone challenging the state.

As the technology becomes more powerful, with increased computing power, more widespread surveillance systems and AI algorithms trained on ever more data, the ability to identify people via any camera system will likely become even more widespread. That is, unless democracies and the people in them decide where the limits on its use should lie, and enforce them.

Written by: Jasper Jackson | Deputy editor: Katie Mark | Editor: Franz Wild | Production editor: Frankie Goodway | Fact checker: Somesh Jha

Our reporting on Big Tech is funded by Open Society Foundations. None of our funders have any influence over our editorial decisions or output.

  • Artificial intelligence
  • Surveillance

About The Author

Jasper Jackson

Jasper has covered digital media and technology for more than a decade. He previously worked for the New Statesman and the Guardian.



CDEI publishes briefing paper on facial recognition technology

The CDEI has published a Snapshot briefing paper looking at the uses and potential implications of facial recognition technology’s deployment in the UK.

Snapshot Paper - Facial Recognition Technology

PDF , 10.9 MB , 32 pages


What is a CDEI Snapshot?

CDEI Snapshots are designed to introduce non-expert readers to important ethical issues related to the development and deployment of AI and data-driven technology. Their purpose is to separate fact from fiction, clarify what is known and unknown about an issue, and outline possible avenues of action by government and industry. Previous Snapshots have looked at deepfakes, smart speakers and the use of AI in the personal insurance sector.

What does this paper cover?

Facial recognition technology is among the most controversial data-driven innovations in use today. Advocates claim that it could make our streets safer, our bank accounts more secure, and our public services more accessible. Critics argue that it poses a threat to privacy and other civil liberties. This paper attempts to bring clarity to this debate, putting claims under scrutiny and helping readers understand where to direct their attention. It seeks to answer several fundamental questions about FRT systems, including how they are developed, where they have been deployed to date, and the mechanisms by which they are governed.

The paper’s findings are informed by interviews with experts from across civil society, academia, industry, law enforcement and government.

What are the key findings?

The paper finds that:

FRT can be used for varied purposes. Some systems aim to verify people’s identity (e.g. to unlock an electronic device), while others seek to identify individuals (e.g. by scanning a group of people to see if any are featured on a watchlist).

FRT systems have been deployed across the public and private sectors. Several police forces have trialled live FRT, while banks have installed FRT functionality within apps to improve customer experience.

Used responsibly, FRT has the potential to enhance efficiency and security across many contexts. However, the technology also presents several risks, including to privacy and the fair treatment of individuals.

The extent to which an FRT system is beneficial or detrimental to society depends on the context, as well as the accuracy and biases of the specific algorithm deployed. Each use must be assessed according to its own merits and risks.

The use of FRT is regulated by several laws, including the Data Protection Act and the Human Rights Act (for public sector applications). However, a standalone code of practice for FRT has yet to be drawn up.

The regulatory regime governing the use of FRT in the private sector is less extensive than the one for public law enforcement. Policymakers should consider whether there is sufficient oversight of FRT in contexts such as retail and private property developments.
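The verification/identification distinction in the findings above comes down to a 1:1 comparison against a single enrolled template versus a 1:N search over a watchlist. A minimal sketch, using made-up low-dimensional embeddings and an illustrative threshold (real systems use embeddings with hundreds of dimensions and carefully tuned thresholds):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two face embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def verify(probe, enrolled, threshold=0.8):
    """1:1 verification: is the probe the enrolled person (e.g. device unlock)?"""
    return cosine(probe, enrolled) >= threshold

def identify(probe, watchlist, threshold=0.8):
    """1:N identification: best watchlist hit above the threshold, or None."""
    best = max(watchlist, key=lambda name: cosine(probe, watchlist[name]))
    return best if cosine(probe, watchlist[best]) >= threshold else None

# Hypothetical 3-dimensional embeddings for two watchlist entries.
watchlist = {"w1": np.array([1.0, 0.0, 0.0]), "w2": np.array([0.0, 1.0, 0.0])}
probe = np.array([0.9, 0.1, 0.0])
print(verify(probe, watchlist["w1"]))  # True: probe is close to w1's template
print(identify(probe, watchlist))      # w1
```

The two modes carry very different risk profiles: verification compares against data the subject enrolled voluntarily, while identification scans people who may not know they are being searched, which is why the paper assesses each use on its own merits.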

What happens next?

The CDEI will continue to examine the impacts of FRT on society. We are particularly interested in how the technology is being used in the private sector, and where it might be deployed to support Covid-19 response efforts (e.g. to power the digital ID systems behind Covid-19 health certificates).


Facial Recognition Technology and Ethical Concerns Essay


Face recognition refers to a method used to confirm or identify an individual’s identity using their face. The technology authenticates and identifies an individual based on sets of verifiable and recognizable data unique and specific to that individual. Facebook launched its DeepFace program in 2014, which can determine whether two photographed faces belong to the same individual (Scherhag et al., 2019). While face recognition technology is seeing increasing adoption, especially by digital corporations, critics argue that the storage of facial data and the management of identities raise various ethical issues, including privacy and confidentiality. The use of face recognition technology is associated with ethical concerns such as a lack of transparency and informed consent, racial discrimination, misinformation and bias, data breaches, and mass surveillance.

Data privacy is arguably the biggest ethical concern regarding the adoption and use of face recognition technology. Privacy is a key concern for people using the internet, especially social media. According to Scherhag and colleagues, face recognition programs infringe on individuals’ inherent rights by keeping them under constant surveillance and storing their images without their consent (Scherhag et al., 2019). The European Commission, for instance, has considered banning the use of facial recognition technology in public spaces because of the ethical and privacy abuses associated with the technology. Privacy concerns associated with facial recognition revolve around unsafe data storage practices capable of exposing facial recognition information. Many corporations continue to host their facial recognition data on local servers with significant security vulnerabilities.

Facebook is among the digital corporations that have announced the shutdown of facial recognition software used to identify faces in videos and photographs. The corporation decided to delete the more than one billion facial recognition templates it had collected since the feature’s introduction. There has been increasing concern about the ethics of facial recognition programs, and many questions have been raised over their accuracy, racial bias and privacy. Facebook has faced severe criticism over the impact of this technology on users. In 2019 the company had already turned the feature off by default, though users could opt back in.

The decision made by Meta to shut down its facial recognition program was a right and ethical one. Face recognition technology compromises privacy, normalises intrusive surveillance, and often targets marginalized people. The use of face recognition technology has drawn the company into various ethical controversies. In 2019, Facebook was fined $5 billion by the US Federal Trade Commission to settle privacy complaints. The decision to shut down the facial recognition software came after the corporation faced severe regulatory and legislative scrutiny over leaked user information.

The decision to retire facial recognition technology benefits both the company and its users. Not only should the company’s reputation strengthen, but it may also gain users who are reassured about their privacy. Moreover, Facebook will avoid the privacy complaints associated with face recognition technology. The company is now looking for a narrower, less privacy-invasive form of individual authentication.

Governments and corporations should tightly control facial recognition technology and use it only for narrow, well-defined purposes. The technology is more effective and less risky when operated privately on an individual’s own device (Scherhag et al., 2019). Centralised face recognition is not private and raises serious security concerns. People have a right to privacy, and their data should be used only with their consent. Governments must therefore regulate the use of facial recognition technologies by organizations and businesses.

Scherhag, U., Rathgeb, C., Merkle, J., Breithaupt, R., & Busch, C. (2019). Face recognition systems under morphing attacks: A survey. IEEE Access , 7 , 23012-23026.


IvyPanda. (2023, December 11). Facial Recognition Technology and Ethical Concerns. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/

"Facial Recognition Technology and Ethical Concerns." IvyPanda , 11 Dec. 2023, ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

IvyPanda . (2023) 'Facial Recognition Technology and Ethical Concerns'. 11 December.

IvyPanda . 2023. "Facial Recognition Technology and Ethical Concerns." December 11, 2023. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

1. IvyPanda . "Facial Recognition Technology and Ethical Concerns." December 11, 2023. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

Bibliography

IvyPanda . "Facial Recognition Technology and Ethical Concerns." December 11, 2023. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.


Police Use of Facial Recognition Is Accepted by British Court

In a closely watched case, a judge ruled that live facial recognition does not violate privacy rights. There has been little legal precedent about its use.


By Adam Satariano

LONDON — In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

The case has been closely watched by law enforcement agencies, privacy groups and government officials because there is little legal precedent concerning the use of cameras in public spaces that scan people’s faces in real time and attempt to identify them from photo databases of criminal suspects. While the technology has advanced quickly, with many companies building systems that can be used by police departments, laws and regulations have been slower to develop.

The High Court dismissed the case brought by Ed Bridges , a resident of Cardiff, Wales, who said his rights were violated by the use of facial recognition by the South Wales Police. Mr. Bridges claimed that he had been recorded without permission on at least two occasions — once while shopping and again while attending a political rally.

The case centers on the use of systems that scan human faces in real time. That is different from technology used by the authorities to find matches from past images, driver’s license photographs or videos. Companies including Apple, Facebook and Google use the technology to identify people in pictures.

In Britain, the technology has been used by the South Wales Police and the Metropolitan Police Service in London. In the United States, at least five large police departments — including those in Chicago, Dallas and Los Angeles — have claimed to have run real-time facial recognition, purchased technology that can do so or expressed an interest in buying it, according to a Georgetown University study from 2016.

The South Wales Police often use live facial recognition at large events, such as the national air show and rugby matches, according to police records. The cameras scan faces in a crowd, comparing the images with a police database of wanted individuals. When the system finds a match, it sends an alert to officers in a command center, who then contact other officers to stop the person.


Facial Recognition Technology

Facial recognition technology is widely used today to identify or verify a subject from an image, video, or audiovisual element of the subject’s face. Facial recognition is defined as software that maps, analyzes, and confirms the identity of a face from an image, video, or audiovisual component (Khan & Rizvi, 2021). It is presently one of the most powerful surveillance tools available and is used at both the individual and the institutional level. At the individual level, it is used, for example, to unlock one’s phone; at the institutional level, it is used in various ways, mostly focused on surveillance. Facial recognition technology is therefore a tool available to the modern-day investigator for purposes including data collection and law enforcement, offering the advantages examined below.

What is facial recognition technology?

Facial recognition is a revolutionary technology fundamentally used to control access to an application, system, or service. It is a form of biometric identification that uses physical measures, namely the face and head, to verify the identity of an individual (Ritchie et al., 2021). Verification is performed against a facial biometric pattern and its associated data to provide the most accurate result. Facial recognition technology serves three main purposes: facial verification, field verification, and facial identification. Facial verification uses a computerized facial recognition platform to confirm the identity of a subject; smartphones today deploy similar technology, for example, to unlock a phone using the user’s face. Such technology is also used in law enforcement and correctional facilities to grant access to secured areas, confirm the identities of inmates, confirm the identities of people at border crossings, and more. Field verification is the use of facial recognition to identify subjects during field interactions. A modern-day investigator will use it to fill gaps in information, such as when a subject lacks proper identification or is uncooperative and refuses to provide it; the technology can then confirm the subject’s identity. Finally, facial recognition technology provides the opportunity for facial identification (Khan & Rizvi, 2021): entities like law enforcement use it to identify unknown subjects. Facial recognition technology is usually based on a face recognition system, which works with any device that can generate the images and data necessary to create and record biometric facial patterns for the identified person.

How Facial Recognition Technology is Used

A person may be good with faces: they may find it easy to identify family members, acquaintances, or long-lost friends, or be familiar with another person’s facial features, such as the nose, mouth, and eyes, and how these combine into unique facial qualities. This recognition and analysis of features is exactly how facial recognition technology works, on an algorithmic scale that can help a modern-day investigator map out the features of a particular subject. The algorithm typically captures an individual’s facial signature by examining facial features that include the distance between the eyes, the distance from the forehead to the chin, and other important facial landmarks (McClellan, 2021). Facial recognition technology is used, therefore, because it captures the salient features of a person’s face that make the individual unique.

Facial recognition technologies vary, but each follows the same fundamental steps. First, a picture of the subject’s face is captured from a photo or video. Second, the software examines the underlying features of the face; once the system picks up the subject’s unique facial attributes, these become the subject’s facial signature. Third, the facial signature is run against a database of known faces for comparison. In the last step a determination is made: the subject’s faceprint either matches an image in the system or does not, and action can be taken based on the result.
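The capture–extract–compare–decide pipeline described above can be sketched in a few lines of Python. This is an illustrative toy, not a real system: production software extracts high-dimensional embeddings from images, whereas here a "facial signature" is just a short list of hand-picked measurements, and all names and numbers are invented for the example.

```python
import math

def euclidean(a, b):
    """Distance between two facial signatures (lists of measurements)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(signature, database, threshold=1.0):
    """Compare a probe signature against every enrolled face; return the
    closest identity, or None if nothing falls under the threshold."""
    name = min(database, key=lambda k: euclidean(signature, database[k]))
    return name if euclidean(signature, database[name]) <= threshold else None

# Enrolled database of "facial signatures" (hypothetical measurements,
# e.g. eye distance, forehead-to-chin distance, nose width).
known_faces = {
    "alice": [62.0, 120.5, 34.2],
    "bob":   [58.3, 131.0, 30.8],
}

probe = [61.8, 120.1, 34.0]            # signature extracted from a new image
print(best_match(probe, known_faces))  # closest enrolled identity: alice
```

The threshold is the crucial policy choice: set it too loose and innocent passers-by are flagged; set it too tight and genuine matches are missed.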

Further, the use of facial recognition technology is widespread, as it can be used by individuals and institutions alike. Individuals use it for actions such as unlocking their phones, particularly Apple users. At the institutional level, the technology serves various purposes. Governmental agencies such as the Department of Homeland Security use it in airports to identify criminals, fugitives, or those who have overstayed their visas (McClellan, 2021). Social media companies also utilize it: Facebook, for example, uses the technology to personalize user experiences on its platform and to protect users’ information from impersonation or misuse of identity.

The best-known examples of facial recognition technology include FaceNet, developed by Google researchers, which provides a high accuracy rate and hence accurate results in products such as Google Photos. A second well-known example is FaceApp, a recently developed entertainment app in which users take a picture of themselves and alter their features to see how they would look if they were younger, older, or of the opposite gender. Another is Face ID, Apple’s product used to unlock a user’s phone, as previously noted. The accuracy of facial recognition technology is improving rapidly as technological advancement continues, particularly in controlled settings.

Benefits of Facial Recognition Technology to Investigation

Modern-day investigators, such as scholars examining facial recognition data, law enforcement officers using the technology, and forensic experts conducting an investigation, may benefit extensively from facial recognition technology. One benefit is that it can help law enforcement officials uncover criminals or find missing persons; it can thus improve the efficacy of law enforcement work by supplying an investigation with the missing information needed to identify a culprit. Law enforcement can also benefit from the additional intelligence the technology provides: facial recognition systems can memorize the faces of persons of interest, map the networks of criminal gangs, and identify individuals suspected of crimes. Their efficacy lies in not requiring an individual to have prior engagement with the system, such as a criminal record, in order to provide all relevant data. A law enforcement investigator also recognizes that the technology does not itself decide any aspect of a crime; its sole role is to provide greater transparency and context to improve the decision of whether a criminal investigation should proceed. In this way, facial recognition technology improves efficiency, saving time and resources that would otherwise be spent.

Another benefit is that facial recognition technology can help an investigator conduct proper investigations faster, bringing offenders to justice sooner. Its ability to improve investigation outcomes is vital, since it can also be used to map trends and statistics that give a researcher valuable data on specific areas of crime or on the behaviors and approaches used by criminals. Its efficiency can likewise help stop and prevent crimes. An investigator can also use the technology to make recommendations for developing and implementing appropriate policies, procedures, or programs that lead to effective outcomes once implemented. As a result, facial recognition technology has well-rounded benefits that, if used correctly, can promote effective and efficient results and support the implementation of working solutions.

A modern-day investigator with facial recognition technology at their disposal can improve both the processes and the outcomes of their investigation because of the technology’s high efficacy. Even as the technology continues to develop, it has been deployed widely to increase the convenience and quality of investigations. It offers particular advantages to investigators such as those in law enforcement, increasing the effectiveness and efficiency of the investigation process and providing compelling information to inform decision-making. It improves transparency, identifies what can be done, and surfaces applicable trends and patterns that can be used to promote effective outcomes. A modern-day investigator can therefore benefit immensely from using facial recognition technology in performing certain investigations.

McClellan, E. (2021). Facial recognition technology: balancing the benefits and concerns. Journal of Business & Technology Law, 15(2). https://core.ac.uk/download/pdf/323515332.pdf

Khan, Z.A. & Rizvi, A. (2021). AI based facial recognition technology and criminal justice: issues and challenges. Turkish Journal of Computer and Mathematics Education, 12(14).

Ritchie, K.L., Cartledge, C., Growns, B., Yan, A., Wang, Y., Guo, K., Kramer, R.S., Edmond, G., Martire, K.A., Rosque, M.S. & White, D. (2021). Public attitudes towards the use of automatic facial recognition technology in criminal justice systems around the world. PLoS ONE, 16(10). https://doi.org/10.1371/journal.pone.0258241


Face Recognition Technology

Words: 1228 | Published: Oct 11, 2018

Table of contents

  • What are biometrics?
  • Why we choose face recognition over other biometrics
  • Face recognition
  • Implementation of face recognition technology
  • How face recognition systems work
  • Face bunch graph

  • Finger-scan
  • Facial Recognition
  • Retina-scan
  • It requires no physical interaction on behalf of the user.
  • It is accurate and allows for high enrolment and verification rates.
  • It does not require an expert to interpret the comparison result.
  • It can use your existing hardware infrastructure; existing cameras and image capture devices will work with no problems.
  • It is the only biometric that allows you to perform passive identification in a one-to-many environment (e.g., identifying a terrorist in a busy airport terminal).
  • Verification: the system compares the given individual with who that individual says they are and gives a yes or no decision.
  • Identification: the system compares the given individual to all the other individuals in the database and gives a ranked list of matches.
  • Capture: a physical or behavioral sample is captured by the system during enrollment and also in the identification or verification process.
  • Extraction: unique data is extracted from the sample and a template is created.
  • Comparison: the template is then compared with a new sample.
  • Match/non-match: the system decides whether the features extracted from the new sample are a match or a non-match.
  • Data acquisition
  • Input processing
  • Face image classification and decision making
  • the distance between the eyes
  • width of the nose
  • depth of the eye socket
  • There are many benefits to face recognition systems, such as convenience and social acceptability: all you need is your picture taken for it to work.
  • Face recognition is easy to use and in many cases can be performed without a person even knowing.
  • Face recognition is also one of the most inexpensive biometrics on the market, and its price should continue to go down.
  • Law enforcement: minimizing victim trauma, verifying identity for court records, and comparing school surveillance camera images to known child molesters.
  • Security/counterterrorism: access control, and comparing surveillance images to known terrorists.
  • Immigration: rapid progression through customs.
  • Voter verification: where eligible voters are required to verify their identity during a voting process; this is intended to stop “proxy” voting, where the vote may not go as expected.
  • Residential security: alerting homeowners to approaching personnel.
  • Banking using ATMs: the software is able to quickly verify a customer’s face.
  • Physical access control of buildings, areas, doors, cars, or network access.

Facial Recognition Essay Examples

Type of paper: Essay | Words: 1500 | Published: 03/14/2020

Face recognition is as old as human history itself. We have always relied on our brains to help us identify familiar faces, and this functioning of the human mind has been used by researchers in developing technologies that employ facial recognition systems as a security measure. What started with biometric applications like iris and fingerprint scans is now being transformed into facial recognition systems, and today recognition of facial features is being adopted across industries and installations. A question that follows is which of the sexes recognizes faces better: do males or females show a greater propensity for facial recognition? This study is an attempt to find out whether accurate facial recognition is a function of whether one is male or female.

Introduction

It often happens that recalling the name of a known face poses difficulty, and we frequently make mistakes in identifying the faces of people we have seen before. Why does this happen? Studies have shown that memory often works in a reconstructive rather than a reproductive mode: when we see and recognize a face, it is not simple recollection but a recreation of the experiences related to that face. Hence we feel good when we see someone we are comfortable with, and otherwise if we have had unpleasant experiences. Another common experience is feeling that a face is familiar while being unable to recall exactly where we saw it before (Schacter, 2001). The ability to recognize faces correctly also depends on attention: if we keenly observe a person for long, we will recognize them easily at the next interaction, and since we do not observe everyone with the same attention, there are bound to be differences in the degree to which we correctly recognize faces. With such complexities of memory involved, facial recognition mistakes are likely to happen. Given how faces can appear familiar while the association is forgotten, it is interesting to learn how this occurs for males and females; in other words, how the minds of males and females function when posed with the challenge of facial recognition is an interesting area of research. Is accurate facial recognition a function of a person’s gender, or do males and females have the same propensity to identify, or fail to identify, known faces? This paper tests the hypothesis that females perform better than males at facial recognition; the process detailed below was undertaken to collect data on that hypothesis. Based on the purpose and situation, appropriate algorithms can be chosen for face recognition of a subject (Thakur et al., n.d.).

Sex and facial recognition

Studies are available that point to a gender bias in the recognition of faces, and a group bias has been found as well: it has been observed that people of a particular group or community are more adept at recognizing the faces of others belonging to the same group or community (Malpass & Kravitz, 1969). Age is also a factor that plays a role in the recognition of faces; participants in such experiments have shown a propensity to recognize faces of their own age more easily than the faces of people younger or older than themselves (Wright & Stroud, 2002). A limited number of studies indicate that women are more adept at recognizing faces than their male counterparts, and an interesting trend in these studies is that women recognize the faces of other women more easily than male faces (Rehnman & Herlitz, 2006). Some studies also show that males recognize female faces more readily than faces of those belonging to their own group (Wright & Sladden, 2003). Given these observations, it was found necessary to conduct a study comprising both male and female respondents to see how each group performs on the recognition of faces.

For this study, both male and female respondents were shown a number of male and female faces in groups, and both were asked to recognize faces from each group. This mapping exercise was conducted for different groups of faces, each group being a heterogeneous mix of male and female faces. The number of accurate recognitions for each group was measured and the results tabulated to arrive at a conclusion. Twenty such groups were shown to the participants during the exercise, and the number of correct recognitions was recorded for each group.

The tabulated results from the experiment show that females are indeed more accurate in their recognition of faces. While male respondents identified 118 faces correctly, the figure was higher for female respondents, who correctly recognized about 163 of all the faces shown. On average, a male could recognize about 6 faces per group correctly, while a female could recognize about 8. The mean deviation for males was recorded at 1.35, while that for women was 0.75. This signifies high fluctuation in facial recognition by males, whereas women were more consistent in their recognition of faces, with a deviation from the mean below 1.
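The per-group averages reported above follow directly from the raw totals (118 and 163 correct recognitions across 20 groups), as a quick calculation confirms:

```python
# Checking the arithmetic of the reported results: 20 groups of faces,
# 118 correct recognitions by male respondents, 163 by female respondents.
male_correct, female_correct, groups = 118, 163, 20

male_avg = male_correct / groups      # 5.9  -> "about 6 faces" per group
female_avg = female_correct / groups  # 8.15 -> "about 8 faces" per group

print(round(male_avg), round(female_avg))  # 6 8
```

The mean deviations (1.35 for males, 0.75 for females) would require the per-group counts, which the report does not include, so they cannot be re-derived here.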

The findings recorded above are important because they validate the results of other studies indicating that females are better equipped for accurate face recognition than males. How are females able to detect faces so accurately? Studies indicate that this can be attributed to the system of encoding, which plays a crucial part (Sporer, 2001): at the encoding stage, people attend to faces of their own group, and women are more likely to observe female faces than male faces. The percentage of correct female facial recognition is therefore high, which also points to a gender bias. Studies establishing that men can better recognize female faces are not widely available, so there is a lack of evidence that men readily recognize female faces; some studies have concluded, however, that males perform better at recognizing male faces, in some cases better than women. The perception that women outperform men on facial recognition can be attributed to the high incidence of recognition when women see female faces, which pulls the overall score in favor of females. However, the generality of such results is often questioned, because studies can be cited, even though some are solitary, that expose differences between men and women in face recognition. There is therefore further scope to study this aspect and its numerous facets to arrive at a holistic conclusion about the relation between sex and the ability to recognize faces.

The mini lab report interpreted above clearly shows that there is a gender link to the percentage of correct facial recognition. Though limited in number, existing studies had pointed to this fact, and the reason appears to be the process of information encoding in females, which is in sharp contrast to that of males.

References:

Malpass, R. S., & Kravitz, J. (1969). Recognition for faces of own and other race. Journal of Personality and Social Psychology, 13(4), 330-334.

Rehnman, J., & Herlitz, A. (2006). Higher face recognition ability in girls: Magnified by own-sex and own-ethnicity bias. Memory, 14(3), 289-296.

Schacter, D. L. (2001). The seven sins of memory. Houghton Mifflin.

Sporer, S. L. (2001). Recognizing faces of other ethnic groups: An integration of theories. Psychology, Public Policy and Law, 7, 36-97.

Thakur, S., Sing, J. K., Basu, D. K., Nasipuri, M., & Kundu, M. (n.d.). Face recognition using principal component analysis and RBF neural networks. IJSSST, 10(5), 7-15.

Wright, D. B., & Sladden, B. (2003). An own gender bias and the importance of hair in face recognition. Acta Psychologica, 114, 101-114.

Wright, D. B., & Stroud, J. N. (2002). Age differences in lineup identification accuracy: People are better with their own age. Law and Human Behaviour, 26(6), 641-654.


Essay Services

  • Academic Writing Services

Essay Writing Service

  • Assignment Writing Service
  • Essay Plan Writing Service

Dissertation Services

  • Dissertation Writing Service
  • Dissertation Proposal Service
  • Topic with Titles Service
  • Literature Review Service

Report Services

  • Report Writing Service
  • Reflective Writing Service
  • Case Study Writing Service

Marking Services

  • Marking Service
  • Samples Samples
  • Reviews Reviews
  • About UKEssays
  • Our Guarantees
  • Our Quality Procedures
  • Contact UKEssays
  • Write for UKEssays

Face recognition

Avatar

Disclaimer: This is an example of a student written essay. Click here for sample essays written by our professional writers. This essay may contain factual inaccuracies or out of date material. Please refer to an authoritative source if you require up-to-date information on any health or medical issue.

Get Help With Your Essay

If you need assistance with writing your essay, our professional essay writing service is here to help!

Find Out How UKEssays.com Can Help You!

Our academic experts are ready and waiting to assist with any writing project you may have. From simple essay plans, through to full dissertations, you can guarantee we have a service perfectly matched to your needs.

View our academic writing services

Cite This Work

To export a reference to this article please select a referencing style below:

Give Yourself The Academic Edge Today

  • On-time delivery or your money back
  • A fully qualified writer in your subject
  • In-depth proofreading by our Quality Control Team
  • 100% confidentiality, the work is never re-sold or published
  • Standard 7-day amendment period
  • A paper written to the standard ordered
  • A detailed plagiarism report
  • A comprehensive quality report

Approximate costs for Undergraduate 2:2

7 day delivery

Delivered on-time or your money back

Reviews.io logo

1845 reviews

Get Academic Help Today!

Encrypted with a 256-bit secure payment provider

  • Limited Partnership Agreements

Fearless commentary on finance, economics, politics and power

  • Follow yvessmith on Twitter
  • Feedburner RSS Feed
  • RSS Feed for Comments
  • Subscribe via Email

Recent Items

Latest Biometric Surveillance Scandal in UK Reveals Another Dark Side of AI-Powered Big Brother

Where AI-powered surveillance and control technologies meet capitalism 101.

A fresh exposé by civil rights group Big Brother Watch has revealed that over the past two years eight train stations across the UK — including busy hubs such as London’s Euston and Waterloo, Manchester Piccadilly, and several smaller stations — have conducted facial and object recognition trials using AI surveillance technology. The initiative, which hooked Amazon’s AI surveillance software up to the stations’ CCTV cameras, was ostensibly meant to alert station staff to safety incidents and potentially reduce certain types of crime.

The data collected was sent to Amazon Rekognition, according to a Freedom of Information Act (FOIA) request obtained by Big Brother Watch. As WIRED magazine reports, “the extensive trials, overseen by the government-owned rail infrastructure body Network Rail, have deployed object recognition — a type of machine learning that can identify items in video feeds — to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behaviour (“running, shouting, skateboarding, smoking”) and spot potential bike thieves.”

In other words, it was all intended to help keep rail passengers safe, train stations clean and tidy, and bikes in their place. A Network Rail spokesperson said:

We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats. When we deploy technology, we work with the police and security services to ensure that we’re taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies.

That is probably not as comforting as it may sound. As I will show later in this article, the (almost certainly outgoing) Sunak government has tried everything it can to gut the limited safeguards protecting the British public from the potential downsides and dangers of AI-empowered surveillance.

Measuring Passenger “Satisfaction”

A particularly “concerning” aspect of the train station trials is their focus on “passenger demographics,” says Jake Hurfurt, the head of research and investigations at Big Brother Watch. According to documents released in response to the FOIA request, the AI-powered system could use images from the cameras to produce “a statistical analysis of age range and male/female demographics,” and is also able to “analyse for emotions” such as “happy, sad and angry.”
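The per-face output described above matches the general shape of Amazon Rekognition’s documented DetectFaces response, whose FaceDetails records carry AgeRange, Gender and Emotions fields. The sketch below shows how such records could be aggregated into the kind of demographic statistics the FOIA documents mention; the sample values are invented, and this is an illustration, not the trial’s actual pipeline:

```python
from collections import Counter

# Invented sample records in the shape of Rekognition's DetectFaces
# "FaceDetails" output (AgeRange, Gender, Emotions fields only).
faces = [
    {"AgeRange": {"Low": 25, "High": 35},
     "Gender": {"Value": "Male", "Confidence": 99.1},
     "Emotions": [{"Type": "HAPPY", "Confidence": 88.0},
                  {"Type": "CALM", "Confidence": 10.0}]},
    {"AgeRange": {"Low": 40, "High": 52},
     "Gender": {"Value": "Female", "Confidence": 97.3},
     "Emotions": [{"Type": "ANGRY", "Confidence": 61.5},
                  {"Type": "SAD", "Confidence": 30.0}]},
]

def summarise(faces):
    """Aggregate per-face records into simple demographic statistics."""
    genders = Counter(f["Gender"]["Value"] for f in faces)
    # Midpoint of the estimated age range, per face.
    ages = [(f["AgeRange"]["Low"] + f["AgeRange"]["High"]) / 2 for f in faces]
    # Highest-confidence emotion label, per face.
    moods = Counter(max(f["Emotions"], key=lambda e: e["Confidence"])["Type"]
                    for f in faces)
    return {"genders": dict(genders),
            "mean_age": sum(ages) / len(ages),
            "moods": dict(moods)}

print(summarise(faces))
```

Note how little per-person machinery is needed to get from face detection to the “statistical analysis of age range and male/female demographics” the documents describe.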

This is where AI-powered surveillance and control technologies meet capitalism 101. From the WIRED article (emphasis my own):

The images were captured when people crossed a “virtual tripwire” near ticket barriers, and were sent to be analysed by Amazon’s Rekognition system, which allows face and object analysis. It could allow passenger “satisfaction” to be measured, the documents say, noting that “this data could be utilised to maximum advertising and retail revenue.”
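Mechanically, a “virtual tripwire” of this kind is just a line-crossing test on tracked positions in the camera frame: a capture fires when a person’s tracked point moves from one side of a drawn segment to the other. A toy sketch in Python (the coordinates and wire placement are invented, not taken from the trial documents):

```python
def _orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): which side of line pq point r is on."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def crossed_tripwire(prev, curr, wire_a, wire_b):
    """True if the movement from prev to curr properly crosses segment wire_a-wire_b.

    Touching the wire exactly (collinear cases) is treated as not crossing.
    """
    d1 = _orient(wire_a, wire_b, prev)
    d2 = _orient(wire_a, wire_b, curr)
    d3 = _orient(prev, curr, wire_a)
    d4 = _orient(prev, curr, wire_b)
    # Proper crossing: each segment's endpoints straddle the other segment.
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

# Tripwire drawn across the frame near a ticket barrier (invented pixel coords).
WIRE = ((100, 0), (100, 200))
print(crossed_tripwire((90, 50), (110, 55), *WIRE))  # walks through the line -> True
print(crossed_tripwire((90, 50), (95, 60), *WIRE))   # stays on one side -> False
```

A real deployment would feed this from a person tracker and trigger the face capture on a True result; the point of the sketch is only that the trigger itself is trivial geometry.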

The article offers no indication as to how that might be achieved, but the proposal itself should hardly come as a surprise. Besides serving as an instrument of government surveillance control, biometric systems will be used to maximise corporate revenues and profits — whether for the tech giants providing the hardware and software, in this case Amazon, the large financial institutions facilitating the transactions or the retail companies honing their targeted advertising techniques.

It brings to mind two scenes from the 2002 sci-fi movie (based loosely on a Philip K. Dick short story) “Minority Report.” In the first, a camera takes a retina scan of the protagonist, John Anderton, and a billboard calls out to him: “John Anderton! You could use a Guinness right about now.” In the second, Anderton visits a mall where he is met by an attractive female hologram advising him what clothes to buy. Set in 2054, the film imagines that advertisers will be able to personalise messages on billboards or through holograms via retinal scans.

Apart from the occasional stillborn attempt, this particular dystopian scenario has yet to creep into most of our lives, though the widespread use of augmented-reality “wearables” like Apple Vision Pro will certainly make it more possible. As the WIRED article notes, AI researchers have frequently warned that using face analysis technology “to detect emotions is ‘unreliable’ and some say the technology should be banned due to the difficulty of working out how someone may be feeling from audio or video.”

On the other side of the English Channel, the EU Parliament has voted for a broad ban on the use of live facial recognition systems in public spaces, as have some US cities. By contrast, as we reported in October last year, the UK government is escalating its deployment of the controversial surveillance technology.

Prime Minister Rishi Sunak, the son-in-law of Indian tech billionaire N R Narayana Murthy, is determined to transform the UK into a world leader in AI governance. Said governance apparently involves gutting many of the limited safeguards protecting the public from the potential downsides and dangers of AI, of which there are many… As we reported in early August, live facial recognition (LFR) surveillance, where people’s faces are biometrically scanned by cameras in real-time and checked against a database, is being used by an increasing number of UK retailers amid a sharp upsurge in shoplifting — with the blessing, of course, of the UK government. Police forces are also being urged to step up their use of LFR. The technology has also been deployed at the Coronation of King Charles III, sports events including Formula 1, and concerts, despite ongoing concerns about its accuracy as well as the huge ethical and privacy issues it raises.

In what is surely one of the most brazen and egregious examples of mission creep you’re likely to find, the government has also authorised the police to create a vast facial recognition database out of passport photos of people in the UK. The ultimate goal, it seems, is to get rid of passports altogether and replace them with facial recognition technology. In January, Phil Douglas, the director general of UK Border Force, said he wanted to create an “intelligent border” that uses “much more frictionless facial recognition than we currently do”.

From The Guardian :

Douglas has been touting the potential benefits of biometrics and data security in managing the UK’s borders in recent months. In February 2023, he suggested the paper passport was becoming largely redundant – even as some celebrated the post-Brexit return of the blue document. He told an audience at the Airport Operators Association conference in London at the time: “I’d like to see a world of completely frictionless borders where you don’t really need a passport. The technology already exists to support that.” Douglas added: “In the future, you won’t need a passport – you’ll just need biometrics.”… According to polling carried out by the International Air Transport Association in 2022, 75% of passengers worldwide would be happy to ditch passports for biometrics.

“Snooping Capital of the West”

This is a reminder that most of these trends — particularly the tech-enabled drift toward authoritarianism and centralised technocracy — are generalised, not only among the ostensibly democratic nations of the so-called “Free West” but across the world as a whole. But the UK is at the leading edge of most of them.

In an article earlier this year, Politico described the UK as “the snooping capital of the West,” snarkily noting that the country “is finally leading the world… on AI-powered surveillance.” The government last year passed the Online Safety Bill, opening up the possibility of tech firms being forced to scan people’s mobile messages – ostensibly for child abuse content. As openDemocracy warns, this is likely to make people’s digital communications less, rather than more, secure:

The more of daily life that becomes digital, the more we rely on secure connections to ensure our data is not exploited. Encryption is the main method stopping miscreants from stealing passwords or personal information. If firms are forced to weaken security, more attacks will ensue, just at a time that we need to boost security across society. For example, if WhatsApp were instructed to make messages visible to law enforcement, that back door could be found by others, exposing personal messages. It is a pillar of information security theory that the more ways there are to access a system, the more likely an attacker will be to gain access.
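The closing principle in the quote above can be put in rough numbers with a toy model: if each of n access paths into a system can be compromised independently with probability p over some period, the chance that at least one falls is 1 − (1 − p)^n, which climbs quickly as back doors are added. The figures below are purely illustrative:

```python
def p_breach(n_paths: int, p_per_path: float) -> float:
    """Probability that at least one of n independent access paths is compromised."""
    return 1 - (1 - p_per_path) ** n_paths

# Assume (for illustration only) each path has a 5% chance of compromise
# over some period, and that paths fail independently.
for n in (1, 2, 5, 10):
    print(f"{n:2d} access paths -> breach probability {p_breach(n, 0.05):.1%}")
```

Even under these generous assumptions, going from one access route to ten takes the breach probability from 5% to roughly 40%, which is the information-security intuition behind resisting mandated back doors.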

The UK government has also granted police new powers to shut down protests as well as force employees to work during industrial action – or face being sacked. Police forces are also resorting to Section 60AA to require protesters to remove any item being worn for the purpose of concealing their identity, including, presumably, KN95 masks. Plus, as readers may recall, the Sunak government has also granted full management of the National Health Service’s federated data platform to Palantir, a US tech giant with intimate ties to US defense and intelligence agencies.

In its Data Protection and Digital Information Bill (DPDI), the Sunak government even planned to abolish the role of the Biometrics and Surveillance Camera Commissioner (BSCC), an independent office that was, to some extent, helping to hold the public sector to account for its use of AI. As we pointed out late last year, the government clearly wanted to have even freer rein to surveil and control the lives of British citizens. The proposed legislation also sought to scale back the UK GDPR and Data Protection Act of 2018.

The former Biometrics and Surveillance Camera Commissioner, Professor Fraser Sampson, described the move as “shocking” and “tantamount to vandalism.” In the end, the DPDI was ultimately excluded from the “wash-up” process before Parliament’s dissolution in the lead-up to the UK’s general elections, leaving the BSCC intact — for now.

Another Slippery Slope 

When it comes to biometric surveillance technologies, the UK’s independent watchdogs appear to hold limited influence anyway. The two-year trials in the eight train stations all took place despite previous warnings from the UK’s Information Commissioner’s Office (ICO) against using the technology. Speaking in 2022, the ICO’s deputy commissioner Stephen Bonner said:

Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever. While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgments about a person that are inaccurate and lead to discrimination… The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science… As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.

If there’s one silver lining about the technology used in the station trials, it is that it does not identify people, Carissa Véliz, an associate professor at the Institute for Ethics in AI at the University of Oxford, told WIRED. But there is always the risk of a slippery slope, she said, citing similar AI trials on the London Underground that had initially blurred the faces of people who may have been dodging fares, before changing tack, unblurring the photos and keeping images longer than initially planned.

Lastly, if British voters are expecting a reversal of policy on the use of digital surveillance and control technologies by a future Keir Starmer government, they are likely to be sorely disappointed, especially given the Starmer team’s cosy ties to the Tony Blair Institute for Global Change, which often touts digital technologies and biometric surveillance systems as cure-alls for many of the world’s deep-seated problems.

In a speech at the WEF’s 2020 cyber attack simulation event, “Cyber Polygon”, Blair said that Digital Identity would form an “inevitable” part of the digital ecosystem being constructed around us, and so government should work with technology companies to regulate their use  — as the EU and Australia have already done. It is the perfect manifestation of the 21st century Public Private Partnership — a digital panopticon designed and built by global tech companies, paid for with taxpayer funds, so that the government, security agencies and their corporate partners can more easily track, trace and control the populace.

As a recent report by Big Brother Watch documents, the Labour Party under Jeremy Corbyn’s leadership pledged to ban facial recognition but the Labour 2024 Manifesto includes no such commitment. There is also “no formal commitment in the manifesto to reject bank spying powers in the future” or the adoption of a central bank digital currency. Nor is there any commitment to prevent mandatory ID or digital identity.


Comments (2)

Thanks to the erudite and prolific Alastair Crooke, I have new insights into neoliberalism. “In 1970, Zbig Brzezinski (who was to become National Security Adviser to President Carter) published a book entitled: Between Two Ages: America’s Role in the Technetronic Era.

In it, Brzezinski wrote: “The technetronic era involves the gradual appearance of a more controlled society. Such a society…dominated by an élite, unrestrained by traditional values…[and practicing] continuous surveillance over every citizen … [together with] manipulation of the behavior and intellectual functioning of all people … [would become the new norm].”

Sound familiar?

‘The ultimate goal, it seems, is to get rid of passports altogether and replace them with facial recognition technology.’

Now that one grabbed my attention. Can you imagine? So the passport control computer gets hacked through a backdoor when an employee opens a dodgy email promising a big prize. At Dover, a bunch of people get off a ferry and the AI video feeds start scanning the passengers’ faces. One of them is a big, beefy guy with tats on his neck and a scar on his face from when he was fighting with the Azov brigade in the Ukraine. But that is not what the AI sees. It has been secretly trained to see a young, blonde female student when it scans that guy’s face. Because there is no human interaction, he walks straight through and on his way out the main door. No police will haul him over as the AI has scanned his face already. Tell me that this will never happen.


Whistleblower claims Amazon violated UK sanctions by selling facial recognition tech to Russia

Amazon denies the claims.

An ex-employee has accused Amazon of breaching UK sanctions by selling facial recognition technology to Moscow following Russia's invasion of Ukraine, The Financial Times reported.

Charles Forrest alleged that he was unfairly dismissed in 2023 after accusing Amazon of wrongdoing on a number of issues between November 2022 and May 2023, according to the article. The allegations were presented to a London employment tribunal as part of a hearing this week.

Forrest said that Amazon closed a deal with Russian firm VisionLabs to provide access to its Rekognition facial recognition technology. It did that "through what appears to be a shell company based in the Netherlands," according to the tribunal filings. He also accused the company of breaking its self-imposed moratorium on police use of facial recognition tech implemented after the murder of George Floyd.

Amazon denied the allegations. "We believe the claims lack merit and look forward to demonstrating that through the legal process," a spokesperson told the FT. "Based on available evidence and billing records, AWS did not sell Amazon Rekognition services to VisionLabs."

Forrest was let go for "gross misconduct" after refusing to work his contractual hours and failing to respond to emails or attend meetings, Amazon alleged. It denied that Forrest made the sorts of disclosures that would entitle him to whistleblower protections.

Amazon has denied the contention it provided police with facial recognition technology, and added in a tribunal filing that "a self-imposed moratorium does not amount to a legal obligation."

Update, June 7 2024, 11:14AM ET: An Amazon spokesperson clarified that the company is denying it provided facial recognition capabilities to police, and the last paragraph of this story has been changed to reflect that. The company remains adamant it did not sell that same software to VisionLabs but has declined to provide a statement related to whether VisionLabs obtained those capabilities through an intermediary.
