Data Security

Your Password Can Be Guessed By An App Listening To Your Keystrokes

Researchers from Southern Methodist University's (SMU) Darwin Deason Institute for Cyber-security have found that the sound waves produced when we type on a computer keyboard can be picked up by a nearby smartphone, and that a skilled hacker could decipher which keys were struck.

Why?

The research was carried out to test whether the 'always-on' sensors in devices such as smartphones could be used to eavesdrop on people who use laptops in public places such as coffee shops and libraries (if the phones were on the same table as the laptop), and whether there was a way to successfully decipher what was being typed from the acoustic signals alone.

Where?

The experiment took place in a simulated noisy conference room at SMU, where the researchers arranged for several people to talk to each other while taking notes on laptops. As many as eight mobile phones were placed on the same table as the laptops, anywhere from three inches to several feet away. The study participants were not given scripts of what to say, could use shorthand or full sentences when typing, and could either correct typing errors or leave them.

What Happened?

Eric C. Larson, one of the two lead authors and an assistant professor in SMU Lyle School's Department of Computer Science, reported that the researchers were able to pick up what people were typing at an amazing 41 per cent word accuracy rate, and that this could probably be extended above 41 per cent if the researchers factored in what the 10 most likely words might be.

Sensors In Smartphones

The researchers highlighted the fact that there are several sensors in smartphones that are used for orientation, and although some require permission to be switched on, others are always on. It was these always-on sensors that the researchers were able to exploit, developing a specialised app that could process the sensor output and thereby predict which key a typist had pressed.
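
As a rough illustration of the general technique (and not the SMU team's actual pipeline), the Python sketch below trains a simple classifier on spectral features extracted from labelled keystroke recordings. The clip data here is random placeholder noise, so the printed accuracy will be near chance; a real attack would need genuine labelled recordings captured near the target keyboard.

    # Illustrative sketch of acoustic keystroke classification (not the SMU
    # researchers' actual method). Assumes short audio clips of individual
    # keystrokes, each labelled with the key that was pressed.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def spectral_features(clip: np.ndarray, n_bins: int = 64) -> np.ndarray:
        """Summarise a keystroke clip as its first n_bins FFT magnitudes."""
        spectrum = np.abs(np.fft.rfft(clip))
        spectrum /= (np.linalg.norm(spectrum) + 1e-9)  # normalise loudness away
        return spectrum[:n_bins]

    # Placeholder data standing in for a real recorded dataset.
    rng = np.random.default_rng(0)
    keystroke_clips = [rng.normal(size=2048) for _ in range(200)]
    keystroke_labels = rng.choice(list("etaoin"), size=200)

    X = np.array([spectral_features(c) for c in keystroke_clips])
    X_train, X_test, y_train, y_test = train_test_split(
        X, keystroke_labels, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.0%}")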

What Does This Mean For Your Business?

Most of us may be aware of the dangers of using public Wi-Fi and how to take precautions such as using a VPN.  It is much less well-known, however, that smartphones have sensors that are always on and could potentially be used (with a special app) to eavesdrop.

Mobile device manufacturers may want to take note of this research and how their products may need to be modified to prevent this kind of hack.

Also, users of laptops may wish to consider the benefits of using a password manager for auto-filling instead of typing in passwords and potentially giving those passwords away.

Over A Million Fingerprints Exposed In Data Breach

It has been reported that more than one million fingerprints have been exposed online by biometric security firm Suprema, which appears to have installed its standard Biostar 2 product on an open network.

Suprema and Biostar 2

Suprema is a South Korea-based biometric technology company and is one of the world’s top 50 security manufacturers.  Suprema offers products including biometric access control systems, time and attendance solutions, fingerprint live scanners, mobile authentication solutions and embedded fingerprint modules.

Biostar 2 is a web-based, open, integrated security platform that provides access control and time-and-attendance functions, manages user permissions, integrates with third-party security apps, and records activity logs. Biostar 2 is used by many thousands of companies and organisations worldwide, including the UK's Metropolitan Police, as a tool to control access to parts of secure facilities. Biostar 2 uses fingerprint scanning and recognition as part of this access control system.

What Happened?

Researchers working with cyber-security firm VPNMentor have reported that they were able to access data from Biostar 2 from 5 August until it was made private again on 13 August (VPNMentor contacted Suprema about the problem on 7 August). It is not clear how long before 5 August the data had been exposed online. The exposure of personal data to public access is believed to have been caused by the Biostar 2 product being placed on an open network.

In addition to more than one million fingerprint records being exposed, the VPNMentor researchers also claim to have found photographs of people, facial recognition data, names, addresses, unencrypted usernames and passwords, employment history details, mobile device and OS information, and even records of when employees had accessed secure areas.

VPNMentor claims that its team was able to access over 27.8 million records, a total of 23 gigabytes of data.

Who Was Affected?

VPNMentor claims that many businesses worldwide were affected. In the UK, for example, VPNMentor claims that Associated Polymer Resources (a plastics recycling company), Tile Mountain (a home decor and DIY supplier), and Farla Medical (a medical supply store) were among those affected.

It has been reported that the UK's data protection watchdog, the Information Commissioner's Office (ICO), has said that it was aware of reports about Biostar 2 and would be making enquiries.

What Does This Mean For Your Business?

For companies and organisations using Biostar 2, this is very worrying and is a reminder of how data breaches can occur through third-party routes.

In this case, fingerprint records were exposed, and the worry is that this kind of data can never be secured again once it has been stolen. Also, the large amount of other personal employee data that was taken could not only affect individual businesses but could also mean that employees and clients could be targeted for fraud and other crimes e.g. phishing campaigns and even blackmail and extortion.

The breach may have been avoided had Suprema secured its servers with better protection measures, saved a non-reversible version of each fingerprint rather than the actual fingerprint, implemented better rules on its databases, and not left a system that didn't require authentication open to the internet. Companies that are still using Biostar 2 and have concerns may now wish to contact Suprema for assurances about security.
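
Password storage illustrates the same 'store something that can't be reversed' principle. The Python sketch below uses a salted, deliberately slow key-derivation function; note that protecting real biometric templates is considerably more involved than hashing a password, so treat this only as an analogy.

    # Minimal sketch of non-reversible credential storage using a salted,
    # slow key-derivation function (PBKDF2). Only the salt and derived key
    # are stored; the original secret never is.
    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000

    def store_secret(secret: bytes) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS)
        return salt, key

    def verify_secret(secret: bytes, salt: bytes, stored_key: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored_key)

    salt, key = store_secret(b"correct horse battery staple")
    print(verify_secret(b"correct horse battery staple", salt, key))  # True
    print(verify_secret(b"wrong guess", salt, key))                   # False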

Facial Recognition at King’s Cross Prompts ICO Investigation

The UK's data protection watchdog, the Information Commissioner's Office (ICO), has said that it will be investigating the use of facial recognition cameras at King's Cross by property development company Argent.

What Happened?

Following reports in the Financial Times newspaper, the ICO says that it is launching an investigation into the use of live facial recognition in the King's Cross area of central London. It appears that the property development company Argent had been using the technology for an as-yet-undisclosed period, with an as-yet-undisclosed number of cameras. A statement by Argent reported in the Financial Times says that the company had been using the system to "ensure public safety", and that facial recognition is one of several methods it employs to this end.

ICO

The ICO has said that, as part of its enquiry, as well as requiring detailed information from the relevant organisations (Argent in this case) about how the technology is used, it will also inspect the system and its operation on-site to assess whether or not it complies with data protection law.

The data protection watchdog has made it clear in a statement on its website that if organisations want to use facial recognition technology, they must comply with the law and do so in a fair, transparent and accountable way. The ICO will also require those companies to document how and why they believe their use of the technology is legal, proportionate and justified.

Privacy

The main concern for the ICO and for privacy groups such as Big Brother Watch is that people's faces are being scanned to identify them as they lawfully go about their daily lives, all without their knowledge or understanding. This could be considered a threat to their privacy. Also, with GDPR in force, it is important to remember that a person's face (if filmed, e.g. with CCTV) is part of their personal data, so the handling, sharing, and security of that data also become an issue.

Private Companies

An important area of concern to the ICO, in this case, is the fact that a private company is using facial recognition, because the use of this technology by private companies is difficult to monitor and control.

Problems With Police Use

Following criticism of the police use of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee has recently called for a temporary halt in the use of facial recognition systems. This follows an announcement in December 2018 by the ICO's head, Elizabeth Denham, that a formal investigation was being launched into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

What Does This Mean For Your Business?

The use of facial recognition technology is being investigated by the ICO and a government committee has even called for a halt in its use over several concerns. The fact that a private company (Argent) was found, in this case, to be using the technology has therefore caused even more concern and has highlighted the possible need for more regulation and control in this area.

Companies and organisations that want to use facial recognition technology should, therefore, take note that the ICO will require them to document how and why they believe their use of the technology is legal, proportionate and justified, and make sure that they comply with the law in a fair, transparent and accountable way.

Is Your Website Sending Scammers’ Emails?

Research by Kaspersky has discovered that cyber-criminals are now hijacking and using the confirmation emails from registration, subscription and feedback forms of legitimate company websites to distribute phishing links and spam content.

How?

Kaspersky has reported that scammers are exploiting the fact that many websites require users to register their details in order to receive content. Some cyber-criminals are now using stolen email addresses to register victims via the contact forms of legitimate websites.  This allows the cyber-criminals to add their own content to the form that will then be sent to the victim in the confirmation email from the legitimate website.

For example, according to Kaspersky, a cyber-criminal uses the victim's e-mail address as the registration address, and then enters their own advertising message in the name field, e.g. "we sell discount electrical goods. Go to http://discountelectricalgoods.uk." This means that the victim receives a confirmation message that opens with "Hello, we sell discount electrical goods. Go to http://discountelectricalgoods.uk Please confirm your registration request".

Where a victim is asked by a website form to confirm their email address, cyber-criminals are also able to exploit this part of the process by ensuring that victims receive an email with a malicious link.
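
A minimal sketch of the underlying weakness (hypothetical Python, not any particular site's implementation): whatever is entered in the name field is dropped verbatim into the confirmation email template, so the scammer's message rides along inside a legitimate email.

    # Hypothetical confirmation-email template with no input validation.
    def build_confirmation_email(name: str) -> str:
        return f"Hello, {name}. Please confirm your registration request."

    # What the scammer submits in the registration form's name field:
    attacker_name = ("we sell discount electrical goods. "
                     "Go to http://discountelectricalgoods.uk")
    print(build_confirmation_email(attacker_name))
    # The victim (whose stolen address was used to register) receives the
    # spam wrapped inside an otherwise genuine confirmation email.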

Advantages

The main advantages to cyber-criminals of using messages sent as a response to forms on legitimate websites are that the messages can pass through anti-spam filters and have the status of official messages from a reputable company, thereby making them more likely to be noticed, opened, and responded to. Also, as well as the technical headers in the messages being legitimate, the amount of actual spam content carried in the message (which is what the filters react to) is relatively small. The spam rating assigned to messages by anti-spam filters is based on a variety of factors, but because these messages really do originate from a reputable company's own systems, their overall authenticity allows them to beat the filters, giving cyber-criminals a more credible-looking and effective way to reach their victims.

What Does This Mean For Your Business?

Most businesses and organisations are likely to have a variety of forms on their website which could mean that they are open to having their reputation damaged if cyber-criminals are able to target the forms as a way to initiate attacks or send spam.

Kaspersky's advice is that companies and organisations should, therefore, consider testing their own forms to see if they could be compromised. For example, register on your own company form with your own personal e-mail address, enter a message in the name field such as "I am selling electrical equipment" along with a website address and a phone number, and then check what appears in your e-mail inbox; this will show whether there are any verification mechanisms for that type of information. If the message you receive begins "Hello, I am selling electrical equipment", you should contact the people who maintain your website and ask them to create simple input checks that generate an error if a user tries to register under a name with invalid characters or invalid parts (a sketch of one such check follows below). Kaspersky also suggests that companies and organisations could consider having their websites audited for vulnerabilities.
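
A minimal sketch of such a check in Python (illustrative rules only; tune them for your own forms and locales): reject name-field values containing URLs, long digit runs, implausible characters, or whole sentences.

    # Illustrative server-side validation for a registration form's name field.
    import re

    NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z' -]{0,49}$")
    URL_OR_PHONE = re.compile(r"(https?://|www\.|\d{6,})", re.IGNORECASE)

    def validate_name(name: str) -> bool:
        """Return True only for plausible human names."""
        name = name.strip()
        if len(name.split()) > 4:  # names rarely run to whole sentences
            return False
        return bool(NAME_PATTERN.match(name)) and not URL_OR_PHONE.search(name)

    print(validate_name("Mary O'Brien"))                       # True
    print(validate_name("I am selling electrical equipment"))  # False: reads like a sentence
    print(validate_name("Go to www.example.com"))              # False: contains a URL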

$1 Million Bounty For Finding iPhone Security Flaws

Apple Inc recently announced at the annual Black Hat security conference in Las Vegas that it is offering security researchers rewards of up to $1 million if they can detect security flaws in its iPhones.

Change

This move marks a change in Apple’s bug bounty programme.  Previously, for example, the highest sum offered by Apple was $200,000, and the bounties had only been offered to selected researchers.

The hope appears to be that widening the pool of researchers and offering a much bigger reward could maximise security for Apple mobile devices and protect them from the risk of governments breaking into them.

State-Sponsored Threats

In recent times, state-sponsored interference in the affairs of other countries has become more commonplace with dissidents, journalists and human rights advocates being targeted, and some private companies such as Israel’s NSO Group are even reported to have been selling hacking capabilities to governments. These kinds of threats are thought to be part of the motivation for Apple’s shift in its bug bounty position.

Big Prizes

The $1 million prize appears likely to only apply to remote access to the iPhone kernel without any action from the phone’s user, although it has been reported that government contractors and brokers have paid as much as $2 million for hacking techniques that can obtain information from devices.

Apple is also reported to be making things easier for researchers by offering a modified phone with some security measures disabled.

Updates

If flaws are found in Apple mobile devices by researchers, the plan appears to be that Apple will patch the holes using software updates.

Bug Bounties Not New

Many technology companies offer monetary rewards, and permission, to researchers and ethical ('white hat') hackers and security testers to penetrate their computer systems, networks or computing resources in order to find (and fix) security vulnerabilities before real hackers have the opportunity to use those vulnerabilities as a way in. Also, companies like HackerOne offer guidance on the amounts to set as bug bounties, e.g. anywhere from $150 to $1,000 for low-severity vulnerabilities, and anywhere from $2,000 to $10,000 for critical-severity vulnerabilities.

Examples of bug bounty schemes run by big tech companies include Google's ongoing Vulnerability Reward Program (VRP), which offers rewards ranging from $100 to $31,337, and Facebook's white hat program (running since 2011), which offers a minimum reward of $500 and has paid out over $1 million so far.

What Does This Mean For Your Business?

With the growing number of security threats, a greater reliance on mobile devices, and more remote working via those devices, mobile security is a very important issue for businesses. A tech company such as Apple offering bigger bug bounties to a wider pool of security researchers could be well worth it when you consider the damage that is done to companies, and to the reputation of their products and services, when a breach or a hack takes place, particularly if it involves a vulnerability common to all models of a certain device.

Apple has made the news more than once in recent times due to faults and flaws in its products, e.g. after a bug in the group-calling part of its FaceTime video-calling feature was found to allow eavesdropping on a call's recipient before the call was answered, and when it had to offer repairs/replacements for problems relating to screen touch issues on the iPhone X and data loss and storage drive failures in 13-inch MacBook Pro computers. Apple also made the news in May this year after it had to recall two different types of plug adapter because of a possible risk of electric shock.

This bug bounty announcement by Apple, therefore, is a proactive way that it can make some positive headlines and may help the company to stay ahead of the evolving risks in the mobile market, particularly at a time when the US President has focused on possible security flaws in the hardware of Apple’s big Chinese rival Huawei.

If the bug bounties lead to better security for Apple products, this can only be good news for businesses.

Using GDPR To Get Partner’s Personal Data

A University of Oxford researcher, James Pavur, has explained how (with the consent of his partner) he was able to exploit rights granted under GDPR to obtain a large amount of his partner’s personal data from a variety of companies.

Right of Access

Mr Pavur reported that he was able to send out 75 Right of Access Requests/Subject Access Requests (SARs) in order to get the first pieces of information from companies, such as his partner's full name, some email addresses and phone numbers. Mr Pavur reported using a fake email address to make the SARs.

SAR

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual asks a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, to provide a copy of that data, e.g. as a paper copy or spreadsheet. With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed, who the recipients of that processed data are, how long that data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded. Under GDPR, individuals can make a SAR for free, although companies and organisations can charge "reasonable fees" if requests are unfounded or excessive (in scope), or where additional copies of data are requested beyond the original request.

Another 75 Requests

Mr Pavur reported that he was able to use the information that he received back from the first 75 requests to send out another 75 requests.  From the second batch of requests Mr Pavur was able to obtain a large amount of personal data about his partner including her social security number, date of birth, mother’s maiden name, previous home addresses, travel and hotel logs, her high school grades, passwords, partial credit card numbers, and some details about her online dating.

The Results

In fact, Mr Pavur reported that, of the targeted firms that responded (72% of them), 24% accepted an email address (a false one) and a phone number as proof of identity and revealed his partner's personal details on the strength of these. One company even revealed the results of a historic criminal background check.

Who?

According to Mr Pavur, the prevailing pattern was that large (technology) companies responded well to the requests, small companies ignored them, and mid-sized companies showed a lack of knowledge about how to handle and verify them.

What Does This Mean For Your Business?

The ICO recognises on its website that GDPR does not specify how to make a valid request: individuals can make a SAR to a company verbally or in writing, to any part of an organisation (including via social media), and it doesn't have to be made to a specific person or contact point. Such a request also doesn't have to include the phrase 'subject access request' or refer to Article 15 of the GDPR, but it must be clear that the individual is asking for their own personal data. This means that although there may be some confusion about whether a request has actually been made, companies should at least ensure that they have identity verification and checking procedures in place before they send out personal data to anyone (see the sketch below). Sadly, in this experiment, the researcher was able to obtain a large amount of personal and sensitive data about his (very understanding) partner using a fake email address.
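
A minimal sketch of one such check, in Python (hypothetical helper names, with the email-sending step stubbed out): before releasing any data, send a one-time token to the contact details already held on file for that person, rather than to whatever address the request arrived from, and require the token back first.

    # Hypothetical SAR identity-verification step: one-time token sent to the
    # contact details on file, required back before any data is released.
    import secrets
    import time

    PENDING: dict = {}  # request_id -> (token, issued_at)
    TOKEN_TTL_SECONDS = 48 * 3600

    def start_verification(request_id: str, email_on_file: str) -> None:
        token = secrets.token_urlsafe(16)
        PENDING[request_id] = (token, time.time())
        print(f"(stub) emailing one-time token to {email_on_file}")

    def confirm_verification(request_id: str, supplied_token: str) -> bool:
        token, issued = PENDING.get(request_id, ("", 0.0))
        still_fresh = (time.time() - issued) < TOKEN_TTL_SECONDS
        return still_fresh and secrets.compare_digest(token, supplied_token)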

Businesses may benefit from looking at which members of staff regularly interact with individuals and offering specific training to help those staff members identify requests.

Also, the ICO points out that it is good practice to have a policy for recording details of the requests that businesses receive, particularly those made by telephone or in-person so that businesses can check with the requester that their request has been understood.  Businesses should also keep a log of verbal requests.

Fingerprints Replacing Passwords for Some Google Services

Google has announced that users can verify their identity by using their fingerprint or screen lock instead of a password when visiting certain Google services, starting with Pixel devices and coming to all Android 7+ devices in the next few days.

How?

Google says that years of collaboration between itself and many other organisations in the FIDO Alliance and the W3C have led to the development of the FIDO2 standards, W3C WebAuthn and FIDO CTAP, which allow fingerprint verification.

The key game-changer in how these new technologies help users is that, unlike the native fingerprint APIs on Android, FIDO2 biometric capabilities are available on the web, which means that the same credentials can be used by both native apps and web services. The result is that users only need to register their fingerprint with a service once, and the fingerprint will then work for both the native application and the web service.

Fingerprint Not Sent To Google’s Servers

Google is keen to point out that the FIDO2 design is extra-secure because it means that a user’s fingerprint is never sent to Google’s servers but is securely stored on the user’s device.  Only a cryptographic proof that a user’s finger was scanned is actually sent to Google’s servers.
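
The Python sketch below illustrates that design in simplified form (an analogy using the 'cryptography' package, not the real FIDO2/WebAuthn protocol): the device keeps a private key, the server keeps only the matching public key, and sign-in consists of the device signing a fresh server challenge.

    # Simplified challenge-response analogy for the FIDO2 design.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the key pair is generated on the device; only the
    # public key is sent to the server.
    device_private_key = ec.generate_private_key(ec.SECP256R1())
    server_public_key = device_private_key.public_key()

    # Sign-in: the server issues a fresh random challenge...
    challenge = os.urandom(32)

    # ...the device signs it once the fingerprint has locally unlocked the
    # private key (the biometric check itself never leaves the device)...
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ...and the server verifies the proof (raises InvalidSignature on failure).
    server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("verified: only a cryptographic proof reached the server")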

Try It Out

To try the new fingerprint system out, you will need a phone running Android 7.0 (Nougat) or later, with your personal Google Account added to the device and a valid screen lock set up.

Next, open the Chrome app on your Android device, go to https://passwords.google.com, choose a site to view or manage a saved password, and follow the instructions to confirm that it's you trying to sign in.

Google has provided more detailed instructions here: https://support.google.com/accounts/answer/9395014?p=screenlock-verif-blog&visit_id=637012128270413921-962899874&rd=1

More Places

Google says that this is just the start of the embracing of the FIDO2 standard and that more places will soon be able to accept local alternatives to passwords as an authentication mechanism for Google and Google Cloud services.

What Does This Mean For Your Business?

Being able to rely on fingerprint (biometric) verification or a screen lock instead of a password should mean greater convenience and security for users of Google's services, and should also reduce the risk to Google of having to deal with the consequences of breaches.

The development and wider use of the FIDO2 standard is, therefore, good news for businesses and consumers alike, particularly considering that Google (at 8% share) is one of the top 10 vendors that account for 70% of the world’s cloud infrastructure services market.

Back in May, Microsoft's Corporate Vice President and Chief Information Officer Bret Arsenault signalled (in a CNBC interview) that Microsoft was also looking to move away from passwords on their own as a means of authentication, towards biometrics and a "passwordless future". For example, 90% of Microsoft's 135,000-strong workforce can now log into the company's corporate network without using passwords, instead using biometric technology such as facial recognition and fingerprint scanning via apps such as 'Windows Hello' and the 'Authenticator' app.

Lancaster University Hit By “Sophisticated and Malicious Phishing Attack”

Lancaster University, which offers a GCHQ-accredited cyber-security course and has its own Cyber Security Research Centre, has been hit by what it has described as a "sophisticated and malicious phishing attack", resulting in the leak of the personal data of new university applicants.

12,000+ Affected?

Although the University's website states that only "a very small number of students" actually had their records and ID documents accessed as a result of the attack, other estimates published by IT news commentators online, based on statistics compiled by UCAS, suggest that over 12,000 people may have been affected.

Who?

The attack appears to have been focused on the new student applicant data records for 2019 and 2020.

What?

According to the university, the new applicant information which may have been accessed includes names, addresses, telephone numbers, and email addresses.

There have also been reports that, following the attack, fraudulent invoices have been sent to some undergraduate applicants.

Why?

Although very little information has been divulged about the exact nature of the attack, universities are known to be particularly attractive targets for phishing emails i.e. emails designed to trick the recipient into clicking on malicious links or transferring funds.  This is because educational institutions tend to have large numbers of users spread across many different departments, different facilities and faculties, and data is moved between these, thereby making admin and IT security very complicated.  Also, universities have a lot of valuable intellectual property as well as student and staff personal data within their systems which are tempting targets for hackers.

When?

Lancaster University says that it became aware of the breach on Friday 19th July, whereupon it established an incident team to handle the situation and immediately reported the incident to the Information Commissioner’s Office (ICO).

A criminal investigation led by the National Crime Agency's (NCA) National Cyber Crime Unit (NCCU) is now believed to be under way, and the university has been focusing efforts on safeguarding its IT systems and identifying and advising any students and applicants who have been affected.

US Universities & Colleges Hit Days Before

Just days before the attack on Lancaster University came to light, the U.S. Department of Education reported that a vulnerability in the Ellucian Banner System authentication software had led to 62 colleges or universities being affected.

What Does This Mean For Your Business?

For reasons already mentioned (see the 'Why?' section), schools, colleges and universities are prime targets for hackers, which is why many IT and security commentators think that the higher education sector should take cyber-security risks very seriously and make sure that training and software are put in place to enable a more proactive approach to attack prevention. Users, both students and staff, need to be educated about threats, how to spot suspicious communications by email or social media, and what to do with them. Students, for example, need to be aware that during the summer months, when they are more stressed and awaiting news of applications, they may be more vulnerable to phishing attacks, and that they should only contact universities through a trusted, previously tried method rather than relying upon the contact information and links given in emails.

For Lancaster University, which has its own Cyber Security Research Centre and offers a GCHQ-accredited cyber-security course, this attack, which has generated some bad publicity and may adversely affect some victims, is likely to be very embarrassing and may even deter some future applicants.

Lancaster University has advised applicants, students and staff to make contact (via email or phone) if they receive any suspicious communications.

£80,000 Fine For London Estate Agency Highlights Importance of Due Diligence in Data Protection

The issuing of an £80,000 fine by the Information Commissioner’s Office (ICO) to London-based estate agency Parliament View Ltd (LPVL) highlights the importance of due diligence when keeping customer data safe.

What Happened?

Prior to the introduction of GDPR, between March 2015 and February 2017, LPVL left its customer data exposed online after transferring the data via FTP from its server to a partner organisation which also offered a property letting transaction service. LPVL was using Microsoft's Internet Information Services (IIS) but didn't switch off the Anonymous Authentication function, thereby giving anyone access to the server and the data without being prompted for a username or password.
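
A minimal self-check for this kind of misconfiguration, sketched in Python (run it only against servers you are authorised to test; 'ftp.example.com' is a placeholder): confirm that your own FTP endpoint refuses anonymous logins.

    # Check whether an FTP server accepts anonymous logins.
    from ftplib import FTP, error_perm

    def allows_anonymous_login(host: str) -> bool:
        try:
            with FTP(host, timeout=10) as ftp:
                ftp.login()   # no arguments = anonymous login attempt
                return True
        except (error_perm, OSError):
            return False      # login refused, or host unreachable

    if allows_anonymous_login("ftp.example.com"):
        print("WARNING: anonymous FTP access is enabled")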

The data that was publicly exposed included some very sensitive items that could be of value to hackers and other criminals: addresses of both tenants and landlords, bank statements and salary details, utility bills, dates of birth, driving licences (of tenants and landlords), and even copies of passports. The ICO reported that the data of 18,610 individual users had been put at risk.

Hacker’s Ransom Request

The ICO's tough penalty took into account the fact that not only was LPVL judged not to have taken the appropriate technical and organisational measures to prevent unlawful processing of the personal data, but also that the estate agency only alerted the ICO to the breach after it had been contacted in October by a hacker who claimed to possess LPVL's customer data and demanded a ransom.

The ICO judged that LPVL’s contraventions of the Data Protection Act were wide-ranging and likely to cause substantial damage and substantial distress to those whose personal data was taken, hence the huge fine.

Marriott International Also Fined

The Marriott International hotel chain has also just been issued with a massive £99.2m fine by the ICO for infringements of GDPR, also related to matters of due diligence. Marriott International's fine related to an incident that affected the Starwood hotels group (which Marriott was in the process of buying) from 2014 to 2018. In this case, the ICO found that the hotel chain didn't do enough to secure its systems and undertake due diligence when it bought Starwood. The ICO found that the systems of the Starwood hotels group were compromised in 2014, but the exposure of customer information was not discovered until 2018, by which time data contained in approximately 339 million guest records globally had been exposed (7 million relating to UK residents).

What Does This Mean For Your Business?

We're now seeing the culmination of ICO investigations into incidents involving some large organisations, and the issuing of some large fines, e.g. to British Airways and Marriott International, as well as to some lesser-known, smaller organisations such as LPVL. These serve to remind all businesses of their responsibilities under GDPR.

Personal data is an asset that has real value, and organisations therefore have a clear legal duty to ensure its security. Part of ensuring this is carrying out proper due diligence when, for example, making corporate acquisitions (as with Marriott) or transferring data to partners (as with LPVL), and in all other situations. Systems should be monitored to ensure that they haven't been compromised and that adequate security is maintained. Staff dealing with data should also be adequately trained to ensure that they act lawfully and make good decisions in data matters.

MPs Call To Stop Police Facial Recognition

Following criticism of the police use of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee has called for a temporary halt in the use of facial recognition systems.

Database Concerns

Some of the key concerns of the committee were that the police database of custody images is not being correctly edited to remove pictures of unconvicted individuals, and that innocent people's pictures may be illegally included in facial recognition "watch lists" that are used by police to stop and even arrest suspects.

While the committee accepts that this may be partly due to a lack of resources to manually edit the database, the MPs' committee has also expressed concern that the images of unconvicted individuals are not being removed after six years, as is required by law.

Figures indicate that, as of February last year, there were 12.5 million images available to facial recognition searches.

Accuracy

Accuracy of facial recognition has long been a concern. For example, in December last year, ICO head Elizabeth Denham launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy. For example, the trial of 'real-time' facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 yet resulting in only one arrest, of a local man, and one that was unconnected to the event.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the Police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

Bias

In addition to gender bias issues, the committee also expressed concern about a government advisory group's warning (in February) that facial recognition systems could produce inaccurate results if they had not been trained on a sufficiently diverse range of data, including faces from different ethnic groups, e.g. Black, Asian, and other ethnic minorities. The concern was that if faces from different ethnic groups are under-represented in live facial recognition training datasets, this could lead to errors. For example, human operators/police officers, who are supposed to double-check any matches made by the system by other means before acting, could simply defer to the algorithm's decision without doing so.

Privacy

Privacy groups such as Liberty (which is awaiting a ruling on its challenge to South Wales Police's use of the technology) and Big Brother Watch have been vocal and active in highlighting the possible threats posed to privacy by the police use of facial recognition technology. Even Tony Porter, the Surveillance Camera Commissioner, has criticised trials by London's Metropolitan Police over privacy and freedom issues.

Moratorium

The committee of MPs has therefore called for the government to temporarily halt the use of facial recognition technology by police pending the introduction of a proper legal framework, guidance on trial protocols and the establishment of an oversight and evaluation system.

What Does This Mean For Your Business?

Businesses use CCTV for monitoring and security purposes, and most businesses are aware of the privacy and legal compliance (GDPR) aspects of using such systems and of how and where the images are managed and stored.

As a society, we are also used to being under surveillance by CCTV systems, which can have real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials. The Home Office has noted that there is general public support for live facial recognition to (for example) identify potential terrorists and people wanted for serious violent crimes. These, however, are not the reasons why the MPs' committee has expressed its concerns, or why ICO head Elizabeth Denham launched a formal investigation into how police forces use FRT.

It is likely that while businesses would support the crime-fighting, anti-terror and crime-prevention aspects of FRT used by the police, they would also need to feel assured that the correct legal framework and evaluation system are in place to protect the rights of all and to ensure that the system is accurate and cost-effective.