Data Security

Employee Subject Access Requests Increasing Costs For Their Companies

Research by law firm Squire Patton Boggs has revealed (one year on from the introduction of GDPR) that companies are facing cost pressures from a large number of subject access requests (SARs) coming from their own employees.

SARs

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual can ask a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, can ask for a copy of that data e.g. a paper copy or spreadsheet.  With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed and who the recipients of that processed data are, how long that data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded.

Under the old 1998 Data Protection Act, companies and organisations could charge £10 for each SAR, but under GDPR individuals can make requests for free, although companies and organisations can charge “reasonable fees” if requests are unfounded or excessive (in scope), or where copies of data are requested in addition to the original request.

Big Rise In SARs From Own Employees = Rise In Costs

The Squire Patton Boggs research shows that 71% of organisations have seen an increase in the number of their own employees making official requests for personal information held, and 67% of those organisations have reported an increase in their level of expenditure in trying to fulfil those requests.

The reason for the increased cost of handling SARs can be illustrated by the 20% of companies surveyed who said they had to adopt new software to cope with the requests, the 27% of companies who said they had hired staff specifically to deal with the higher volume of SARs, and the 83% of organisations that have been forced to implement new guidelines and procedures to help manage the situation.

Why More Requests From Employees?

It is thought that much of the rise in the volume of SARs from employees may be connected to situations where there are workplace disputes and grievances, and where employees involved feel that they need to use the mechanisms and regulations in place to help themselves or hurt the company.

What Does This Mean For Your Business?

This story is another reminder of how the changes made to data protection in the UK with the introduction of GDPR, the shift in responsibility towards companies, and the widespread knowledge about GDPR can impact the costs and workload of a company handling SARs.  It is also a reminder that companies need to have a system and clear policies and procedures in place that enable them to respond quickly and in a compliant way to such requests, whoever they are from.

The research has highlighted an interesting and perhaps unexpected reason for the rise in the volume of SARs, and suggests that there may now be a need for more guidance from the ICO about employee SARs.

US Visa Applicants Now Asked For Social Media Details and More

New rules from the US State Department will mean that US visa applicants will have to submit social media names and five years’ worth of email addresses and phone numbers.

Extended To All

Under the new rules, first proposed by the Trump administration back in February 2017, all applicants travelling to the US to work or to study will now be required to give those details to the immigration authorities; previously, the only visa applicants who needed such vetting were those from parts of the world known to be controlled by terrorist groups. The only exemptions will be for some diplomatic and official visa applicants.

Delivering on Election Immigration Message

The new stringent rules follow on from the proposed crackdown on immigration that was an important part of now US President Donald Trump’s message during the 2016 election campaign.

Back in July 2016, the Federal Register of the U.S. government published a proposed change to travel and entry forms which indicated that the studying of social media accounts of those travelling to the U.S. would be added to the vetting process for entry to the country. It was suggested that the proposed change would apply to the I-94 travel form, and to the Electronic System for Travel Authorisation (ESTA) visa. The reason given at the time was that the “social identifiers” would be: “used for vetting purposes, as well as applicant contact information. Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional toolset which analysts and investigators may use to better analyse and investigate the case.”

There had already been reports that some U.S. border officials had actually been asking travellers to voluntarily surrender social media information since December 2016.

2017

In February 2017, the Trump administration indicated that it was about to introduce an immigration policy that would require foreign travellers to the U.S. to divulge their social media profiles, contacts and browsing history and that visitors could be denied entry if they refused to comply. At that time, the administration had already barred citizens of seven Muslim-majority countries from entering the US.

Criticism

Critics of the idea that social media details should be obtained from entrants to the US include the civil rights group the American Civil Liberties Union, which pointed out that there is no evidence it would be effective and that it could lead to self-censorship online.  Also, back in 2017, Jim Killock, executive director of the Open Rights Group, was quoted in online media as describing the proposal as “excessive and insulting”.

What Does This Mean For Your Business?

Although they may sound a little extreme, these rules have now become a reality and need to be considered by those needing a US visa.  Given the opposition to President Trump and some of his thoughts and policies, and the resulting large volume of Trump-related content that is shared and reacted to by many people, these new rules could be a real source of concern for those needing to work or to study in the US.  It is really unknown what content and what social media activity could cause problems at immigration for travellers, and what the full consequences could be.

People may also be very uncomfortable being asked to give such personal and private details as social media names and a massive five years’ worth of email addresses and phone numbers, and about how those personal details will be stored and safeguarded (and for how long), and by whom they will be scrutinised and even shared.  The measure may, along with other reported policies and announcements from the Trump administration, even discourage some people from travelling to, let alone working or studying in, the US at this time. This could have a knock-on negative effect on the economy of the US, and for those companies wanting to get into the US marketplace with products or services.

GCHQ Eavesdropping Proposal Soundly Rejected

A group of 47 technology companies, rights groups and security policy experts have released an open letter stating their objections to the idea of eavesdropping on encrypted messages on behalf of GCHQ.

“Ghost” User

The objections are being made to the (as yet) hypothetical idea, floated by the UK National Cyber Security Centre’s technical director Ian Levy and GCHQ’s chief codebreaker Crispin Robinson, of allowing a “ghost” user / third party (i.e. a person at GCHQ) to see the text of an encrypted conversation (call, chat, or group chat) without notifying the participants.

According to Levy and Robinson, they would only seek exceptional access to data where there was a legitimate need, where that kind of access was the least intrusive way of proceeding, and where there was also appropriate legal authorisation.

Challenge

The challenge for government security agencies in recent times has been society’s move away from conventional telecommunications channels, which could lawfully and relatively easily be ‘tapped’, to digital and encrypted communications channels e.g. WhatsApp, which are essentially invisible to government eyes.  For example, back in September last year, this led to the ‘Five Eyes’ governments threatening legislative or other measures to be allowed access to end-to-end encrypted apps such as WhatsApp.  In the UK back in 2017, then Home Secretary Amber Rudd had also been pushing for ‘back doors’ to be built into encrypted services and had attracted criticism from tech companies that, as well as compromising privacy, this would open secure encrypted services to the threat of hacks.

Investigatory Powers Act

The Investigatory Powers Act, which became law in November 2016 in the UK, included the option of ‘hacking’ warrants by the government, but the full force of the powers of the law was curtailed somewhat by legal challenges.  For example, back in December 2018, human rights group Liberty won the right to a judicial review into part 4 of the Investigatory Powers Act.  This is the part that was supposed to give many government agencies powers to collect electronic communications and records of internet use, in bulk, without reason for suspicion.

The Open Letter

The open letter to GCHQ in Cheltenham and Adrian Fulford, the UK’s investigatory powers commissioner, was signed by tech companies including Google, Apple, WhatsApp and Microsoft, 23 civil society organisations, including Big Brother Watch and Human Rights Watch, and 17 security and policy experts.  The letter called for the abandonment of the “ghost” proposal on the grounds that it could threaten cyber security and fundamental human rights, including privacy and free expression.  The coalition of signatories also urged GCHQ to avoid alternative approaches that would also threaten digital security and human rights, and said that most Web users “rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are and only those people”. As such, the letter pointed out that the trust relationship and the authentication process would be undermined by the knowledge that a government “ghost” could be allowed to sit in and scrutinise what may be perfectly innocent conversations.

What Does This Mean For Your Business?

With digital communications in the hands of private companies, and often encrypted, governments realise that (legal) surveillance has been made increasingly difficult for them.  This has resulted in legislation (The Investigatory Powers Act) with built-in elements to force tech companies to co-operate in allowing government access to private conversations and user data. This has, however, been met with frustration in the form of legal challenges, and other attempts by the UK government to stop end-to-end encryption have, so far, also been met with resistance, criticism, and counter-arguments by tech companies and rights groups. This latest “ghost” proposal represents the government’s next step in an ongoing dialogue around the same issue. The tech companies would clearly like to avoid more legislation and other measures (which look increasingly likely) that would undermine the trust between them and their customers, which is why the signatories have stated that they would welcome a continuing dialogue on the issues.  The government is clearly going to persist in its efforts to gain some kind of surveillance access to tech company communications services, albeit for national security (counter-terrorism) reasons for the most part, but is also keen to be seen to do so in a way that is not overtly like ‘big brother’, and in a way that allows them to navigate successfully through the existing rights legislation.

SurveyMonkey Goes to Ireland

California-based online survey software company SurveyMonkey has opened a datacentre in Dublin with a view to attracting enterprise customers in the EMEA region.

SurveyMonkey

SurveyMonkey, which was established in Portland by Ryan and Chris Finley, has more than 750 employees globally and is estimated to have more than 600,000 paying users across more than 300,000 organisational domains.  The SurveyMonkey platform, a cloud-based online survey tool offered free or as SaaS, is used in 190 countries and territories.

The company now has offices in San Mateo, Portland, Seattle, Dublin, Ottawa, and Sydney.  The Irish office was opened in 2014 and currently has around 50 employees.  SurveyMonkey went public in 2018.

Why A Datacentre In Dublin?

There are several good reasons for the move to Dublin coupled with a focus on wooing EMEA enterprise customers, such as:

  • 16% of SurveyMonkey’s revenue during the first quarter of 2019 came from sales to the enterprise sector.
  • More than one-third of SurveyMonkey’s business revenue comes from outside the US, with the majority in Europe.
  • There is a huge opportunity for growth that’s offered by companies where SurveyMonkey has been adopted (as the free version) through back-door ‘shadow IT’, and where those enterprises can be encouraged to legitimately adopt the use of the software as company-wide deployments by being reassured that the data they collect is stored in a European data centre (Dublin). This has been termed a ‘land and expand’ strategy.
  • Dublin is ranked as one of the best places to work in Ireland and offers many benefits to tech companies and start-ups.

Phased Approach

SurveyMonkey’s strategy, of which the Dublin datacentre is a part, is a phased one with the first phase being to acquire new customers, and phase two focusing on migrating customers who already have a lot of data stored in their SurveyMonkey accounts.

In addition to expanding across Europe, SurveyMonkey will also be looking at making customers aware of the other services that it offers.

What Does This Mean For Your Business?

SurveyMonkey knows that the Europe / EMEA region already delivers plenty of revenue and that there’s a great opportunity to expand further. Placing a datacentre in Europe may be very attractive to (and reduce risk for) enterprise customers who must be very careful about where their data is stored (see GDPR) and who always want to reduce complexity around data storage.

This story also shows how the ‘shadow IT’ use of software has provided a way in and can be part of a successful strategy for growth and expansion.

The World Of Ethical Hackers And Bug Bounties

The fact that big tech companies are willing to pay big bucks in ‘bug bounties’ is one of the main reasons why becoming an ethical hacker / ethical security tester is increasingly attractive to many people with a variety of technical skills.

What Is An Ethical Hacker?

An ethical hacker / white hat hacker / ethical security tester is someone who is employed by an organisation and given permission by that organisation to penetrate its computer system, network or computing resource in order to find (and fix) security vulnerabilities before real hackers have the opportunity to use those vulnerabilities as a way in.

Certified

In the US, for example, a person can obtain a Certified Ethical Hacker (CEH) qualification by using the same knowledge and tools as a malicious hacker, but in a lawful and legitimate manner to assess the security posture of a system.  CEH exams test a candidate’s skills in applying techniques and using penetration (‘pen’) testing tools to compromise various simulated systems within a virtual environment.

Who?

Ethical hackers can find work, for example, with organisations that run bug bounty programmes on behalf of companies e.g. HackerOne, Bugcrowd, Synack, or they can choose to work freelance.

What Are Bug Bounties?

Bug bounties are monetary rewards offered to those who have identified errors or vulnerabilities in a computer program or system. Companies like HackerOne, for example, offer guidance as to the amounts to set as bug bounties e.g. anywhere from $150 to $1,000 for low severity vulnerabilities, and anywhere from $2,000 to $10,000 for critical severity vulnerabilities.

Examples of bug bounties include:

  • The ‘Hack The Pentagon’ three-year initiative run by HackerOne which has so far (since 2016) paid $75,000 to those who have found software vulnerabilities in the Defence Department’s public facing websites.
  • Google’s ongoing VRP (Vulnerability Reward Program) which offers varying rewards ranging from $100 to $31,337 depending on the type of vulnerabilities found.
  • Facebook’s Whitehat program, running since 2011, and offering a minimum reward of $500 with over $1 million paid out so far. The largest single reward is reported to be $20,000.

Motivation

Money is often not the only motivation for those involved in ethical hacking.  Many are interested in the challenge of solving the problems, getting into the industry, and getting recognition from their peers.

Training

The UK has a tech skills shortage, but some schemes do exist to help the next generation of cyber-security experts gain their knowledge and skills.  One example is the UK’s Cyber Discovery scheme which had more than 25,000 school children take part in its first year.  The scheme turns finding security loopholes into engaging games while getting children familiar with the tools that many cyber-pros use.  Top performers can then attend residential courses to help them hone their skills further.

What Does This Mean For Your Business?

Ethical hackers play an important penetration testing role in ensuring that systems and networks are as secure as possible against the known methods employed by real hackers. It is not uncommon, particularly for large companies that are popular hacking targets, to offer ongoing bug bounty programs as a way to keep testing for vulnerabilities, and the rewards paid to ethical hackers are well worth it when you consider the damage done to companies and their customers when a breach takes place.

Running government programs such as Cyber Discovery could, therefore, be an important way to encourage, spot, and help develop a home-grown army of cyber-security professionals which is a win/win for companies wanting to improve their security, individuals looking for careers in the cyber-security and tech industries, and filling a skills gap in the UK.

Survey Shows Half Of UK Firms Have No Cyber Resilience Plan

A survey commissioned by email security firm Mimecast and conducted by Vanson Bourne has revealed that even after GDPR’s introduction, more than half of UK firms have no Cyber Resilience Plan.

What Is A Cyber Resilience Plan?

An organisation’s cyber resilience is its ability to prepare for, respond to and recover from cyber-attacks, and a Cyber Resilience Plan details how an organisation intends to do this.  Most organisations now accept that the evolving nature of cyber-crime means that it’s no longer a case of ‘if’ but ‘when’ they will suffer a cyber-attack.  It is with this perspective in mind that a strategy should be developed to minimise the impact of any cyber-attack (financial, brand and reputational), meet legal and regulatory requirements (NIS and GDPR), improve the organisation’s culture and processes, protect customers and stakeholders, and enable the organisation to survive beyond an attack and its fallout.

More Than Half Without

Mimecast’s survey shows that even though 51% of IT decision-makers polled in the UK say they believe it is likely or inevitable they’ll suffer a negative business impact from an email-borne cyber-attack in the next 12 months, 52% still don’t have a cyber resilience plan in place.

Email Focus

Email is a critical part of the infrastructure of most organisations and yet it is the most common point of attack. It is with this in mind that the Mimecast survey has focused on the challenges that managing the security aspects of email present in terms of cyber resilience and in achieving compliance with GDPR.

E-Mail Archiving

One potential weakness that the survey revealed is that only 37% of UK IT decision-makers said that email archiving and e-discovery are included in their organisation’s cyber resilience strategy.  When you consider that email contains a great deal of personal and sensitive company data, its protection should really be at the core of any cyber resilience strategy.

Also, for example, in relation to GDPR, not having powerful archiving systems to enable emails to be found and deleted quickly upon a user’s request could pose a compliance challenge.

Human Error

Human error in terms of not being able to spot or know how to deal with suspicious emails is a common weakness that is exploited by cyber-criminals.

What Does This Mean For Your Business?

If the results of this survey reflect a true picture of what’s happening in many businesses, then it indicates that cyber resilience urgently needs to be given greater priority, particularly since it is now a case of ‘when’ rather than ‘if’ a cyber attack will occur.  Also, the risks of not addressing the situation could be huge in terms of risks to customers and stakeholders and the survival of the business itself, particularly with the huge potential fines with GDPR for breaches.

E-mail, and particularly email archiving (what’s stored, where and how well and quickly it can be searched) poses a serious challenge. Businesses should reassess whether their email archiving strategy is effective and safe enough and security should go beyond archive encryption to guard against impersonation attacks and malicious links.

Bearing in mind the role that human error so regularly plays in enabling attacks via email, education and training in this area alongside having clearly communicated company policy and best practice in managing email safely should form an important part of a company’s cyber resilience.

Trust Challenge For Online Sharing Services

The Global Trust Survey from service provider Jumio has revealed that a quarter of adults feel unsafe using online sharing services.

What Are Online Sharing Services?

Online sharing services refers to companies like Uber and Airbnb where multiple users can use technology to book and consume a shared offering (car and room sharing), and where those offering the service can increase the utilisation of an asset – both parties get value from the exchange. The so-called “sharing economy” also includes services such as crowdfunding, personal services, and video and audio streaming.

The Sharing Economy

The sharing economy is expected to grow to a massive $335 billion by 2020. For example, in just 11 years, Airbnb has grown from nothing to becoming a $30bn firm listing more than six million rooms, flats and houses in more than 81,000 cities across the globe. Figures show that, on average, two million people use an Airbnb property each night.

Trust Challenge Revealed

Jumio’s Global Trust Survey showed that even though online sharing services are growing, and have been with us for some time now, in the 30 days prior to the survey taking place, over 80% of UK adults said that they hadn’t used an online sharing service, and 25% of UK adults said that they felt “somewhat unsafe” or “not at all safe” when using online sharing services.

A key element in making shared services successful is trust, and recent global research from PwC confirmed this: 89% of consumers agreed that the sharing economy marketplace is based on trust between providers and users.

Identity Verification Vital

One area uncovered by the Global Trust and Safety Survey which appears to be a challenge for shared services is proving and verifying identity.  For example, the survey found that 60% of users believe it is either ‘somewhat important’ or ‘very important’ for new users to undergo an identity check to prove that they are who they claim to be.

This is the reason why companies such as Lyft are rolling out continuous background checks and enhanced identity verification, and why Uber is updating its app to give an alert to riders to check the license plate, make, and model of the vehicle, and to confirm the name and picture of the driver.

What Does This Mean For Your Business?

Trust is something that takes a long time for a business to build, and it is a vital element in the success of shared services such as those where considerable risk (financial and, critically, personal risk) is involved. Trust is also something that can be very easily lost, sometimes in an instant or through one high profile incident involving that service e.g. the recent murder in the US of a student by a man posing as an Uber driver.

The results of the Global Trust Survey help to remind businesses that offer shared services that consumers need and want a layer of safety to help them feel comfortable in trying and using those services.  Companies can, therefore, help create an ecosystem of trust through the process of identity verification.

Serious Security Flaws Discovered In Popular GPS Tracker

Researchers at UK cyber-security company, Fidus Information Security, say that they have found security flaws in a popular Chinese-manufactured white-label location tracker that could be serious enough to warrant a recall.

Which Tracker?

The GPS tracker, which is used as a panic alarm for elderly patients, to monitor children, and to track vehicles, is white-label manufactured but rebranded and sold by several different companies, which reportedly include Pebbell (by HoIP Telecom), OwnFone Footprint and SureSafeGo. The tracker uses a SIM card to connect to the 2G/GPRS network.  According to Fidus, at least 10,000 of these trackers are currently in use in the UK.

What’s The Problem?

According to the researchers, simply sending the device a text message with a keyword can trick the tracker into revealing its real-time location. Also, other commands tried by the researchers can allow anyone to call the device and remotely listen in to its in-built microphone without the user knowing, and even remotely stop the signal from the tracker, thereby making the device effectively useless.  On its blog, Fidus lists several other things that its researchers were able to do to the device including change or completely remove all emergency contacts, disable the motion alarm, disable fall detection and remove any device PIN which had been set.

All these scenarios could pose significant risks to the (mainly vulnerable) users of the trackers.

According to Fidus, one of the main reasons why the device has so many security flaws is that it doesn’t appear that the manufacturers, nor the companies reselling the devices, have conducted any security testing or penetration testing of the device.

PIN Problem

The research by Fidus also uncovered the fact that even though the manufacturers built in PIN functionality to help lock the devices down, the PIN is disabled by default and users need to read the manual to find out about it.  When enabled, the PIN is required as a prefix to any command for it to be accepted by the device, except for the REBOOT or RESET functionality.  The problem with this is that the RESET functionality is the thing that really could provide a malicious user with the ability to gain remote control of the device.  This is because it is the RESET command that wipes all stored contacts and emergency contacts, restores the device to factory defaults, and means that a PIN is no longer needed.
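The danger of exempting RESET from the PIN check can be sketched as a short simulation. This is a hypothetical reconstruction of the flawed logic described above, not the device’s real SMS protocol: the command names, message format, and device state used here are all illustrative.

```python
# Hypothetical sketch of the flawed command handling described by Fidus.
# Command names and message format are illustrative, not the real protocol.

def handle_sms(message: str, device: dict) -> str:
    """Process an incoming SMS command against a simulated tracker state."""
    parts = message.strip().split(maxsplit=1)
    command = parts[-1].upper() if parts else ""

    # The flaw: REBOOT and RESET bypass the PIN check entirely.
    if command == "RESET":
        device["contacts"] = []   # emergency contacts wiped
        device["pin"] = None      # factory defaults restored, PIN removed
        return "device reset"
    if command == "REBOOT":
        return "rebooting"

    # All other commands require the PIN as a prefix, if one has been set.
    if device["pin"] is not None:
        if len(parts) < 2 or parts[0] != device["pin"]:
            return "rejected: bad or missing PIN"
        command = parts[1].upper()

    if command == "LOCATE":
        return f"location: {device['location']}"
    return "unknown command"
```

Against a device with a PIN set, an attacker who does not know the PIN is rejected for ordinary commands, but can simply send RESET first, after which every command is accepted unauthenticated.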

What Does This Mean For Your Business?

What is particularly disturbing about this story is that the tracking devices are used for some of the most vulnerable members of society.  Even though they have been marketed as a way to make a person safer, the cruel irony is that it appears that if they are taken over by a malicious attacker, they could put a person at greater risk.

This story also illustrates the importance of security penetration testing in discovering and plugging security loopholes in devices before making them widely available.  This is another example of an IoT/smart device that has security loopholes related to default settings, and with an ever-growing number of IoT devices out there, many of them perhaps not tested as well as they could be, many buyers are unknowingly at risk from hackers.

Old Routers Are Targets For Hackers

Internet security experts are warning that old routers are targets for cyber-criminals who find them an easy hacking option.

How Big Is The Threat?

Trend Micro has reported that back in 2016 there were five families of threats for routers, but this grew to 35 families of threats in 2018. Research by the American Consumer Institute in 2018 revealed that 83 per cent of home and office routers have vulnerabilities that could be exploited by attackers.  These include the more popular brands such as Linksys, NETGEAR and D-Link.

Why Are Old Routers Vulnerable?

Older routers are open to attacks that are designed to exploit simple vulnerabilities for several reasons including:

  • Routers are often forgotten about after their initial setup; consequently, 60 per cent of users have never updated their router’s firmware.
  • Routers are essentially small microcomputers.  This means that anything that can infect a computer can potentially infect a router too.
  • Many home users leave the default passwords for the Wi-fi network, the admin account associated with it, and the router.
  • Even when vulnerabilities are exposed, it can take ISPs months to be able to update the firmware for their customers’ routers.
  • Today’s routers are designed to be easy and fast to work straight out of the box, and the setup doesn’t force customers to set their own passwords – security is sacrificed for convenience.
  • There are online databases where cyber-criminals can instantly access a list of known vulnerabilities by entering the name of a router manufacturer. This means that many cyber-criminals know or can easily find out what the specific holes are in legacy firmware.
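The default-password weakness in the list above lends itself to a simple self-audit. The sketch below is illustrative only: the credential table contains placeholder entries, not a verified default-credential list for any real router brand, and the 12-character threshold is an arbitrary example policy.

```python
# Illustrative sketch: why factory-default router credentials are low-hanging fruit.
# The sample entries below are placeholders, not a verified list for any brand.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", ""),          # some routers ship with a blank admin password
    ("user", "user"),
}

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the credential pair appears in the default-credential table."""
    return (username.lower(), password.lower()) in KNOWN_DEFAULTS

def audit_router(username: str, password: str) -> str:
    """Simple self-audit a user (or admin tool) could run on their own settings."""
    if is_factory_default(username, password):
        return "WEAK: still using a factory default - change it now"
    if len(password) < 12:
        return "WARN: password set, but shorter than 12 characters"
    return "OK: non-default credentials"
```

This is essentially what the online databases mentioned above let attackers do in reverse: given a router model, look up the factory credentials and try them first.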

What If Your Router Is Compromised?

One big problem is that users have little real knowledge about their routers and pay little attention to them apart from when their connection goes down.  It is often the case, therefore, that users tend not to know that their router has been compromised, as there are no clear outward signs.

Hacking a router is commonly used to carry out other criminal and malicious activity such as Distributed Denial of Service attacks (DDoS) as part of a botnet, credential stuffing, mining bitcoin and accessing other IoT devices that link to that router.

Examples

Examples of high-profile router-based attacks include:

  • The Mirai attack that used unsecured routers to spread the Mirai malware that turned networked devices into remotely controlled “bots” that could be used as part of a botnet in large-scale network attacks.
  • The VPNFilter malware (thought to have been sponsored by the Russian state and carried out by the Fancy Bear hacking group) that infected an estimated half a million routers worldwide.
  • An exploit in Brazil that spread across D-Link routers, affecting 100,000 devices, aimed at customers of Banco do Brasil.

Also, back in 2017, Virgin Media advised its 800,000 customers to change their passwords to reduce the risk of hacking after finding that many customers were still using risky default network and router passwords.

Concerns were also expressed by some security commentators about TalkTalk’s Super Router regarding the WPS feature in the router always being switched on, even if the WPS pairing button was not used, thereby meaning that attackers within range could have potentially hacked into the router and stolen the router’s Wi-Fi password.

What Does This Mean For Your Business?

If you have an old router with old firmware, you could have a weak link in your cyber-security.  If that old router links to IoT devices, these could also be at risk because of the router.

Manufacturers could help reduce the risk to business and home router users by taking steps such as disabling internet access until the user has completed a set-up process on the device, which could include changing the default password to a unique one.
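As a rough illustration of what such a “secure by default” first-boot flow might look like, the following Python sketch keeps internet (WAN) access disabled until the factory-default admin password has been replaced. All class and function names here are hypothetical, not a real router firmware API.

```python
# Hypothetical "secure by default" first-boot flow: the router refuses to
# enable internet (WAN) access until the factory-default admin password
# has been replaced with something unique.

FACTORY_DEFAULTS = {"admin", "password", "changeme"}  # illustrative defaults

class RouterSetup:
    def __init__(self):
        self.admin_password = "admin"   # shipped factory default
        self.wan_enabled = False

    def set_admin_password(self, new_password: str) -> bool:
        # Reject known defaults and very short passwords.
        if new_password.lower() in FACTORY_DEFAULTS or len(new_password) < 12:
            return False
        self.admin_password = new_password
        return True

    def enable_wan(self) -> bool:
        # Internet access stays off while a default password is in place.
        if self.admin_password.lower() in FACTORY_DEFAULTS:
            return False
        self.wan_enabled = True
        return True
```

The point of the design is simply that the insecure state cannot reach the internet: `enable_wan` fails until the default credential is gone.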

Also, vendors and ISPs could help by having an active upgrade policy for out-of-date, vulnerable firmware, and by making sure that patches and upgrades are sent out quickly.

ISPs could do more to educate and to provide guidance on firmware updates e.g. with email bulletins.  Some tech commentators have also suggested using a tiered system where advanced users who want more control of their set-up can have the option, but everyone else gets updates rolled out automatically.
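The tiered approach suggested above could be modelled along these lines. This is a minimal Python sketch in which the tier names (“standard”/“advanced”) and the version tuples are assumptions for illustration only:

```python
# Minimal sketch of a tiered firmware rollout policy: standard users get
# patches applied automatically, advanced users are only notified and
# choose when to update. Tier names and versions are illustrative.

def plan_update(user_tier: str, installed: tuple, latest: tuple) -> str:
    """Decide what an ISP's update service does for one router."""
    if installed >= latest:
        return "up-to-date"
    if user_tier == "advanced":
        return "notify"        # e.g. an e-mail bulletin; user updates manually
    return "auto-install"      # pushed out automatically for everyone else
```

The design choice is that the safe behaviour (automatic installation) is the default, and opting out requires an explicit choice by an advanced user.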

Could Biometric Regulations Be On The Way Soon?

A written parliamentary question from MP Luciana Berger about the possibility of bringing forward legislation to regulate the use of facial recognition technology has led the Home Office to hint that the legislation (and more) may be on the way soon.

Questions and Answers

The question by the MP about bringing forward ‘biometrics legislation’ related to how facial recognition was being used for immigration purposes at airports. Last month, MP David Davis also asked about possible safeguards to protect the security and privacy of citizens’ data that is held as part of the Home Office’s biometrics programme.

Caroline Nokes has said on behalf of the Home Office, in response to these and other questions about biometrics, that options to simplify and extend governance and oversight of biometrics across the Home Office sector are being looked at, including where law enforcement, border and immigration control use of biometrics is concerned.  Caroline Nokes is also reported to have said that other measures would also be looked at with a view to improving the governance and use of biometrics in advance of “possible legislation”.

Controversial

There have been several controversial incidents where the police have used, or held trials of, facial recognition at events and in public places, for example:

  • In February this year, a deliberately overt trial of live facial recognition technology by the Metropolitan Police in the centre of Romford led to an incident in which a man who was seen pulling his jumper over part of his face and putting his head down while walking past the police cameras was fined after being challenged by police.  The 8-hour trial resulted in only three arrests made as a direct result of facial recognition technology.

  • In December 2018, ICO head Elizabeth Denham was reported to have launched a formal investigation into how police forces use facial recognition technology, following high failure rates, misidentifications, and concerns about legality, bias, and privacy.

  • A trial of facial recognition at the Champions League final at the Millennium Stadium in Cardiff back in 2017 yielded only one arrest, and that was of a local man for something unconnected to the Champions League, prompting criticism that the trial was a waste of money.

Biometrics – Approved By The FIDO Alliance

One area where biometrics has received the seal of approval from the FIDO Alliance is its use in facial recognition and fingerprint scanning as part of the login for millions of Windows 10 devices from next month. The FIDO Alliance is an open industry association whose mission is to develop and promote authentication standards that help reduce the world’s over-reliance on passwords.

In a recent interview with CNBC, Microsoft’s Corporate Vice President and Chief Information Officer, Bret Arsenault, signalled the corporation’s move away from passwords on their own as a means of authentication and towards biometrics and a “passwordless future”.  Windows Hello (the Windows 10 authenticator) has been built to align with FIDO2 standards so that it works with Microsoft cloud services, and this has led the FIDO Alliance to grant Microsoft official certification for Windows Hello from the forthcoming May 2019 update.
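The core idea behind this kind of passwordless login is challenge–response: the server never receives a password, only proof that the device holds its key. Real FIDO2/WebAuthn uses per-site public/private key pairs; since Python’s standard library has no asymmetric signing, the sketch below substitutes an HMAC shared secret for the device key purely to show the shape of the exchange, and all class names are illustrative.

```python
import hashlib
import hmac
import secrets

# Conceptual challenge-response sketch. NOTE: real FIDO2 hands the server
# a *public* key at registration; here an HMAC shared secret stands in
# for the key pair so the example stays standard-library only.

class Authenticator:
    """Stands in for the device-side authenticator (e.g. Windows Hello)."""
    def __init__(self):
        self._key = secrets.token_bytes(32)   # stays on the device

    def register(self) -> bytes:
        # With real FIDO2 this would hand over a public key only.
        return self._key

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class RelyingParty:
    """Stands in for the server verifying a login."""
    def __init__(self, credential: bytes):
        self._credential = credential

    def new_challenge(self) -> bytes:
        return secrets.token_bytes(16)        # fresh random value per login

    def verify(self, challenge: bytes, signature: bytes) -> bool:
        expected = hmac.new(self._credential, challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)
```

Because each login uses a fresh random challenge, a captured response cannot be replayed, which is one reason this model resists phishing better than passwords.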

What Does This Mean For Your Business?

Taking images of our faces as part of a facial recognition system used by the government may seem like an efficient means of identification and verification e.g. for immigration purposes, but our facial images constitute personal data.  For this reason, we should be concerned about how and where they are gathered (with or without our knowledge), how they are stored, and how and why they are used.  There are security and privacy matters to consider, and it may well make sense to put regulations, and perhaps legislation, in place now in order to protect citizens, to ensure that biometrics are used responsibly by all (including the state), and to make sure that privacy and security are given proper consideration.

It should also be remembered that some of the police facial recognition trials have led to cases of mistaken identity.  This is a reminder that the technology is still in its early stages, and it may provide another reason for introducing regulations and legislation now.