GDPR

£80,000 Fine For London Estate Agency Highlights Importance of Due Diligence in Data Protection

The issuing of an £80,000 fine by the Information Commissioner’s Office (ICO) to London-based estate agency Parliament View Ltd (LPVL) highlights the importance of due diligence when keeping customer data safe.

What Happened?

Prior to the introduction of GDPR, between March 2015 and February 2017, LPVL left its customer data exposed online after transferring the data via FTP from its server to a partner organisation which also offered a property letting transaction service. LPVL was using Microsoft’s Internet Information Services (IIS) but did not switch off Anonymous Authentication, thereby giving anyone access to the server and the data without being prompted for a username or password.
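
For context, here is a minimal sketch of the kind of misconfiguration check involved: it simply tests whether a resource that ought to require credentials is served to an unauthenticated visitor, which is effectively what leaving anonymous access enabled allows. The URL is a placeholder, not LPVL’s actual endpoint.

```python
# Minimal sketch, not LPVL's actual setup: test whether a resource that should
# require credentials is served to anonymous (unauthenticated) visitors.
import urllib.request
import urllib.error

def served_without_credentials(url: str, timeout: int = 10) -> bool:
    """Return True if the URL answers 200 OK to a request with no credentials."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False   # e.g. 401/403 - the server is challenging anonymous users
    except urllib.error.URLError:
        return False   # could not reach the server at all

if __name__ == "__main__":
    # Placeholder URL for illustration only
    print(served_without_credentials("https://files.example.com/exports/"))
```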

The data that was publicly exposed included highly sensitive information of clear value to hackers and other criminals, including the addresses of both tenants and landlords, bank statements and salary details, utility bills, dates of birth, driving licences (of tenants and landlords) and even copies of passports. The ICO reported that the data of 18,610 individuals had been put at risk.

Hacker’s Ransom Request

The ICO’s tough penalty took into account the fact that not only was LPVL judged not to have taken the appropriate technical and organisational measures to prevent unlawful processing of the personal data, but that the estate agency only alerted the ICO to the breach after it had been contacted in October by a hacker who claimed to possess LPVL’s customer data and who demanded a ransom.

The ICO judged that LPVL’s contraventions of the Data Protection Act were wide-ranging and likely to cause substantial damage and substantial distress to those whose personal data was taken, hence the huge fine.

Marriott International Also Fined

The Marriott International hotel chain has also just been issued with a massive £99.2m fine by the ICO for infringements of GDPR, also relating to due diligence. Marriott International’s fine relates to an incident that affected the Starwood hotels group (which Marriott was in the process of buying) from 2014 to 2018. In this case, the ICO found that the hotel chain did not do enough to secure its systems or to undertake due diligence when it bought Starwood. The ICO found that the systems of the Starwood hotels group were compromised in 2014, but the exposure of customer information was not discovered until 2018, by which time data contained in approximately 339 million guest records globally had been exposed (7 million relating to UK residents).

What Does This Mean For Your Business?

We’re now seeing the culmination of ICO investigations into incidents involving some large organisations, such as British Airways and Marriott International, as well as some lesser-known, smaller organisations such as LPVL, and the issuing of some large fines. These serve to remind all businesses of their responsibilities under GDPR.

Personal data is an asset that has real value, and organisations therefore have a clear legal duty to ensure its security. Part of ensuring this is carrying out proper due diligence when, for example, making corporate acquisitions (as with Marriott) or transferring data to partners (as with LPVL), and in all other situations. Systems should be monitored to ensure that they haven’t been compromised and that adequate security is maintained. Staff dealing with data should also be adequately trained to ensure that they act lawfully and make good decisions in data matters.

Microsoft Criticised By UK’s Cyber Security Agency Over DMARC

The UK’s National Cyber Security Centre (NCSC) has complained that it has been unable to compile meaningful statistics or draw firm conclusions about email security in its latest report because Microsoft stopped sending DMARC reports two years ago.

What Is DMARC?

Domain-based Message Authentication, Reporting and Conformance (DMARC) is a protocol, developed by the Trusted Domain Project, that helps provide greater assurance about the identity of the sender of a message. It builds upon the email authentication technologies SPF and DKIM, developed over a decade ago, and on the collaborative system pioneered by PayPal, Yahoo! Mail and, later, Gmail.

DMARC allows email and service providers to share information about the validity of the emails they send to each other, including giving instructions to mailbox providers about what to do if a domain’s emails aren’t protected and verified by SPF and/or DKIM, e.g. moving a message directly to a spam folder or rejecting it outright. Information about messages that pass or fail DMARC evaluation is then reported back to the owner of the sending domain, providing intelligence about messages being sent from their domain and enabling them to identify email systems being abused by spammers.

DMARC works on inbound email authentication by helping email receivers determine whether a message “aligns” with what the receiver knows about the sender; if it does not, DMARC includes guidance on how to handle the “non-aligned” messages, e.g. phishing and other fraudulent emails.
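
A domain’s DMARC policy is published in a DNS TXT record at _dmarc.<domain>, so it is straightforward to check what policy (if any) a domain advertises. The sketch below assumes the third-party dnspython package (pip install dnspython); the domain shown is just a placeholder.

```python
# Minimal sketch: fetch a domain's published DMARC policy from DNS.
# Assumes the third-party dnspython package is installed.
import dns.resolver

def dmarc_record(domain: str):
    """Return the DMARC TXT record for a domain, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", "replace")
        if txt.lower().startswith("v=dmarc1"):
            return txt   # e.g. "v=DMARC1; p=quarantine; rua=mailto:reports@..."
    return None

if __name__ == "__main__":
    print(dmarc_record("example.com"))   # placeholder domain
```

The p= tag in the returned record is the policy referred to above: none, quarantine or reject.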

Why Were Microsoft’s DMARC Reports So Important?

Microsoft’s email platforms are among the biggest receivers of email, and data from Microsoft about the number of emails failing DMARC gives a good indication of the volume of suspicious email being sent. The lack of this data in the NCSC’s Mail Check service means that the NCSC’s ability to monitor and report on email security driven by DMARC adoption has been hampered. This blind spot could have a knock-on negative impact on email security for everyone.

Public Sector Uptake – Good News

The NCSC’s latest report contains good news, however, about a significant uplift in public sector adoption of email security protocols. For example, the number of public sector domains using DMARC more than tripled between December 2017 and December 2018, to 1,369, and the number of domains with a DMARC “quarantine” or “reject” policy (to prevent suspicious emails being delivered to inboxes) also tripled.

What Does This Mean For Your Business?

Having an effective, collaborative intelligence-sharing protocol and process such as DMARC, now widely adopted by many organisations, has significantly improved email security. This is particularly valuable at a time when businesses face significant risks from malicious emails, e.g. phishing and malware, and when email is so often the way that hackers gain access to business networks.

Sharing intelligence about the level and nature of email security threats and how they are changing over time, e.g. in the trusted NCSC report, is an important tool to help businesses and security professionals understand more about how to tackle security threats going forward. It is, therefore, disappointing that one of the world’s biggest receivers of email, which itself benefits from DMARC, is not providing reports which could be of benefit to all businesses and organisations.

£183 Million Fine (Biggest Ever) For BA Data Breach

The Information Commissioner’s Office (ICO) has imposed a £183 million fine on British Airways, the biggest fine to date under GDPR, for a data breach where the personal details of 500,000 customers were accessed by hackers.

The Breach

The breach, which involved criminals using what is known as a ‘supply chain hack’, took place between 21st August and 5th September 2018. The attackers were able to insert a digital skimming file, made up of only 22 lines of JavaScript code, into the online payment forms of BA’s website and app. The malicious page in the app (identified by a RiskIQ researcher) was built using the same components as the real website, giving a very close match to the design and functionality of the real thing. The skimming file meant that payment details entered into the malicious page by customers were intercepted live by the hackers, who are believed to have been part of the Magecart group. Encryption was ineffective because the details were stolen before they reached the company’s servers.

The fact that CVV codes, which companies are not meant to store, were taken in the attack was a strong indicator of a live-skimming ‘supply chain’ attack.

Magecart is also believed to have used a similar digital skimmer, hidden in a third-party element (a chatbot) of the payment process, to hack the Ticketmaster websites, where 40,000 UK users were affected.
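
The incident reports do not say exactly which controls would have caught this, but one widely used defence against tampered third-party scripts is Subresource Integrity (SRI), where the page publishes a hash of the expected script and the browser refuses to run anything that no longer matches. A rough sketch of generating such a hash (the file name and CDN URL are placeholders):

```python
# Illustrative sketch, not taken from the BA or Ticketmaster incident reports:
# compute a Subresource Integrity (SRI) value for a script file. Browsers will
# refuse to execute the script if its contents no longer match this hash.
import base64
import hashlib

def sri_hash(path: str) -> str:
    """Return an SRI integrity value (sha384) for a local script file."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

if __name__ == "__main__":
    # The value would be referenced in the page, e.g.:
    # <script src="https://cdn.example.com/payment.js"
    #         integrity="sha384-..." crossorigin="anonymous"></script>
    print(sri_hash("payment.js"))   # placeholder file name
```

SRI only helps where the legitimate script is known in advance; a skimmer injected directly into first-party code needs other controls, such as file integrity monitoring and a Content Security Policy.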

500,000 Affected In BA Breach

The personal and payment details of a staggering 500,000 customers were stolen in the BA breach, including names, email addresses, and credit card details such as card numbers, expiry dates and the three-digit CVV codes.

Why Such A Big Fine?

The record-breaking £183 million fine was imposed because, under the General Data Protection Regulation (GDPR), a company can be fined up to a maximum of 4% of its annual worldwide turnover. In the case of BA, the £183 million equates to roughly 1.5% of its worldwide turnover in 2017.
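
As a rough illustration using only the figures quoted above (the ICO’s own calculation method is not published here), the 1.5% figure implies a 2017 worldwide turnover of around £12.2bn, and a theoretical maximum fine at the 4% ceiling of roughly £490m:

```python
# Rough arithmetic based only on the figures quoted in the text above.
fine = 183_000_000      # £183m penalty announced by the ICO
fine_share = 0.015      # reported as roughly 1.5% of 2017 worldwide turnover

implied_turnover = fine / fine_share           # ~ £12.2bn
theoretical_maximum = implied_turnover * 0.04  # 4% GDPR ceiling, ~ £488m

print(f"Implied 2017 turnover: £{implied_turnover / 1e9:.1f}bn")
print(f"Theoretical maximum fine at 4%: £{theoretical_maximum / 1e6:.0f}m")
```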

The largest fine prior to this was imposed before GDPR, under the old Data Protection Act, when Facebook was fined £500,000 (the maximum available under that Act) for its role in the sharing of customer data with Cambridge Analytica.

What Does This Mean For Your Business?

This enormous fine is a reminder of the powers granted to the ICO under GDPR and of just how seriously matters of data protection are now viewed, particularly where large companies that should have protective measures in place are concerned. Even though BA has expressed surprise at the size of the fine, it is worth remembering that 500,000 customers’ details, including credit card numbers, were stolen by what was a well-targeted and tailored but relatively simple method of attack. This exposed vulnerabilities in the payment systems of a big company that really should have been picked up earlier.

Although the £183 million fine equates to 1.5% of BA’s worldwide turnover, it could have been worse, since the maximum fine is 4% of turnover. The fine for BA should send a powerful message to other corporations that they need to make the data protection of their customers a top priority.

Fraud Reported on Deliveroo and Just Eat Apps

Some Deliveroo and Just Eat customers have reported that their accounts have been used to buy food that they didn’t order, but both companies deny a data breach.

What Happened?

Several Deliveroo customers are reported to have received an email from the company stating that the email address linked to their account had been changed, after which they found that food had been ordered through their account using credit which an unknown person had obtained by claiming refunds for previous orders.

In the case of Just Eat, some customers also reported having their card details used to purchase food that they had not ordered.

Another Source

Both companies are reported to have denied that their systems had been breached and have said that the customer details used to fraudulently order the food were obtained from another, third-party source.

Password Re-Use

Deliveroo is reported as saying that cyber-criminals know that people re-use passwords across multiple online services and that they can take login credentials gained from breaches of other sites and try them against Deliveroo accounts. This clearly indicates that Deliveroo believes that password re-use may have been a key factor in this fraud.

Expect To Lose Money To Online Fraud

Online fraud is now so prevalent that many people appear resigned to the fact that they will be directly affected, and the message about the dangers of password re-use is not getting through.

For example, UK National Cyber Security Centre research from April shows that 42% of Brits expect to lose money to online fraud by 2021.

The UK Cyber Survey also found that 70% believe they are likely to be a victim of at least one specific type of cyber-crime over the next two years, and that 37% of those surveyed agree that losing money or personal details over the internet is unavoidable these days. The survey also found that fewer than half of those questioned used a separate, hard-to-guess password for their main email account.

123456 Still Most Popular + Dark Net

It’s not just password re-use that’s the problem but also that many people still appear to be choosing obvious passwords. For example, the NCSC’s recent study into breached passwords revealed that 123456 featured 23 million times, making it still the most widely used password on breached accounts.

Also, recent Surrey University research showed that cyber-criminals now have their own invisible Internet on the so-called ‘dark net’ to allow them to communicate and trade beyond the view of the authorities, and that login details obtained from previous breaches are relatively cheap and easy to buy there.

Not The First Time For Deliveroo

It should be noted that, even though Deliveroo appears to have put the burden of responsibility elsewhere for these recent attacks, some customers also had their accounts hacked and unordered food purchased back in 2016. At the time, the company also blamed the problems on passwords that had been stolen from another service in a major data breach, although some security commentators have suggested that Deliveroo should now look at whether its own security measures are robust enough.

What Does This Mean For Your Business?

If Deliveroo and Just Eat’s claims are to be believed, users of these and many other services may be leaving themselves open to fraud by making bad password choices and/or may be unaware that credentials they re-use have already been stolen and can be exploited through methods such as credential stuffing. Making good password choices is a simple but important way that we can protect ourselves, and Action Fraud suggests that we should all use strong, unique passwords for online accounts and enable two-factor authentication where it is available.

Ideally, passwords should never be re-used across accounts because, if a breach has taken place on one site, the login details can very quickly be tried on other sites by cyber-criminals. For example, in January a collection of credential stuffing lists (login details taken from other site breaches) containing around 2.7 billion records, including 773 million unique email addresses, was discovered being distributed on a hacking forum.

Websites such as https://haveibeenpwned.com/ enable you to check whether your email address and login details have already been stolen in data breaches from other websites and platforms.
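
For passwords specifically, haveibeenpwned.com also documents a free “Pwned Passwords” range API that uses k-anonymity, so only the first five characters of a SHA-1 hash ever leave your machine. A minimal sketch of that check, assuming the API keeps its current shape:

```python
# Sketch of a k-anonymity check against the Pwned Passwords range API
# (https://api.pwnedpasswords.com/range/). Only the first five characters of
# the password's SHA-1 hash are sent; the password itself never leaves the machine.
import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    """Return how many times a password appears in known breach data sets."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

if __name__ == "__main__":
    print(times_password_breached("123456"))   # the NCSC's most-breached example
```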

Suspected Russian Disinformation Campaign Rumbled

An investigation by the Atlantic Council’s Digital Forensic Research Lab (DFRLab) claims to have unearthed a widespread disinformation campaign, apparently originating in Russia, aimed at influencing online conversations about a range of topics.

Facebook Accounts

Sixteen suspected Russian fake accounts that were closed by Facebook in early May 2019 led researchers to an apparent campaign which stretched across 30 social networks and blogging platforms and used nine languages. The campaign appeared to be focused away from the main platforms such as Facebook and Twitter and was played out instead on blogging sites, subreddits, and online forums.

Even though the scale of the apparent disinformation operation appears to be beyond the abilities of a small or ad hoc group (the scale has been described as “remarkable”), and the operation appears to have been working out of Russia, the DFRLab has pointed out that there is not enough hard evidence to conclude that the Russian state / Kremlin is behind it, and that the investigation is still ongoing.

What Kind Of Disinformation?

It has been reported that the broad topic areas of the disinformation appear to reflect Moscow’s foreign policy goals, e.g. Ukraine, Armenia and opposition to NATO, although conversations have also been started and steered around subjects relating to Brexit, Northern Ireland, the recent EU elections, immigration, UK and US relations, the recent turmoil in Venezuela and other issues. Some of the disinformation is reported to have included:

  • Fake reports in 2018 of an alleged plot, apparently discovered by Spanish intelligence, to assassinate Boris Johnson.
  • Shared screenshots of a false exchange between Democratic Unionist Party leader, Arlene Foster, and chief EU Brexit negotiator, Michel Barnier, which appeared to show a secret negotiation behind Theresa May’s back. False information was also spread about the Real IRA.
  • A fraudulent letter, in French, German, and broken English, featuring a screenshot of a letter allegedly written by Italian-Swedish MEP Anna Maria Corazza, published on various platforms in an attempt to influence the European Parliament elections in May 2019.

Failed and Discovered

The main reasons why the disinformation essentially failed and was discovered were that:

  • Communications were generally not sent via the main, most popular social media platforms.
  • The campaign relied on many forged documents and falsehoods which were relatively easy to spot.
  • So much trouble was taken to hide the source of the campaign (e.g. each post was made from a single-use account created the same day and never used again) that the messages themselves hardly saw the light of day and appeared to lack credibility.

What Does This Mean For Your Business?

The fact that someone / some power is going to the trouble to spread disinformation on such a scale with regard to influencing the politics and government of another country is worrying in itself, and the knowledge that it is happening may make people more sceptical about the messages they read online, which can help to muddy the waters on international relations even more.

If messages from a foreign power are used to influence votes in a particular way, this could have a serious knock-on effect on the economy and government policy decisions which is likely to affect the business environment and therefore the trading conditions domestically and globally for UK businesses.  Some have described the current time as being a ‘post-truth’ age where shared objective standards for truth are being replaced by repeated assertions of emotion that are disconnected from real details.  This kind of disinformation campaign can only feed into that and make things more complicated for businesses that need to be able to have reality, truth, clear rules, and more predictable environments to help them reduce risk in business decisions.

ICO’s Own Website Fails GDPR Compliance Test

Irony and embarrassment are the order of the day as the Information Commissioner’s Office, which is responsible for ensuring GDPR compliance in the websites of businesses and organisations, has been forced to admit that its own website is not GDPR compliant.

Cookie Consent Notice

The problem, as pointed out to the ICO by Adam Rose, a lawyer at Mishcon de Reya, is that the ICO’s website currently uses implied consent to place cookies on mobile devices, which is prohibited under the Privacy and Electronic Communications Regulations (PECR) 2003. These Regulations operate alongside GDPR, and as highlighted on the ICO’s own website, consent needs to be clearly given for cookies (e.g. by a tick box), and where they are set, the website needs to give users, mobile or otherwise, a clear explanation of what the cookies do and why.

Article 6

It has been reported that Mr Rose argued that the ICO’s own website’s cookie consent tools were at odds with Article 6 of PECR.

ICO’s Own Guide

For example, in the ICO’s own online guide, in terms of getting marketing consent, it states that “some form of very clear positive action” is needed, “for example, ticking a box, clicking an icon, or sending an email – and the person must fully understand that they are giving you consent”.

Cookies Admission

Under “Cookies” in the guide, and in an admission that it is not currently fully compliant itself, the ICO now states: “We use a cookies tool on our website which relies on implied consent of users. In recognition of the fact that the implementation date for the revised e-Privacy Regulation remains unknown, we are taking reasonable steps now to align our use of cookies to the standard of consent required by GDPR. This means that we are in the process of updating the tool (Civic Cookie Tool) which, by default, requires explicit opt-in action by users of our website.”

This means that the ICO has yet to upgrade to the version of the Civic Cookie Tool which includes explicit opt-in, and therefore, the ICO isn’t currently compliant with the laws that it is supposed to help implement and uphold.
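
For illustration, the explicit opt-in pattern the ICO describes boils down to never setting non-essential cookies by default, and only doing so after a clear positive action. A minimal sketch (assuming the third-party Flask package; cookie and route names are made up for the example):

```python
# Minimal sketch of explicit opt-in for non-essential cookies.
# Assumes the third-party Flask package; names are illustrative only.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    consented = request.cookies.get("cookie_consent") == "yes"
    resp = make_response("Analytics enabled" if consented
                         else "Only essential cookies in use")
    if consented:
        # Non-essential cookie is set only because the user previously opted in
        resp.set_cookie("analytics_id", "abc123", max_age=3600)
    return resp

@app.route("/accept-cookies", methods=["POST"])
def accept_cookies():
    # The explicit positive action (e.g. clicking an "Accept" button) posts here
    resp = make_response("Preference saved")
    resp.set_cookie("cookie_consent", "yes", max_age=60 * 60 * 24 * 180)
    return resp

if __name__ == "__main__":
    app.run()
```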

Why?

Even though the ICO announced back in May last year that it would be upgrading to the new version of the Civic Cookie Tool, this has not yet happened. This appears to indicate a possible failure on the ICO’s part in the planning and implementation aspects of this particular tool on its website.

Also, as some tech and security commentators have pointed out, there is still a lack of clear legal rules on cookie compliance, and this has even led to confusion on some points among data protection experts.

It could also be argued that a lack of regulatory enforcement against cookie compliance breaches may mean that most website operators can still put consent rules to the bottom of the list of business priorities with no fear of consequence.  It’s also unclear if the regulator would or would not be able to carry out some kind of enforcement of the law against itself.

What Does This Mean For Your Business?

Many businesses may be thinking that, aside from the obvious irony of the regulator not being totally compliant, what hope do the rest of us have of getting it right if the ICO can’t?

This story could also act as a reminder to businesses that consent is a complicated area of data protection, and that it may be worth revisiting which cookie consent tools are in place on their websites and whether they are up to date and compliant. For example, as the ICO has discovered, if you’re responsible for implementing an updated version of tools relating to your GDPR compliance, the planning and implementation need to be managed in order to avoid unwittingly leaving the organisation open to possible infringements of current regulations.

Facial Recognition Glasses For Covert Surveillance

The “iFalcon Face Control” AR glasses, which incorporate an 8-megapixel camera in the frame and NNTC facial recognition technology and are due to go on sale next year, are reported to have already been deployed in several security operations.

US / Dubai Manufactured

The facial recognition-enabled smart glasses are made by American company Vuzix and use facial recognition algorithms from Dubai-based company NNTC.  It has been reported that the NNTC facial recognition algorithms rank in the top three for accuracy in the US government’s Face Recognition Vendor Test and can detect up to 15 faces per frame per second, thereby enabling them to identify a specific individual in less than a second.

To date, only 50 pairs of the facial recognition-enabled glasses have been produced, all of which have been sold to security and law enforcement and are, according to NNTC, being used as part of security operations in the United Arab Emirates capital Abu Dhabi.

The iFalcon Glasses Won’t Need An Internet Connection

The iFalcon Face Control glasses that are due to go on sale next year will come with a portable base station. This means they will have a portable connection to a locally stored database of targets, giving the user greater mobility as they won’t need an Internet connection for the software to function.

Similar Used In China

Facial recognition glasses were used by police forces in China last year to keep blacklisted people, e.g. certain journalists, political dissidents, and human rights activists, away from the annual gathering of China’s National People’s Congress.

Other Deployments

Known use of facial recognition for law enforcement already happens in the US through its incorporation into body cameras and CCTV cameras, and in the UK it has been used in deliberately overt trials and deployments, e.g. a two-day trial in Romford, London by the Metropolitan Police in December 2018 using vehicle-mounted cameras, at the Champions League final at the Millennium Stadium in Cardiff in 2017, and at the Notting Hill Carnival in 2016 and 2017.

Criticism and Problems

The use of facial recognition technology at events and trials in the UK has, however, come under fire over several issues including poor levels of accuracy, a lack of transparency in how it is used, the possible infringement of privacy and data security rights e.g. what happens to images, and value for money in terms of deployment costs versus arrests.

This led to ICO head Elizabeth Denham launching a formal investigation into how police forces use facial recognition technology (FRT) in the UK.

Data security and privacy are such thorny subjects for agencies, organisations and businesses alike that Microsoft, even though using facial recognition to help organise photos has been a standard feature across the social media industry, is now issuing an update to its Windows 10 Photos app that prompts users to perform the almost impossible task of confirming that all appropriate consents have been obtained from the people in the user’s photos and videos before facial recognition can be used to find photos of friends and loved ones. This move shifts the burden of responsibility away from Microsoft and onto the user.

What Does This Mean For Your Business?

The covert and mobile nature of these new glasses not only seems to be somewhat dystopian and ‘big brother’ but could, in theory, provide a way for users to simply get around existing data protection and privacy laws e.g. GDPR.

As a society, we are, to an extent, used to being under surveillance by CCTV systems, which most people recognise as having real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials. The covert use of facial recognition glasses is, however, another step beyond this and beyond the deliberately overt and public trials of facial recognition in the UK to date. As such, for it to be used in the UK, faith will need to be put in the authorities that it is used responsibly, that its accuracy is proven, and that rights groups are able to access facts, figures, and information about the technology, where and how it is used, and the results. Presumably, the ICO may also have questions about the use of such glasses.

If there is no public transparency about their use, this could also result in suspicion, campaigning against their use and a possible backlash.

Employee Subject Access Requests Increasing Costs For Companies

Research by law firm Squire Patton Boggs has revealed (one year on from the introduction of GDPR) that companies are facing cost pressures from a large number of subject access requests (SARs) coming from their own employees.

SARs

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is a request in which an individual can ask a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, can ask for a copy of that data, e.g. as a paper copy or spreadsheet. With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed and who the recipients of that processed data are, how long the data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded.

Under the old 1998 Data Protection Act, companies and organisations could charge £10 for each SAR, but under GDPR individuals can make requests for free, although companies and organisations can charge “reasonable fees” if requests are unfounded or excessive (in scope), or where additional copies of the data are requested on top of the original request.

Big Rise In SARs From Own Employees = Rise In Costs

The Squire Patton Boggs research shows that 71% of organisations have seen an increase in the number of their own employees making official requests for personal information held, and 67% of those organisations have reported an increase in their level of expenditure in trying to fulfil those requests.

The reason for the increased costs of handling SARs can be illustrated by the 20% of companies surveyed who said they had to adopt new software to cope with the requests, the 27% of companies who said they had hired staff specifically to deal with the higher volume of SARs, and the 83% of organisations that have been forced to implement new guidelines and procedures to help manage the situation.

Why More Requests From Employees?

It is thought that much of the rise in the volume of SARs from employees may be connected to situations where there are workplace disputes and grievances, and where employees involved feel that they need to use the mechanisms and regulations in place to help themselves or hurt the company.

What Does This Mean For Your Business?

This story is another reminder of how the changes made to data protection in the UK with the introduction of GDPR, the shift in responsibility towards companies, and widespread knowledge about GDPR can impact the costs and workload that SARs create for a company. It is also a reminder that companies need to have a system and clear policies and procedures in place that enable them to respond quickly and in a compliant way to such requests, whoever they are from.

The research has highlighted an interesting and perhaps surprising reason for the rise in the volume of SARs, and suggests that there may now be a need for more guidance from the ICO about employee SARs.

US Visa Applicants Now Asked For Social Media Details and More

New rules from the US State Department will mean that US visa applicants will have to submit social media names and five years’ worth of email addresses and phone numbers.

Extended To All

Under the new rules, first proposed by the Trump administration back in February 2017, all applicants travelling to the US to work or to study will now be required to give those details to the immigration authorities; previously, the only visa applicants who needed such vetting were those from parts of the world known to be controlled by terrorist groups. The only exemptions will be for some diplomatic and official visa applicants.

Delivering on Election Immigration Message

The new stringent rules follow on from the proposed crackdown on immigration that was an important part of now US President Donald Trump’s message during the 2016 election campaign.

Back in July 2016, the Federal Register of the U.S. government published a proposed change to travel and entry forms which indicated that the studying of social media accounts of those travelling to the U.S. would be added to the vetting process for entry to the country. It was suggested that the proposed change would apply to the I-94 travel form, and to the Electronic System for Travel Authorisation (ESTA) visa. The reason(s) given at the time was that the “social identifiers” would be: “used for vetting purposes, as well as applicant contact information. Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional toolset which analysts and investigators may use to better analyse and investigate the case.”

There had already been reports that some U.S. border officials had actually been asking travellers to voluntarily surrender social media information since December 2016.

2017

In February 2017, the Trump administration indicated that it was about to introduce an immigration policy that would require foreign travellers to the U.S. to divulge their social media profiles, contacts and browsing history and that visitors could be denied entry if they refused to comply. At that time, the administration had already barred citizens of seven Muslim-majority countries from entering the US.

Criticism

Critics of the idea that social media details should be obtained from entrants to the US include the civil rights group the American Civil Liberties Union, which pointed out that there is no evidence it would be effective and that it could lead to self-censorship online. Also, back in 2017, Jim Killock, executive director of the Open Rights Group, was quoted in online media as describing the proposal as “excessive and insulting”.

What Does This Mean For Your Business?

Although they may sound a little extreme, these rules have now become a reality and need to be considered by those needing a US visa. Given the opposition to President Trump and some of his thoughts and policies, and the resulting large volume of Trump-related content that is shared and reacted to by many people, these new rules could be a real source of concern for those needing to work or study in the US. It is simply not known what content or social media activity could cause problems at immigration for travellers, or what the full consequences could be.

People may also be very uncomfortable being asked to hand over such personal and private details as social media names and a massive five years’ worth of email addresses and phone numbers, and may have concerns about how those personal details will be stored and safeguarded (and for how long), and by whom they will be scrutinised and even shared. The measure may, along with other reported policies and announcements from the Trump administration, even discourage some people from travelling to, let alone working or studying in, the US at this time. This could have a knock-on negative effect on the US economy, and on those companies wanting to get into the US marketplace with products or services.

GCHQ Eavesdropping Proposal Soundly Rejected

A group of 47 technology companies, rights groups and security policy experts have released an open letter stating their objections to a GCHQ proposal that would allow the agency to eavesdrop on encrypted messages.

“Ghost” User

The objections are being made to the (as yet) hypothetical idea, floated by the UK National Cyber Security Centre’s technical director Ian Levy and GCHQ’s chief codebreaker Crispin Robinson, of allowing a “ghost” user / third party, i.e. a person at GCHQ, to see the text of an encrypted conversation (call, chat, or group chat) without notifying the participants.

According to Levy and Robinson, they would only seek exceptional access to data where there was a legitimate need, where that kind of access was the least intrusive way of proceeding, and where there was also appropriate legal authorisation.

Challenge

The challenge for government security agencies in recent times has been society’s move away from conventional telecommunications channels, which could lawfully and relatively easily be ‘tapped’, to digital and encrypted communications channels, e.g. WhatsApp, which are essentially invisible to government eyes. For example, back in September last year this led to the ‘Five Eyes’ governments threatening legislative or other measures to be allowed access to end-to-end encrypted apps such as WhatsApp. In the UK, back in 2017, then Home Secretary Amber Rudd had also been pushing for ‘back doors’ to be built into encrypted services and had attracted criticism from tech companies that, as well as compromising privacy, this would open secure encrypted services to the threat of hacks.

Investigatory Powers Act

The Investigatory Powers Act, which became law in November 2016 in the UK, included the option of ‘hacking’ warrants for the government, but the full force of the law’s powers was curtailed somewhat by legal challenges. For example, back in December 2018, human rights group Liberty won the right to a judicial review into Part 4 of the Investigatory Powers Act. This is the part that was supposed to give many government agencies powers to collect electronic communications and records of internet use, in bulk, without reason for suspicion.

The Open Letter

The open letter to GCHQ in Cheltenham and Adrian Fulford, the UK’s investigatory powers commissioner, was signed by tech companies including Google, Apple, WhatsApp and Microsoft, 23 civil society organisations, including Big Brother Watch and Human Rights Watch, and 17 security and policy experts. The letter called for the abandonment of the “ghost” proposal on the grounds that it could threaten cyber security and fundamental human rights, including privacy and free expression. The coalition of signatories also urged GCHQ to avoid alternative approaches that would similarly threaten digital security and human rights, and said that most Web users “rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are and only those people”. As such, the letter pointed out that this trust relationship and the authentication process would be undermined by the knowledge that a government “ghost” could be allowed to sit in on and scrutinise what may be perfectly innocent conversations.

What Does This Mean For Your Business?

With digital communications in the hands of private companies, and often encrypted, governments realise that (legal) surveillance has become increasingly difficult for them. This has resulted in legislation (the Investigatory Powers Act) with built-in elements to force tech companies to co-operate in allowing government access to private conversations and user data. This has, however, been met with frustration in the form of legal challenges, and other attempts by the UK government to stop end-to-end encryption have, so far, also been met with resistance, criticism, and counter-arguments by tech companies and rights groups. This latest “ghost” proposal represents the government’s next step in an ongoing dialogue around the same issue. The tech companies would clearly like to avoid more legislation and other measures (which look increasingly likely) that would undermine the trust between them and their customers, which is why the signatories have stated that they would welcome a continuing dialogue on the issues. The government is clearly going to persist in its efforts to gain some kind of surveillance access to tech companies’ communications services, mostly for national security (counter-terrorism) reasons, but it is also keen to be seen to do so in a way that is not overtly like ‘big brother’ and that allows it to navigate successfully through existing rights legislation.