Data Security

UK National Surveillance Camera Day

In a world first, the UK played host to an awareness-raising National Surveillance Camera Day on 20 June as part of the National Surveillance Camera Strategy.

National Surveillance Camera Day

The National Surveillance Camera Day, which is part of the UK government’s National Surveillance Camera Strategy for England and Wales, consisted of events around the country designed to raise awareness, inform, and trigger debate about the many different aspects of CCTV camera use (and facial recognition use) in the UK. The Surveillance Camera Commissioner (SCC) wanted the public to take the day as an opportunity to have their say about the future of surveillance cameras, with the regulators and service providers listening.

It is hoped that points raised in the debates triggered by the day could help inform policymakers and service providers about how the public feels about surveillance practices and how surveillance camera system use fits with society’s needs and expectations.

One of the key events to mark the day was the “doors open” initiative, which allowed the public to see first-hand how surveillance camera control centres are operated at the premises of signatories to the initiative, e.g. local authorities, police forces, hospitals, and universities.

What / Who Is The SCC?

The Surveillance Camera Commissioner (SCC) for England and Wales is appointed by the Home Secretary as set out in the Protection of Freedoms Act 2012 (PoFA) and it is the Commissioner’s role to ensure surveillance camera systems in public places keep people safe and protect and support them. The current SCC is Tony Porter.

What Is The National Surveillance Camera Strategy?

The National Surveillance Camera Strategy is the government document, presented by the SCC, that outlines the plans for surveillance camera use going forward. The 27-page document is available online here: https://www.gov.uk/government/publications/national-surveillance-camera-strategy-for-england-and-wales

A Related World First

Another related world first that took place on the same day as National Surveillance Camera Day was the launch by the SCC of a “secure by default” list of minimum requirements for manufacturers of video surveillance systems, designed for manufacturers by manufacturers.  The hope is that where manufacturers meet the new “secure by default” minimum requirements, this should ensure that the default settings of a product are as secure as possible, and therefore less likely to be vulnerable to cyber-attacks that could lead to data breaches.

What Does This Mean For Your Business?

Most of us are used to (and often no longer notice) CCTV cameras in use in business premises and public spaces, and we accept that they have a value in protecting us and our businesses in terms of deterring criminals and playing an important role in identifying them, and in providing valuable evidence of crime.

Holding a National Surveillance Camera Day highlights the fact that new and emerging technologies, e.g. facial recognition and AI, are currently causing concern about possible infringements of civil liberties, privacy and security, and an ‘open-day’ style approach could have benefits both ways. For example, it could serve to reassure the public and at least let them feel that their views and concerns will be listened to, while at the same time giving policy-makers an opportunity to gauge public opinion and gather information that could help guide their strategy and communications.

It is good news that manufacturers are setting themselves minimum security standards for their CCTV systems as part of “secure by default”, as this could have knock-on positive effects in protecting our personal data.

Samsung’s Advice To Virus-Check TVs Causes Customer Concern

Samsung’s recent release of a how-to virus check video coupled with the advice to complete the check “every few weeks” has caused confusion and concern among customers.

Video

At the heart of Samsung’s virus-checking information release was a 19-second video guide that Samsung said had been posted simply to educate and inform customers. The video guide, which was watched more than 200,000 times, was presented to customers via a tweet which, it is reported, has since been deleted.

The video showed Samsung TV owners how to access the sub-menu and go to the System Manager to conduct their own “Smart Security Scan”.

Although this feature is already built into Samsung TVs, the tweeted video advised customers to carry out the scan themselves every few weeks to prevent malicious software attacks. It was this advice that caused concern that there were known attack attempts, or that customers’ QLED TVs were vulnerable in some way.

Misunderstanding

Samsung has since reportedly said that the video was simply for information, posted as a proactive way to remind and educate customers that the feature exists and how to operate it as a preventative measure, and that it was not sent as a reaction to a specific current threat.

What Are The Risks?

A smart TV is essentially an IoT device and, as such, faces similar potential risks to other IoT devices, although Samsung TVs don’t appear to be at any more risk than other devices. In fact, back in 2017, after claims that many zero-day vulnerabilities had been found in Samsung’s smart TV operating system, the company reminded users that its TVs already contained features that allowed them to detect malicious code at platform and application levels.

That said, Samsung’s smart TVs are likely to have a built-in microphone and an Internet connection with streaming apps, and customers may enter credit card details to buy on-demand video content. All of this means that potential privacy and security risks exist.

What Does This Mean For Your Business?

It appears that security and privacy are very sensitive subjects for consumers and that an attempt to remind customers about a security feature ended up highlighting one of the risks of owning a smart TV, leading to concern and an unnecessary PR gaffe.

In the light of the tweet and video, some security commentators have criticised Samsung for making security checks the responsibility of the customer rather than the company sending out automatic security updates. Also, the company may be expecting too much of some of its customers by asking them to delve into a potentially complicated sub-menu to find the virus scan feature, and to do so on a regular basis.

ICO’s Own Website Fails GDPR Compliance Test

Irony and embarrassment are the order of the day as the Information Commissioner’s Office, which is responsible for ensuring GDPR compliance in the websites of businesses and organisations, has been forced to admit that its own website is not GDPR compliant.

Cookie Consent Notice

The problem, as pointed out to the ICO by Adam Rose, a lawyer at Mishcon de Reya, is that the ICO’s website currently uses implied consent to place cookies on mobile devices, which is prohibited under the Privacy and Electronic Communications Regulations (PECR) 2003. These Regulations operate alongside GDPR and, as highlighted on the ICO’s own website, consent needs to be clearly given for cookies (e.g. by a tick box) and, where they are set, the website needs to give users, mobile or otherwise, a clear explanation of what the cookies do and why.

Regulation 6

It has been reported that Mr Rose argued that the ICO’s own website’s cookie consent tools were at odds with regulation 6 of PECR.

ICO’s Own Guide

For example, in the ICO’s own online guide, in terms of getting marketing consent, it states that “some form of very clear positive action” is needed, “for example, ticking a box, clicking an icon, or sending an email – and the person must fully understand that they are giving you consent”.

Cookies Admission

Under “Cookies” in the guide, and in admission of not currently being fully compliant itself, the ICO now states that “We use a cookies tool on our website which relies on implied consent of users. In recognition of the fact that the implementation date for the revised e-Privacy Regulation remains unknown, we are taking reasonable steps now to align our use of cookies to the standard of consent required by GDPR. This means that we are in the process of updating the tool (Civic Cookie Tool) which, by default, requires explicit opt-in action by users of our website.”

This means that the ICO has yet to upgrade to the version of the Civic Cookie Tool which includes explicit opt-in, and therefore, the ICO isn’t currently compliant with the laws that it is supposed to help implement and uphold.
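
To make the distinction concrete, below is a minimal sketch (in TypeScript, for a browser page) of the explicit opt-in approach that PECR and GDPR-standard consent point towards: no non-essential cookie is set until the user takes a clear positive action such as ticking a box. The element IDs and cookie name are hypothetical, and this is not the Civic Cookie Tool or the ICO’s actual implementation.

```typescript
// Minimal explicit opt-in sketch (hypothetical IDs and cookie name).
// Non-essential (analytics) cookies are only set after a clear positive action.

function setAnalyticsCookie(): void {
  // Called only after explicit opt-in, never on page load.
  document.cookie = "site_analytics=enabled; Max-Age=31536000; Path=/; SameSite=Lax";
}

function initCookieBanner(): void {
  const banner = document.getElementById("cookie-banner");
  const acceptBox = document.getElementById("accept-analytics") as HTMLInputElement | null;
  const saveButton = document.getElementById("save-preferences");
  if (!banner || !acceptBox || !saveButton) {
    return; // banner markup not present on this page
  }

  // Default state is unticked: a pre-ticked box, or treating "continuing to
  // browse" as agreement, would be implied consent.
  acceptBox.checked = false;

  saveButton.addEventListener("click", () => {
    if (acceptBox.checked) {
      setAnalyticsCookie(); // a clear positive action has been recorded
    }
    banner.hidden = true; // essential-only browsing continues either way
  });
}

document.addEventListener("DOMContentLoaded", initCookieBanner);
```

The key point is simply that the analytics cookie is never written on page load or on the basis of continued browsing; it is only written inside the click handler, after the box has been ticked.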

Why?

Even though the ICO announced back in May last year that it would be upgrading to the new version of the Civic Cookie Tool, this has not yet happened. This appears to indicate a possible failure on the ICO’s part in the planning and implementation aspects of this particular tool on its website.

Also, as some tech and security commentators have pointed out, there is still a lack of clear legal rules on cookie compliance, and this has even led to confusion on some points among data protection experts.

It could also be argued that a lack of regulatory enforcement against cookie compliance breaches may mean that most website operators can still put consent rules at the bottom of the list of business priorities with no fear of consequence. It’s also unclear whether the regulator would be able to carry out some kind of enforcement of the law against itself.

What Does This Mean For Your Business?

Many businesses may be thinking that, aside from the obvious irony of the regulator itself not being fully compliant, if the ICO can’t get it right, what hope do the rest of us have?

This story could also act as a reminder to businesses that consent is a complicated area in data protection, and that it may be worth revisiting what cookie consent tools are in place on their websites and whether they are up to date and compliant. For example, as the ICO has discovered, if you’re responsible for implementing an updated version of a tool relating to your GDPR compliance, the planning and implementation need to be managed in order to avoid unwittingly leaving the organisation open to possible infringements of current regulations.

Facial Recognition Glasses For Covert Surveillance

The “iFalcon Face Control” AR glasses, which incorporate an 8-megapixel camera in the frame and NNTC facial recognition technology (and are due to go on sale next year), are reported to have already been deployed in several security operations.

US / Dubai Manufactured

The facial recognition-enabled smart glasses are made by American company Vuzix and use facial recognition algorithms from Dubai-based company NNTC.  It has been reported that the NNTC facial recognition algorithms rank in the top three for accuracy in the US government’s Face Recognition Vendor Test and can detect up to 15 faces per frame per second, thereby enabling them to identify a specific individual in less than a second.

To date, only 50 pairs of the facial recognition-enabled glasses have been produced, all of which have been sold to security and law enforcement and are, according to NNTC, being used as part of security operations in the United Arab Emirates capital Abu Dhabi.

The iFalcon Glasses Won’t Need An Internet Connection

The iFalcon Face Control glasses that are due to go on sale next year will come with a portable base station. This will mean that they have a portable connection to a stored database of targets, thereby giving the user greater mobility, as they won’t need an Internet connection for the software to function.

Similar Used In China

Facial recognition glasses were used by police forces in China last year to keep blacklisted people, e.g. certain journalists, political dissidents, and human rights activists, away from the annual gathering of China’s National People’s Congress.

Other Deployments

Known use of facial recognition for law enforcement already happens in the US through its incorporation into body cameras and CCTV cameras, and in the UK it has been used in deliberately overt trials and deployments, e.g. a two-day trial in Romford, London by the Metropolitan Police in December 2018 using vehicle-mounted cameras, at the Champions League final at the Millennium Stadium in Cardiff in 2017, and at the Notting Hill Carnival in 2016 and 2017.

Criticism and Problems

The use of facial recognition technology at events and trials in the UK has, however, come under fire over several issues including poor levels of accuracy, a lack of transparency in how it is used, the possible infringement of privacy and data security rights e.g. what happens to images, and value for money in terms of deployment costs versus arrests.

This led to ICO head Elizabeth Denham launching a formal investigation into how police forces use facial recognition technology (FRT) in the UK.

Data security and privacy are such thorny subjects for agencies, organisations and businesses alike that, even though using facial recognition to help organise photos has been a standard feature across the social media industry, Microsoft is now issuing an update to its Windows 10 Photos app that prompts users to perform the almost impossible task of confirming that all appropriate consents have been obtained from the people in the user’s photos and videos before facial recognition is used to find photos of friends and loved ones. This move shifts the burden of responsibility away from Microsoft and onto the user.

What Does This Mean For Your Business?

The covert and mobile nature of these new glasses not only seems to be somewhat dystopian and ‘big brother’ but could, in theory, provide a way for users to simply get around existing data protection and privacy laws e.g. GDPR.

As a society, we are, to an extent, used to being under surveillance by CCTV systems, which most people recognise as having real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials. The covert use of facial recognition glasses is, however, another step on from this, and from the deliberately overt and public trials of facial recognition in the UK to date. As such, for these glasses to be used in the UK, faith will need to be placed in the authorities that they are used responsibly, that their accuracy is proven, and that rights groups are able to access facts, figures, and information about the technology, where and how it is used, and the results. Presumably, the ICO may also have questions about the use of such glasses.

If there is no public transparency about their use, this could also result in suspicion, campaigning against their use and a possible backlash.

Criminal Secrets Of The Dark Net Revealed

Recent Surrey University research, ‘Web of Profit’, commissioned by virtualisation-based security firm Bromium, has shown that cyber-criminals are moving to their own invisible Internet on the so-called ‘dark net’ to allow them to communicate and trade beyond the view of the authorities.

What Is The Dark Net?

The dark net describes parts of the Internet that are closed to public view or hidden networks, and is associated with the encrypted part of the Internet, known as the ‘Tor’ network, where illicit trading takes place. The dark net is not accessible to search engines and requires special software to be installed, or network configurations to be made, in order to access it, e.g. Tor, which can be accessed via a customised browser (the Tor Browser).

Deeper

Infiltration and closing down of some of the dark net marketplaces by the authorities are now believed to have led to cyber-criminals moving to a more secure, invisible part of the dark net in order to continue communicating and trading.

How?

Much of the communication about possible targets and tactics between cyber-criminals now takes place on secure apps, forums and chatrooms.  For example, cyber-criminals communicate using the encrypted app ‘Telegram’ because it offers security, anonymity, and encrypted channels for the sale of prohibited goods.

Diverse Dark Net Marketplace

Posing as customers and getting first-hand information from hackers about the costs of a range of cyber-attacks, the researchers were able to obtain shocking details such as:

  • Access to corporate networks is being sold openly, with 60% of the sellers offering access to more than 10 business networks at a time. Prices for remote logins for corporate networks ranged from only £1.50 to £24, and targeted attacks on companies were offered at a price of £3,500.
  • Phishing kits are available for as little as $40, as are fake Amazon receipts and invoices for $52.
  • Targeted attacks on individuals can be purchased for $2,000, and even espionage and insider trading are up for sale at prices from $1,000 to $15,000.

Corporations Targeted

One thing that was very clear from the research is that cyber-criminals are very much focusing on corporations as targets with listings for attacks on enterprises having grown by 20% since 2016. The kinds of things being sold include credentials for accessing business email accounts.

Specific Industries

The research also showed that cyber-criminals are moving away from commodity malware and now prefer to tailor tools such as bespoke versions of malware as a way of targeting specific industries or organisations.  For example, the researchers found that 40% of their attempts to request dark net hacking services targeting companies in the Fortune 500 or FTSE 100 received positive responses from sellers, and that the services on offer even come with service plans for conducting the hack, and price tags ranging from $150 to $10,000, depending on the company to be targeted.

The industries that are most frequently targeted using malware tools that are being traded on the dark net include banking (34%), e-commerce (20%), healthcare (15%) and even education (12%).

Researchers also uncovered evidence that vendors are now acting on behalf of clients to hack organisations, obtain IP and trade secrets and disrupt operations.

What Does This Mean For Your Business?

The dark net is not new, but some commentators believe that the heavy-handed nature of some of the police work to catch criminals on the dark net is responsible for pushing criminal communication and trading activity further underground into their own invisible areas.  End-to-end encrypted communications tools such as Telegram mean that cyber-criminals can carry on communicating beyond the reach of the authorities.

The research should show businesses that there is now real cause for concern about the sensitive, informed and finely tuned approach that cyber-criminals are taking in their targeting of organisations, right from the biggest companies down to SMEs. This should be a reminder that cyber-security should be given priority, especially when it comes to defending against phishing campaigns, which are one of the most successful ways that criminals gain access to company networks.

Law enforcement agencies also need to do more now to infiltrate, gather intelligence, and try to deter and stop the use of different forums, channels and other areas of the dark net in order to at least prevent some of the more open trading of hacking services and tools.

Mastercard’s AI-Based Digital Wellness Could Make Online Purchasing Easier and Safer

Mastercard has announced the introduction of its Digital Wellness program which utilises AI-based click-to-pay technology and new standards in order to provide an easier and safer online shopping experience.

The Program

The Mastercard Digital Wellness program provides tips and resources that are designed to help businesses (especially small and independent businesses) protect themselves from cyber-attacks and data breaches. The program includes Secure Remote Commerce, Mastercard’s Cyber Readiness Institute (a collective of business leaders), and the Global Cyber Alliance, which provides SMBs with free cyber-security tools.

New Click-To-Pay Checkout System

Coming out of the Digital Wellness Program is Mastercard’s new click-to-pay checkout system, which is enabled by Mastercard’s deployment of EMVCo’s (Europay, Mastercard, Visa) specification. The standards that make up EMVCo’s specification provide a foundation that enables the processing of e-commerce transactions in a consistent, streamlined fashion over a variety of digital channels and devices, including smartphones, tablets, PCs and other connected devices.

This means that the click-to-pay checkout system can be used for all kinds of online shopping, across multiple devices, and across cards, and can replace old key-entry checkout systems.

Tokenization and NuData

The click-to-pay checkout system incorporates tokenization and NuData, which represent Mastercard’s AI and machine learning tech. NuData can prevent fraud by (for example) monitoring website traffic changes, analysing changes in browsers and web surfing speeds, and verifying all the user data that makes a user unique (such as an individual’s scroll speed on their device).

The inclusion of AI technology means greater security and no need for customers to enter passwords when they pay.
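
To illustrate the tokenization part of this in general terms (a simplified sketch only, not Mastercard’s actual APIs, and with hypothetical names throughout): the card number (PAN) is swapped for a random token by the token service, and the merchant only ever stores and transmits the token.

```typescript
// Simplified tokenization sketch (hypothetical; not Mastercard's actual API).
// Only the token service can map a token back to the real card number (PAN),
// so a breach of merchant systems exposes no usable card numbers.
import { randomUUID } from "crypto";

class TokenVault {
  // token -> PAN mapping, held only by the token service / payment network
  private vault = new Map<string, string>();

  tokenize(pan: string): string {
    const token = randomUUID(); // random value with no mathematical link to the PAN
    this.vault.set(token, pan);
    return token;
  }

  detokenize(token: string): string | undefined {
    return this.vault.get(token);
  }
}

// Merchant side: stores and charges against the token, never the PAN.
const tokenService = new TokenVault();
const storedToken = tokenService.tokenize("5555555555554444"); // test card number
console.log(`Merchant stores only: ${storedToken}`);
```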

The Advantages

The key advantages of the click-to-pay checkout system from the Digital Wellness Program are that:

  • The added security tackles the unease that customers feel when it comes to paying for things online.
  • It’s fast and easy – the instant click-to-pay with no need for passwords tackles the reluctance of online shoppers to create a new user account.
  • Merchants who adopt it get a system from a known and trusted provider that could give them a better chance of preventing fraud.

These factors mean that the system could make customers more likely to feel comfortable shopping for things on smaller websites or with unknown retailers.

What Does This Mean For Your Business?

For Mastercard, this is a way of selling its services to the huge market of smaller and independent businesses.

For merchants, it’s a way to leverage the latest AI tech to protect themselves and their customers from fraud, and to tackle well-known barriers to purchases from smaller retailers online, i.e. worries about security and the unwillingness of shoppers to take the time to set up a new user account when they want to buy something.

For customers, the system should provide a safe and fast purchasing experience which can only reflect well on the merchant.  It remains to be seen, however, how many merchants take up the new system and what the cost versus benefit implications will be.

Premium, Paid For Version Of Mozilla’s Firefox Planned

It has been reported that Mozilla will be introducing a (paid for) premium subscription-based Firefox service this October to run alongside the free, open-source Firefox browser.

Why?

Mozilla’s share of the (free) browser market has been squeezed by some heavy competition from Google’s Chrome browser and although the Firefox browser is present on many computers and is used to sell people services, it isn’t actually making Mozilla any money.  Also, Mozilla relies heavily on revenue that it receives from search companies that pay to be featured in the Firefox browser, with much of that money coming from its competitor Google. Mozilla, therefore, is looking to diversify and find a way to build its own additional independent revenue stream from the bundling of value-adding services that it already has.

What?

Reports indicate that the new paid for bundled service could include:

  • VPN bandwidth that exceeds what’s available (free) via Mozilla’s ProtonVPN partnership, i.e. giving customers who pay for the new service access to premium-level VPN bandwidth.
  • An as-yet-unspecified allotment of secure cloud storage.

Other possible parts of the bundled subscription service could include (although this has not been confirmed):

  • Mozilla’s free file transfer service “Firefox Send”.
  • Mozilla’s password manager “Lockwise”.
  • Firefox Monitor, Mozilla’s service, similar to HaveIBeenPwned.com, which allows you to check whether your personal information has been compromised by any of the numerous data breaches.
  • The “Pocket” application, also known as “Read It Later” which helps with managing a reading list of articles from the Internet by letting you save web pages and videos to Pocket in just one click. Mozilla acquired this service in 2017, and it already has a Premium version available for $45 per year.
  • Tools from ‘Scroll’ (a start-up working with Mozilla) that could result in users of the new premium service getting access to certain news sites.

How Much?

Current reports indicate that the premium Firefox service could cost users around the $10 per month mark.

Still Free Firefox

Mozilla has announced that it won’t charge for existing Firefox features as part of its shift to offering subscription services and that the free Firefox browser will continue to run as normal.

What Does This Mean For Your Business?

For Mozilla, this offers a way to diversify and generate a stream of revenue that isn’t connected to Google and monetises the synergies that it can get from a bundle of some of the products and services that it already owns. It’s also another way to compete in a tough browser market where there is one very strong and dominant market leader that already monetises popular advertising services that display across other browsers and platforms.

For users, access to premium-level VPN bandwidth and secure cloud storage from a known and trusted brand may justify a monthly subscription, particularly with some of the other value-adding services that could be bundled in and may not have been tried by businesses to date.

Employee Subject Access Requests Increasing Costs For Their Companies

Research by law firm Squire Patton Boggs has revealed (one year on from the introduction of GDPR) that companies are facing cost pressures from a large number of subject access requests (SARs) coming from their own employees.

SARs

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual can ask a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, can ask the company or organisation for a copy of that data, e.g. as a paper copy or spreadsheet. With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed and who the recipients of that processed data are, how long that data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded.

Under the old 1998 Data Protection Act, companies and organisations could charge £10 for each SAR, but under GDPR individuals can make requests for free, although companies and organisations can charge “reasonable fees” if requests are unfounded or excessive (in scope), or where additional copies of data are requested beyond the original request.

Big Rise In SARs From Own Employees = Rise In Costs

The Squire Patton Boggs research shows that 71% of organisations have seen an increase in the number of their own employees making official requests for personal information held, and 67% of those organisations have reported an increase in their level of expenditure in trying to fulfil those requests.

The reason for the increased costs of handling the SARs can be illustrated by the 20% of companies surveyed who said they had to adopt new software to cope with the requests, the 27% of companies who said they had hired staff specifically to deal with the higher volume of SARs, and the 83% of organisations that have been forced to implement new guidelines and procedures to help manage the situation.

Why More Requests From Employees?

It is thought that much of the rise in the volume of SARs from employees may be connected to situations where there are workplace disputes and grievances, and where employees involved feel that they need to use the mechanisms and regulations in place to help themselves or hurt the company.

What Does This Mean For Your Business?

This story is another reminder of how the changes made to data protection in the UK with the introduction of GDPR, the shift in responsibility towards companies, and the widespread knowledge about GDPR can impact upon the costs and workload of a company through SARs. It is also a reminder that companies need to have a system and clear policies and procedures in place that enable them to respond quickly and in a compliant way to such requests, whoever they are from.

The research has highlighted an interesting and perhaps surprising reason for the rise in the volume of SARs, and suggests that there may now be a need for more guidance from the ICO about employee SARs.

US Visa Applicants Now Asked For Social Media Details and More

New rules from the US State Department will mean that US visa applicants will have to submit social media names and five years’ worth of email addresses and phone numbers.

Extended To All

Under the new rules, first proposed by the Trump administration back in February 2017, all applicants travelling to the US to work or to study will now be required to give those details to the immigration authorities; previously, the only visa applicants who needed such vetting were those from parts of the world known to be controlled by terrorist groups. The only exemptions will be for some diplomatic and official visa applicants.

Delivering on Election Immigration Message

The new stringent rules follow on from the proposed crackdown on immigration that was an important part of now US President Donald Trump’s message during the 2016 election campaign.

Back in July 2016, the Federal Register of the U.S. government published a proposed change to travel and entry forms which indicated that the studying of social media accounts of those travelling to the U.S. would be added to the vetting process for entry to the country. It was suggested that the proposed change would apply to the I-94 travel form, and to the Electronic System for Travel Authorisation (ESTA) visa. The reason(s) given at the time was that the “social identifiers” would be: “used for vetting purposes, as well as applicant contact information. Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional toolset which analysts and investigators may use to better analyse and investigate the case.”

There had already been reports that some U.S. border officials had actually been asking travellers to voluntarily surrender social media information since December 2016.

2017

In February 2017, the Trump administration indicated that it was about to introduce an immigration policy that would require foreign travellers to the U.S. to divulge their social media profiles, contacts and browsing history and that visitors could be denied entry if they refused to comply. At that time, the administration had already barred citizens of seven Muslim-majority countries from entering the US.

Criticism

Critics of the idea that social media details should be obtained from entrants to the US include civil rights group the American Civil Liberties Union, which pointed out that there is no evidence it would be effective and that it could lead to self-censorship online. Also, back in 2017, Jim Killock, executive director of the Open Rights Group, was quoted in online media as describing the proposal as “excessive and insulting”.

What Does This Mean For Your Business?

Although they may sound a little extreme, these rules have now become a reality and need to be considered by those needing a US visa. Given the opposition to President Trump and some of his views and policies, and the resulting large volume of Trump-related content that is shared and reacted to by many people, these new rules could be a real source of concern for those needing to work or to study in the US. It is genuinely unclear what content and what social media activity could cause problems at immigration for travellers, and what the full consequences could be.

People may also be very uncomfortable being asked to give such personal and private details as social media names and a massive five years’ worth of email addresses and phone numbers, and may worry about how those personal details will be stored and safeguarded (and for how long), and by whom they will be scrutinised and even shared. The measure may, along with other reported policies and announcements from the Trump administration, even discourage some people from travelling to, let alone working or studying in, the US at this time. This could have a knock-on negative effect on the economy of the US, and on those companies wanting to get into the US marketplace with products or services.

GCHQ Eavesdropping Proposal Soundly Rejected

A group of 47 technology companies, rights groups and security policy experts have released an open letter stating their objections to the idea of eavesdropping on encrypted messages on behalf of GCHQ.

“Ghost” User

The objections are being made to the (as yet) hypothetical idea, floated by the UK National Cyber Security Centre’s technical director Ian Levy and GCHQ’s chief codebreaker Crispin Robinson, of allowing a “ghost” user or third party, i.e. a person at GCHQ, to see the text of an encrypted conversation (call, chat, or group chat) without notifying the participants.

According to Levy and Robinson, they would only seek exceptional access to data where there was a legitimate need, where that kind of access was the least intrusive way of proceeding, and where there was also appropriate legal authorisation.

Challenge

The challenge for government security agencies in recent times has been society’s move away from conventional telecommunications channels, which could lawfully and relatively easily be ‘tapped’, to digital and encrypted communications channels, e.g. WhatsApp, which are essentially invisible to government eyes. For example, back in September last year, this led to the ‘Five Eyes’ governments threatening legislative or other measures to be allowed access to end-to-end encrypted apps such as WhatsApp. In the UK back in 2017, then Home Secretary Amber Rudd had also been pushing for ‘back doors’ to be built into encrypted services, and had attracted criticism from tech companies that, as well as compromising privacy, this would open secure encrypted services to the threat of hacks.

Investigatory Powers Act

The Investigatory Powers Act, which became law in November 2016 in the UK, included the option of ‘hacking’ warrants by the government, but the full force of the powers of the law was curtailed somewhat by legal challenges. For example, back in December 2018, human rights group Liberty won the right to a judicial review into part 4 of the Investigatory Powers Act. This is the part that was supposed to give many government agencies powers to collect electronic communications and records of internet use, in bulk, without reason for suspicion.

The Open Letter

The open letter to GCHQ in Cheltenham and Adrian Fulford, the UK’s investigatory powers commissioner, was signed by tech companies including Google, Apple, WhatsApp and Microsoft, 23 civil society organisations, including Big Brother Watch and Human Rights Watch, and 17 security and policy experts. The letter called for the abandonment of the “ghost” proposal on the grounds that it could threaten cyber security and fundamental human rights, including privacy and free expression. The coalition of signatories also urged GCHQ to avoid alternative approaches that would equally threaten digital security and human rights, and said that most Web users “rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are and only those people”. As such, the letter pointed out that this trust relationship and the authentication process would be undermined by the knowledge that a government “ghost” could be allowed to sit in and scrutinise what may be perfectly innocent conversations.
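
The authentication point can be made concrete with a very simplified model (a sketch only; this is not the protocol of WhatsApp or any real messenger, and all names and functions here are hypothetical). In typical end-to-end encrypted group messaging, the sender’s client encrypts each message with a fresh key and then wraps that key for every participant shown in the conversation’s member list, so the visible list determines who can read the message; a “ghost” would mean the service silently adding an extra recipient that the list does not show.

```typescript
// Simplified model of group key distribution (illustrative only).
import {
  generateKeyPairSync,
  publicEncrypt,
  randomBytes,
  createCipheriv,
  KeyObject,
} from "crypto";

interface Participant {
  name: string;
  publicKey: KeyObject;
}

interface EncryptedGroupMessage {
  ciphertext: Buffer;
  iv: Buffer;
  authTag: Buffer;
  wrappedKeys: { recipient: string; wrappedKey: Buffer }[]; // one per visible participant
}

function encryptForGroup(plaintext: string, visibleParticipants: Participant[]): EncryptedGroupMessage {
  const messageKey = randomBytes(32); // fresh AES-256 key for this message
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", messageKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const authTag = cipher.getAuthTag();

  // The message key is wrapped only for the participants the sender's client
  // displays. The "ghost" proposal would require an extra, undisclosed entry
  // to be added here without it appearing in the member list users see --
  // which is the authentication guarantee the open letter says would be undermined.
  const wrappedKeys = visibleParticipants.map((p) => ({
    recipient: p.name,
    wrappedKey: publicEncrypt(p.publicKey, messageKey),
  }));

  return { ciphertext, iv, authTag, wrappedKeys };
}

// Two visible participants: only they receive a wrapped copy of the key.
const makeUser = (name: string): Participant => ({
  name,
  publicKey: generateKeyPairSync("rsa", { modulusLength: 2048 }).publicKey,
});
const message = encryptForGroup("hello", [makeUser("alice"), makeUser("bob")]);
console.log(message.wrappedKeys.map((w) => w.recipient)); // [ 'alice', 'bob' ]
```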

What Does This Mean For Your Business?

With digital communications in the hands of private companies, and often encrypted, governments realise that (legal) surveillance has been made increasingly difficult for them. This has resulted in legislation (the Investigatory Powers Act) with built-in elements to force tech companies to co-operate in allowing government access to private conversations and user data. This has, however, been met with frustration in the form of legal challenges, and other attempts by the UK government to stop end-to-end encryption have, so far, also been met with resistance, criticism, and counter-arguments by tech companies and rights groups.

This latest “ghost” proposal represents the government’s next step in an ongoing dialogue around the same issue. The tech companies would clearly like to avoid more legislation and other measures (which look increasingly likely) that would undermine the trust between them and their customers, which is why the signatories have stated that they would welcome a continuing dialogue on the issues.

The government is clearly going to persist in its efforts to gain some kind of surveillance access to tech company communications services, albeit for national security (counter-terrorism) reasons for the most part, but it is also keen to be seen to do so in a way that is not overtly like ‘big brother’, and in a way that allows it to navigate successfully through the existing rights legislation.