Data Security

Police Face Recognition Software Flawed

Following an investigation by the campaign group Big Brother Watch, the UK’s Information Commissioner, Elizabeth Denham, has said that the police could face legal action if concerns over the accuracy and privacy of facial recognition systems are not addressed.

What Facial Recognition Systems?

A freedom of information request sent to every police force in the UK by Big Brother Watch shows that the Metropolitan Police used facial recognition at the Notting Hill Carnival in 2016 and 2017 and at a Remembrance Sunday event, that South Wales Police used the technology between May 2017 and March 2018, and that Leicestershire Police tested it in 2015.

What’s The Problem?

The two main concerns with the system, as identified by Big Brother Watch and the ICO, are that the facial recognition systems are not accurate in identifying real criminals or suspects, and that images of innocent people are being stored on ‘watch’ lists for up to a month, which could potentially lead to false accusations or arrests.

How Do Facial Recognition Systems Work?

Facial recognition software typically works by taking a scanned image of a person’s face (from the existing stock of police mug shots from previous arrests) and using algorithms to measure ‘landmarks’ on the face, e.g. the position of the features and the shape of the eyes, nose and cheekbones. This data is used to make a digital template of the person’s face, which is then converted into a unique code.

High-powered cameras are then used to scan crowds. The cameras link to specialist software that can compare the camera image data to data stored in the police database (the digital template) to find a potential ‘match’. Possible matches are then flagged to officers, and these lists of possible matches are stored in the system for up to 30 days.
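The matching step described above can be sketched in a few lines of Python. This is purely illustrative, not any real police system: the tiny ‘template’ vectors, the names and the 0.9 threshold are all invented for the example.

```python
import math

def similarity(a, b):
    """Cosine similarity between two face 'templates' (feature vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(probe, watch_list, threshold=0.9):
    """Flag watch-list entries whose template is close enough to the probe.
    The threshold is illustrative; real systems tune it carefully."""
    return [name for name, template in watch_list
            if similarity(probe, template) >= threshold]

# Toy 4-dimensional 'landmark' vectors; real systems use far larger embeddings.
watch_list = [("suspect_a", [0.9, 0.1, 0.3, 0.7]),
              ("suspect_b", [0.1, 0.9, 0.8, 0.2])]
probe = [0.88, 0.12, 0.31, 0.69]
print(find_matches(probe, watch_list))  # → ['suspect_a']
```

The threshold choice is the crux: set it too low and the system floods officers with false alarms of exactly the kind South Wales Police experienced; set it too high and genuine suspects slip through.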

A real-time automated facial recognition (AFR) system, like the one the police use at events, incorporates facial recognition and ‘slow time’ static face search.

Inaccuracies

The systems used by the police so far have been criticised for simply not being accurate. For example, of the 2,685 “matches” made by the system used by South Wales Police between May 2017 and March 2018, 2,451 were false alarms.

Keeping Photos of Innocent People On Watch Lists

Big Brother Watch has been critical of the police keeping photos of innocent people that have ended up on lists of (false) possible matches, as selected by the software. Big Brother Watch has expressed concern that this could affect an individual’s right to a private life and freedom of expression, and could result in damaging false accusations and / or arrests.
The police have said that they don’t consider the ‘possible’ face selections as false positive matches because additional checks and balances are applied to them to confirm identification following system alerts.

The police have also stated that all alerts against watch lists are deleted after 30 days, and faces in the video stream that do not generate an alert are deleted immediately.

Criticisms

As well as accusations of inaccuracy and possibly infringing the rights of innocent people, the use of facial recognition systems by the police has also attracted criticism for not appearing to have a clear legal basis, oversight or governmental strategy, and for not delivering value for money in terms of the number of arrests made vs the cost of the systems.

What Does This Mean For Your Business?

It is worrying that there are clearly substantial inaccuracies in facial recognition systems, and that the images of innocent people could be sitting on police watch lists for some time, and could potentially result in wrongful arrests. The argument that ‘if you’ve done nothing wrong, you have nothing to fear’ simply doesn’t stand up if police are being given cold, hard computer information to say that a person is a suspect and should be questioned / arrested, no matter what the circumstances. That argument is also an abdication from a shared responsibility, which could lead to the green light being given to the erosion of rights without questions being asked. As people in many other countries would testify, rights relating to freedom and privacy should be valued, and when these rights are gone, it’s very difficult to get them back again.

The storing of facial images on computer systems is also a matter of security, particularly since such images are regarded as ‘personal data’ under the new GDPR, which comes into force this month.

There is, of course, an upside to the police being able to use these systems if it leads to the faster arrest of genuine criminals, and makes the country safer for all.

Despite the findings of a YouGov / GMX study (August 2016) showing that people in the UK still have a number of trust concerns about the use of biometrics for security, biometrics represents a good opportunity for businesses to stay one step ahead of cyber-criminals. Biometric authentication / verification systems are thought to be far more secure than password-based systems, which is why banks and credit companies are now using them.

Facial recognition systems have value-adding, real-life business applications too. For example, last year, a ride-hailing service called Careem (similar to Uber but operating in more than fifty cities in the Middle East and North Africa) announced that it was adding facial recognition software to its driver app to help with customer safety.

Efail – Encryption Flaw

A German newspaper has released details of a security vulnerability, discovered by researchers at Munster University of Applied Sciences, in PGP (Pretty Good Privacy) data encryption.

What Is PGP?

PGP (Pretty Good Privacy) is an encryption program that is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and disk partitions, and to increase the security of e-mail communications. As well as being used to encrypt and decrypt email, PGP is also used to sign messages so that the receiver can verify both the identity of the sender and the integrity of the content. PGP works using a private key that is kept secret, and a public key that the sender and receiver share.
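The private-key / public-key pattern that PGP builds on can be illustrated with a textbook RSA toy. The tiny primes below are purely educational: real PGP uses far larger keys and combines them with symmetric encryption, compression and integrity checks.

```python
# Toy RSA with tiny textbook primes, to illustrate the public/private
# key idea underlying PGP. Do not use for real cryptography.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent (shared openly)
d = pow(e, -1, phi)        # private exponent (kept secret); Python 3.8+

def encrypt(m, public=(e, n)):
    """Anyone holding the public key can encrypt."""
    exp, mod = public
    return pow(m, exp, mod)

def decrypt(c, private=(d, n)):
    """Only the private-key holder can decrypt."""
    exp, mod = private
    return pow(c, exp, mod)

message = 42
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message

# Signing is the mirror image: transform with the PRIVATE key,
# so anyone with the PUBLIC key can verify who sent it.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

The last two lines show why PGP can verify both the sender’s identity and the content’s integrity: only the private-key holder could have produced a signature that the public key successfully reverses.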

A compatible, GPL-licensed alternative implementation of the technology is also available under the name GPG (GNU Privacy Guard, or GnuPG).

What’s The Flaw?

The flaw, which was first thought by some security experts to affect the core protocol of PGP (which would make all uses of the encryption method, including file encryption, vulnerable), is now believed to relate to email programs that don’t properly check for decryption errors before following links in emails that include HTML code, i.e. email programs that have been designed without appropriate safeguards.

‘Efail’ Attacks

The flaw leaves this system of encryption open to what have been called ‘efail’ attacks. This involves attackers trying to gain access to encrypted emails (for example by eavesdropping on network traffic), and compromising email accounts, email servers, backup systems or client computers. The idea is to reveal the plaintext of encrypted emails (in the OpenPGP and S/MIME standards).

This type of attack can be carried out by direct exfiltration, where vulnerabilities in Apple Mail, iOS Mail and Mozilla Thunderbird can be abused to directly exfiltrate the plaintext of encrypted emails, or by a CBC/CFB gadget. This is where vulnerabilities in the specification of OpenPGP and S/MIME are abused to exfiltrate the plaintext.

What Could Happen?

The main fear appears to be that the vulnerabilities could be used to decrypt stored, encrypted emails that have been sent in the past (if an attacker can gain access). It is thought that the vulnerabilities could also create a channel for sneaking personal data, commercial data and business secrets off devices, as well as for decrypting messages.

What Does This Mean For Your Business?

It is frustrating for businesses to learn that the email programs they may be using, and a method of encryption that is supposed to make things more secure, could actually be providing a route for criminals to steal data and secrets.

The advice from those familiar with the details of the flaw is that users of PGP email can disable HTML in their mail programs, thereby keeping them safe from attacks based on this particular vulnerability. Also, users can choose to decrypt emails with PGP decryption tools that are separate from email programs.

More detailed information and advice concerning the flaw can be found here: https://efail.de/#i-have

Twitter Says Change Your Password

Twitter has advised all users to change their passwords after a bug caused the passwords to be stored in easily readable, plain text on an internal computer log.

The Bug – Passwords Visible Before ‘Hashing’

Twitter reported on its own blog that the bug had stored passwords ‘unmasked’ in an internal log: the bug wrote the passwords into that log before Twitter’s hashing process had been completed.

The hashing process disguises Twitter passwords, making them very difficult to read. Twitter’s hashing uses the ‘bcrypt’ function, which replaces the actual password with a seemingly random set of numbers and letters derived from it. It is this set of replacement characters that should be stored in Twitter’s system, as it allows the systems to validate account credentials without revealing customers’ passwords.
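The store-a-hash-never-the-password pattern can be sketched as follows. bcrypt itself requires a third-party library, so this sketch substitutes Python’s standard-library PBKDF2, which serves the same illustrative purpose: a salted, slow, one-way derivation that can validate a login attempt without the real password ever being stored.

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Derive a salted hash; store (salt, digest), never the password itself.
    Uses stdlib PBKDF2 for illustration; Twitter's system uses bcrypt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("wrong guess", salt, digest)
```

Twitter’s bug, in effect, short-circuited the first function: the plain password was written to a log before the derivation step ran, so the protection the hash provides never applied to that copy.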

Millions Affected?

The passwords were revealed only on an internal server (albeit for an estimated several months), there appears to be no evidence of anyone outside the company seeing them, and there is no evidence of theft or of passwords turning up for sale on hacker sites. This indicates that few of Twitter’s 330 million users are likely to have anything real to fear from the breach.

Big Breaches

In this case, Twitter appears to have behaved responsibly and acted quickly by reporting the bug to regulators, fixing the bug, and quickly and publicly advising all customers to change their passwords.

Twitter’s behaviour appears to be in stark contrast to the way other companies have handled big breaches. For example, back in November 2017 Uber was reported to have concealed a massive data breach from a hack involving the data of 57 million customers and drivers, and then paid the hackers $100,000 to delete the data and to keep quiet about it.

Breaches can happen for all kinds of reasons, and while Twitter’s breach was very much caused and fixed by Twitter internally, others have been less lucky. For example, an outsourcing provider of the Red Cross Blood Service in Australia accidentally published the Service’s entire database to a public web server, thereby resulting in Australia’s largest ever data breach.

What Does This Mean For Your Business?

If you have a Twitter account, personal or business, the advice from Twitter is quite simply to change your password, and change it on any other service where you may have used the same password. Twitter is also advising customers to make the new password a strong one that isn’t reused on other websites, and to enable two-factor authentication. You may also want to use a password manager to make sure you’re using strong, unique passwords everywhere.

In this case, Twitter has acted quickly, appropriately and transparently, thereby minimising risks to customers and to its own brand reputation. Twitter will want this message of responsibility to be received loud and clear, particularly at a time when GDPR (and its hefty fines) is just around the corner, and when competing social networks such as Facebook have damaged customer trust by acting less responsibly with their users’ data, as the Cambridge Analytica scandal showed.

8 More Security Flaws Found In Processors

Following on from the revelation in January that 2 major security flaws are present in nearly all modern processors, security researchers have now found 8 more potentially serious flaws.

Eight?

According to reports by German tech news magazine c’t, the 8 new security flaws in chips / processors were discovered by several different security teams. The magazine is reported to have been given the full technical details of the vulnerabilities by researchers and has been able to verify them.

The new ‘family’ of bugs has been dubbed Spectre Next Generation (Spectre-NG), after the original Spectre bug that was made public, along with the ‘Meltdown’ bug, at the beginning of the year.

90 Days To Respond

The researchers who discovered the bugs have followed bug disclosure protocols, giving chip-makers and others 90 days to respond and to prepare patches before details of the bugs are released. The 90-day time limit ran out on Monday 7th May.

Co-ordinated Disclosure

Intel is reported to have been reluctant to simply acknowledge the existence of the bugs, preferring to have what it calls a ‘co-ordinated disclosure’, presumably near the end of the protocol time limit, when there has been time to prepare patches and to mitigate any other issues.

It is not yet clear if AMD processors are also potentially vulnerable to the Spectre-NG problems.

How Serious Are The Flaws?

There have been no reports, as yet, of any of the 8 newly-discovered flaws being used by cyber-criminals to attack firms and extract data. According to c’t, however, Intel has classified half of the flaws as “high risk” and the others as “medium risk”.

It is believed that one of the more serious flaws could provide a way for attackers to access a vulnerable virtual computer and thereby reach the server behind it, or other software programs running on that machine. It has been reported that cloud services like Amazon’s AWS may be at risk from this flaw.

Meltdown and Spectre

The original Meltdown and Spectre flaws were found to have been present in nearly all modern processors / microchips, meaning that most computerised devices are potentially vulnerable to attack, including all iPhones, iPads and Macs.

Meltdown was found to leave passwords and personal data vulnerable to attacks, and could be applied to different cloud service providers as well as individual devices. It is believed that Meltdown could affect every Intel processor released since 1995, except for Intel Itanium and pre-2013 Intel Atom chips.

Spectre, which was found to affect Intel, AMD and ARM (mainly Cortex-A) processors, allows applications to be fooled into leaking confidential information. Spectre affects almost all systems including desktops, laptops, cloud servers, and smartphones.

What Does This Mean For Your Business?

The discovery of a family of 8 more flaws on top of the original ‘Spectre’ and ‘Meltdown’ flaws is more bad news for businesses, particularly when they are trying to make things as secure as possible for the introduction of GDPR. Sadly, it is very likely that your devices are affected by several or all of the flaws, because these are hardware flaws at the architectural level, present more or less across the board in devices that use the affected processors. The best advice now is to install all available patches and make sure that you are receiving updates for all your systems, software and devices.

Although closing hardware flaws using software patches and updates is a big job for manufacturers and software companies, it is the only realistic and quick answer at this stage to a large-scale problem that has been present for a long time but has only recently been discovered.

Regular patching is a good basic security habit to get into anyway. Research from summer 2017 (Fortinet Global Threat Landscape Report) shows that 9 out of 10 impacted businesses are being hacked through un-patched vulnerabilities, many of which are 3 or more years old and already have patches available.

Cambridge Analytica Ordered To Turn Over All Data On US Professor

The UK data watchdog, the Information Commissioner’s Office (ICO), has ordered the consulting firm Cambridge Analytica to hand over all the personal information it has on US citizen Professor David Carroll, or face prosecution.

Demand Made in May 2017

The consulting firm, which is reported to have ceased operations and filed for bankruptcy in the wake of the recent scandal involving its access to and use of Facebook users’ details, is facing the Enforcement Notice and possible legal action (if it doesn’t comply) because it has not fully met a demand made by Professor Carroll early last year.

Who Is Professor David Carroll?

David Carroll is a professor at the New School’s Parsons School of Design. Although Professor Carroll is based in New York and is not a UK citizen, he used a subject access request (part of British data protection law) to ask Cambridge Analytica’s branch in the UK to provide all the data it had gathered on him. With this type of request, organisations must respond within 40 days with a copy of the data, the source of the data, and details of whether the organisation will be giving the data to others.

It has been reported that Professor Carroll, a Democrat, was interested, from an academic perspective, in the practice of political ad targeting in elections. Professor Carroll alleges that he was also concerned that he may have been targeted with messages that criticised Secretary Hillary Clinton with falsified or exaggerated information that may have negatively affected his sentiment about her candidacy.

Sent A Spreadsheet

Some weeks after Professor Carroll filed the subject access request in early 2017, Cambridge Analytica sent him a spreadsheet of information it had about him.

It has been reported that Cambridge Analytica had accurately predicted his views on some issues, and had scored Carroll a 9 out of 10 on what it called a “traditional social and moral values importance rank.”

What’s The Problem?

Even though Carroll was given a spreadsheet with some information, he wanted to know what that ranking meant, what it was based on, and where the data about him came from. Cambridge Analytica CEO Alexander Nix told a UK parliamentary committee that his company would not provide American citizens such as David Carroll with all the data it holds on them, or tell them where the data came from, and Nix said that there was no legislation in the US that allowed individuals to make such a request.

The UK’s Information Commissioner, Elizabeth Denham, sent a letter to Cambridge Analytica asking where the data on Professor Carroll came from, and what had been done with it. Elizabeth Denham is also reported to have said that, whether or not the people behind Cambridge Analytica decide to fold their operation, a continued refusal to engage with the ICO will still potentially breach an Enforcement Notice, and it will then become a criminal matter.

What Does This Mean For Your Business?

Many people have been shocked and angered by the recent scandal involving Facebook and its sharing of Facebook user data with Cambridge Analytica. The action by Professor Carroll could not only shed light on how millions of American voters were targeted online in the run-up to the 2016 election, but it could also lead to a wider understanding of what data is stored about us and how it is used by companies and organisations.

The right to request personal data that an organisation holds about us is a cornerstone right in data protection law, and this right will be brought into even sharper focus by the introduction of GDPR this month. GDPR will also give EU citizens the ‘right to be forgotten’, and has already put pressure on UK companies to put their data house in order, and prepare to comply or face stiff penalties.

This story also shows that American citizens can request information from companies that process their data in the UK.