Legislation

Businesses Get Extra Time To Meet New Payment Processing Rules

The Financial Conduct Authority (FCA) has given UK businesses an extra 6 months to reach compliance with the new Strong Customer Authentication (SCA) rules for payment processing.

What Are The SCA Rules?

The SCA rules, introduced in 2019, are intended to improve the security of payments and limit fraud by making sure that whoever requests access to a person’s account, or tries to make a payment from it, is the account holder or someone to whom the account holder has given consent.

These new rules stem from the EU Payment Services Directive (PSD2), which came into effect in January 2018. They mean that online payments of more than €50 will need two methods of authentication from the person making the payment, e.g. a password (something they know), a fingerprint (biometric) or a one-time code sent to their phone (something they have). Online customers will, therefore, not be able to check out using just a credit or debit card, but will also need an additional form of identification.

Card Present

For normal ‘card present’ situations (i.e. not online), contactless will still be acceptable for ‘low value’ transactions of less than €50 at point-of-sale, and Chip and PIN will still be suitable for values above €50.

Recurring Payments Exempt

Where a recurring payment of the same value is being made from a card to the same merchant, e.g. subscriptions and memberships, the initial set-up will require authentication, but subsequent transactions will be exempt.
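To make the rules and exemptions above concrete, here is a minimal Python sketch of the decision logic as described in this article. The function and parameter names are invented for illustration, and the real PSD2/SCA rules contain further exemptions and detail.

```python
# A minimal sketch of the SCA rules and exemptions described above.
# Names and thresholds are illustrative simplifications only.

def sca_required(amount_eur: float,
                 card_present: bool,
                 contactless: bool = False,
                 recurring: bool = False,
                 first_in_series: bool = True) -> bool:
    """Return True if the transaction needs Strong Customer Authentication."""
    # Recurring payments of the same value to the same merchant:
    # only the initial set-up needs authentication.
    if recurring and not first_in_series:
        return False
    if card_present:
        # In-store: contactless stays fine below €50; above that,
        # Chip and PIN (which is itself authentication) applies.
        return not (contactless and amount_eur < 50)
    # Online: payments over €50 need two independent factors,
    # not just the card details.
    return amount_eur > 50

print(sca_required(30, card_present=False))                   # False
print(sca_required(120, card_present=False))                  # True
print(sca_required(20, card_present=True, contactless=True))  # False
print(sca_required(80, card_present=False, recurring=True,
                   first_in_series=False))                    # False
```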

Put Back

The first deadline for the implementation of the SCA rules was 14th September 2019, but this was put back by 18 months.

While the deadline for the implementation of SCA is still 31st December 2020 in the rest of the European Economic Area (EEA), the FCA has now announced that, in order to help merchants who have been severely affected by the COVID-19 crisis, enforcement of SCA in the UK has been delayed until 14th September 2021.

What Does This Mean For Your Business?

Most businesses would agree that high levels of online fraud are bad for everyone and erode consumer confidence, so if new, improved payment security measures can reduce fraud, this will be helpful. The COVID-19 crisis has, however, hit businesses very hard, and for many it has been a case of simply trying to keep the business going, let alone worrying about how to comply with new payment rules in time. This latest extension is, therefore, good news and should lessen the burden on merchants as the lockdown is lifted and the country tries to find the new normal in a post-COVID business environment.

Featured Article – Maintaining Security During The COVID-19 Health Crisis

The current global health crisis brings many different IT security challenges to businesses and organisations, and this article highlights some of the ways you can prepare to keep IT security covered as best you can at this difficult time.

Larger and Smaller Businesses – Some Different Challenges

Larger organisations may be at an advantage as they may already have policies, procedures, equipment and security arrangements in place for remote working, although they may find themselves more stretched as many more staff work from home than usual.

Smaller businesses and organisations, however, may be less well used to and equipped for suddenly having to send staff home to work. This means that they may have a lot more work to do now in order to prepare, and their IT personnel will find themselves needing to prioritise and be prepared to provide more on-demand support over the coming weeks.

Guide

Even though larger and smaller companies may face these challenges on different scales, here is a brief guide with suggestions that could help many businesses and organisations stay secure while employees, contractors and other stakeholders are working remotely:

– Alert all staff to the possibility of email-borne threats and other social engineering attacks. For example, over the last few weeks, cybercriminals have been sending COVID-19 related phishing emails, e.g. bogus workplace policy emails, emails purporting to be from a doctor offering details of a vaccine/cure, emails promising a tax refund and more. The message to employees should be not to open unfamiliar emails, and certainly not to click on any attachments or links in suspect emails.

– Make sure that any software and software-based protection used by employees working from home is secure and up to date. For example, this could include making sure their devices have up-to-date operating systems and browsers, that firewall and anti-virus software is installed and current, and that employees install any new updates as soon as possible.

– Ensure that any devices used by employees are managed and secure (with trusted security apps installed), have appropriate protection, e.g. data loss protection and updated anti-malware, and can be centrally monitored if possible. Ensure that all devices, including employee mobiles (which can carry confidential information), are password-protected and can encrypt data to prevent theft.

– Monitor supply chain arrangements where possible. If a supplier is geographically remote, for example, and the COVID-19 crisis has left them short of qualified IT and/or security staff, or if contract, cover, or otherwise unfamiliar staff have been brought in as replacements (particularly in accounts), this could present a security risk. Taking the time to conduct at least basic checks on who you are dealing with could prevent social engineering, phishing and other security threats, and offering your own known, secure channels where suppliers are short of IT security staff could help to maintain your company’s security posture.

– Although employees are likely to stay at home in the current situation, you will still need to make sure that they are aware of your policy on accessing information over public or unsecured networks, e.g. using a VPN on mobile devices to encrypt data.

– Make sure you have a 24-hour reporting procedure for any stolen or lost equipment/devices.

– Pay attention to user identity management. For example, have a user account for each employee and give each employee only appropriate access. This should help to prevent unauthorised access by other persons. Also, control which programs and data each employee has access to, and which level of user rights they have on certain platforms (see the sketch after this list).

– Make employees aware that they must use only strong, unique passwords to sign in to your network, and that these details should be changed regularly, e.g. every 3 months. Also, make sure that multi-factor authentication is used by employees.

– Stay on top of managing the workforce and general daily operations. For example, make sure that key IT staff are available at all times, that communication channels and procedures are clear and functioning, that handover procedures are covered, that cover is planned for any sickness (which looks likely), and that productivity targets can be met despite remote working.

– Remind employees that they still need to comply with GDPR while working remotely and ensure that help and advice are available for this where needed.

– Use this experience to keep the company’s disaster recovery and business continuity plans up to date.

– Schedule regular, virtual/online meetings with staff and ensure that all employees have the contact details of other relevant employees.

– If you’re not already using a collaborative working platform e.g. Teams or Slack, consider the possibility of introducing this kind of working to help deal with future, similar threats.
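As a simple illustration of the identity-management and sign-in points above, here is a minimal Python sketch of per-role access rights combined with a multi-factor authentication requirement. The roles, resources and policy values are all hypothetical; this sketches the principle, not a real identity-management API.

```python
# A minimal sketch of per-user access rights plus an MFA requirement.
# All roles, resources and policy values here are illustrative.

from dataclasses import dataclass, field

@dataclass
class User:
    username: str
    role: str                     # e.g. "accounts", "support", "admin"
    mfa_enrolled: bool = False
    permissions: set = field(default_factory=set)

# Which programs/data each role may access (illustrative values).
ROLE_PERMISSIONS = {
    "accounts": {"invoices", "payroll"},
    "support": {"tickets"},
    "admin": {"invoices", "payroll", "tickets", "user_admin"},
}

def provision(user: User) -> None:
    """Give an employee only the access appropriate to their role."""
    user.permissions = set(ROLE_PERMISSIONS.get(user.role, set()))

def may_access(user: User, resource: str) -> bool:
    # Deny by default, and allow nothing at all without MFA.
    return user.mfa_enrolled and resource in user.permissions

alice = User("alice", role="accounts", mfa_enrolled=True)
provision(alice)
print(may_access(alice, "payroll"))     # True
print(may_access(alice, "user_admin"))  # False
```

The design point is simply ‘deny by default’: each employee gets only what their role needs, and nothing works without multi-factor authentication.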

Looking Forward

At this point, the country, businesses and many individuals are thinking more about survival strategies, but taking time to ensure that IT security is maintained is important: it makes companies less vulnerable at a time when operations don’t follow normal patterns and when many cybercriminals are looking to capitalise on any weaknesses caused by the COVID-19 health emergency.

Survey Reveals IR35 Tax Reforms Legal Action Risk For Private Sector Companies

A survey by ContractorCalculator has revealed that many private sector companies may be at risk of legal action through misinterpreting the new IR35 tax reforms.

What Is IR35?

The IR35 tax reform legislation, set to be introduced this April, is designed to stop tax avoidance through ‘disguised employment’, which occurs when self-employed contractors set up their own limited company to pay themselves through dividends (which are not subject to National Insurance). IR35 will essentially mean that, from April 2020, medium-to-large private sector organisations become responsible for determining whether non-permanent contractors and freelancers should be taxed in the same way as permanent employees (inside IR35) or as off-payroll workers (outside IR35), based upon the work they do and how it is performed.

Also, the tax liability will transfer from the contractor to the fee-paying party i.e. the recruiter or the company that directly engages the contractor. HMRC hopes that the IR35 reforms will stop contractors from deliberately misclassifying themselves in order to reduce their employment tax liabilities.

The idea for the legislation dates back to 1999 under Chancellor Gordon Brown, and Chancellor Philip Hammond introduced IR35 for public bodies using contractors from April 2017.

National Insurance

One of the potential problem areas for private sector companies revealed by the ContractorCalculator questionnaire, answered by some 12,000 contractors, is that some may be unlawfully deducting employers’ national insurance contributions (NICs) from their contractors’ pay.  This means that they are effectively imposing double taxation on these contractors.

Given that 42% of contractors said they weren’t aware that such deductions were unlawful, the survey appears to show that although these companies have been acting unlawfully, it is likely because they have simply misinterpreted the new tax reforms, given the complicated nature of IR35.

Tribunal Threat

The survey also showed that 58% of participants who are classified as ‘inside’ IR35 (taxed in the same way as permanent employees) said that they would consider taking their client to an employment tribunal because, if they have to pay the same amount of tax as a permanent employee, they feel that they should receive the same benefits, e.g. sick pay and a pension.

Contractor Loses Case

On this subject, there was news this week that an IT contractor who had worked through his limited company, Northern Light Solutions, for Nationwide for several years while being treated as outside IR35 has lost an appeal against a £70,000 tax demand from HMRC, which successfully argued that he should have been categorised as inside IR35.

What Does This Mean For Your Business?

When the IR35 tax reforms were first announced, many business owners thought that they appeared very complex and that not enough had been done by the government to raise awareness of the changes and to educate businesses and contractors about the implications and responsibilities. This survey appears to support this view and shows that this lack of knowledge and awareness of IR35 could be leaving businesses open to the risk of legal action. Contractors need to learn quickly about the dangers of long-term freelance engagements, and companies that use freelancers need to conduct proper due diligence to ensure that the business relationships they have with them comply with IR35.

Dentist’s Legal Challenge To Anonymity of Negative Google Reviewer

ABC News in Australia has reported how a Melbourne dentist has convinced a Federal Court Judge to order tech giant Google to produce identifying information about a person who posted a damaging negative review about the dentist on Google’s platform.

What Happened?

The dentist, Dr Matthew Kabbabe, alleges that a reviewer’s comment posted on Google approximately three months ago advised others to “stay away” from his practice and that it damaged his teeth-whitening business and had a knock-on negative impact on his life.

Google provides its review platform to benefit businesses (if the reviews are good), to encourage and guide businesses to give good service, and to help Google users decide whether to use a service. In this case, however, the comment was the only bad one on a page of five-star reviews. In addition to the possibly defamatory nature of the comment, Dr Kabbabe objected to the anonymity that Google offers posters, meaning the comment could have been posted by a competitor or a disgruntled ex-employee to damage his (or any other) business. This drove him to take the matter to the Federal Court after, it has been reported, his requests to Google to take the comment down were unsuccessful.

Landmark Ruling

Not only did Federal Court Judge Justice Bernard Murphy order Google to divulge identifying information about the comment poster, listed only as “CBsm 23” (name, phone number, IP addresses, location metadata), but the tech giant has also been ordered to provide details of any other Google accounts (names and email addresses) used from the same IP address during the period in question.

Can Reply

Reviews posted on Google can be replied to by businesses as long as the replies comply with Google’s guidelines.

Dealing with apparently unfair customer comments online is becoming more common for many businesses. For example, hotels and restaurants have long struggled with how to respond to potentially damaging criticism left by customers on TripAdvisor. Recently, the owner of the Oriel Daniel Tearoom in Llangefni, Anglesey made the news by responding to negative comments with brutal replies and threats of lifetime bans.

What Does This Mean For Your Business?

For the most part, potential customers are likely to take a balanced view of the comments they read when finding out more about a business. However, the fact that a Federal judge ruled against allowing those who post potentially damaging comments to hide behind online anonymity means that there may well be an argument for platforms to amend their rules to redress the balance more in favour of businesses. It does seem unfair that, as in the case of the dentist, where the overwhelming majority of comments have been good, an individual, who may be a competitor or a person with an axe to grind, can anonymously and publicly publish damaging comments, whether justified or not, for a global audience to see and with no need to prove their allegations: something that would be subject to legal scrutiny in the offline world. It will be interesting to see Google’s response to this ground-breaking ruling.

Featured Article – Combatting Fake News

The spread of misinformation/disinformation/fake news via a variety of media, including digital and printed stories and deepfake videos, is a growing threat in what has been described as our ‘post-truth era’, and many people, organisations and governments are looking for effective ways to weed out fake news and to help people make informed judgements about what they hear and see.

The exposure of fake news and its part in recent election scandals, the common and frequent use of the term by prominent figures and publishers, and the need for fact-checking services have all contributed to an erosion of public trust in the news. For example, YouGov research for the annual Digital News Report (2019) from the Reuters Institute for the Study of Journalism at the University of Oxford showed that public concern about misinformation remains extremely high, reaching a 55 per cent average across 38 countries, with less than half (49 per cent) of people trusting the news media they themselves use.

The spread of fake news online, particularly at election times, is of real concern. With the UK election just passed, with the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election all found to have suffered interference in the form of so-called ‘fake news’, and with the 59th US presidential election scheduled for Tuesday, November 3, 2020, the subject is high on the world agenda.

Challenges

Those trying to combat the spread of fake news face a common set of challenges, such as those identified by Richard Zack, CEO of Our.News, which include:

– There are people (and state-sponsored actors) worldwide who are making it harder for people to know what to believe, e.g. through spreading fake news and misinformation and distorting stories.

– Many people don’t trust the media or don’t trust fact-checkers.

– Simply presenting facts doesn’t change people’s minds.

– People prefer/find it easier to accept stories that reinforce their existing beliefs.

Also, some research (from Stanford’s Graduate School of Education) has shown that young people may be particularly susceptible to believing fake news.

Combatting Fake News

So, who’s doing what online to meet these challenges and combat the fake news problem?  Here are some examples of those organisations and services leading the fightback, and what methods they are using.

Browser-Based Tools

Recent YouGov research showed that 26 per cent of people say they have started relying on more ‘reputable’ sources of news, but as well as simply choosing what they regard as trustworthy sources, people can now use services that give them shorthand information on which to judge the reliability of news and its sources.

Since people consume online news via a browser, browser extensions (and app-based services) have become more popular.  These include:

– Our.News. This service combines objective facts about an article with subjective views incorporating user ratings to create labels (like nutrition labels on food) next to news articles that a reader can use to make a judgement. Our.News labels use publisher descriptions from Freedom Forum, bias ratings from AllSides, and information about an article’s source, author and editor. It also uses fact-checking information from sources including PolitiFact, Snopes and FactCheck.org, plus labels such as “clickbait” or “satire”, along with user ratings and reviews. The Our.News browser extension is available for Firefox and Chrome, and there is an iOS app. For more information go to https://our.news/.

– NewsGuard. This service, for personal use or for NewsGuard’s library and school system partners, offers a reliability score of 0-100 for each site, based on its performance against nine key criteria, and displays rating icons (green to red) next to links on all of the top search engines, social media platforms and news aggregation websites. NewsGuard also gives summaries showing who owns each site and its political leaning (if any), as well as warnings about hoaxes, political propaganda, conspiracy theories, advertising influences and more. For more information, go to https://www.newsguardtech.com/.

Platforms

Another approach to combatting fake news is to create a news platform that collects and publishes news that has been checked and is given a clear visual rating for users of that platform.

One such example is Credder, a news review platform that allows journalists and the public to review articles and create credibility ratings for every article, author and outlet. Credder focuses on credibility, not clicks, and uses a Gold Cheese (yellow) symbol next to articles, authors and outlets rated 60% or higher, and a Mouldy Cheese (green) symbol next to those rated 59% or less. Readers can therefore make a quick choice about what to read based on these symbols and the trust value they create.

Credder also displays a ‘Leaderboard’ which is based on rankings determined by the credibility and quantity of reviewed articles. Currently, Credder ranks nationalgeographic.com, gizmodo.com and cjr.org as top sources with 100% ratings.  For more information see https://credder.com/.
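As a small illustration, the threshold labelling Credder describes can be expressed in a few lines of Python. The function name and returned strings below are invented for illustration; Credder’s actual implementation is not public.

```python
# A tiny sketch of the rating-to-symbol mapping described above.
# Names and strings are illustrative only.

def credder_label(rating_percent: float) -> str:
    """Map an article/author/outlet rating to its cheese symbol."""
    return "Gold Cheese" if rating_percent >= 60 else "Mouldy Cheese"

print(credder_label(75))  # Gold Cheese (more credible)
print(credder_label(59))  # Mouldy Cheese (less credible)
```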

Automation and AI

Many people now consider automation and AI to be ‘intelligent’, fast, and scalable enough to start tackling the vast amount of fake news being produced and circulated. For example, Google and Microsoft have been using AI to automatically assess the truthfulness of articles. Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be employed to combat fake news, and support the idea that AI holds promise for automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
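As a toy illustration of this machine-learning approach (not the method used by Google, Microsoft or any Fake News Challenge entrant), here is a minimal Python sketch using scikit-learn that learns to classify headlines from a handful of made-up labelled examples:

```python
# A toy text-classification baseline: TF-IDF features plus logistic
# regression. The data below is invented purely for illustration;
# real systems train on large labelled corpora and richer features.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Government publishes annual budget figures",      # reliable
    "Scientists report trial results in journal",      # reliable
    "Miracle cure THEY don't want you to know about",  # unreliable
    "Shocking secret plot revealed by anonymous post", # unreliable
]
labels = ["reliable", "reliable", "unreliable", "unreliable"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Secret miracle plot shocks scientists"]))
```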

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Government

Governments clearly have an important role to play in combatting fake news, especially since fake news/misinformation has been shown to have been spread via channels such as social media to influence aspects of democracy and electoral decision-making.

For example, in February 2019, the Digital, Culture, Media and Sport Committee published a report on disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The UK government called for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Also, in the US, Facebook’s Mark Zuckerberg has appeared before the U.S. Congress to discuss how Facebook tackles false reports.

Finland – Tackling Fake News Early

One example of a government taking a different approach is Finland, recently rated Europe’s most resistant nation to fake news. Finland introduced news evaluation and fact-checking into the school curriculum as part of a government strategy after 2014, when the country was targeted with fake news stories from its Russian neighbour. The changes to the curriculum, across core areas in all subjects, are therefore designed to equip Finnish people, from a very young age, to detect and do their part to fight false information.

Social Media

The use of Facebook to spread fake news that is likely to have influenced voters in the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election put social media and its responsibilities very much in the spotlight.  Also, the Cambridge Analytica scandal and the illegal harvesting of 50 million Facebook profiles in early 2014 for apparent electoral profiling purposes damaged trust in the social media giant.

Since then, Facebook has tried to be seen to be actively tackling the spread of fake news via its platform.  Its efforts include:

– Hiring the London-based registered charity ‘Full Fact’, which reviews stories, images and videos in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”. Facebook is also reported to be working with fact-checkers in more than 20 countries, and to have had a working relationship with Full Fact since 2016.

– In October 2018, Facebook announced a new rule for the UK: anyone who wishes to place an advert relating to a live political issue or promoting a UK political candidate, or referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate, will need to prove their identity and prove that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer so that Facebook users can see who they are engaging with when viewing the ad.

– In October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– In January this year, Monika Bickert, Vice President of Facebook’s Global Policy Management announced that Facebook is banning deepfakes and “all types of manipulated media”.

Other Platforms & Political Adverts

In the public’s perception, political advertising has become mixed up with the spread of misinformation in recent times. With this in mind, some of the big tech and social media players have been very public about making new rules for political advertising.

For example, in November 2019, Twitter Inc banned political ads, including ads referencing a political candidate, party, election or legislation.  Also, at the end of 2019, Google took a stand against political advertising by saying that it would limit audience targeting for election adverts to age, gender and the general location at a postal code level.

Going Forward

With a U.S. election this year, and given the sheer number of sources and the scale and resources that some (state-sponsored) actors have, the spread of fake news is likely to remain a serious problem for some time yet. But from the Finnish example of educating citizens to spot fake news, to browser extensions, moderated news platforms, the use of AI, and government and other scrutiny and interventions, we are all now aware of the problem, the fightback is underway, and we are getting more access to ways of making our own, more informed decisions about what we read and watch and how credible and genuine it is.

Apple Fined £21M For Slowing Old iPhones

The French competition and fraud watchdog DGCCRF has fined tech giant Apple €25 million (£21 million) for slowing down some old iPhones and not telling people how to fix the problem.

What Happened?

Back in 2017, some iPhone users shared concerns online that their iPhone’s performance had slowed with age but had sped up after a battery replacement. This led to a customer sharing comparative performance tests of different models of the iPhone 6S on Reddit, which appeared to support these suspicions.

The developer of the Geekbench benchmarking tool also shared the results of tests of several iPhones running different versions of the iOS operating system, some of which showed slower performance than others.

After customers’ concerns mounted and received more press coverage, Apple publicly admitted that changes it had made a year earlier, in the iOS 10.2.1 software update, were likely to have been responsible for the slowdown that customers may have experienced with the iPhone 6, iPhone 6 Plus, iPhone 6s, iPhone 6s Plus and iPhone SE.

Apple issued an apology to customers in January 2018.

Why?

According to Apple, the slowing of the phones was due to their lithium-ion batteries becoming less capable of supplying peak current demands over time, so in order to prevent the phones from shutting down (and to protect their components), Apple released a software update to smooth out battery performance.
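As a rough illustration of this kind of ‘smoothing’, here is a hypothetical Python sketch in which peak power demand is capped at what an ageing battery can supply, trading speed for stability. The numbers and names are invented for illustration; Apple’s actual mechanism is not public.

```python
# Illustrative only: cap the clock speed so that power demand never
# exceeds what the battery can deliver, instead of risking a shutdown.

def smoothed_clock(requested_ghz: float,
                   battery_peak_watts: float,
                   watts_per_ghz: float = 2.0) -> float:
    """Return the clock speed the device will actually run at."""
    demanded_watts = requested_ghz * watts_per_ghz
    if demanded_watts <= battery_peak_watts:
        return requested_ghz                    # healthy battery: full speed
    return battery_peak_watts / watts_per_ghz   # aged battery: throttle

print(smoothed_clock(2.4, battery_peak_watts=6.0))  # 2.4 (no throttling)
print(smoothed_clock(2.4, battery_peak_watts=3.0))  # 1.5 (throttled)
```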

What The Watchdog Says

The DGCCRF has ruled, however, that Apple must pay a €25 million fine and display a statement on its website for a month, because the iOS software update negatively affected the performance of ageing devices, customers were not told that the 10.2.1 and 11.2 iOS updates would slow their devices, and customers were also not told that replacing the battery, rather than the whole phone, would solve the problem.

What Does This Mean For Your Business?

When this story first made the headlines, it was a serious embarrassment for Apple and a blot on the copybook of a brand that had managed to maintain an image of trust and reliability. This story illustrates how managing customer relationships in an age where information is shared quickly and widely by customers via the Internet involves making smart decisions about transparency and being seen to be up-front with loyal customers.

It is very likely that Apple regrets the entire incident. Even though the French regulator has decided to impose a big fine, it is probably of more annoyance to Apple that customers are being reminded of the incident several years later, and that the company will now have to display a notice on its website for a month as a further reminder.

Featured Article – Proposed New UK Law To Cover IoT Security

The UK government’s Department for Digital, Culture, Media and Sport (DCMS) has announced that it will soon be preparing new legislation to enforce standards that will protect users of IoT devices from known hacking and spying risks.

IoT Household Gadgets

This commitment to legislate leads on from last year’s proposal by then Digital Minister Margot James and follows a seven-month consultation with GCHQ’s National Cyber Security Centre, and with stakeholders including manufacturers, retailers, and academics.

The proposed new legislation will improve digital protection for users of a growing number of smart household devices (devices with an Internet connection) that are broadly grouped together as the ‘Internet of Things’ (IoT). These gadgets, of which there are an estimated 14 billion+ worldwide (Gartner), include kitchen appliances and gadgets, connected TVs, smart speakers, home security cameras, baby monitors and more.

In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

What Are The Risks?

The risks are that the Internet connection in IoT devices can, if adequate security measures are not in place, provide a way in for hackers to steal personal data, spy on users in their own homes, or remotely take control of devices in order to misuse them.

Default Passwords and Link To Major Utilities

The main security issue with many of these devices is that they have pre-set, unchangeable default passwords, and once these passwords have been discovered by cyber-criminals, the devices are wide open to being tampered with and misused.

Also, IoT devices are deployed in many systems that link to and are supplied by major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

Examples

Real-life examples of the kind of IoT hacking that the new legislation will seek to prevent include:

– Hackers talking to a young girl in her bedroom via a ‘Ring’ home security camera (Mississippi, December 2019). In the same month, a Florida family were subjected to vocal racial abuse in their own home, and to a loud alarm blast, after a hacker took over their ‘Ring’ security system.

– In May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact, who happened to be her husband’s employee.

– Back in 2017, researchers discovered that a sex toy with an in-built camera could also be hacked.

– In October 2016, the ‘Mirai’ attack used thousands of household IoT devices as a botnet to launch an online distributed denial of service (DDoS) attack (on the DNS service ‘Dyn’) with global consequences.

New Legislation

The proposed new legislation will be intended to put pressure on manufacturers to ensure that:

– All internet-enabled devices have a unique password and not a default one.

– There is a public point of contact for the reporting of any vulnerabilities in IoT products.

– The minimum length of time that a device will receive security updates is clearly stated.
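As an illustration, checking a device description against these three requirements could look something like the following Python sketch. The field names and format are invented; the proposed legislation does not define any machine-readable schema.

```python
# Illustrative check of a device against the three proposed rules above.
# All field names and the list of common defaults are invented.

COMMON_DEFAULTS = {"admin", "password", "12345", "default"}

def meets_proposed_rules(device: dict) -> list:
    """Return a list of failures against the proposed IoT rules."""
    failures = []
    if not device.get("password") or device["password"] in COMMON_DEFAULTS:
        failures.append("no unique, non-default password")
    if not device.get("vulnerability_contact"):
        failures.append("no public contact for vulnerability reports")
    if not device.get("security_update_period"):
        failures.append("no stated minimum security-update period")
    return failures

camera = {"password": "admin", "vulnerability_contact": None,
          "security_update_period": "2 years"}
print(meets_proposed_rules(camera))
# ['no unique, non-default password',
#  'no public contact for vulnerability reports']
```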

Challenges

Even though legislation could make manufacturers try harder to make IoT devices more secure, technical experts and commentators have pointed out that there are many challenges to making internet-enabled/smart devices secure because:

  • Adding security to household internet-enabled ‘commodity’ items costs money. This would have to be passed on to the customer in higher prices, making the products less competitive. It may be, therefore, that security is being sacrificed to keep costs down: sell now and worry about security later.
  • Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update. There are also costs involved in doing so which manufacturers of lower-end devices may not be willing to incur.
  • Devices such as white goods are typically infrequent, long-lasting purchases. We tend to keep them until they stop working, and we are unlikely to replace them because of a security vulnerability that is not fully understood. As such, these devices are likely to remain available for cyber-criminals to exploit for a long time.

Looking Ahead

Introducing legislation that only requires manufacturers to make relatively simple changes to make sure that smart devices come with unique passwords and are adequately labelled with safety and contact information sounds as though it shouldn’t be too costly or difficult.  The pressure of having to display a label, by law, that indicates how safe the item is, could provide that extra motivation for manufacturers to make the changes and could be very helpful for security-conscious consumers.

The motivation for manufacturers to make the changes to the IoT devices will be even greater if faced with the prospect of retailers eventually being barred from selling products that don’t have a label, as was originally planned for the proposed legislation.

The hope from cyber-security experts and commentators is that the proposed new legislation won’t be watered down before it becomes law.

Police Images of Serious Offenders Reportedly Shared With Private Landlord For Facial Recognition Trial

There have been calls for government intervention after it was alleged that South Yorkshire Police shared its images of serious offenders with a private landlord (Meadowhall shopping centre in Sheffield) as part of a live facial recognition trial.

The Facial Trial

The alleged details of the image-sharing for the trial were brought to the attention of the public by the BBC radio programme File on 4, and by privacy group Big Brother Watch.

It has been reported that the Meadowhall shopping centre’s facial recognition trial ran for four weeks between January and March 2018 and that no signs warning visitors that facial recognition was in use were displayed. The owner of Meadowhall shopping centre is reported as saying (last August) that the data from the facial recognition trial was “deleted immediately” after the trial ended. It has also been reported that the police have confirmed that they supported the trial.

Questions

The disclosure has prompted some commentators to question the ethics and legality not only of holding public facial recognition trials without displaying signs, but also of the police allegedly sharing photos of criminals (presumably from their own records) with a private landlord.

The UK Home Office’s Surveillance Camera Code of Practice, however, does appear to support the use of facial recognition or other biometric characteristic recognition systems if their use is “clearly justified and proportionate.”

Other Shopping Centres

Other facial recognition trials in shopping centres and public shopping areas have also met with a negative response. Examples include the halting of a trial at the Trafford Centre shopping mall in Manchester in 2018, and the Kings Cross facial recognition trial (between May 2016 and March 2018), which is still the subject of an ICO investigation.

Met Rolling Out Facial Recognition Anyway

Meanwhile, despite a warning from Elizabeth Denham, the UK’s Information Commissioner, back in November, the Metropolitan Police has announced that it will go ahead with plans to use live facial recognition cameras operationally for the first time on London’s streets, to find suspects wanted for serious or violent crime. It has also been reported that South Wales Police will go ahead in the spring with a trial of body-worn facial recognition cameras.

EU – No Ban

Even though many privacy campaigners were hoping that the EC would push for a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place, Reuters has reported that the European Union has now scrapped any possibility of such a ban.

Facebook Pays

Meanwhile, Facebook has just announced that it will pay £421m to a group of Facebook users in Illinois, who argued that its facial recognition tool violated the state’s privacy laws.

What Does This Mean For Your Business?

Most people would accept that facial recognition could be a helpful tool in fighting crime, saving costs and catching known criminals more quickly, and that this would benefit businesses and individuals. The challenge, however, is that despite ICO investigations and calls for caution, and despite the technology’s known problems, e.g. inaccuracy and bias (it is better at identifying white and male faces), not to mention its impact on privacy, the police appear to be pushing ahead with its use anyway. For privacy campaigners and others, this may give the impression that their real concerns (many of which are shared by the ICO) are being pushed aside in an apparent rush to get the technology rolled out. To many, it appears that the technology is being used before any of its major problems have been resolved, and before there has been a proper debate or the introduction of an up-to-date statutory law and code of practice for the technology.

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials first raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and how it is not always accurate. These issues have been repeatedly identified and raised in the UK. For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017, which was criticised for costing £177,000 yet resulting in only one arrest, of a local man, which was unconnected.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, head of Google’s parent company Alphabet. Mr Pichai (in the Financial Times) called for a sensible approach to regulation and for a set of rules for areas of AI development such as self-driving cars and AI use in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies, which have known flaws and are of concern to government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk management measures have been properly considered and developed. It is true that facial recognition could have real benefits (e.g. fighting crime) for many businesses, and that AI offers businesses a vast range of opportunities to save money and time, as well as to innovate in products, services and processes. However, the flaws in these technologies, and their potential to be used improperly, covertly and in ways that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time is taken now to address stakeholders’ concerns and develop regulations and measures that could prevent bigger problems involving these technologies further down the line.

£100m Fines Across Europe In The First 18 Months of GDPR

It has been reported that since the EU’s General Data Protection Regulation (GDPR) came into force in May 2018, £100m of data protection fines have been imposed on companies and organisations across Europe.

The Picture In The UK

The research, conducted by law firm DLA Piper, shows that the total fines imposed in the UK by the ICO stand at £274,000, but this figure is likely to rise considerably once the penalties to be imposed on BA and Marriott are finalised. Marriott could be facing a £99 million fine for a data breach between 2014 and 2018 that reportedly involved up to 383 million guests, and BA (owned by IAG) could be facing a record-breaking £183 million fine for a breach of its data systems last year that could have affected 500,000 customers.

Also, the DLA Piper research shows that although the UK did not rank highly in terms of fines, it ranked third in the number of breach notifications, with 22,181 reports since May 2018. This equates to a relative ranking of 13th for data breach notifications per 100,000 people.
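As a quick sanity check of that per-capita figure, here is the arithmetic in Python, assuming a UK population of roughly 66.5 million (an assumption; DLA Piper’s exact methodology may differ):

```python
# Rough arithmetic behind the per-100,000 comparison above.
uk_reports = 22_181
uk_population = 66_500_000  # assumption: approximate UK population, 2018/19

per_100k = uk_reports / uk_population * 100_000
print(f"{per_100k:.1f} notifications per 100,000 people")  # roughly 33.4
```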

Increased Rate of Reporting

On the subject of breach notifications, the research shows a big increase in the rate of reporting, from 247 reports per day between May 2018 and January 2019 to 278 per day throughout last year. This rise is thought to be due to a much greater (and increasing) awareness of GDPR and the issue of data breaches.

France and Germany Hit Hardest With Fines

The fines imposed in the UK under GDPR are very small compared to Germany, where fines totalled €51.1 million (top of the table in Europe), and France, where €24.6 million in fines were handed out. In France’s case, much of that total relates to a single penalty handed to Google last January.

Already Strict Laws & Different Interpretations

It is thought that the UK finds itself (currently) further down the table in terms of fines and data breach notifications per 100,000 people because UK businesses already had to meet the requirements of the relatively strict Data Protection Act 1998, the bones of which proved not to differ greatly from GDPR.

Also, the EU’s Data Protection Directive, which GDPR replaced, dates back to 1995, and GDPR appears to have been interpreted differently across Europe because it is principle-based and, therefore, apparently open to some level of interpretation.

What Does This Mean For Your Business?

These figures show that greater awareness of data breach issues, greater reporting of breaches, and increased activity and enforcement by regulators across Europe are likely to contribute to more big fines being imposed over the coming year. This means that businesses and organisations need to stay on top of data security and GDPR compliance. Small businesses and SMEs shouldn’t assume that work done to ensure basic compliance when GDPR was introduced back in 2018 is enough, or that the ICO is only interested in big companies, as regulators appear to be increasing the number of staff able to review reports and cases. It should also be remembered, however, that the ICO is most likely to want to advise, help and guide businesses to comply where possible.