Legislation

£80,000 Fine For London Estate Agency Highlights Importance of Due Diligence in Data Protection

The issuing of an £80,000 fine by the Information Commissioner’s Office (ICO) to London-based estate agency Life at Parliament View Ltd (LPVL) highlights the importance of due diligence in keeping customer data safe.

What Happened?

Prior to the introduction of GDPR, between March 2015 and February 2017, LPVL left its customer data exposed online after transferring the data via FTP from its server to a partner organisation which also offered a property letting transaction service. LPVL was running Microsoft’s Internet Information Services (IIS) but had failed to switch off the Anonymous Authentication function, thereby giving anyone access to the server and the data without being prompted for a username or password.
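
A simple external due-diligence check could have caught this kind of misconfiguration. Below is a minimal Python sketch (the URL is a placeholder for illustration, not LPVL’s actual server) that requests a resource anonymously and flags whether the server serves it without asking for credentials:

```python
import requests

# Placeholder address for illustration; point this at your own server.
URL = "https://files.example-agency.co.uk/exports/"

# On a correctly configured server, an unauthenticated request should
# be rejected (401/403). A 200 means anyone on the internet can read
# the data without a username or password.
response = requests.get(URL, timeout=10)

if response.status_code == 200:
    print("WARNING: resource is readable without credentials")
elif response.status_code in (401, 403):
    print("OK: server is enforcing authentication")
else:
    print(f"Inconclusive: HTTP {response.status_code}")
```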

The publicly exposed data included highly sensitive information that could be of value to hackers and other criminals, including the addresses of both tenants and landlords, bank statements and salary details, utility bills, dates of birth, driving licences (of tenants and landlords) and even copies of passports. The ICO reported that the data of 18,610 individual users had been put at risk.

Hacker’s Ransom Request

The ICO’s tough penalty took into account the fact that not only had LPVL failed to take the appropriate technical and organisational measures to prevent unlawful processing of the personal data, but that the estate agency only alerted the ICO to the breach after it had been contacted, in October 2017, by a hacker who claimed to possess LPVL’s customers’ personal data and who demanded a ransom.

The ICO judged that LPVL’s contraventions of the Data Protection Act were wide-ranging and likely to cause substantial damage and substantial distress to those whose personal data was taken, hence the huge fine.

Marriott International Also Fined

The Marriott International hotel chain has also just been issued with a notice of the ICO’s intention to impose a massive £99.2m fine for infringements of GDPR, also relating to matters of due diligence.  Marriott International’s fine relates to an incident that affected the Starwood hotels group between 2014 and 2018 (Marriott acquired Starwood in 2016).  In this case, the ICO found that the hotel chain didn’t do enough to secure its systems or undertake sufficient due diligence when it bought Starwood.  The ICO found that the systems of the Starwood hotels group were compromised in 2014, but the exposure of customer information was not discovered until 2018, by which time data contained in approximately 339 million guest records globally had been exposed (7 million of them relating to UK residents).

What Does This Mean For Your Business?

We’re now seeing the culmination of ICO investigations into incidents involving some large organisations, with large penalties issued to the likes of British Airways and Marriott International, as well as to lesser-known, smaller organisations such as LPVL. These cases serve to remind all businesses of their responsibilities under GDPR.

Personal data is an asset that has real value, and organisations therefore have a clear legal duty to ensure its security.  Part of that duty is carrying out proper due diligence, for example when making corporate acquisitions (as with Marriott) or when transferring data to partners (as with LPVL).  Systems should be monitored to ensure that they haven’t been compromised and that adequate security is maintained, and staff dealing with data should be adequately trained so that they act lawfully and make good decisions in data matters.

MPs Call To Stop Police Facial Recognition

Following criticism of police use of facial recognition technology in terms of privacy, accuracy, bias, and the management of the image database, the House of Commons Science and Technology Committee has called for a temporary halt to the use of facial recognition systems.

Database Concerns

Among the committee’s key concerns were that the police database of custody images is not being correctly edited to remove pictures of unconvicted individuals, and that innocent people’s pictures may be illegally included in the facial recognition “watch lists” used by police to stop, and even arrest, suspects.

While the committee accepts that this may be partly due to a lack of resources to manually edit the database, the MPs have also expressed concern that the images of unconvicted individuals are not being removed after six years, as is required by law.

Figures indicate that, as of February last year, there were 12.5 million images available to facial recognition searches.

Accuracy

The accuracy of facial recognition has long been a concern. In December last year, ICO head Elizabeth Denham launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.  For example, the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 and yet resulting in only a single arrest, of a local man, which was unconnected to the event.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

Bias

In addition to gender bias issues, the committee also expressed concern about a government advisory group’s warning (in February) that facial recognition systems could produce inaccurate results if they had not been trained on a sufficiently diverse range of data, including faces from Black, Asian, and other minority-ethnic groups.  The concern is that if faces from some ethnic groups are under-represented in live facial recognition training datasets, the system will make more errors on those faces.  Compounding this, the human operators/police officers who are supposed to double-check any matches made by the system by other means before acting could simply defer to the algorithm’s decision without doing so.

Privacy

Privacy groups such as Liberty (which is awaiting a ruling on its challenge to South Wales Police’s use of the technology) and Big Brother Watch have been vocal and active in highlighting the possible threats posed to privacy by police use of facial recognition technology.  Even Tony Porter, the Surveillance Camera Commissioner, has criticised trials by London’s Metropolitan Police over privacy and freedom issues.

Moratorium

The committee of MPs has therefore called for the government to temporarily halt the use of facial recognition technology by police pending the introduction of a proper legal framework, guidance on trial protocols and the establishment of an oversight and evaluation system.

What Does This Mean For Your Business?

Businesses use CCTV for monitoring and security purposes, and most are aware of the privacy and legal compliance (GDPR) aspects of using such systems, and of how and where the images are managed and stored.

As a society, we are also used to being under surveillance by CCTV systems, which can have real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials. The Home Office has noted that there is general public support for live facial recognition in order to (for example) identify potential terrorists and people wanted for serious violent crimes.  These, however, are not the reasons why the MPs’ committee has expressed its concerns, or why ICO head Elizabeth Denham launched a formal investigation into how police forces use FRT.

It is likely that while businesses would support the crime-fighting, counter-terrorism and crime-prevention aspects of police use of FRT, they would also need to feel assured that the correct legal framework and evaluation system are in place to protect the rights of all and to ensure that the system is accurate and cost-effective.

E-Waste Inquiry

The growing number of connected electronic devices in use in the UK has led to an inquiry by the cross-party Environmental Audit Committee (EAC) which will focus on reducing ‘e-waste’ (dumped devices) and creating a circular economy.

Mostly As Landfill

One of the most startling statistics behind the EAC inquiry is that 90% of the 44.7 million tonnes of e-waste produced worldwide in 2017 ended up in landfill, was incinerated, was illegally traded, or was otherwise treated in a sub-standard way.

Plastics & Precious Metals

The need to act now to prevent an even bigger future problem has been heightened by recent reports of plastic waste and microplastic particles in the world’s oceans, in sea creatures, and even frozen in Arctic ice.  Figures (SAS.org) estimate that 5.25 trillion macro- and microplastic pieces, weighing up to 269,000 tonnes, are floating in the open ocean.  The UK is also acknowledged to be a major exporter of waste to developing countries, many of which are not equipped to dispose of it in a socially and environmentally responsible way.

In the case of e-waste, plastic is just one of the components that could pose a serious environmental risk and represent a missed opportunity to recycle valuable elements. The UK’s contribution to the problem is disproportionate: it currently produces 24.9kg of e-waste per person, nearly 10kg more than the European Union (EU) average.

In addition to large quantities of plastic, e-waste can include high-value, difficult-to-obtain elements such as lithium, tantalum and tungsten, alongside polluting and dangerous substances (devices can contain up to 60 different metals and chemicals) that could pose a risk to public health, wildlife and the wider ecosystem if, for example, they leached into the water supply via landfill.

Need To Create A Circular Economy

In addition to investigating the e-waste problem, the EAC will be investigating the UK’s e-waste industry and looking at how a circular economy can be created for electronic goods.  A circular economy is an economic system aimed at minimising waste and making the most of resources.

What Does This Mean For Your Business?

We know that the growing number of devices is creating a massive e-waste problem, and it is good that the UK government is launching its own inquiry, hopefully adding the UK to the 67 countries that have already enacted legislation to deal with the e-waste they generate.

Some commentators have noted that a more digital and connected world could actually help to accelerate progress towards the Sustainable Development Goals, thereby helping emerging economies and ensuring that fewer precious minerals, metals and resources are dumped into landfill.

Suggested ways to help deal with the e-waste problem, many of which will have an impact on businesses (not least those that manufacture and sell devices), include dematerialising the electronics industry, e.g. through device-as-a-service business models, better product tracking and take-back schemes, and collaboration between entrepreneurs, investors, academics, business leaders and lawmakers to find ways to make the circular economy work.

Employee Subject Access Requests Increasing Costs For Their Companies

Research by law firm Squire Patton Boggs, one year on from the introduction of GDPR, has revealed that companies are facing cost pressures from a large number of subject access requests (SARs) coming from their own employees.

SARs

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual asks a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, to provide a copy of that data, e.g. as a paper copy or spreadsheet.  With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed and who the recipients of that data are, how long the data is stored, how the data was obtained from them in the first place, and how the processed and stored data is being safeguarded.

Under the old 1998 Data Protection Act, companies and organisations could charge £10 for each SAR, but under GDPR individuals can make requests for free, although companies and organisations can charge a “reasonable fee” if requests are unfounded or excessive (in scope), or where additional copies of the data are requested beyond the original request.

Big Rise In SARs From Own Employees = Rise In Costs

The Squire Patton Boggs research shows that 71% of organisations have seen an increase in the number of their own employees making official requests for the personal information held on them, and 67% of those organisations have reported an increase in the level of expenditure involved in fulfilling those requests.

The reasons for the increased cost of handling SARs are illustrated by the 20% of companies surveyed who said they had to adopt new software to cope with the requests, the 27% who said they had hired staff specifically to deal with the higher volume of SARs, and the 83% of organisations that have been forced to implement new guidelines and procedures to help manage the situation.

Why More Requests From Employees?

It is thought that much of the rise in the volume of SARs from employees may be connected to situations where there are workplace disputes and grievances, and where employees involved feel that they need to use the mechanisms and regulations in place to help themselves or hurt the company.

What Does This Mean For Your Business?

This story is another reminder of how the changes made to data protection in the UK with the introduction of GDPR, the shift in responsibility towards companies, and widespread public awareness of GDPR rights can all impact a company’s costs and workload through SARs.  It is also a reminder that companies need a system, and clear policies and procedures, in place that enable them to respond quickly and compliantly to such requests, whoever they are from.

The research has highlighted a perhaps surprising reason for the rise in the volume of SARs, and suggests that there may now be a need for more guidance from the ICO on employee SARs.

US Visa Applicants Now Asked For Social Media Details and More

New rules from the US State Department mean that US visa applicants will have to submit their social media names and five years’ worth of email addresses and phone numbers.

Extended To All

Under the new rules, first proposed by the Trump administration back in February 2017, all applicants travelling to the US to work or to study will now be required to give these details to the immigration authorities. Previously, the only visa applicants who needed such vetting were those from parts of the world known to be controlled by terrorist groups. The only exemptions will be for some diplomatic and official visa applicants.

Delivering on Election Immigration Message

The new, stringent rules follow on from the proposed crackdown on immigration that was an important part of Donald Trump’s message during the 2016 US presidential election campaign.

Back in July 2016, the Federal Register of the U.S. government published a proposed change to travel and entry forms which indicated that the studying of the social media accounts of those travelling to the U.S. would be added to the vetting process for entry to the country. It was suggested that the proposed change would apply to the I-94 travel form and to the Electronic System for Travel Authorization (ESTA). The reason given at the time was that the “social identifiers” would be: “used for vetting purposes, as well as applicant contact information. Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional toolset which analysts and investigators may use to better analyse and investigate the case.”

There had already been reports that some U.S. border officials had actually been asking travellers to voluntarily surrender social media information since December 2016.

2017

In February 2017, the Trump administration indicated that it was about to introduce an immigration policy that would require foreign travellers to the U.S. to divulge their social media profiles, contacts and browsing history and that visitors could be denied entry if they refused to comply. At that time, the administration had already barred citizens of seven Muslim-majority countries from entering the US.

Criticism

Critics of the idea that social media details should be obtained from entrants to the US include the civil rights group the American Civil Liberties Union, which pointed out that there is no evidence such collection would be effective and that it could lead to self-censorship online.  Also, back in 2017, Jim Killock, executive director of the Open Rights Group, was quoted in online media as describing the proposal as “excessive and insulting”.

What Does This Mean For Your Business?

Although they may sound a little extreme, these rules have now become a reality and need to be considered by anyone needing a US visa.  Given the opposition to President Trump and some of his views and policies, and the resulting large volume of Trump-related content that is shared and reacted to by many people, these new rules could be a real source of concern for those needing to work or study in the US.  It is simply unknown what content or social media activity could cause problems at immigration for travellers, and what the full consequences could be.

People may also be very uncomfortable being asked to give such personal and private details as social media names and five years’ worth of email addresses and phone numbers, and may worry about how (and for how long) those details will be stored and safeguarded, and by whom they will be scrutinised or even shared.  The measure may, along with other reported policies and announcements from the Trump administration, even discourage some people from travelling to, let alone working or studying in, the US at this time. This could have a knock-on negative effect on the US economy, and on companies wanting to enter the US marketplace with their products or services.

GCHQ Eavesdropping Proposal Soundly Rejected

A group of 47 technology companies, rights groups and security policy experts have released an open letter stating their objections to a GCHQ proposal that would allow it to eavesdrop on encrypted messages.

“Ghost” User

The objections are being made to the (as yet) hypothetical idea, floated by the UK National Cyber Security Centre’s technical director Ian Levy and GCHQ’s chief codebreaker Crispin Robinson, of allowing a “ghost” user, i.e. a third party such as a person at GCHQ, to see the text of an encrypted conversation (call, chat, or group chat) without notifying the participants.

According to Levy and Robinson, the agencies would only seek exceptional access to data where there was a legitimate need, where that kind of access was the least intrusive way of proceeding, and where there was appropriate legal authorisation.
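
To see why security experts object, it helps to look at how group messaging encryption is typically structured. The toy Python sketch below (illustrative only, not any real messenger’s protocol) encrypts a message once and hands a copy of the message key to each listed participant; the “ghost” proposal amounts to quietly adding one extra recipient that the conversation’s user interface never shows:

```python
from cryptography.fernet import Fernet

# Toy model of fan-out encryption in a group chat: the message is
# encrypted once, and the message key is distributed to each listed
# participant. Real messengers wrap the key with each member's public
# key; here we simply record who receives a copy, to show the mechanism.

def send_group_message(plaintext: str, participants: list[str]):
    message_key = Fernet.generate_key()
    ciphertext = Fernet(message_key).encrypt(plaintext.encode())
    # One key copy per recipient -- this list is what the UI shows.
    key_copies = {member: message_key for member in participants}
    return ciphertext, key_copies

visible_members = ["alice", "bob"]

# The "ghost" proposal: the service quietly appends an extra recipient
# to the key distribution without surfacing it to the participants.
actual_recipients = visible_members + ["ghost@agency"]

ciphertext, key_copies = send_group_message("hello", actual_recipients)

# Every holder of a key copy -- including the silent one -- can read it:
for member, key in key_copies.items():
    print(member, Fernet(key).decrypt(ciphertext).decode())
```

This is also why the open letter (discussed below) focuses on authentication: the participant list stops being trustworthy the moment the service can silently extend it.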

Challenge

The challenge for government security agencies in recent times has been society’s move away from conventional telecommunications channels, which could lawfully and relatively easily be ‘tapped’, to digital and encrypted communications channels, e.g. WhatsApp, which are essentially invisible to government eyes.  Back in September last year, this led to the ‘Five Eyes’ governments threatening legislative or other measures to gain access to end-to-end encrypted apps such as WhatsApp.  In the UK back in 2017, then Home Secretary Amber Rudd had also been pushing for ‘back doors’ to be built into encrypted services, attracting criticism from tech companies that, as well as compromising privacy, this would open secure encrypted services to the threat of hacks.

Investigatory Powers Act

The Investigatory Powers Act, which became law in the UK in November 2016, included the option of government ‘hacking’ warrants, but the full force of the law’s powers was curtailed somewhat by legal challenges.  For example, back in December 2018, the human rights group Liberty won the right to a judicial review of part 4 of the Investigatory Powers Act, the part that was supposed to give many government agencies powers to collect electronic communications and records of internet use, in bulk, without reason for suspicion.

The Open Letter

The open letter, addressed to GCHQ in Cheltenham and to Adrian Fulford, the UK’s Investigatory Powers Commissioner, was signed by tech companies including Google, Apple, WhatsApp and Microsoft, by 23 civil society organisations including Big Brother Watch and Human Rights Watch, and by 17 security and policy experts.  The letter called for the abandonment of the “ghost” proposal on the grounds that it could threaten cyber security and fundamental human rights, including privacy and free expression.  The coalition of signatories also urged GCHQ to avoid alternative approaches that would similarly threaten digital security and human rights, and said that most web users “rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are and only those people”. As such, the letter pointed out that the trust relationship and the authentication process would be undermined by the knowledge that a government “ghost” could be allowed to sit in and scrutinise what may be perfectly innocent conversations.

What Does This Mean For Your Business?

With digital communications in the hands of private companies, and often encrypted, governments realise that (legal) surveillance has become increasingly difficult for them.  This has resulted in legislation (the Investigatory Powers Act) with built-in elements intended to force tech companies to co-operate in allowing government access to private conversations and user data. That legislation has, however, been met with legal challenges, and other attempts by the UK government to stop end-to-end encryption have, so far, also been met with resistance, criticism, and counter-arguments from tech companies and rights groups. This latest “ghost” proposal represents the government’s next step in an ongoing dialogue around the same issue.

The tech companies would clearly like to avoid more legislation and other measures (which look increasingly likely) that would undermine the trust between them and their customers, which is why the signatories have said that they would welcome a continuing dialogue on the issues.  The government is clearly going to persist in its efforts to gain some kind of surveillance access to tech companies’ communications services, mostly for national security (counter-terrorism) reasons, but it is also keen to be seen to do so in a way that is not overtly ‘big brother’, and that allows it to navigate successfully through existing rights legislation.

GDPR Says HMRC Must Delete Five Million Voice Records

The Information Commissioner’s Office (ICO) has concluded that HMRC breached GDPR in the way that it collected users’ biometric voice records, and that it must now delete five million biometric voice files.

What Voice Files?

Back in January 2017, HMRC introduced a system whereby customers calling the tax credits and Self-Assessment helpline could enrol for voice identification (Voice ID) as a means of speeding up the security steps. The system uses 100 different characteristics to recognise an individual’s voice and creates a voiceprint that is unique to that individual.

When customers call HMRC for the first time, they are asked to repeat the vocal passphrase “my voice is my password” up to five times to register before speaking to a human adviser.  The recorded passphrase is stored in an HMRC database and can be used as a means of verification/authentication in future calls.
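
HMRC has not published the internals of Voice ID, but the enrol-then-verify pattern it describes is standard in speaker verification. The Python sketch below (illustrative only; real systems use far richer models than averaged MFCC features, and the file names are hypothetical) shows the general shape: extract a fixed-length feature vector from the enrolment audio, then compare future calls against it:

```python
import numpy as np
import librosa

def voiceprint(wav_path: str, n_features: int = 20) -> np.ndarray:
    """Reduce a recording to one fixed-length feature vector."""
    audio, sample_rate = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=n_features)
    return mfcc.mean(axis=1)  # average the features over time

def verify(enrolled: np.ndarray, candidate: np.ndarray,
           threshold: float = 0.95) -> bool:
    """Accept the caller if their voiceprint is close enough
    (cosine similarity) to the enrolled one."""
    similarity = float(np.dot(enrolled, candidate) /
                       (np.linalg.norm(enrolled) * np.linalg.norm(candidate)))
    return similarity >= threshold

# Usage (hypothetical recordings of "my voice is my password"):
# enrolled = voiceprint("enrolment_call.wav")
# accepted = verify(enrolled, voiceprint("todays_call.wav"))
```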

It was reported that in the 18 months following the introduction of the system, HMRC acquired five million people’s voiceprints this way.

What’s The Problem?

Privacy campaigners questioned the lawfulness of the system, and in June 2018 the privacy campaign group Big Brother Watch reported that its own investigation had revealed that HMRC had (allegedly) taken the five million taxpayers’ biometric voiceprints without their consent.

Big Brother Watch alleged that the automated system offered callers no real choice but to do as instructed and create a biometric voice ID for a government database.  The only way to avoid creating the voice ID on calling, as identified by Big Brother Watch, was to say “no” three times to the automated questions, whereupon the system would still offer to set up a voice ID on the next call.

Big Brother Watch highlighted the fact that GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a person unless there is a lawful basis under Article 6, and that because voiceprints are sensitive data that are not strictly necessary for dealing with tax issues, HMRC should have requested the explicit consent of each taxpayer before enrolling them in the scheme (Article 9 of GDPR).

This led to Big Brother Watch registering a formal complaint with the ICO.

Decision

The ICO has now concluded that HMRC’s voice system did not adhere to the data protection rules and effectively pushed people into the system without their explicit consent.

The ICO’s decision is that HMRC must now delete the five million records taken prior to October 2018, the date when the system was changed to make it GDPR-compliant.  HMRC has until 5th June to delete the records, something the tax authority says it is confident it can do well before the deadline.

What Does This Mean For Your Business?

Big Brother Watch believes this to be the biggest ever deletion of biometric IDs from a state database, and privacy campaigners have hailed the ICO’s decision as setting an important precedent that restores data rights for millions of ordinary people.

Many businesses and organisations are now switching, or planning to switch, to biometric identification/verification systems instead of password-based systems, and this story is an important reminder that these systems are subject to GDPR. Images and unique voiceprint IDs, for example, are personal data that require explicit consent, and people should have the right to opt out as well as to opt in.

New UK ‘Duty of Care’ Rules To Apply To Social Media Companies

The new ‘Online Harms’ whitepaper marks a world first as the UK government plans to introduce regulation to hold social media and other tech companies to account for the nature of the content they display, backed by the policing power of an independent regulator and the threat of fines or a ban.

Duty of Care

The proposed new legal framework from the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office aims to give social media and tech companies a duty of care to protect users from threats, harm, and other damaging content relating to cyberbullying, terrorism, disinformation, child sexual exploitation and encouragement of behaviours that could be damaging.

The need for such regulation has been recognised for some time. It was brought into sharper focus recently by the death in the UK of 14-year-old Molly Russell, who was reported to have viewed online material on depression and suicide, and by the live streaming in March this year, on one of Facebook’s platforms, of the mass shooting at a mosque in New Zealand, which led Australia to propose fines for social media and web-hosting companies, and imprisonment for their executives, if violent content is not removed.

The Proposed Measures

The proposed measures by the UK government in its white paper include:

  • Imposing a new statutory “duty of care” that will hold companies accountable for the safety of their users, as well as a commitment to tackle the harm caused by their services.
  • Tougher requirements on tech companies to stop the dissemination of child abuse and terrorist content online.
  • The appointment of an independent regulator with the power to force social media platforms and tech companies to publish transparency reports on the amount of harmful content on their platforms and what they are doing to address the issue.
  • Forcing companies to respond to users’ complaints, and act quickly to address them.
  • The introduction of codes of practice by the regulator, which will include requirements to minimise the spread of misleading and harmful disinformation, e.g. through dedicated fact-checkers at election time.
  • The introduction of a “safety by design” framework that could help companies to incorporate the necessary online safety features in their new apps and platforms at the development stage.

GDPR-Style Fines (Or A Ban)

Digital, Culture, Media and Sport Secretary Jeremy Wright has said that tech companies that don’t do everything reasonably practicable to stop harmful content on their platforms could face fines comparable with those imposed for serious GDPR breaches, e.g. up to 4% of a company’s annual global turnover.
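
For scale, the GDPR ceiling being referenced (Article 83(5)) is €20m or 4% of worldwide annual turnover, whichever is higher. A quick worked example in Python:

```python
def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements under
    Article 83(5): EUR 20m or 4% of worldwide annual turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 2bn global turnover could face up to EUR 80m:
print(f"EUR {max_gdpr_fine_eur(2_000_000_000):,.0f}")  # EUR 80,000,000
```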

It has also been suggested that, under the new rules to be policed by an independent regulator, bosses could be held personally accountable for failing to stop harmful content on their platforms, and that in the most serious cases, companies could be banned from operating in Britain if they do not do everything reasonably practicable to stop harmful content being spread via their platforms.

Balance

Although there is general recognition that regulation to protect people, particularly the young, from harmful and damaging content is a good thing, a proportionate and predictable balance needs to be struck between protecting society and supporting innovation and free speech.

Facebook is reported to have said that it looks forward to working with the government to ensure that new regulations are effective and take a standard approach across platforms.

Criticism

The government’s proposals will now undergo a 12-week consultation, but the main criticism to date has been that parts of the government’s approach are too vague, and that regulation alone cannot solve all the problems.

What Does This Mean For Your Business?

Clearly, the UK government believes that self-regulation among social media and tech companies does not work.  The tech industry has generally given a positive response to the government’s proposals and to an approach that is risk-based and proportionate rather than one-size-fits-all.  The hope is that the vaguer elements of the proposals can be clarified and improved over the 12-week consultation.

To ensure the maximum protection for UK citizens, any regulations should be complemented by ongoing education for children, young people and adults to make sure that they have the skills and awareness to navigate the digital world safely and securely.

Controversial Copyright Directive Backed By MEPs

The European Parliament has given its backing to new copyright rules, including the controversial Article 13 (opposed by many big tech companies), which could now change the way that Europe’s creative and digital industries work.

European Parliament Vote

The new copyright rules are encapsulated in the EU Copyright Directive which, having gone through many revisions, was finally backed by 348 MEPs, with 274 voting against.  It will now be up to EU member states (likely soon to exclude the UK) to approve the Parliament’s decision; if they do, they will have two years to implement it.

Article 11 and Article 13

Most of the opposition and argument around the EU Copyright Directive relates to Articles 11 and 13.

Article 11 states that search engines and news aggregation platforms should pay to use links from news websites.

Article 13 applies to services that have been available in the EU for more than three years or have an annual turnover of more than £8.8m.  It shifts the burden of responsibility so that big tech companies, rather than users, will bear the responsibility for ANY content posted on their platforms without a copyright licence, e.g. television programmes, movies, and music, and may even cover YouTube, Dailymotion, SoundCloud and more.  So far, the main worry has been about music, but this change in the law has widened the scope.  The new directive says that tech companies will need to make “best efforts” to get permission from the copyright holder, make sure that material specified by rights holders is not made available, and act quickly to remove any infringing material.

Objections

The main objections that have been voiced about the Directive, and particularly these two articles, are that:

  • Companies that want to use links from news websites will face an increase in their costs and extra red tape.
  • Big tech companies will find it incredibly difficult, and potentially very costly and time-consuming, to police the copyright status of all content uploaded to their platforms.  This may mean that costly and complicated filters need to be applied to any content before it is uploaded (a simplified illustration of such a filter follows this list).  There are also worries that the algorithms used in such filters could make mistakes and take down content that is being used legitimately.
  • Some argue that artists are already paid fairly under the current system.
  • Freedom groups have expressed concerns that not being able to share certain links, and platforms having to filter content could lead to a more closed society, instead of using digital advances to build a more open world where knowledge can create power for the many and not just the few.
  • Some see Article 13 as being little more than a set of ideals and aims that lacks legal detail and offers little guidance on what steps will be enough to comply.
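
To make the filtering concern concrete, here is a deliberately simplified Python sketch of an upload filter. Real platforms use perceptual audio/video fingerprinting that survives re-encoding; the exact-hash registry and its entries here are hypothetical, used only to show the gatekeeping pattern (and why false positives and false negatives worry critics):

```python
import hashlib

# Hypothetical registry of fingerprints supplied by rights holders.
RIGHTS_HOLDER_FINGERPRINTS = {
    "9b3a...": "Example Records Ltd",   # placeholder entry
}

def fingerprint(file_bytes: bytes) -> str:
    # Real systems use perceptual fingerprints; an exact hash misses
    # any re-encoded copy, while a too-fuzzy match over-blocks.
    return hashlib.sha256(file_bytes).hexdigest()

def allow_upload(file_bytes: bytes) -> bool:
    """Block the upload if it matches registered copyrighted material."""
    owner = RIGHTS_HOLDER_FINGERPRINTS.get(fingerprint(file_bytes))
    if owner is not None:
        print(f"Upload blocked: matches material registered by {owner}")
        return False
    return True
```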

Redresses Balance

EU lawmakers say that the Directive is intended to protect the livelihoods of artists, musicians and others whose work is copyrighted, and who in the past have seen that work shared widely without being properly paid for it.

Exemptions

Exemptions to the directive include:

  • The sharing of memes and GIFs (exempted from Article 11).
  • Non-profit online encyclopaedias, open source software development platforms, cloud storage services, online marketplaces and communication services (exempted from Article 13).

What Does This Mean For Your Business?

For ordinary web users, this change in European law means that they can upload videos and music to platforms like YouTube without being held liable for copyright infringement.  For journalists and creatives, the change also looks, on the surface, to be good news, because it means that they may be properly remunerated by big companies, thereby redressing the balance of power.

For businesses that have an online platform and/or need to share links and content, this change in the law could increase costs and risks (e.g. vulnerability to fines), and could make things a lot more complicated, e.g. with the need to add filters and checks to any content and link sharing.

1 Million+ UK VAT-Registered Companies Still To Register With Making Tax Digital

A Freedom of Information request has revealed that, with a little under a week to go to the registration deadline, more than 1 million UK VAT-registered companies have still not signed up to HMRC’s Making Tax Digital (MTD) programme.

MTD

HMRC’s MTD programme was announced back in 2015 and requires VAT-registered UK companies to keep digital records and file quarterly reports with the taxman. The first phase of the programme, MTD for VAT, is rolling out on 1st April, with the first digital quarterly VAT returns due to be submitted by 7th August.

MTD offers businesses the chance to move to an easier, more convenient, fully cloud-based accounting solution rather than their own (often spreadsheet-based) legacy systems. For HMRC, having everything digitised should save costs, time and resources, improve accuracy, and bring in revenue more quickly. HMRC says that the MTD programme should “make it easier for individuals and businesses to get their tax right and keep on top of their affairs.”
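
In practice, “filing digitally” means MTD-compatible software submitting VAT figures to HMRC’s VAT (MTD) REST API. The Python sketch below shows the general shape of a nine-box quarterly return submission; the VAT registration number, period key and figures are placeholders, and the OAuth token acquisition and fraud-prevention headers that HMRC also requires are omitted for brevity:

```python
import requests

VRN = "123456789"      # VAT registration number (placeholder)
ACCESS_TOKEN = "..."   # obtained via HMRC's OAuth 2.0 flow (omitted here)

# The nine-box VAT return, as structured in HMRC's VAT (MTD) API.
vat_return = {
    "periodKey": "A001",            # identifies the quarter being filed
    "vatDueSales": 1000.00,
    "vatDueAcquisitions": 0.00,
    "totalVatDue": 1000.00,
    "vatReclaimedCurrPeriod": 250.00,
    "netVatDue": 750.00,
    "totalValueSalesExVAT": 5000,
    "totalValuePurchasesExVAT": 1250,
    "totalValueGoodsSuppliedExVAT": 0,
    "totalAcquisitionsExVAT": 0,
    "finalised": True,              # declaration that the figures are final
}

response = requests.post(
    f"https://api.service.hmrc.gov.uk/organisations/vat/{VRN}/returns",
    json=vat_return,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/vnd.hmrc.1.0+json",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # submission receipt from HMRC
```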

Other Taxes – No Digital Submission Until 2020

The government announced in July 2017 that more time would be needed before an MTD-style programme could be mandated for taxes other than VAT, and that this would not happen until at least April 2020.

Also, the government announced earlier this year that, because it is focusing on supporting businesses through the transition to MTD, it will not be mandating Making Tax Digital for any new taxes or businesses in 2020.

FoI Request

The FoI request that revealed how many businesses still hadn’t registered for MTD was submitted by Float, a cashflow forecasting software company. The response to the request showed that, as of 18th March 2019, only 55,520 businesses were registered with the scheme. HMRC has since said that 70,000 businesses have now registered, which means that companies are registering at a rate of around 3,000 per day.

Criticism

HMRC has been criticised for not contacting many companies about the changes.  For example, it was revealed that as recently as last November, only 40% of companies had heard about the new programme.

What Does This Mean For Your Business?

2018 to 2019 has been a challenging year for businesses, with the preparations for and introduction of GDPR followed by the uncertainty surrounding Brexit overshadowing many other issues. Many businesses are reactive and busy simply keeping on top of day-to-day work, and in a situation like this, where communication from HMRC about MTD has been poor, it’s not surprising that many have still not registered. It may also be fair to say that many accountancy firms haven’t been as proactive as they could have been in informing their customers about MTD and its deadlines.

The introduction of MTD will undoubtedly require work and time in getting figures into a new and unfamiliar digital platform, but if it makes it easier for companies to stay on top of their tax affairs into the future, this will be a good thing, not least for the exchequer.