Data Security

Tech Tip – Encrypting Documents Stored on Google Drive

If you use Google Drive to store files in the cloud but are worried that Google doesn’t provide a true password protection feature, you may want to encrypt your files before uploading them.  Here’s how:

If you have Microsoft Office on your PC, it has a built-in encryption feature.

– Go to: File > Info > Protect Document > Encrypt with Password.

– Upload the file to Google Drive.

– Google can’t read the file, but it can be downloaded and opened on any PC with Microsoft Office installed (using the password), or even decrypted programmatically, as sketched below.
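As an aside, if you ever need to open a number of these password-protected Office files programmatically, the open-source Python library msoffcrypto-tool can decrypt them when given the password. Here is a minimal sketch (the file names and password are illustrative):

    # pip install msoffcrypto-tool
    import msoffcrypto

    # Open the password-protected file downloaded from Google Drive...
    with open("report-encrypted.docx", "rb") as f:
        office_file = msoffcrypto.OfficeFile(f)
        office_file.load_key(password="YourOfficePassword")  # the password set in Word
        # ...and write out a decrypted copy that any Office installation can read.
        with open("report-decrypted.docx", "wb") as out:
            office_file.decrypt(out)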

If you don’t have Microsoft Office, you could use Boxcryptor.  This is free for syncing one cloud storage service between two PCs.

– Install Boxcryptor (see boxcryptor.com).

– Enable Google Drive in Boxcryptor’s settings.

– Access Boxcryptor from Windows Explorer’s sidebar.

– Go to the Boxcryptor > Encrypt option, and watch the checkbox turn green.

The encrypted files will then be placed in Google Drive, but won’t be accessible unless you have Boxcryptor installed and logged in.

If you’re looking for a solution that’s free and can be used with any cloud storage service and any device, you may want to try VeraCrypt (for Windows, macOS, and Linux).  It creates an encrypted container in which you can store the files you want to protect, and you can then put that container anywhere for safekeeping.

– Install VeraCrypt (see veracrypt.fr).

– Create a new encrypted file container within your Google Drive folder.

– Mount that container from VeraCrypt’s main window (it will show up as if it were an external hard drive).

– Drag your sensitive files there and unmount the volume.

You will need VeraCrypt installed on any PC you use to access the documents inside that container.
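If you’re comfortable with a little scripting, another option is to encrypt individual files yourself before they ever reach your Google Drive folder. Below is a minimal sketch using the widely used Python ‘cryptography’ package; the file names are illustrative, and the key must be kept somewhere safe, away from the cloud folder it protects:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Generate a key once and store it safely (NOT in Google Drive).
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt the file before moving it into the Google Drive folder.
    with open("sensitive.docx", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("sensitive.docx.enc", "wb") as f:
        f.write(ciphertext)

    # Later, on any PC that has the key, recover the original.
    with open("sensitive.docx.enc", "rb") as f:
        plaintext = fernet.decrypt(f.read())

As with VeraCrypt, the protection is only as good as your key management: anyone with the key can decrypt the file, and without it, neither can you.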

Man Fined After Hiding From Facial Recognition Cameras

A man was given a public order fine after being stopped by police because he covered his face during a trial of facial recognition cameras in Romford, London.

What Facial Recognition Trial?

A deliberately “overt” trial of live facial recognition technology by the Metropolitan Police took place in the centre of Romford, London, on Thursday 31st January.  This was supposed to be the first day of a two-day trial of the technology, but the second day was cancelled due to concerns that the forecast snow would mean a low level of footfall in the area.

Live facial recognition trials of this kind use vehicle-mounted cameras linked to a watchlist of selected images from the police database.  Officers are deployed nearby so that they can stop those persons identified as matching suspects on the watchlist.

In the Romford trial, the facial recognition filming was reported to have taken place from a parked police van and, according to the Metropolitan Police, the reason for the use of the technology was to reduce crime in the area, with a specific focus on tackling violent crime.

Why The Fine?

The trial also attracted the attention of human rights groups, such as Liberty and Big Brother Watch, members of which were nearby and were monitoring the trial.

It was reported that the man who was fined, who hasn’t been named by police, was observed pulling his jumper over part of his face and putting his head down while walking past the police cameras, possibly in response to having seen placards warning that passers-by were the subjects of filming by police automatic facial recognition cameras.

It has been reported that the police then stopped the man to talk to him about what they may have believed was suspicious behaviour and asked to see his identification. According to police reports, it was at this point that the man became aggressive, made threats towards officers and was issued with a penalty notice for disorder as a result.

8 Hours, 8 Arrests – But Only 3 From Technology

Reports indicate that the eight-hour trial of the technology resulted in eight arrests, but only three of those arrests were as a direct result of facial recognition technology.

Criticism

Some commentators have criticised this and other trials for being shambolic, for not providing value for money, and for resulting in mistaken identity.

Research Questions Reliability

Research by Cardiff University examined the use of facial recognition technology across several sporting and entertainment events in Cardiff over the course of a year, including the UEFA Champions League Final and the Autumn Rugby Internationals.  The research found that for 68% of submissions made by police officers in the ‘Identify’ mode, the image quality was too low for the system to work.  The research also found that the ‘Locate’ mode of the FRT system couldn’t correctly identify a person of interest 76% of the time.

Also, in December 2018, ICO head Elizabeth Denham was reported to have launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

What Does This Mean For Your Business?

It has been reported that, despite over £200,000 being spent on six deployments of facial recognition trials between August 2016 and July 2018, no arrests were made.  On the surface, these figures suggest that, although the technology has the potential to add value and save costs, and although businesses in town centres are likely to welcome efforts to reduce crime, the trials to date don’t appear to have delivered value for money to taxpayers.

There was also criticism of the facial recognition system used in Soho, Piccadilly Circus and Leicester Square over two days in the run-up to Christmas.  Freedom campaigners such as Big Brother Watch and Liberty were concerned about mixed messages from police over how those who turn away from the facial recognition cameras mounted in/on police vans, because they don’t want to be scanned, could be treated.

Despite some valid worries and criticism, most businesses and members of the public would probably agree that CCTV systems have a real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials.  There are, however, several concerns, particularly among freedom and privacy groups, about just how facial recognition systems are being (and will be) used as part of policing, e.g. overtly or covertly, issues of consent, possible wrongful arrests due to system inaccuracies, and the widening of the scope of their purpose beyond the police’s stated aims.  Issues of trust where our personal data is concerned are still a problem, as are worries about a ‘big brother’ situation for many people.

$180 Million Password Taken To The Grave

115,000 customers of the Canadian digital platform Quadriga are believed to be owed C$250 million, but C$180 million ($137.21 million) in cryptocurrencies has been frozen after the platform’s founder, who was the only person with the password to the platform’s stored funds, died in December 2018.

What Is Quadriga?

QuadrigaCX is a Canadian cryptocurrency exchange/platform which allows the trading of Bitcoin, Litecoin and Ethereum.  Founded by Gerald Cotten, QuadrigaCX was Canada’s largest cryptocurrency exchange until 2019 and has 363,000 registered users.

Cold Storage

As part of QuadrigaCX’s security measures, ‘cold storage’ was used for most of the Bitcoins within their system. Unfortunately for Quadriga, it is this part of the system, where the bulk of their funds are stored, that is ultimately protected by one main password that was known only to the late founder, Gerald Cotten.

Dead

Mr Cotten died aged 30 from complications related to Crohn’s disease while he was volunteering at an orphanage in India.

Widow Under Pressure

With so much money owed to customers, Mr Cotten’s widow, Jennifer Robertson, is reported to have found herself under pressure to find the password.  It has been reported that Robertson, who was not involved in Cotten’s business while he was alive and does not have business records for QuadrigaCX, has conducted repeated searches for the password.

Although Robertson has Mr Cotten’s laptop, she has (so far) been unable to access the contents because it is encrypted, and no one has the password or recovery key for it. Additional attempts to decrypt the laptop have also been unsuccessful.

It has also been reported that Robertson has consulted an expert to help recover details from Cotten’s other computer and cell phones, although the expert’s attempts have been reported to have had only ‘limited’ success to date.

QuadrigaCX has now filed for “creditor protection” in an attempt to avoid bankruptcy.

Customers Unable to Withdraw Funds

In the meantime, customers have reported online that they have been unable to withdraw their funds from the platform for months, that they have only received limited information, and that the website was also recently taken down for maintenance.

What Does This Mean For Your Business?

This story highlights some of the risks associated with cryptocurrencies, and how a lack of regulation and a market that’s still in its relatively early stages can leave investors in unusual, worrying situations such as this one. In many other types of financial business involving that level of funding, it would be highly unlikely that a single password known only to one person would play such an important role. Some would say that it’s ironic that passwords are now often considered much less secure than other security tools, and yet this password-controlled system has so far confounded even the experts.  It is also ironic that the ‘cold storage’ of funds, in this case, was introduced as a security measure to protect customer funds but has ended up being so secure that customers have no access to those funds.

Looking at the size of QuadrigaCX and the number of customers it has, cryptocurrencies clearly still provide a useful and valuable opportunity for trading and investment. They have, however, had a turbulent life to date, making the news for many negative reasons.  Bitcoin alone has faced regulations and restrictions in some countries (e.g. China), hacks, volatility, a negative image from its use by international criminals and in scams, and a general lack of knowledge about how to use it; the high price of just one bitcoin also made it (even more) niche.  Together, these factors meant that bitcoin became a commodity and a fast-buck opportunity rather than an actual, useful currency, and its over-consumption and over-inflated value led to its spectacular fall in value.  There have also been well-publicised falls in value for cryptocurrencies like Ethereum’s ‘ether’ and Ripple’s XRP, and Tether found itself being investigated by the U.S. Department of Justice over possible manipulation of bitcoin prices at the end of 2017.

All this said, many governments and banks would still like a ‘piece of the action’ of cryptocurrencies, and many market analysts see a future for them as a part of a wider ecosystem.

Apple’s Video-Calling ‘Eavesdropping’ Bug

Apple Inc has found itself at the centre of a security alert after a bug in the group-calling part of its FaceTime video-calling feature was found to allow a caller to eavesdrop on a call’s recipient before the call had been answered.

Sound, Video & Broadcasting

The bug allowed the caller to hear audio from the recipient’s phone even if the recipient had not yet picked up the call.  Worse, if the recipient pressed the power button on the side of the iPhone, e.g. to silence or ignore the incoming call, the same bug allowed the caller to see video of the person they were calling before that person had picked up. This was because pressing the power button effectively started a broadcast from the recipient’s phone to the caller’s phone.

Data Privacy Day

Unfortunately for Apple, insult was added to injury as news of the bug was announced on Data Privacy Day, a global event that was introduced by the Council of Europe in 2007 in order to raise awareness about the importance of protecting privacy. Shortly before news of the Apple group FaceTime bug was made public, Apple’s Chief Executive, Tim Cook, had taken to Twitter to highlight the importance of privacy protection.

It Never Rains…But It Pours

To make things even worse, news of the bug was made public the day before Apple was due to announce its reduced revenue forecast figures as part of its quarterly financial results. Apple has publicly reduced its revenue forecast by £3.8bn.  Apple’s chief executive put the blame for the revised lower revenue mainly on the unforeseen “magnitude of the economic deceleration, particularly in Greater China”.  He also blamed several other factors, such as a battery replacement programme, problems with foreign exchange fluctuations, and the end of carrier subsidies for new phones.

Feature Disabled

In order to close the security and privacy hole that the bug created, Apple announced online that it had disabled the Group FaceTime feature at 3:16 AM on Tuesday.

Fix On The Way

Apple has announced that a fix for the bug will be available later this week as part of Apple’s iOS 12.2 update.

What Does This Mean For Your Business?

Apple has disabled the Group FaceTime feature with the promise of a fix within days, which should provide protection from any new attempts to exploit the bug. Those users who are especially concerned can also decide to disable FaceTime in the iPhone altogether via the phone’s settings.

Even though the feature has been disabled, the potential for eavesdropping on private conversations and broadcasting video from a call recipient’s phone was a serious threat to the privacy and security of Apple phone users.  This has caused some tech commentators to express surprise that a bug like this could be discovered in the trusted, trillion-dollar company’s products, and concern that those users who, for whatever reason, don’t update their phones to the latest operating system may not be protected.

Millions of Taxpayers’ Voiceprints Added to Controversial HMRC Biometric Database

The fact that the voiceprints of more than 2 million people have been added to HMRC’s Voice ID scheme since June 2018, to add to the 5 million plus other voiceprints already collected, has led to complaints and challenges to the lawfulness of the system by privacy campaigners.

What HMRC Biometric Database System?

Back in January 2017, HMRC introduced a system whereby customers calling the tax credits and Self-Assessment helpline could enrol for voice identification (Voice ID) as a means of speeding up the security steps. The system uses 100 different characteristics to recognise the voice of an individual and can create a voiceprint that is unique to that individual.

When customers call HMRC for the first time, they are asked to repeat a vocal passphrase up to five times before speaking to a human adviser.  The recorded passphrase is stored in an HMRC database and can be used as a means of verification/authentication in future calls.

Got Voices By The Back Door Said Big Brother Watch

It has been reported that in the 18 months following the introduction of the system, HMRC acquired 5.1 million people’s voiceprints this way.

Back in June 2018, privacy campaigning group ‘Big Brother Watch’ reported that its own investigation had revealed that HMRC had (allegedly) taken 5.1 million taxpayers’ biometric voiceprints without their consent.

Big Brother Watch alleged that the automated system offered callers no choice but to do as instructed and create a biometric voice ID for a Government database.  The only way to avoid creating the voice ID on calling, as identified by Big Brother Watch, was to say “no” three times to the automated questions, whereupon the system would still offer to set up a voice ID on the next call.

Big Brother Watch was concerned that GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a person unless there is a lawful basis under Article 6, and that because voiceprints are sensitive data that are not strictly necessary for dealing with tax issues, HMRC should have requested the explicit consent of each taxpayer before enrolling them in the scheme (Article 9 of GDPR).

This led to Big Brother Watch registering a formal complaint with the ICO, the result of which is still to be announced.

Changes

Big Brother Watch’s complaint may have been the prompt for changes to the Voice ID system. In September 2018, HMRC permanent secretary Jon Thompson said that HMRC felt it had been acting lawfully by relying on the implicit consent of users.  Mr Thompson acknowledged, however, that the original messages played to callers had not explicitly stated that it was possible to opt out of the voice ID system, or how, and that, in the light of this, the message had been updated (in July 2018) to make this clear.

Mass Deletions?

On the point of whether HMRC would consider deleting the 6 million voiceprint profiles of people who registered before the wording was changed to include the opt-out option, Mr Thompson has said that HMRC will wait for the completion of the ICO’s investigation.

Backlash

Big Brother Watch has highlighted a backlash against the Voice ID system as indicated by the 162,185 people who have called HMRC to have their Voice IDs deleted.

What Does This Mean For Your Business?

Even though many businesses and organisations are switching, or planning to switch, to biometric identification/verification systems in place of less secure password-based systems, it is still important to remember that these are subject to GDPR. For example, images and unique voiceprint IDs are personal data that require explicit consent to be given, and people have the right to opt out as well as to opt in.

It remains to be seen whether the outcome of the ICO investigation will require mass deletions of Voice ID profiles.  Big Brother Watch states on its website that if people are not happy about the HMRC system they can complain to the HMRC directly (via the government website) or file a complaint about the HMRC system to the ICO via the ICO website (the ICO is already investigating HMRC about the matter).  HMRC has said that all the voice data is stored securely and that customers can now opt out of Voice ID or delete their records any time they want.

Google’s £44 Million GDPR Fine

Google has been fined a massive 50 million euros (£44m) for breaches of GDPR dating back to May 2018, relating to how well people were informed about how Google collected data to personalise advertising, and to the matter of consent.

Who?

Google (Alphabet Inc) has been fined £44 million by the French data regulator CNIL.  The two complaints that brought about the investigation and the fine were filed in 2018 by privacy rights groups noyb and La Quadrature du Net (LQDN).

Even though the fine is eye-wateringly large, the maximum fine for large companies like Google under GDPR could have been 4% of annual turnover, which could equate to around €4bn.

Ad Personalisation & Google

Google personalises the adverts that are displayed when a person is signed in to their Google account based on ad-personalisation settings. When a person is signed out of their Google account, they are still subject to ad-personalisation across the Web on Google’s partner websites and apps based on their browsing history, and on Google Search based on their previous activity such as previous searches.

What & Why?

The two privacy groups complained that Google didn’t have a valid legal basis to process user data for ad-personalisation because of issues relating to transparency and consent.

The reasons for Google receiving the fine were that:

  1. Google failed to provide its users with transparent and understandable information on its data use policies.  The “essential information” that users would have needed to understand how Google collected data to personalise advertising, and the extent of that collection, was too difficult to find because it was spread across several documents, and was only fully accessible after several steps (e.g. up to five or six actions). Ultimately, this meant that users were unable to exercise their right to opt out of data-processing for the personalisation of ads.
  2. It was also found that the option to personalise ads was “pre-ticked” when creating an account.  This meant that users were essentially giving blanket consent for all of the processing operations carried out by Google on the basis of that consent.  Under GDPR, however, consent is ‘specific’ only if it is given distinctly for each purpose.

Other Complaints

Privacy group noyb has also filed more formal complaints against Amazon, Apple, Google, Netflix, Spotify, and other entertainment streaming services. The reason, according to noyb, is that when people request a copy of the personal data that these companies hold on them, some of it may not be supplied in a format that can be easily understood.  GDPR requires companies to supply users with a copy of their data that is both machine-readable and can be easily understood.

What Does This Mean For Your Business?

Even before GDPR was introduced, many technology and security commentators predicted that the big names e.g. Google and Facebook would be the first to be targeted by privacy campaigners, and that appears to be what is happening here. In this case however, the fact that the complaints have created a record-breaking fine shows that there was genuine concern about a lack of compliance with GDPR from a company that many would have expected to be on top of the legislation and setting an example. It is likely that Google will need to make some significant modifications to some aspects of its services now, and that this may prompt other large tech companies to do the same in order to avoid similar fines and bad publicity.

This case is a reminder to businesses, particularly larger ones, that although GDPR appears to have been buried by concerns about Brexit, staying compliant with GDPR is an ongoing process that should still be high on the business agenda.

Biggest Personal Data Breach Puts Password Effectiveness In The Spotlight

Password-based authentication has long been known to be less secure than other methods such as multi-step verification or biometrics, but a massive leak of a staggering 87GB of data, comprising 772.9 million email addresses, 21.2 million passwords and 1.1 billion email address and password combinations, recently shared on hacking forums has brought the inherent weaknesses of password authentication into sharp focus.

What Leak?

The massive leak of 2.6 billion rows of data from 12,000 files, dubbed Collection #1, onto hacking forums was revealed in a blog post by security researcher Troy Hunt, who is best known for running the ‘Have I Been Pwned’ service.

In his post, Mr Hunt said that the leaked personal data is a set of email addresses and passwords totalling 2,692,818,238 rows and is made up of many different data breaches from thousands of different sources. The data contains 772,904,991 unique email addresses, and 21,222,975 unique passwords, all of which can be put into 1,160,253,228 unique combinations.

Risks

Clearly, Mr Hunt has an interest in publicising the existence of Collection #1 and the fact that it has been incorporated into his ‘Have I Been Pwned’ service, but as he points out, if your password/email combinations are part of the collection and have not been changed since, you could face some serious risks.  For example:

  • Credential stuffing attacks. In this case, 2.7 billion of the username and password combinations could be put into a list and used for credential stuffing.  This is where cyber-criminals rely on the fact that people may use the same username and password combinations for multiple websites, and therefore, the criminals use software to automate the process of trying the breached username/password pairs on many other websites to see if they can gain access.
  • Phishing attacks.  The stolen credentials can be used to automatically send malicious emails to a victim’s list of contacts.
  • Targeted digital identity attacks. The breached credentials can be used in targeted attacks designed to steal a victim’s entire digital identity or steal their money or even to compromise their social media network data.

What Does This Mean For Your Business?

This story highlights the importance of always using strong passwords that you change on a regular basis. Also, it highlights the importance of not using the same usernames and passwords on multiple websites as this can provide an easy route to your data for criminals using credential stuffing.

Managing multiple passwords in a way that is secure, effective, and doesn’t rely on memory is difficult, particularly for businesses where there are multiple sites to manage. One tool that can help is a password manager.  Typically, these are installed as browser plug-ins that handle password capture and replay: when you log in to a secure site, they offer to save your credentials, and on returning to that site, they can automatically fill those credentials in. Password managers can also generate new passwords when you need them and automatically paste them into the right places, as well as sync your passwords across all your devices. Examples of popular password managers include Dashlane, LastPass, Sticky Password, and Password Boss; password vaults built into other programs and CRMs include Zoho Vault and Keeper Password Manager & Digital Vault.
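Whichever tool you choose, the passwords themselves should come from a cryptographically secure random source rather than from memory or a keyboard pattern. As a small illustrative sketch in Python (the length and character set are arbitrary choices):

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Build a password from a cryptographically secure random source."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # paste the result straight into your password manager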

If you’re worried that people in your organisation may be using passwords that have been stolen, Troy Hunt has provided a list of them here:  https://www.troyhunt.com/pwned-passwords-now-as-ntlm-hashes/  and provides some answers to popular questions about the stolen passwords in the ‘FAQs’ section of his blog post here: https://www.troyhunt.com/the-773-million-record-collection-1-data-breach/
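The ‘Have I Been Pwned’ service behind those posts also offers a public ‘range’ API that lets you check a password against the breached set without ever sending the password itself: you submit only the first five characters of its SHA-1 hash and compare the returned hash suffixes locally. A minimal sketch in Python (error handling omitted):

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        """Return how many times a password appears in the Pwned Passwords data."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # Only the 5-character hash prefix ever leaves your machine.
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as response:
            body = response.read().decode("utf-8")
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(pwned_count("password123"))  # a non-zero count means: change it

This k-anonymity design means that neither the password nor its full hash is ever disclosed to the service.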

£15K Fine For Ignoring Data Access Requests

SCL Elections, the parent company of the now defunct Cambridge Analytica which was famously involved in the Facebook profile harvesting scandal, has been fined £15,000 for failing to respond to a data access request from a US citizen, and for ignoring an enforcement notice by the UK’s Information Commissioner’s Office (ICO).

Data Protection Act

The fine was imposed for a breach of the Data Protection Act, which was in force at the time of the data request, originally made back in 2017.  GDPR, which came into force on 25th May 2018 (replacing the Data Protection Directive), covers the data protection rights of EU citizens.

The person who made the data request in this case, however, was US citizen Professor David Carroll, and SCL Elections wrongly believed that because he was not a UK citizen, he had no more right to request access to data “than a member of the Taliban sitting in a cave in Afghanistan”.

What Happened?

Professor David Carroll, who was based in New York at the time of his original data request under the UK Data Protection Act in May 2017, asked SCL Elections’ Cambridge Analytica branch in the UK to provide all the data it had gathered on him. Under that law, SCL Elections should have responded within 40 days with a copy of the data, the source of the data, and a statement of whether the organisation had given, or intended to give, the data to others.

Professor Carroll, a Democrat, was reported to have been interested from an academic perspective in the practice of political ad targeting in elections and believed that he may have been targeted with messages that criticised Secretary Hillary Clinton with falsified or exaggerated information that may have negatively affected his sentiment about her candidacy.

Sent Basic Information On A Spreadsheet

Some weeks after Professor Carroll’s subject access request in early 2017, SCL Elections sent him a spreadsheet of basic information that it held about him.

However, that information contained accurate predictions of Professor Carroll’s views on some issues and had scored him a 9 out of 10 on what it called a “traditional social and moral values importance rank”.

Wanted To Know How

This prompted Professor Carroll to submit a second request to SCL Elections, this time to find out what that ranking meant and what it was based on, and where the data about him came from. This second request was ignored by SCL.

The CEO of Cambridge Analytica at the time, Alexander Nix, told a UK parliamentary committee that his company would not provide American citizens, like David Carroll, with all the data it held on them, or tell them where the data came from, and Nix (mistakenly) said that there was no legislation in the US that allowed individuals to make such a request.

ICO Involved

The ICO then became involved, with the UK’s Information Commissioner, Elizabeth Denham, sending a letter to SCL Elections (Cambridge Analytica) asking where the data on Professor Carroll came from and what had been done with it.  A section 40 enforcement notice was also issued to SCL Elections in May 2018, making it a criminal matter if the company failed to comply by responding to the request and providing the full records requested by Carroll. No records were forthcoming, which resulted in the recent prosecution, the first against Cambridge Analytica.

During the case at Hendon Magistrates’ Court, it was revealed that SCL Elections had a turnover of £25.1m and profits of £2.3m in 2016.  The judge fined SCL Elections £15,000 for failing to comply with the section 40 enforcement notice from the ICO and ordered the company (whose affairs are being handled by administrators, Crowe UK) to pay a contribution of £6,000 towards the ICO’s legal costs, and a victim surcharge of £170.

Some Mitigating Circumstances

Although counsel for SCL Elections’ administrators acknowledged that SCL Elections had failed to respond to the section 40 enforcement notice, they did highlight some mitigating circumstances, such as the company’s computer servers being seized by the ICO following a raid on the SCL Elections premises in March 2018.

What Does This Mean For Your Business?

This case shows that ignorance of data protection law is not a defence and that businesses and organisations need to protect their customers, stakeholders, and themselves by making sure that they fully understand and comply with data protection laws. This is particularly relevant in the UK since the introduction of GDPR.

As pointed out by Information Commissioner Elizabeth Denham in this case, companies and organisations that handle personal data need to respect people’s legal privacy rights and to understand that wherever a person lives in the world, if their data is being processed by a UK company, UK data protection laws apply. This case has also highlighted the fact that where there is no compliance with the law, and where ICO enforcement notices are ignored, action will be taken that could be very costly to the subject of that action.

Fake News Fact Checkers Working With Facebook

London-based, registered charity ‘Full Fact’ will now be working for Facebook, reviewing stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Why?

The UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election were all found to have suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters.

For example, back in 2018, it was revealed that London-based data analytics company, Cambridge Analytica, which was once headed by Trump’s key adviser Steve Bannon, had illegally harvested 50 million Facebook profiles in early 2014 in order to build a software program that was used to predict and generate personalised political adverts to influence choices at the ballot box in the last U.S. election. Russia was also implicated in trying to influence voters via Facebook.

Chief executive of Facebook, Mark Zuckerberg, was made to appear before the U.S. Congress in April 2018 to talk about how Facebook is tackling false reports, and even recently a video that was shared via Facebook (and which had 4 million views before being taken down) falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was believed by many, even though it was false.

Scoring System

Back in August 2018, it was revealed that for two years Facebook had been trying to manage some misinformation issues by using a system (operated by its own ‘misinformation team’) that allocated a trustworthiness score to some members.  Facebook is reported to be working with fact-checkers in more than 20 countries already, and to have had a working relationship with Full Fact since 2016.

Full Fact’s System

This new system from third-party Full Fact will now focus on Facebook in the UK.  When users flag up to Facebook what they suspect may be false content, the Full Fact team will identify and review public pictures, videos or stories and use a rating system that will categorise them as true, false or a mixture of accurate and inaccurate content.  Users will then be told if the story they’ve shared, or are about to share, has been checked by Full Fact, and they’ll be given the option to read more about the claim’s source, but will not be stopped from sharing anything.

Also, the false rating system should mean that false content will appear lower in news feeds, so it reaches fewer people. Satire from a page or domain that is a known satire publication will not be penalised.

Like other Facebook third-party fact-checkers, Full Fact will be able to act against pages and domains that repeatedly share false-rated content, e.g. by reducing their distribution and their ability to monetise and advertise.  Full Fact should also be able to stop repeat offenders from registering as a news page on Facebook.

Assurances

Full Fact has published assurances that among other things, they won’t be given access to Facebook users’ private data for any reason, Facebook will have no control over what they choose to check, and they will operate in a way that is independent, impartial and open.

Political Ad Transparency – New Rules

In October last year, Facebook also announced a new rule for the UK: anyone who wishes to place an advert relating to a live political issue or promoting a UK political candidate, referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate, will need to prove their identity, and prove that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer to enable Facebook users to see who they are engaging with when viewing the ad.

What Does This Mean For Your Business?

As users of social networks, we don’t want to see false news, and false news that influences the outcome of important issues (e.g. elections and referendums) has a knock-on effect on the economic and trade environment which, in turn, affects businesses.

Facebook appears to have lost a lot of trust over the Cambridge Analytica (SCL Elections) scandal, findings that Facebook was used to distribute posts of Russian origin to influence opinion in the U.S. election, and that the platform was also used by parties wishing to influence the outcome of the UK Referendum. Facebook, therefore, must show that it is taking the kind of action that doesn’t stifle free speech but does go some way to tackling the spread of misinformation via its platform.

There remains, however, some criticism in this case that Facebook may still be acting too slowly and not decisively enough, given the speed by which some false content can amass millions of views.