
AI and the Fake News War

In a “post-truth” era, AI is both a protective tool and a weapon in the battles that make up the current, ongoing “fake news” war.

Fake News

Fake news has become widespread in recent years, most prominently around the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election, all of which suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters. The Cambridge Analytica scandal, in which over 50 million Facebook profiles were illegally harvested and shared to build a software program that generated personalised political adverts, led to Facebook’s Mark Zuckerberg appearing before the U.S. Congress to discuss how Facebook is tackling false reports. One video shared via Facebook, for example (which had 4 million views before being taken down), falsely suggested that smart meters emit radiation levels that are harmful to health. Many believed the information in the video even though it was false.

Government Efforts

The Digital, Culture, Media and Sport Committee published a report in February on disinformation and ‘fake news’, highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”. The UK government has, therefore, been calling for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Fact-Checking

One way that social media companies have sought to tackle the concerns of governments and users is to buy in fact-checking services to weed out fake news from their platforms. For example, back in January, the London-based registered charity ‘Full Fact’ announced that it would be working for Facebook, reviewing stories, images and videos to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Moderation

A moderator-led response to fake news is one option, but its reliance upon humans means that this approach has faced criticism over its vulnerability to personal biases and perspectives.

Automation and AI

Many now consider automation and AI to be ‘intelligent’, fast, and scalable enough to start tackling the vast amount of fake news that is being produced and circulated. For example, Google and Microsoft have been using AI to automatically assess the truth of articles. Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be leveraged to combat fake news, and support the idea that AI technologies hold promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
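
To make the promise of automation a little more concrete, below is a toy, hypothetical sketch (in TypeScript) of one small step that such systems can automate: checking whether an article’s body text is even related to its headline before a human fact-checker spends time on it. The headline, body text, function names and the 0.4 threshold are invented for illustration; real systems, such as Fake News Challenge entries, use trained machine learning and NLP models rather than simple word overlap.

// Toy relatedness check: what fraction of the headline's words appear in the body?
// A crude baseline only, far simpler than the trained models real systems use.
function tokenise(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z']+/g) ?? []);
}

function headlineBodyOverlap(headline: string, body: string): number {
  const headlineWords = tokenise(headline);
  const bodyWords = tokenise(body);
  if (headlineWords.size === 0) return 0;
  let shared = 0;
  headlineWords.forEach(word => {
    if (bodyWords.has(word)) shared += 1;
  });
  return shared / headlineWords.size; // 0 = no overlap, 1 = every headline word found
}

// Hypothetical example: a low score suggests the body doesn't support the headline,
// so the story would be routed to a human fact-checker rather than waved through.
const overlap = headlineBodyOverlap(
  "Smart meters emit dangerous levels of radiation",
  "The widely shared video repeats debunked claims about radiation from household smart meters."
);
console.log(overlap < 0.4 ? "flag for human fact-checker" : "headline and body appear related");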

However, the human-written rules underpinning AI, and the way AI is ‘trained’, can also lead to bias.

Deepfake Videos

Deepfake videos are an example of how AI can be used to create fake news in the first place. They use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people, to create an embarrassing or scandalous video. Deepfake audio can be produced in a similar way. Deepfake videos aren’t just used to create fake news; they can also be used by cyber-criminals for extortion.

AI Voice

There was also a case in March this year where a group of hackers used AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Does This Mean For Your Business?

Fake news is a real and growing threat, as the use of Facebook to disseminate fake news during the UK referendum, the 2017 UK general election, and the U.S. presidential election has demonstrated. State-sponsored, politically targeted campaigns can have a massive influence on an entire economy, while other fake news campaigns can affect public attitudes to ideas and people and can lead to many other complex problems.

Moderation and automated AI may both suffer from bias, but they are at least both ways in which fake news can be tackled, to an extent. By adding fact-checking services, other monitoring, and software-based approaches (e.g. through browsers), social media and other tech companies can take responsibility for weeding out and guarding against fake news.

Governments can also help in the fight by putting pressure on social media companies and by collaborating with them to keep the momentum going and to help develop and monitor ways to keep tackling fake news.

That said, fake news is still a big problem and no solution is infallible. All of us as individuals would do well to remember that, especially today, you really can’t believe everything you read, and that an eye to the source and bias of news, coupled with a degree of scepticism, is often healthy.

Email Signature Legally Binding For Lawyer

A recent High Court ruling that an email containing an automated signature is legally binding proved costly to a lawyer whose email, sent on behalf of his client, included the wrong price for a land sale.

£25,000 Below

The unfortunate lawyer, Daniel Tear, sent an email to another lawyer setting out the terms of an owner’s land/property sale, but with the sale price listed as £25,000 lower than the asking price, and the ruling about his email signature at the County Court in Manchester proved to be very costly.

In the case, which related to a dispute over the sale of land near Lake Windermere listed as a “jetty/boat landing plot/mooring”, it has been reported that the land should have been offered for sale at the asking price of £200,000, but that (according to published court documents) Mr Tear’s email to the lawyer of those wishing to purchase the land specified a price of “£175,000 (one hundred and seventy-five thousand pounds)”.

The lawyer acting for the buyer accepted the deal, and despite Mr Tear later emailing all the parties to say the deal had not been finalised by email, the court ruling went against him and his client.

Why?

According to the published court documents, which refer to matters related to certain sections of the Law of Property (Miscellaneous Provisions) Act 1989, Mr Tear’s auto-signature (created using Microsoft Outlook) at the bottom of his email, accompanied by the words “Many Thanks” (which link the email’s contents to the signature), was enough to make the agreement contained in the email binding.

In a hearing which considered the many difficulties around an email footer possibly being treated as a sufficient act of signing, the judge stated that he was “satisfied that Mr Tear signed the relevant email on behalf of the Defendant” and that “the Claimants are entitled to the order for specific performance that is sought”.

Mr Tear’s argument that the case fell under Section 2(1) of the Law of Property (Miscellaneous Provisions) Act 1989, i.e. “The document incorporating the terms or, where contracts are exchanged, one of the documents incorporating them (but not necessarily the same one) must be signed by or on behalf of each party to the contract”, was, therefore, not accepted by the court.

What Does This Mean For Your Business?

As with most legal matters, if you read the court documents (here: https://www.bailii.org/ew/cases/EWHC/Ch/2019/2462.html) there were many different considerations in the case. One thing that businesses can take away from it, however, is that an email signature added to the footer of your Outlook emails, even though it is inserted automatically into each email, may still be enough to legally bind you to the contents of that email, even where you have made a mistake. It goes without saying, therefore, that businesses need to be very careful to check that prices and quotes in emails to clients (where an email signature is included) are correct and that any terms are clearly stated. This ruling could have implications, now and in future, for many businesses in disputes relating to the contents of business emails.

Penetration Testing Specialists Who Broke Into US Courthouse Claim It Was Part of Security Assessment

Two security specialists who physically broke into a US courthouse have claimed that the break-in was part of the security assessment their company had been hired to carry out.

What Happened?

Iowa’s State Court Administration (SCA) is reported to have hired security company Coalfire Labs to conduct testing of the security of the court’s electronic records at the Dallas County Courthouse in the town of Adel, around 20 miles west of Des Moines.

Police were called to the courthouse just after midnight on 11 September after two men had been seen walking around on the third floor. When the two men, named as Justin Wynn and Gary Demercurio, came to the door to meet the police, they were allegedly carrying multiple burglary tools and allegedly claimed that they had been ‘contracted’ to break into the building to check the courthouse alarm system and how responsive the police were. The two men were promptly arrested, jailed and released on a $50,000 bond.

No Knowledge

It has been reported that, at the time, Dallas County claimed to have no knowledge of the security company or their plans, but Iowa’s State Court Administration did later release a statement confirming that it hired the company Coalfire Labs to test the security of the court’s electronic records.

The State Court Administration did, however, say that although it had asked the company to attempt unauthorised access to court records through various means to learn of any potential vulnerabilities, it didn’t intend or expect those means to include forced entry to the building, an act that it could not condone (certainly for cyber testing!).

Would A Physical Break-In Be Part of a Pen Test?

Some tech commentators have speculated that, because some cybercrimes require the criminal to be physically close to target devices, companies and organisations may need to consider investing in physical defences as well as cyber defences.

Coalfire

Coalfire Labs, the global company hired to carry out the pen testing assessment, which is reported to have carried out hundreds of assessments for government agencies in the past, has been unable to comment on this particular case due to the confidential nature of its work, security and privacy laws, and the fact that a legal case is active.

Similar?

One thing that may not be good news for the two penetration testers is that a break-in at the nearby Polk County Historic Courthouse on 9 September has been reported as apparently similar in nature to the Dallas County Courthouse break-in.

What Does This Mean For Your Business?

Physical security is, of course, an important part of protecting the whole business, and under GDPR, data security should not involve leaving personal data anywhere that it could easily be accessed by unauthorised persons, whether it’s in a physical or virtual location.

Penetration testing is a legitimate and valuable way for companies and organisations to assess where more work needs to be done to ensure the safety of all the digital data and information that they hold, but it is unlikely that many UK businesses would consider a physical break-in to be a legitimate part of what is usually an electronic assessment. It remains to be seen what happens in the US court case.

Less Than Half of Small Businesses Ready For No-Deal Brexit

Research from techUK shows that less than half of small UK businesses consider themselves to be ready to face a no-deal Brexit on 31 October, whereas 87% of larger businesses think they are prepared.

Small and Medium

The techUK research shows that only 43% of UK small businesses think they are ready for the prospect of a no-deal Brexit, not far behind the mere 50% of medium-sized companies that expressed readiness.

Not Up To Date With Government Guidance

The survey revealed that although most enterprises are aware that the government has given guidance on getting ready for a no-deal Brexit, only 30% of small businesses and 33% of medium-sized businesses regard themselves as being up to date with that guidance.

Popular Concerns

In addition to the impact on the UK economy, some of the common concerns that many businesses have about a no-deal Brexit include where they stand in terms of regulatory compliance and any extra regulatory barriers that may hinder trade, and difficulty in finding staff after an end to freedom of movement (there is already a tech skills shortage and a tech ‘brain drain’). Businesses are also clearly worried about post-Brexit relationships with suppliers, whether contracts will need to be updated, and whether they will have enough of the right raw materials and parts to keep production running smoothly and meet customer demand while keeping costs and prices down.

Data Protection Guidance For Brexit

As far as staying compliant with data protection laws is concerned, the ICO has recently stated that if a UK business or organisation already complies with the GDPR and has no contacts or customers in the EEA, it doesn’t need to do much more to prepare for data protection compliance after Brexit.

The latest guidance for businesses facing a no-deal Brexit can be found on the ICO website here: https://ico.org.uk/for-organisations/data-protection-and-brexit/data-protection-and-brexit-for-small-organisations/

What Does This Mean For Your Business?

It doesn’t take a study to find out that there is still a great deal of uncertainty about trading post-Brexit, particularly about the impact of a no-deal Brexit. As the study indicated, many businesses are aware that guidance is available from government sources, but SMEs don’t appear to be up to date with that guidance. It is good, at least, that the ICO has issued clear, easily accessible guidance on its website to help companies prepare to remain GDPR compliant after Brexit. Other Brexit guidance for small businesses can be found on the FSB website here https://www.fsb.org.uk/standing-up-for-you/brexit/resources and on the main UK government website here https://www.gov.uk/find-eu-exit-guidance-business.

Leaving Your Job? Don’t Take Personal Data With You Warns ICO

The Information Commissioner’s Office (ICO) has warned those retiring or taking a new job that under the Data Protection Act 2018, employees can face regulatory action if they are found to have retained information collected as part of their previous employment.

Old Investigation

The renewed warning was issued after the regulator concluded its dealings in an old investigation of two former police officers who had been interviewed by the media about a historic case involving an MP that they’d worked on as serving officers, and who had been accused of disclosing details about the case to the media.

In this case, the investigation appears to have related to police handling of personal data such as notebooks and the fact that measures need to be put in place to ensure that these are not retained when officers leave the service.

The ICO investigation, brought under the previous Data Protection Act 1998 legislation (because the alleged disclosure occurred before the introduction of the DPA 2018 and GDPR), resulted in no enforcement action being taken against the two officers, but it prompted the ICO to issue a reminder that data protection laws have since been toughened in this area.

“Knowingly or Recklessly Retaining Personal Data”

The warning in the ICO’s recent statement is that the Data Protection Act 1998 has since been strengthened through the Data Protection Act 2018, to include a new element of knowingly or recklessly retaining personal data without the consent of the data controller (see section 170 of the DPA 2018).

The only exceptions to this new part of the Act are where retention is necessary for the purposes of preventing or detecting crime, is required or authorised by an enactment, by a rule of law or by the order of a court or tribunal, or where it is justified as being in the public interest.

Retiring or Taking a New Job

The ICO has warned that anyone who deals with the personal details of others in the course of their work, whether in the private or public sector, should take note of this update to the law, especially when retiring or taking on a new job. Those leaving or retiring should also note that they will be held responsible if a breach of personal data from their previous employer can be traced to their individual actions.

Examples

Examples of where the ICO has prosecuted for this type of breach of the law include a charity worker who, without the knowledge of the data controller, Rochdale Connections Trust, sent emails from his work email account (in February 2017) containing sensitive personal information about 183 people. Also, a former council schools admissions department apprentice was found guilty of screen-shotting a spreadsheet that contained information about children and their eligibility for free school meals and then sending it to a parent via Snapchat.

What Does This Mean For Your Business?

This latest statement from the ICO should remind all businesses and organisations, whether in the private or public sectors, that reasonable measures or procedures need to be put in place to ensure that anyone retiring or leaving for another job cannot take personal details with them that should be under the care of the data controller i.e. you and your company/organisation.

Failure to take this facet of current data law into account could result in fines from the regulator for the individuals responsible, potential legal action against your organisation from the victims of any breach, bad and potentially damaging publicity, and costly, long-lasting damage to reputation.

Audible’s ‘Captions’ Subtitles Feature Attracts Lawsuit From Publishers

The Amazon-owned producer of spoken audio entertainment ‘Audible’ is facing a lawsuit from the Association of American Publishers (AAP) on the grounds that its new “Audible Captions” speech-to-text subtitles feature may violate copyright law.

Audible Captions 

Audible Captions, which was announced by the largest producer of audiobooks via a YouTube video back in July, is a feature that displays text captions on-screen and progressively highlights the words as a novel is narrated. The feature also highlights and gives definitions for certain words in the captions and allows the user to translate the text into other languages.

Objections – Lawsuit 

Audible’s plans to roll out the Captions feature attracted almost immediate complaints and concerns from authors, publishers and literary agents on social media over possible copyright law violations, along with accusations that Captions appears to make quite a few mistakes per book. Eventually, a lawsuit was filed at the District Court for the Southern District of New York by the Association of American Publishers (AAP), which includes seven of the top US publishing companies, such as Penguin Random House and HarperCollins Publishers.

Injunction

The lawsuit, which seeks a preliminary injunction to stop the September launch of Audible Captions, argues that the feature could give Audible a competitive advantage over other audiobook providers who aren’t in a position to use speech-to-text technology, and that displaying the text of Audible books may amount to illegal reproduction and distribution of those books, thereby potentially breaching copyright laws and adversely affecting publishers’ profits. The AAP members also appear to be angry that the mistakes (transcription errors) made by the AI aspect of Captions could add up to the equivalent of 18 pages of inaccuracies in a 300-page book. The AAP’s legal action has also attracted the support of the US Authors Guild, whose Executive Director, Mary Rasenberger, makes the point that “Text and audio are different book markets, and Audible is licensed only for audio. It has chosen to use its market power to force publishers’ hands by proceeding without permission in clear violation of copyright in the titles.”

What Does Audible Say? 

Amazon-owned Audible has argued that Audible Captions is an educational and accessibility innovation, and that the Captions, which allow listeners to simply follow along with a few lines of machine-generated text as they listen to the audio, are not and were never intended to be a book, and therefore can’t be judged like one under copyright law.

What Does This Mean For Your Business? 

In addition to their anger over allegedly not being consulted by Audible about the feature, the big publishers and the Authors Guild appear to see Captions as a competitive advantage that threatens their existing benefits, profits, and market positions. For Amazon, a company that has grown, diversified and made major inroads into multiple markets, the lawsuit is not only another dose of bad publicity (e.g. following recent concerns by China Labour Watch (CLW) about possible child labour being used in the manufacture of the Amazon Echo), but also a reminder that there are still other powerful players in the publishing market and that copyright laws need to be studied and adhered to, no matter how big the market player. It is not clear when Captions will be released, but it is unlikely that Amazon’s Audible would want to be delayed too long in releasing a value-adding feature that could provide a competitive advantage.

Using GDPR To Get Partner’s Personal Data

A University of Oxford researcher, James Pavur, has explained how (with the consent of his partner) he was able to exploit rights granted under GDPR to obtain a large amount of his partner’s personal data from a variety of companies.

Right of Access

Mr Pavur reported that he was able to send out 75 Right of Access Requests/Subject Access Requests (SARs) to get the first pieces of information from companies, such as his partner’s full name, some email addresses and phone numbers. He reported using a fake email address to make the SARs.

SAR

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual can ask a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, can ask for a copy of that data, e.g. as a paper copy or spreadsheet. With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed, who the recipients of that processed data are, how long that data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded. Under GDPR, individuals can make a SAR for free, although companies and organisations can charge “reasonable fees” where requests are unfounded or excessive (in scope), or where additional copies of data are requested beyond the original request.

Another 75 Requests

Mr Pavur reported that he was able to use the information that he received back from the first 75 requests to send out another 75 requests. From the second batch of requests, Mr Pavur was able to obtain a large amount of personal data about his partner, including her social security number, date of birth, mother’s maiden name, previous home addresses, travel and hotel logs, her high school grades, passwords, partial credit card numbers, and some details about her online dating.

The Results

In fact, Mr Pavur reported that 72% of the targeted firms responded, and that 24% of those accepted an email address (a false one) and a phone number as proof of identity and revealed his partner’s personal details on the strength of these. One company even revealed the results of a historic criminal background check.

Who?

According to Mr Pavur, the prevailing pattern was that large (technology) companies responded well to the requests, small companies ignored the requests, and mid-sized companies showed a lack of knowledge about how to handle and verify them.

What Does This Mean For Your Business?

The ICO recognises on its website that GDPR does not specify how to make a valid request, and that individuals can make a SAR to a company verbally or in writing, to any part of the organisation (including by social media); it doesn’t have to be made to a specific person or contact point. Such a request also doesn’t have to include the phrase ‘subject access request’ or cite Article 15 of the GDPR, but it must be clear that the individual is asking for their own personal data. This means that although there may be some confusion about whether a request has actually been made, companies should at least ensure that they have identity verification and checking procedures in place before they send out personal data to anyone. Sadly, in the case of this experiment, the researcher was able to obtain a large amount of personal and sensitive data about his (very understanding) partner using a fake email address.

Businesses may benefit from looking at which members of staff regularly interact with individuals and offering specific training to help those staff members identify requests.

Also, the ICO points out that it is good practice to have a policy for recording details of the requests that businesses receive, particularly those made by telephone or in person, so that businesses can check with the requester that their request has been understood. Businesses should also keep a log of verbal requests.

Amazon Echo: Child Labour Concerns

Reports of a 2018 investigation by China Labour Watch (CLW) into the manufacture of the Amazon Echo at the Hengyang Foxconn factory show that the recruiting of young interns from vocational schools could mean that the Amazon devices are made with the help of child labour.

Schools Providing Workers For Night Shifts

The report of the investigation by New York-based non-profit group CLW claims that a number of interns from schools and colleges were brought in to work night shifts, and that if they were unwilling to work overtime or night shifts, the factory would arrange for teachers to pressure them. The report also claims that if interns still refused to work overtime and night shifts, the factory asked teachers from their schools to sack them from the job.

In addition to the night shift work, the report claims that young interns were required to work ten hours a day, including two hours of overtime, and to work six days a week.

Which Schools and Colleges?

The report claims that schools sending interns to work at the Hengyang Foxconn factory which manufactures Amazon Echo devices included Sinosteel Hengyang Heavy Machinery Workers Technical College, Hengyang Technician College, Hengyang Vocational Secondary School, Hengyang Industrial Workers College, and Hengnan County Technical School.

Teachers and Schools Paid

The worrying report also claims that teachers assigned to the factory put immense pressure on interns and sometimes resorted to violence and aggression against them. Teachers who helped at the factory are reported to have received a 3,000 RMB ($425) subsidy from the factory, with their school receiving 3 RMB ($0.42) for every hour an intern worked.

Dispatch Workers

The report also claims that the factory had hired a high number of dispatch workers, violating Chinese labour law.

13 Violations Listed

The report lists 13 violations that Amazon has allegedly committed at the factory, including interns working night shifts and overtime, and interns having to keep their heads down at their workstations for extended periods while doing repetitive motions.

What Does Amazon Say?

Amazon has been reported as saying that it is investigating the allegations and has sent representatives to the factory site as part of that investigation.  Amazon is also keen to promote the fact that it has a supplier Code of Conduct, and that suppliers are regularly assessed in relation to this.

What Does This Mean For Your Business?

Child labour is generally a feature of the world’s poorest countries, where, according to UNICEF, around one in four children are engaged in work that is potentially harmful to their health. For example, International Labour Organisation (ILO) figures show that almost half of child labour (72.1 million children) is to be found in Africa, with 62.1 million in Asia and the Pacific, and 10.7 million in the Americas.

Sadly, labour laws in China are not as strictly enforced as in other countries, and although Foxconn may be keen to promote the idea that internships at the factory are the way for young people to gain practical work experience, the report’s allegations of children working long hours and nightshifts while being pressured by teachers doesn’t appear to fit in with that picture.

While most of us like to purchase lower-priced goods, we are often unaware of how they were made and at whose expense. Companies need to keep costs down, but child labour is something that most businesses would actively avoid and that consumers certainly do not like the idea of. These allegations, therefore, could have a negative impact on Amazon, adding to some of its other recent troubling headlines, such as reports last year of Amazon’s profits trebling while its UK tax bill was significantly reduced, and of thousands of its workers protesting at sites around the world during this year’s Amazon Prime Day sale, demanding better working conditions.

Opting Out of People Reviewing Your Alexa Recordings

Amazon has now added an opt-out option for manual review of voice recordings (and their associated transcripts) taken through Amazon’s Alexa, but it has not stopped the practice of taking voice recordings to help develop new Alexa features.

Opt-Out Toggle

The opt-out toggle can be found in the ‘Manage How Your Data Improves Alexa’ section of your privacy settings, which you will have to sign in to Amazon to see. This section contains a “Help Improve Amazon Services and Develop New Features” option with a toggle switch to its right-hand side, and moving the toggle from the default ‘yes’ to the ‘no’ position will stop humans reviewing your voice recordings.

Echo owners can see the transcripts and hear what Alexa has recorded of their voices by visiting the ‘Review Voice History’ part of the privacy section.

Why Take Recordings?

Amazon argues that training its Alexa digital voice assistant using recordings from a diverse range of customers can help to ensure that Alexa works well for all users, and those voice recordings may be used to help develop new features.

Why Manually Review?

Amazon says that manually reviewing recordings and transcripts is another method that the company uses to help improve its services, and that only “an extremely small fraction” of the voice recordings taken are manually reviewed.

Google and Apple Have Stopped

Google has recently been forced to stop the practice of manually reviewing its audio snippets (in Europe) by the Hamburg data protection authority, which threatened to use Article 66 powers of the General Data Protection Regulation (GDPR) to stop Google from doing so. This followed a leak of more than 1,000 recordings to the Belgian news site VRT by a contractor working as a Dutch language reviewer. It has been reported that VRT was even able to identify some of the people in the recorded clips.

Apple has also stopped the practice of manual, human review of recordings and transcripts taken via Siri after a (Guardian) report revealed that contractors used by Apple had heard private medical information and even recordings of people having sex in the clips. This was thought to be the result of the digital assistant mistaking another word for its wake word.

What Does This Mean For Your Business?

If you have an Amazon Echo and you visit the ‘Review Voice History’ section of your privacy page, you may be very surprised to see just how many recordings have been taken, and the dates, times, and content of what has been said could even be a source of problems for those who have been recorded. Even though we understand that AI/machine learning technology needs training in order to improve its recognition of and response to our questions, the fact that mistakes with wake words could lead to sensitive discussions being recorded and listened to by third-party contractors, and that voices could even be identified from those recordings, highlights a real threat to privacy and security, and a trade-off that many users may not be prepared to accept.

It’s a shame that mistakes and legal threats were the catalysts for stopping Google and Apple from using manual review, and it is surprising that, in the light of their cases, Amazon is not stopping the practice altogether by default but is merely including an opt-out toggle switch deep within the privacy section of its platform.

This story is a reminder that although smart speakers and the AI behind them bring many benefits, attention needs to be paid by all companies to privacy and security when dealing with what can be very personal data.

Google Plugs Incognito Mode Detection Loophole With Chrome 76

Google has announced that with the introduction of Chrome 76 (at the end of July), it has plugged a loophole that enabled websites to tell when you were browsing in Incognito mode.

Incognito

Incognito mode in Chrome (private browsing) is really designed to protect privacy for those using shared or borrowed devices, and to exclude certain activities from being recorded in their browsing histories. Also, less commonly, private browsing can be very important for people suffering political oppression or domestic abuse, for example, where there may be important safety reasons for concealing web activity.

Loophole Plugged

The loophole that is being plugged with the introduction of Chrome 76 relates to the FileSystem API. Chrome’s FileSystem API is disabled in Incognito Mode to avoid leaving traces of activity on someone’s device, but websites that checked for the API could still detect that Incognito mode was being used, because they received an error message confirming it. This has meant that Incognito browsing has not been technically incognito.
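
By way of a rough illustration (based on the commonly reported detection trick rather than on anything Google has published), a page could ask Chrome’s non-standard webkitRequestFileSystem API for a small amount of storage and treat a rejection as a sign that Incognito mode was in use. A sketch in TypeScript:

// Sketch of the pre-Chrome-76 detection trick; runs in a browser page.
// webkitRequestFileSystem is a non-standard Chrome API, so it is accessed via 'any'.
const requestFs = (window as any).webkitRequestFileSystem;

if (typeof requestFs !== "function") {
  console.log("FileSystem API not exposed - this trick doesn't apply to this browser");
} else {
  requestFs(
    0, // TEMPORARY storage
    1, // request a single byte of quota
    () => console.log("Request granted - probably a normal Chrome window"),
    () => console.log("Request rejected - probably Incognito (before Chrome 76)")
  );
}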

In Chrome 76, which has just been introduced, the behaviour of the FileSystem API has been modified to ensure that Incognito Mode use can no longer be detected in this way, and Google has stated that it will work to remedy any other means of detecting Incognito Mode usage in Chrome in future.

Metered Paywalls Affected

While this change may be a good thing for Chrome users, it is more bad news for web publishers with ‘metered paywalls’, i.e. publishers that offer a certain number of free articles to view before a visitor must register and log in. These websites have already suffered from users switching to Incognito mode to get around the meter, and as a result, many of them resorted to Incognito detection to stop people from circumventing their publishing system. Removing the ability to detect Incognito browsing with the introduction of Chrome 76 will, therefore, cause more problems for metered paywall publishers.

Google has said that although its News teams support sites with meter strategies and understand their need to reduce meter circumvention, any approach that’s based on private browsing detection undermines the principles of its Incognito Mode.

What Does This Mean For Your Business?

Plugging this loophole in the new, improved Chrome 76 is good news for users, many of whom may not have realised that Incognito mode was not as incognito as they had thought. Using Incognito mode in your browser, however, will only provide privacy on the device you browse on and won’t stop many sites from still being able to track you. If you’d like greater privacy, it may be a case of using another browser, e.g. Tor or Brave, or a VPN.

For metered paywall publishers, however, the plugged loophole in Chrome 76 is not good news as, unless these publishers make changes to their current system and/or decide to go through the process of exploring other solutions with Google, they will be open to more meter circumvention.