Is Your Website Sending Scammers’ Emails?

Research by Kaspersky has found that cyber-criminals are hijacking the confirmation emails sent by the registration, subscription and feedback forms of legitimate company websites and using them to distribute phishing links and spam content.

How?

Kaspersky has reported that scammers are exploiting the fact that many websites require users to register their details in order to receive content. Some cyber-criminals are now using stolen email addresses to register victims via the contact forms of legitimate websites. This allows the cyber-criminals to insert their own content into the form, which is then sent to the victim in the confirmation email from the legitimate website.

For example, according to Kaspersky, a cyber-criminal uses the victim’s e-mail address as the registration address and then enters their own advertising message in the name field, e.g. “we sell discount electrical goods. Go to http://discountelectricalgoods.uk”. This means that the victim receives a confirmation message that opens with “Hello, we sell discount electrical goods. Go to http://discountelectricalgoods.uk. Please confirm your registration request”.

Cyber-criminals are also able to exploit the step where a website form asks a victim to confirm their email address, ensuring that the victim receives an email containing a malicious link.

Advantages

The main advantage to cyber-criminals of using messages sent in response to forms on legitimate websites is that the messages pass through anti-spam filters and carry the status of official messages from a reputable company, making them more likely to be noticed, opened and acted upon. The technical headers in these messages are legitimate, and the amount of actual spam content they carry (which is what filters react to) is relatively small. Anti-spam filters assign a spam rating based on a variety of factors, but the overall authenticity of these messages allows them to beat the filters, giving cyber-criminals a more credible-looking and effective way to reach their victims.

What Does This Mean For Your Business?

Most businesses and organisations are likely to have a variety of forms on their website which could mean that they are open to having their reputation damaged if cyber-criminals are able to target the forms as a way to initiate attacks or send spam.

Kaspersky’s advice is, therefore, that companies and organisations should test their own forms to see whether they could be abused in this way. For example, register on your own company form using a personal e-mail address, enter a message in the name field such as “I am selling electrical equipment” along with a website address and a phone number, and then check what arrives in your e-mail inbox. This will show whether there are any verification mechanisms for that type of input. If the message you receive begins “Hello, I am selling electrical equipment”, you should contact the people who maintain your website and ask them to add simple input checks that generate an error if a user tries to register under a name containing invalid characters or invalid parts (such as URLs or phone numbers). Kaspersky also suggests that companies and organisations consider having their websites audited for vulnerabilities.
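As an illustration of the kind of input check described above, here is a minimal sketch (in TypeScript, with assumed rules for allowed characters and maximum length) of a server-side validator that rejects “name” values containing web addresses, digits or other characters unlikely to appear in a real name:

```typescript
// Minimal sketch of a server-side check for a registration form's "name" field.
// The exact rules (allowed characters, maximum length) are assumptions and
// should be adapted to your own form and audience.
function isPlausibleName(input: string): boolean {
  const trimmed = input.trim();

  // Reject empty or unreasonably long values.
  if (trimmed.length === 0 || trimmed.length > 100) {
    return false;
  }

  // Reject anything that looks like a web address.
  if (/https?:\/\/|www\./i.test(trimmed)) {
    return false;
  }

  // Allow letters, spaces, apostrophes, full stops and hyphens only
  // (this also rules out digits, so phone numbers are rejected).
  return /^[\p{L}\s'.-]+$/u.test(trimmed);
}

// The spam payload from the example above is rejected, while an ordinary name passes.
console.log(isPlausibleName("Jane O'Neill")); // true
console.log(isPlausibleName("we sell discount electrical goods. Go to http://discountelectricalgoods.uk")); // false
```

A check like this won’t catch every abuse of a form, but it prevents the most obvious spam payloads from being echoed back to victims in your confirmation emails.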

$1 Million Bounty For Finding iPhone Security Flaws

Apple Inc recently announced at the annual Black Hat security conference in Las Vegas that it is offering security researchers rewards of up to $1 million if they can detect security flaws in its iPhones.

Change

This move marks a change in Apple’s bug bounty programme.  Previously, for example, the highest sum offered by Apple was $200,000, and the bounties had only been offered to selected researchers.

The hope appears to be that widening the pool of researchers and offering a much bigger reward could maximise security for Apple mobile devices and protect them from the risk of governments breaking into them.

State-Sponsored Threats

In recent times, state-sponsored interference in the affairs of other countries has become more commonplace with dissidents, journalists and human rights advocates being targeted, and some private companies such as Israel’s NSO Group are even reported to have been selling hacking capabilities to governments. These kinds of threats are thought to be part of the motivation for Apple’s shift in its bug bounty position.

Big Prizes

The $1 million prize appears likely to only apply to remote access to the iPhone kernel without any action from the phone’s user, although it has been reported that government contractors and brokers have paid as much as $2 million for hacking techniques that can obtain information from devices.

Apple is also reported to be making things easier for researchers by offering a modified phone with some security measures disabled.

Updates

If flaws are found in Apple mobile devices by researchers, the plan appears to be that Apple will patch the holes using software updates.

Bug Bounties Not New

Many technology companies offer monetary rewards, and permission, to researchers and ethical (white hat) hackers/security testers to penetrate their computer systems, networks or computing resources in order to find (and fix) security vulnerabilities before real hackers have the opportunity to use those vulnerabilities as a way in. Also, companies like HackerOne offer guidance on the amounts to set as bug bounties, e.g. anywhere from $150 to $1,000 for low-severity vulnerabilities and anywhere from $2,000 to $10,000 for critical-severity vulnerabilities.

Examples of bug bounty schemes run by big tech companies include Google’s ongoing Vulnerability Reward Program (VRP), which offers varying rewards ranging from $100 to $31,337, and Facebook’s white hat program (running since 2011), which offers a minimum reward of $500 and has paid out over $1 million so far.

What Does This Mean For Your Business?

With the growing number of security threats, greater reliance on mobile devices and more remote working via those devices, mobile security is a very important issue for businesses. A tech company such as Apple offering bigger bug bounties to a wider pool of security researchers could be well worth it when you consider the damage that is done to companies and the reputation of their products and services when a breach or a hack takes place, particularly if it involves a vulnerability that may be common to all models of a certain device.

Apple has made the news more than once in recent times due to faults and flaws in its products, e.g. after a bug in the group-calling part of its FaceTime video-calling feature was found to allow a caller to eavesdrop on the recipient before the call had been answered, and when it had to offer repairs/replacements for problems relating to screen touch issues on the iPhone X and data loss and storage drive failures in 13-inch MacBook Pro computers. Apple also made the news in May this year after it had to recall two different types of plug adapter because of a possible risk of electric shock.

This bug bounty announcement by Apple, therefore, is a proactive way that it can make some positive headlines and may help the company to stay ahead of the evolving risks in the mobile market, particularly at a time when the US President has focused on possible security flaws in the hardware of Apple’s big Chinese rival Huawei.

If the bug bounties lead to better security for Apple products, this can only be good news for businesses.

Using GDPR To Get Partner’s Personal Data

A University of Oxford researcher, James Pavur, has explained how (with the consent of his partner) he was able to exploit rights granted under GDPR to obtain a large amount of his partner’s personal data from a variety of companies.

Right of Access

Mr Pavur reported that he was able to send out 75 Right of Access Requests/Subject Access Requests (SARs) in order to obtain the first pieces of information from companies, such as his partner’s full name, some email addresses and phone numbers. Mr Pavur reported using a fake email address to make the SARs.

SAR

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual asks a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, to provide a copy of that data, e.g. as a paper copy or spreadsheet. With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed, who the recipients of that processed data are, how long the data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded. Under GDPR, individuals can make a SAR for free, although companies and organisations can charge “reasonable fees” if requests are unfounded or excessive (in scope), or where additional copies of the data are requested beyond the original request.

Another 75 Requests

Mr Pavur reported that he was able to use the information that he received back from the first 75 requests to send out another 75 requests.  From the second batch of requests Mr Pavur was able to obtain a large amount of personal data about his partner including her social security number, date of birth, mother’s maiden name, previous home addresses, travel and hotel logs, her high school grades, passwords, partial credit card numbers, and some details about her online dating.

The Results

In fact, Mr Pavur reported that of the 72% of targeted firms that responded, 24% accepted a (fake) email address and a phone number as proof of identity and revealed his partner’s personal details on the strength of these. One company even revealed the results of a historic criminal background check.

Who?

According to Mr Pavur, the prevailing pattern was that large (technology) companies responded well to the requests, small companies ignored them, and mid-sized companies showed a lack of knowledge about how to handle and verify them.

What Does This Mean For Your Business?

The ICO recognises on its website that GDPR does not specify how to make a valid request and that individuals can make a SAR to a company verbally or in writing, or to any part of an organisation (including via social media); it doesn’t have to be made to a specific person or contact point. Such a request also doesn’t have to include the phrase ‘subject access request’ or refer to Article 15 of the GDPR, but it must be clear that the individual is asking for their own personal data. This means that, although there may be some confusion about whether a request has actually been made, companies should at least ensure that they have identity verification and checking procedures in place before they send out personal data to anyone. Sadly, in the case of this experiment, the researcher was able to obtain a large amount of personal and sensitive data about his (very understanding) partner using a fake email address.

Businesses may benefit from looking at which members of staff regularly interact with individuals and offering those staff specific training to help them identify requests.

Also, the ICO points out that it is good practice to have a policy for recording details of the requests that businesses receive, particularly those made by telephone or in person, so that businesses can check with the requester that their request has been understood. Businesses should also keep a log of verbal requests.

Fingerprints Replacing Passwords for Some Google Services

Google has announced that users can verify their identity by using their fingerprint or screen lock instead of a password when visiting certain Google services, starting with Pixel devices and coming to all Android 7+ devices in the next few days.

How?

Google says that years of collaboration between itself and many other organisations in the FIDO Alliance and the W3C have led to the development of the FIDO2 standards (W3C WebAuthn and FIDO CTAP) that allow fingerprint verification.

The key game-changer in how these new technologies can help users is that, unlike the native fingerprint APIs on Android, FIDO2 biometric capabilities are available on the web, which means that the same credentials can be used by both native apps and web services. The result is that users only need to register their fingerprint with a service once and the fingerprint will then work for both the native application and the web service.

Fingerprint Not Sent To Google’s Servers

Google is keen to point out that the FIDO2 design is extra-secure because it means that a user’s fingerprint is never sent to Google’s servers but is securely stored on the user’s device.  Only a cryptographic proof that a user’s finger was scanned is actually sent to Google’s servers.
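For readers who want a feel for how a web page asks the browser for this kind of credential, below is a generic, hedged sketch of the standard W3C WebAuthn registration call (in TypeScript). The relying-party details, user data and challenge are placeholders rather than anything Google-specific; in a real deployment the challenge comes from, and the signed response is verified by, the server.

```typescript
// Generic sketch of FIDO2/WebAuthn credential registration in the browser.
// All names and values below (example.com, user@example.com, etc.) are
// placeholders; real options are supplied and verified by your server.
async function registerFingerprintCredential(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // Random challenge; in practice this is generated server-side and sent to the page.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { id: "example.com", name: "Example Service" },
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "platform", // built-in authenticator (fingerprint/screen lock)
      userVerification: "required",
    },
  };

  // The browser prompts for the fingerprint or screen lock. The biometric itself
  // never leaves the device; only the resulting public-key credential and a
  // signed attestation are returned for the server to store and verify.
  const credential = await navigator.credentials.create({ publicKey });
  console.log("Credential created:", credential);
}
```

The same credential can later be used with navigator.credentials.get() to sign a fresh challenge, which is the kind of “cryptographic proof” referred to above.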

Try It Out

To try the new fingerprint system out, you will need a phone running Android 7.0 (Nougat) or later, with your personal Google Account added to the device and a valid screen lock set up on it.

Next, open the Chrome app on your Android device, go to https://passwords.google.com, choose a site to view or manage a saved password, and follow the instructions to confirm that it’s you trying to sign in.

Google has provided more detailed instructions here: https://support.google.com/accounts/answer/9395014?p=screenlock-verif-blog&visit_id=637012128270413921-962899874&rd=1

More Places

Google says that this is just the start of the embracing of the FIDO2 standard and that more places will soon be able to accept local alternatives to passwords as an authentication mechanism for Google and Google Cloud services.

What Does This Mean For Your Business?

Being able to rely on fingerprint (biometric) verification or a screen lock instead of a password should mean greater convenience and security for users of Google’s services, and should also reduce the risk to Google of having to deal with the consequences of breaches.

The development and wider use of the FIDO2 standard is, therefore, good news for businesses and consumers alike, particularly considering that Google (at 8% share) is one of the top 10 vendors that account for 70% of the world’s cloud infrastructure services market.

Back in May, Microsoft’s Corporate Vice President and Chief Information Officer Bret Arsenault signalled (in a CNBC interview) that Microsoft was also looking to move away from passwords on their own as a means of authentication, towards biometrics and a “passwordless future”. For example, 90% of Microsoft’s 135,000-strong workforce can now log into the company’s corporate network without using passwords, instead using biometric technology such as facial recognition and fingerprint scanning via apps such as ‘Windows Hello’ and the ‘Authenticator’ app.

Amazon Echo: Child Labour Concerns

Reports of a 2018 investigation by China Labour Watch (CLW) into the manufacture of the Amazon Echo at the Hengyang Foxconn factory suggest that the recruiting of young interns from vocational schools could mean that the Amazon devices are made with the help of child labour.

Schools Providing Workers For Night Shifts

The report of the investigation by New York-based non-profit group CLW claims that a number of interns from schools and colleges were brought in to work night shifts, and that if they were unwilling to work overtime or night shifts, the factory would arrange for teachers to pressure them. The report also claims that if interns still refused, the factory asked teachers from their schools to remove them from the job.

In addition to the night shift work, the report claims that young interns were required to work ten hours a day, including two hours of overtime, and to work six days a week.

Which Schools and Colleges?

The report claims that schools sending interns to work at the Hengyang Foxconn factory which manufactures Amazon Echo devices included Sinosteel Hengyang Heavy Machinery Workers Technical College, Hengyang Technician College, Hengyang Vocational Secondary School, Hengyang Industrial Workers College, and Hengnan County Technical School.

Teachers and Schools Paid

The worrying report also claims that teachers assigned to the factory put immense pressure on interns and sometimes resorted to violence and aggression against them. Teachers who helped at the factory are reported to have received a 3,000 RMB ($425) subsidy from the factory, with their school receiving 3 RMB ($0.42) for every hour an intern worked.

Dispatch Workers

The report also claims that the factory had hired a high number of dispatch workers, violating Chinese labour law.

13 Violations Listed

The report lists 13 violations that Amazon has allegedly made at the factory including interns working night shifts and overtime, and interns having to keep their heads down at their workstation for an extended period while doing repetitive motions.

What Does Amazon Say?

Amazon has been reported as saying that it is investigating the allegations and has sent representatives to the factory site as part of that investigation.  Amazon is also keen to promote the fact that it has a supplier Code of Conduct, and that suppliers are regularly assessed in relation to this.

What Does This Mean For Your Business?

Child labour is generally a feature of the world’s poorest countries, where, according to UNICEF, around one in four children are engaged in work that is potentially harmful to their health. For example, International Labour Organisation (ILO) figures show that almost half of all child labour (72.1 million children) is found in Africa, with 62.1 million in Asia and the Pacific and 10.7 million in the Americas.

Sadly, labour laws in China are not as strictly enforced as in other countries, and although Foxconn may be keen to promote the idea that internships at the factory are a way for young people to gain practical work experience, the report’s allegations of children working long hours and night shifts while being pressured by teachers don’t appear to fit with that picture.

While most of us like to purchase lower-priced goods, we are often unaware of how they were made and at whose expense. Companies need to keep costs down, but child labour is something that most businesses would actively avoid and is something that consumers certainly do not like the idea of. These allegations, therefore, could have a negative impact on Amazon, adding to some of its other recent troubling headlines, such as reports last year of Amazon’s profits trebling while its UK tax bill was significantly reduced, and of thousands of its workers protesting at sites around the world on this year’s Prime Day sale to demand better working conditions.

Tech Tip – Crono App

If you’d like better integration between your PC and phone, the Crono app enables you to get all your phone’s notifications straight in Chrome.

If you spend a lot of time using Chrome on your computer, the Crono app lets you see all your notifications and calendar events without looking at your phone, i.e. you get mobile notifications in your browser and can respond to them through your browser.

The app, which requires a Chrome extension to work, also allows clipboard sharing between your browser and device with a single click, and if you can’t find your phone, you can ring it directly from your browser.

Crono is available for Android from the Google Play Store.

One-Third of Major VPNs Owned By Chinese Companies

A recent survey by VPNpro has revealed that almost one-third of the most popular VPN services are secretly owned by Chinese companies that may be subject to weak privacy laws.

VPN

A ‘Virtual Private Network’ (VPN) is used to keep internet activity private, evade censorship / maintain net neutrality and use public Wi-Fi securely e.g. avoid threats such as ‘man-in-the-middle’ attacks.  A VPN achieves this by diverting a user’s traffic via a remote server in order to replace their IP address while offering the user a secure, encrypted connection (like a secure tunnel) between the user’s device and the VPN service.

Based In China

The VPNpro research found that the top 97 VPNs are run by just 23 parent companies, and that 6 of those companies are based in China and offer 29 VPN services between them, although information on their parent companies is often hidden from users.

Metric Labs Research Last Year

The results of the VPNpro research support the findings of an investigation by Metric Labs last year, which found that more than half of the top free VPN (Virtual Private Network) apps in Apple’s App Store and Google Play are run by companies with Chinese ownership.

What’s The Problem?

The worry about VPN services being based in China is that China not only tightly controls access to the Internet from within the country but has also clamped down on VPN services, and many of the free VPN services with links to China offer little or no privacy protection and no user support. Weak privacy laws in China, coupled with strong state control, could mean that data held by VPN providers could be accessed, enabling governments or other organisations to identify users and their online activity, thereby putting human rights activists, privacy advocates, investigative journalists, whistle-blowers and anyone criticising the state in danger. For other users of China-based VPN services, it could simply mean that they are more easily exposed to a range of privacy and security risks, such as having their personal data stolen for use in other criminal activity, or even being subject to industrial espionage.

China, Russia, Pakistan and other states whose activities are causing concerns to Western governments all appear to be less trusted when it comes to hosting VPN services or redirecting Internet traffic through their countries.  For example, in February this year, US Senators Marco Rubio (Republican) and Ron Wyden (Democrat) asked the Department of Homeland Security to investigate governmental employees’ use of VPNs because of concerns that many VPNs that use foreign servers to redirect traffic through China and Russia could intercept sensitive US data.

What Does This Mean For Your Business?

The reason for using a VPN is to ensure privacy and security in communications so it’s a little worrying that some of the top VPN services are based in countries that have weaker privacy laws than the UK and are known for strong state control of communications.

Fears about the security and privacy of our data and communications have been heightened by reports of Russia’s interference in the last US election and the UK referendum, and by the current poor relations between the Trump administration (with which the UK has intelligence links) and China, including warnings about possible espionage, privacy and security threats from the use of equipment from Chinese communications company Huawei in western communications infrastructure. Also, in the UK, businesses and organisations need to remain GDPR compliant, part of which involves ensuring that personal data is stored on servers based in places that can ensure privacy and security.

It appears, therefore, that for businesses and organisations seeking VPN services, some more desk research needs to be done to ensure that those services have all the signs of offering the highest possible levels of security and privacy i.e. opting for a trusted paid-for service that isn’t owned by or a subsidiary of a company in a state that has weak privacy laws.

Opting Out of People Reviewing Your Alexa Recordings

Amazon has now added an opt-out option for the manual review of voice recordings (and their associated transcripts) taken through Amazon’s Alexa, but it has not stopped the practice of taking voice recordings to help develop new Alexa features.

Opt-Out Toggle

The opt-out toggle can be found in the ‘Manage How Your Data Improves Alexa’ section of your privacy settings, which you will have to sign in to Amazon to see. This section contains a “Help Improve Amazon Services and Develop New Features” option with a toggle switch to its right-hand side; moving the toggle from the default ‘yes’ to the ‘no’ position will stop humans reviewing your voice recordings.

Echo owners can see the transcripts and hear what Alexa has recorded of their voices by visiting the ‘Review Voice History’ part of the privacy section.

Why Take Recordings?

Amazon argues that training its Alexa digital voice assistant using recordings from a diverse range of customers can help to ensure that Alexa works well for all users, and those voice recordings may be used to help develop new features.

Why Manually Review?

Amazon says that manually reviewing recordings and transcripts is another method that the company uses to help improve their services, and that only “an extremely small fraction” of the voice recordings taken are manually reviewed.

Google and Apple Have Stopped

Google has recently been forced to stop the practice of manually reviewing its audio snippets (in Europe) by the Hamburg data protection authority, which threatened to use Article 66 powers of the General Data Protection Regulation (GDPR) to stop Google from doing so. This followed a leak of more than 1,000 recordings to the Belgian news site VRT by a contractor working as a Dutch language reviewer. It has been reported that VRT was even able to identify some of the people in the recorded clips.

Apple has also stopped the practice of manual, human reviewing of recordings and transcripts taken via Siri after a (Guardian) report revealed that contractors used by Apple had heard private medical information and even recordings of people having sex in the clips.  This was thought to be the result of the digital assistant mistaking another word for its wake word.

What Does This Mean For Your Business?

If you have an Amazon Echo and you visit the ‘Review Voice History’ section of your privacy page, you may be very surprised to see just how many recordings have been taken; the dates, times and contents of those recordings could even be a source of problems for those who have been recorded. Even though we understand that AI/machine-learning technology needs training in order to improve its recognition of, and response to, our questions, the fact that mistakes with wake words could lead to sensitive discussions being recorded and listened to by third-party contractors, and that voices could even be identified from those recordings, highlights a real threat to privacy and security, and a trade-off that many users may not be prepared to accept.

It’s a shame that mistakes and legal threats were the catalysts for stopping Google and Apple from using manual reviewing, and it is surprising that, in the light of their cases, Amazon is not stopping the practice by default altogether but is merely including an opt-out toggle switch deep within the privacy section of its platform.

This story is a reminder that although smart speakers and the AI behind them bring many benefits, attention needs to be paid by all companies to privacy and security when dealing with what can be very personal data.

Goodbye Skype for Business, Hello Teams

Microsoft has announced that Skype for Business Online will be giving way to ‘Teams’, with support for Skype for Business ending on 31 July 2021, and all new Microsoft 365 customers due to get Microsoft Teams by default from 1 September 2019.

What Is Teams?

Introduced back in November 2016, ‘Teams’ is a platform designed to support collaborative working, combining features such as workplace chat, meetings, notes and attachments. Described by Microsoft as a “complete chat and online meetings solution”, it normally integrates with the company’s Office 365 subscription productivity suite and is widely considered to be Microsoft’s answer to ‘Slack’.

Slack is a popular, multi-channel collaborative working hub that offers chat channels with companies and businesses you regularly work with, direct voice or video calls and screen-sharing, integrated drag-and-drop file sharing, and an App Directory with over 1,500 apps that can be integrated into Slack.

Back in July 2018, Microsoft introduced a free, basic features version of Teams which did not require an Office 365 account, in order to increase user numbers and tempt users away from Slack.

According to Microsoft figures announced in July, Teams now has 13 million users, which is more than Slack’s 10 million. Microsoft is keen to promote Teams as a new communications tool rather than just an upgrade to Skype for Business.

End of Skype For Business

Microsoft originally announced at the end of 2017 that Teams was set to replace Skype for Business as Microsoft’s primary client for intelligent communications in Office 365.

With this in mind, Microsoft announced at the end of July that support for Skype for Business Online will end on 31 July 2021, that all new Office 365 customers will get Teams by default from 1 September, and that current Skype for Business Online customers won’t notice any change in service in the meantime.

Migration and Interoperability

Microsoft has announced investment and interoperability measures intended to ensure a painless migration to Teams for Skype for Business Online customers. For example, from the first quarter of 2020, customers on both platforms will be able to communicate via calls and text chats, Dynamic E911 will work in Teams, and Teams also includes contact centre integration and compliance recording solutions.

What Does This Mean For Your Business?

Microsoft is succeeding in challenging and overtaking its competitor Slack in the market for business collaborative working and communications tools. Brand reach and power, coupled with a free version, compulsory migration for existing users and default status for new ones, have seen Teams reach the point where, as planned by Microsoft more than two years ago, it can ably replace Skype for Business.

It appears that Microsoft is making efforts and investing to ensure that the migration is as smooth for (and attractive to) existing Skype business customers as possible and that the voice and video capabilities, cognitive and data services and insights that Teams offers should add value that could translate into advantages and extra efficiencies for users.

Google Plugs Incognito Mode Detection Loophole With Chrome 76

Google has announced that with the introduction of Chrome 76 (at the end of July), it has plugged a loophole that enabled websites to tell when you were browsing in Incognito mode.

Incognito

Incognito mode in Chrome (private browsing) is really designed to protect privacy for those using shared or borrowed devices and to exclude certain activities from being recorded in their browsing histories. Also, less commonly, private browsing can be very important for people suffering political oppression or domestic abuse, for example, where there may be important safety reasons for concealing web activity.

Loophole Plugged

The loophole being plugged with the introduction of Chrome 76 relates to the FileSystem API. Chrome disables its FileSystem API in Incognito Mode to avoid leaving traces of activity on someone’s device, but websites that check for the API have been able to use the resulting error message to detect that Incognito mode is in use. This has meant that Incognito browsing has not been technically incognito.
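For illustration, below is a simplified sketch (in TypeScript) of the kind of check some sites used before Chrome 76. The non-standard webkitRequestFileSystem call isn’t in TypeScript’s standard DOM typings, hence the casts, and the behaviour it relies on applied only to Chrome versions before 76.

```typescript
// Simplified sketch of pre-Chrome-76 Incognito detection via the FileSystem API.
// In older Chrome versions the API was unavailable in Incognito mode, so the
// error callback firing was treated as a signal that Incognito was in use.
function detectIncognito(onResult: (isIncognito: boolean) => void): void {
  const requestFs = (window as any).webkitRequestFileSystem;
  if (!requestFs) {
    onResult(false); // API absent entirely (e.g. a non-Chrome browser): inconclusive
    return;
  }
  requestFs.call(
    window,
    (window as any).TEMPORARY, // temporary storage type
    1024,                      // requested size in bytes
    () => onResult(false),     // filesystem available: normal browsing
    () => onResult(true)       // filesystem blocked: Incognito (pre-Chrome 76)
  );
}

detectIncognito((isIncognito) => console.log("Incognito mode:", isIncognito));
```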

In Chrome 76, which has just been introduced, the behaviour of the FileSystem API has been modified to ensure that Incognito Mode use can no longer be detected in this way, and Google has stated that it will work to remedy any other future means of detecting Incognito Mode usage in Chrome.

Metered Paywalls Affected

While this change may be a good thing for Chrome users, it is more bad news for web publishers with ‘metered paywalls’, i.e. publishers that offer a certain number of free articles before a visitor must register and log in. These websites have already suffered from users switching to Incognito mode to circumvent the meter, and as a result, many of them resorted to Incognito detection to stop people from doing so. Removing the ability to detect Incognito browsing in Chrome 76 will, therefore, cause more problems for metered paywall publishers.

Google has said that although its News teams support sites with meter strategies and understand their need to reduce meter circumvention, any approach that’s based on private browsing detection undermines the principles of its Incognito Mode.

What Does This Mean For Your Business?

Plugging this loophole in the new, improved Chrome 76 is good news for users, many of whom may not have realised that Incognito mode was not as incognito as they had thought. Using Incognito mode on your browser, however, will only provide privacy on the devices you browse on and won’t stop many sites from still being able to track you. If you’d like greater privacy, it may be a case of using another browser, e.g. Tor or Brave, or using a VPN.

For metered paywall publishers, however, the plugged loophole in Chrome 76 is not good news as, unless these publishers make changes to their current system and/or decide to go through the process of exploring other solutions with Google, they will be open to more meter circumvention.