1 – 0 To England Vs World Cup Hackers

It has been reported that the England football team have been briefed before flying out to their World Cup base in St Petersburg about how they and UK fans can avoid falling victim to Russian hackers.

NCSC Advice

The briefing has been delivered by the National Cyber Security Centre (NCSC), which is part of GCHQ. The advice focuses on cyber security, e.g. using mobile devices and Wi-Fi connections safely while in Russia.

The same advice has been included in an NCSC blog post aimed at anyone travelling to Russia to watch any of the World Cup games, entitled ‘Avoid scoring a cyber security own goal this summer’.

The NCSC suggests that it should be read alongside other UK government online advice pages, such as the “FCO Travel Advice” page relating to Russia and the “Be on the Ball: World Cup 2018” pages.


Many security experts and commentators have noted that sporting events have become a real target for cyber criminals in Russia in recent times. Russia-based security company, Kaspersky, reported seeing spikes in the number of phishing pages during match ticket sales for this year’s World Cup. Kaspersky reported that every time tickets went on sale, fraudsters mailed out spam and activated clones of official FIFA pages and sites offering fake giveaways, all claiming to be from partner companies.

Kaspersky says that criminals register domain names combining words such as ‘world’, ‘worldcup’, ‘FIFA’ and ‘Russia’, and that if fans look closely they can see that the domains look unnatural and have a non-standard domain extension. The security company advises fans to take a close look at the link in the email, or at the URL after opening the site, to avoid falling victim to scammers.
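The pattern Kaspersky describes can be sketched as a simple heuristic filter. The keyword list and the "common TLD" set below are illustrative assumptions, not Kaspersky's actual rules; a real filter would draw on curated threat-intelligence feeds.

```python
from urllib.parse import urlparse

# Hypothetical lists based on the examples Kaspersky gives.
SUSPICIOUS_KEYWORDS = {"world", "worldcup", "fifa", "russia", "tickets"}
COMMON_TLDS = {"com", "org", "net", "uk", "ru"}

def looks_suspicious(url: str) -> bool:
    """Flag URLs whose domain combines World Cup keywords with an
    unusual top-level domain -- the 'unnatural domain' pattern."""
    host = urlparse(url).hostname or ""
    labels = host.lower().split(".")
    tld = labels[-1] if labels else ""
    name = ".".join(labels[:-1])
    keyword_hits = sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in name)
    # Several stacked keywords, or one keyword plus an odd TLD, is a red flag.
    return keyword_hits >= 2 or (keyword_hits >= 1 and tld not in COMMON_TLDS)

print(looks_suspicious("http://worldcup-fifa-tickets.xyz/win"))  # True
print(looks_suspicious("https://www.fifa.com/worldcup"))         # False
```

A check like this can only catch the crudest fakes, which is why the advice to fans is to inspect the URL themselves rather than rely on software alone.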

The general advice from Kaspersky is to give cheap tickets a wide berth, not to buy goods from spammers in the run-up to kickoff (because the goods may not even exist), not to fall for spam about lotteries and giveaways because they may be used for phishing, not to visit dubious sites offering cheap accommodations or plane tickets, and only to watch broadcasts on official FIFA partner websites.

Kaspersky also advises visitors to use a VPN to connect to the Internet, because, in the aftermath of the government’s attempt to block Telegram, popular sites in Russia are either unavailable or unstable.

England Team’s Briefing

England team manager Gareth Southgate has noted that the England players are young people who will look for things to occupy their time while in hotel rooms, e.g. playing video games, and will be using multiple devices such as smartphones, tablets and gaming devices. The fact that technology will play a big part in the England team’s downtime throughout the tournament is the main reason why the FA is taking cyber security so seriously.

It is understood, therefore, that the NCSC has been advising the players on the rules to follow on e.g. which devices they can safely use and where. Also, the devices belonging to players and staff will be thoroughly screened to make sure they have the right security software installed.

What Does This Mean For Your Business?

Anyone travelling abroad for business or pleasure, particularly to countries where certain cyber security threat levels are known to be high, should read the UK government’s advice pages relating to cyber security while travelling.

In the case of travelling to Russia for the World Cup, some of the measures people can take before travelling are to check which network you will be using and what the costs are, to make sure all software and apps are up to date and antivirus is turned on, to turn on the ability to wipe your phone should it be lost, and to make sure all devices are password protected and use other security features e.g. fingerprint recognition.

On arriving in Russia, the advice is to remember that public and hotel Wi-Fi connections may not be safe and to be very careful about what information you share over these connections e.g. banking. Also, don’t share phones, laptops or USBs with anyone and be cautious with any IT related gifts e.g. USB sticks, and to keep your devices with you at all times if possible rather than leave them unattended.

The full UK government advice can be found here.

Two More Security Holes In Voice Assistants

Researchers from Indiana University, the Chinese Academy of Science, and the University of Virginia have discovered two new security vulnerabilities in voice-powered assistants, such as Amazon Alexa and Google Assistant, that could lead to the theft of personal information.

Voice Squatting

The first vulnerability, outlined in a recent white paper by the researchers, has been dubbed ‘voice squatting’: a method which exploits the way a skill or action is invoked. This method takes advantage of the way that VPAs like smart speakers work. The services used in smart speakers operate using apps called “skills” (by Amazon) or “actions” (by Google). A skill or an action gives a VPA additional features, so that a user can interact with a smart assistant via a virtual user interface (VUI) and run that skill or action using just their voice.

The ‘voice squatting’ method essentially involves tricking VPAs by using simple homophones – words that sound the same but have different meanings. Using an example from the white paper, if a user gives the command “Alexa, open Capital One” to run the Capital One skill / action, a cyber criminal could create a malicious app with a similarly pronounced name, e.g. “Capital Won”. This could mean that a voice command for the Capital One skill is then hijacked to run the malicious Capital Won skill instead.
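The white paper's "Capital One" / "Capital Won" collision can be illustrated with a toy phonetic check. The tiny homophone table below is an invented stand-in; a real vetting pipeline would compare phoneme sequences from a pronunciation dictionary such as CMUdict.

```python
# Illustrative pronunciation table (IPA strings); real systems would use
# a full pronunciation dictionary rather than a hand-picked lookup.
HOMOPHONES = {
    "one": "wʌn", "won": "wʌn",
    "four": "fɔː", "for": "fɔː",
}

def sounds_like(a: str, b: str) -> bool:
    """True if two invocation names are pronounced identically,
    word for word, according to the lookup table."""
    wa, wb = a.lower().split(), b.lower().split()
    if len(wa) != len(wb):
        return False
    # Words missing from the table fall back to spelling comparison.
    return all(HOMOPHONES.get(x, x) == HOMOPHONES.get(y, y)
               for x, y in zip(wa, wb))

print(sounds_like("Capital One", "Capital Won"))  # True
print(sounds_like("Capital One", "Capital Two"))  # False
```

A check along these lines run at skill-submission time could flag new invocation names that collide phonetically with existing skills, which is the gap the researchers say voice squatting exploits.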

Voice Masquerading

The second vulnerability identified by the research has been dubbed ‘voice masquerading’. This method of exploiting how VPAs operate involves using a malicious skill / action to impersonate a legitimate skill / action, with the intended result of tricking a user into reading out personal information / account credentials, or listening in on private conversations.

For example, the researchers were able to register five new fake skills with Amazon, which passed Amazon’s vetting process, used similar invocation names, and were found to have been invoked by a high proportion of users.

Private Conversation Sent To Phone Contact

These latest revelations come hot on the heels of recent reports of how a recording of the private conversation of a woman in Portland (US) was sent to one of her phone contacts without her authorisation, after her Amazon Echo misinterpreted what she was saying.

What Does This Mean For Your Business?

VPAs are popular but are still relatively new, and one positive aspect of this story is that at least these vulnerabilities have been identified now by researchers so that changes can (hopefully) be made to counter the threats. Amazon has said that it conducts security reviews as part of its skill certification process, and it is hoped that the researchers’ ability to pass off fake skills successfully may make Amazon, Google and others look more carefully at their vetting processes.

VPAs are now destined for use in the workplace, e.g. business-focused versions of popular models and bespoke versions. In this young market there are, however, genuine fears about the security of IoT devices, and businesses may be particularly nervous about VPAs being used by malicious players to listen in on sensitive business information which could be used against them, e.g. for fraud or extortion. The big producers of VPAs will need to reassure businesses that they have installed enough security features and safeguards in order for businesses to fully trust their use in sensitive areas of the workplace.

Google Accused of Being ‘Unethical’ Over Cryptocurrency Ad Ban

Some industry commentators have suggested that Google’s motives for introducing a blanket ban on cryptocurrency ads may not be all they seem, and could make the company appear unethical.

What Ban?

Back in March, Google followed Facebook’s lead (from January) and imposed a blanket ban on all cryptocurrency adverts on its platforms. The ban, which starts from this month, was announced following reports of scammers using adverts on popular platforms to fraudulently take money from people who believed they could cash in on the massive rise in the value of cryptocurrencies such as Bitcoin.

A popular con has been to use scam ad campaigns to sell units of a cryptocurrency ahead of its launch – known as initial coin offerings (ICO). Research has found that 80 per cent of ICOs have been fraudulent.

Also, the cryptocurrency value bubble led to the rise of ‘crypto-jacking’, where devices are taken over by people trying to mine crypto-currencies e.g. using Android phone-wrecking Trojan malware ‘Loapi’.

Why Unethical?

Online tech commentators have been quick to point out that even though Google has said that it made the move to ban cryptocurrency ads to confront criminality, protect web users, and to regulate what their users are reading, Google is also believed to have an interest in cryptocurrencies itself.

For example, back in May, Google is reported to have approached the founder of the world’s second most popular cryptocurrency, Ethereum, to explore possible market opportunities for the two parties. In fact, some commentators believe that Google may be acting unethically by banning cryptocurrency adverts because it is planning to launch its own cryptocurrency and, therefore, wants to give its own product the best chance in the marketplace.

This idea has been strengthened by the fact that Google continues to show adverts with links to gambling websites and other services which some would describe as unethical. It has been suggested that Google appears willing to ban cryptocurrency adverts, but still allows job postings, and adverts for anti-virus software or charities, all of which can also be known entry points for scammers.

Blockchain Ambitions

Google is also thought to have ambitions to make use of blockchain, which is, among other things, the underlying technology behind the Bitcoin currency. Interestingly, this follows Facebook, which is reported to be setting up a blockchain group that will report directly to the company’s CTO, Mike Schroepfer.


Putting a blanket ban on cryptocurrency adverts does not appear to have been an entirely successful strategy for others i.e. Facebook. For example, some advertisers have been able to circumvent Facebook’s cryptocurrency ad ban by abbreviating words like cryptocurrency to c-currency, and by simply switching the letter ‘o’ in the word bitcoin to a zero.
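The substitutions reported to have slipped past Facebook's filter are simple enough to sketch a counter-measure for. The normalisation rules below are assumptions for illustration; real ad-review systems are far more elaborate.

```python
# Undo the simple obfuscations described above: digit-for-letter swaps
# (e.g. 'bitc0in') and abbreviation with punctuation (e.g. 'c-currency').
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "5": "s"})

def normalise(text: str) -> str:
    t = text.lower().translate(SUBSTITUTIONS)
    t = t.replace("-", "").replace(".", "")
    # Hypothetical expansion of the reported 'c-currency' abbreviation.
    return t.replace("ccurrency", "cryptocurrency")

BANNED_TERMS = {"bitcoin", "cryptocurrency"}

def is_banned(ad_text: str) -> bool:
    """True if the ad text, once de-obfuscated, contains a banned term."""
    norm = normalise(ad_text)
    return any(term in norm for term in BANNED_TERMS)

print(is_banned("Invest in B1tc0in today!"))  # True
print(is_banned("Get rich with c-currency"))  # True
print(is_banned("Buy shoes online"))          # False
```

The cat-and-mouse nature of the problem is visible even in this sketch: each new evasion trick requires another normalisation rule.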

What Does This Mean For Your Business?

Google is a powerful private company and, like other big players in the market such as Facebook, it is looking to make the most of market opportunities. It is only natural that Google will want to explore the potential of those opportunities, even if it has made an ethical stand in public about cryptocurrency adverts.

This story does illustrate, however, that ethics play an important part in business, and can play an important role in supporting the value of a brand, particularly in a digital world where inconsistencies can be spotted and widely reported immediately.
When you think about it, Google has a trusted brand and is well placed in the market to perhaps get involved in, or even produce its own cryptocurrency, particularly where there are profits to be made and when cryptocurrencies appear to have an important future beyond the initial bubble of bitcoin-mania. The important thing for Google is that it, along with Facebook, was seen to be doing the right thing when cryptocurrency scam adverts began making the news, and there is still no real, firm proof that Google will commit itself to its own cryptocurrency yet.

It is also not surprising that companies such as Google and Facebook would want to explore the huge potential opportunities that blockchain offers. It is worth remembering that blockchain has shown itself to have many great uses beyond just cryptocurrencies e.g. enabling students to share their qualifications with employers, recording the temperature of sensitive medicines being transported from manufacturer to hospital in hot climates, as a ledger to record data about wine certification, as a ledger for ownership and storage history, as a system for tracking consignments that addresses visibility and efficiency, and for sharing information between energy suppliers to speed the supplier switching process. Dubai has also invested in using blockchain to put all its documents on blockchain’s shared open database system by 2020 in order to help to cut through Middle Eastern bureaucracy, speed up civic transactions and processes, and bring a positive transformation to the whole region.

Both cryptocurrencies and blockchain have a long way to run yet, and Google and Facebook will certainly not be the only web giants exploring their potential.

843% Rise in TSB Customer Attacks

Following the IT ‘meltdown’ at TSB last month which led to chaos for customers who were locked out of their own accounts, research has found that the number of phishing attacks targeting TSB customers leapt by 843% in May compared with April.

Fraudsters Taking Advantage

The statistics, reported recently in Computer Weekly, appear to indicate that fraudsters may have been quick to take advantage of the bank’s IT meltdown.

For example, an investigation by security firm Wandera found that in May, TSB was the second most used bank brand by scammers attempting to obtain customer details. In April, across 100,000 UK devices protected by Wandera, there were only 28 TSB-themed phishing attacks. In May, the number jumped to 236 such attacks.

According to Wandera’s figures, in May TSB appeared in the top five financial services apps to be impersonated for attacks for the first time this year, which may be an indication that TSB wasn’t a major target for phishers prior to the systems meltdown incident.

All of this information has led security commentators to conclude that the rise in fraud against TSB customers is likely to be linked to the systems problems that the bank experienced in May.

What Happened?

Back in May, 1.9 million TSB customers were affected when a migration to a new system didn’t go to plan and resulted in what some commentators have described as a ‘meltdown’ of its banking systems.

Some of the problems experienced by customers included: not being able to access their own money, having no access to mobile and online services, problems with direct debits, and amounts of money appearing and disappearing. It was even reported that one customer was mistakenly credited with £13,000.

What Does This Mean For Your Business?

This information should give businesses some idea of the ruthless and opportunistic nature of cyber criminals, and how quickly they can focus their efforts when vulnerabilities are spotted. Weaknesses in banking systems would, of course, have been a particularly attractive target.

In the case of TSB, as in the aftermath of many IT system problems, scammers were quick to use the bank’s IT problems as an opportunity to target its desperate customers with mobile phishing attacks. Customers would have been hoping / expecting to hear from the bank at the time, and so would have let their guard down when they received emails or other communications that looked as though they were from the bank, asking for personal details / login details.

Visa Crash In Europe Causes ‘Cash Only’ Chaos

On Friday the 1st, from 2.30pm, a Europe-wide system failure at Visa left shoppers embarrassed as their card payments were declined and stores switched to ‘cash only’.

Not Just Visa Customers

To make matters worse, because a range of different banks and other financial institutions use Visa’s payment system, even those making transactions using non-Visa branded cards were affected and were unable to make purchases.

The problem was compounded by the fact that it happened at a time when many people were leaving work on a Friday. There have also been reports circulating that even if some card purchases were declined, the money may still have been taken from accounts, and customers have been urged to check.

What Happened?

There are no precise details as to the reason for the system crash, other than Visa’s explanation that it was a “hardware failure”.
Visa has also been quick to announce that it has no reason to believe that the system crash was associated with any unauthorised access or malicious events.

ATMs Still Working in UK

In the UK, although many customers found themselves in extremely awkward situations e.g. unable to pay for meals or petrol, customers were still able to take cash out of ATMs (if there was one nearby). This led to large queues forming at ATMs in towns and cities across the country.


While many customers faced the embarrassment and inconvenience of having their cards declined in shops across Europe, others found themselves forced to wait in queues because of the disruption. For example, in Berlin’s Alexanderplatz, it was reported that Primark customers had to queue for 20 minutes to pay, and staff were unable to explain why transactions were failing. It was also reported that the Visa system failure caused a 45-minute wait for those trying to use the Severn Bridge, as drivers were unable to pay the toll by card.


Not surprisingly, many people took to social media to vent their anger at Visa for the embarrassment and inconvenience caused. In Spain, the Guardia Civil tried to calm and re-assure people by sending a tweet urging everyone to stay calm, and used a picture of Captain Jack Sparrow to help explain that if they couldn’t pay, it wasn’t because they had been robbed or hacked.
Visa has apologised, and has stated that its payment system is operating at “full capacity”.

What Does This Mean For Your Business?

Even though the problems only lasted a day, it is only a matter of weeks since TSB’s catastrophic computer meltdown caused misery to customers after the bank tried to migrate its computer systems from its old Lloyds Bank systems to its new core banking system, Proteo4UK.

We are now a society that is moving away from cash, in favour of cards and particularly contactless payments. Also, this move away from cash has meant the closing of many ATMs. Both of these factors mean that system failures of this kind can be particularly disruptive.

For businesses, customers not being able to pay meant that profits were hit, and premises experienced disruption, with some staff left to face angry customers without being able to offer a clear explanation.

The incident has, no doubt, also illustrated to any potential hackers how interconnected payment systems are across Europe and how many countries could be brought to a virtual standstill if they were able to breach the systems of major payment processing companies such as Visa.

7-Fold Rise in Mobile Fraud

It seems that as we spend more time using mobile devices, the fraudsters are following us: a new RSA Security report shows a massive rise in mobile fraud over the last three years.

Up Nearly 700%!

The latest quarterly report by fraud and risk intelligence experts at RSA Security shows that, as the volume of mobile app transactions has risen by 200% since 2015, the volume of fraudulent transactions has grown by a massive 680%.

New Accounts and ‘Burner Phones’

One of the key trends at the heart of the rise in mobile fraud is the increasing use of fake new accounts and ‘burner’ phones to commit fraud.

A burner phone is a mobile phone handset acquired for temporary use, usually prepaid / without a contract in order to retain the user’s anonymity, which can be discarded if necessary.

Alongside the burner phone, fraudsters are also known to use stolen identities to set up fake ‘money mule’ accounts, purely for the purpose of collecting the cash from their fraudulent activities.

The RSA report shows that new accounts and new devices have been used in this way in 32% of all the fraudulent transactions in the last quarter.

Phishing Still Top

The report shows that phishing is still the top fraudulent activity accounting for 48% of all fraud attacks in Q1 of 2018.

Trojan Malware & Payment Card Compromise

Other popular frauds involve the use of Trojan malware to steal financial credentials. This method was used in one in four fraud attacks in Q1 2018.

Also, using details from compromised cards is still a very common activity among fraudsters, and the RSA researchers who compiled the report claim to have recovered more than 3.1 million unique compromised cards and card details (which included verification numbers) on offer from online sources in Q1.

Mobile App Security

It is believed that poor security in mobile apps is allowing many criminals to hijack mobile applications and siphon off credentials and funds from many unwitting users.

What Does This Mean For Your Business?

These figures show that our increasing use of mobile devices and apps has opened the door to even more channels for fraudsters. There is clearly a responsibility among mobile app developers, and those commissioning mobile apps to deliver their services, to ensure that security is built in from the ground up. This means making sure that all source code is secure and free of known bugs, that all data exchanged over the app is encrypted, that caution is exercised when using third-party code libraries, and that only authorised APIs are used. Developers should also build in strong authentication, use tamper-detection technologies, use tokens rather than device identifiers to identify a session, follow cryptography best practices, e.g. storing keys in secure containers, and conduct regular, thorough testing.
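Two of the practices above can be sketched concretely: issuing random session tokens (so nothing about the device is reused or leaked) and signing exchanged data so tampering is detected. This is a minimal illustration only; in production the signing key would live in a secure keystore, not in application code.

```python
import hashlib
import hmac
import secrets

# Placeholder for a key held server-side in a secure container.
SERVER_KEY = secrets.token_bytes(32)

def new_session_token() -> str:
    """Unlinkable per-session identifier -- used instead of a fixed
    device identifier, so sessions can't be tied back to hardware."""
    return secrets.token_urlsafe(32)

def sign(payload: bytes) -> str:
    """HMAC tag over the payload, to detect tampering in transit."""
    return hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side-channels on the comparison.
    return hmac.compare_digest(sign(payload), signature)

token = new_session_token()
tag = sign(b"balance=100")
print(verify(b"balance=100", tag))  # True
print(verify(b"balance=999", tag))  # False: tampered payload rejected
```

Note that signing is not encryption: the payload would still need to travel over an encrypted channel (e.g. TLS) as the paragraph above recommends.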

As users of mobile devices and apps, we also need to pay attention to our own levels of security. For example, we can take precautions against mobile fraud by using mobile security and antivirus scanning apps, only using trusted apps from trusted sources, uninstalling old apps and turning off connections when not using them, locking our phones when not in use, using two-factor authentication, and using a VPN rather than just free public Wi-Fi when out and about.

Instant GDPR Complaints For Web Giants

In an almost inevitable turn of events, the social media and tech giants Facebook, Google, Instagram and WhatsApp faced a barrage of accusations of non-compliance within hours of GDPR being introduced on May 25th.

What’s Wrong?

The complaints, spearheaded by the privacy group led by Max Schrems, centred around the idea that the tech and social media giants may be breaking the new data protection and privacy rules by forcing users to consent to targeted advertising in order to use their services, i.e. by bundling a service with the requirement to consent (Article 7(4) GDPR).

Not Necessary?

It has been reported that the crux of the privacy group’s argument is that, according to GDPR, any data processing that is strictly necessary to use a service is allowed and doesn’t require opting in. If a company then decides to adopt a “take it or leave it approach” by forcing customers to agree to have additional, more wide-reaching data collected, shared and used for targeted advertising, or delete their accounts, the argument is that this goes against GDPR which requires opt-in consent for anything other than any data processing that is strictly necessary for the service.
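The distinction the complaint rests on can be expressed as a simple rule. The purpose names below are invented for illustration; they stand in for whatever processing a given service actually performs.

```python
# Toy model of the GDPR reading described above: processing that is
# strictly necessary for the service needs no opt-in; anything beyond
# it (e.g. targeted advertising) requires free opt-in consent.
NECESSARY_PURPOSES = {"deliver_messages", "account_security"}

def processing_allowed(purpose: str, user_opted_in: bool) -> bool:
    """Lawful if strictly necessary, or if the user freely opted in."""
    return purpose in NECESSARY_PURPOSES or user_opted_in

print(processing_allowed("deliver_messages", user_opted_in=False))      # True
print(processing_allowed("targeted_advertising", user_opted_in=False))  # False
print(processing_allowed("targeted_advertising", user_opted_in=True))   # True
```

The "take it or leave it" approach complained about collapses the second case into the first by making the opt-in a condition of using the service at all, which is exactly what Article 7(4) is argued to prohibit.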

Austria, Belgium, France and Germany

It is alleged in this case that the four tech giants may be doing just that, and, therefore, could be in breach of the Regulation, and possibly liable to fines if the accusations are upheld after investigation by data protection authorities in Austria, Belgium, France and Germany.

A breakdown of the four complaints over “forced consent” shows that in France the complaint has been made to CNIL about Google (Android), in Belgium to the DPA about Instagram (Facebook), in Germany to the HmbBfDI about WhatsApp, and in Austria to the DSB about Facebook. Under GDPR, the maximum penalties for this issue could run to billions of euros.

What Does This Mean For Your Business?

Many commentators had predicted that popular tech and social media giants would be among the first organisations to be targeted by complaints upon the introduction of GDPR, and some see these complaints as being the first crucial test of the new law.

GDPR should prohibit companies from forcing customers to accept the bundling of a service with the requirement to consent to giving / sharing more data than is necessary, but it remains to be seen and proven whether these companies are guilty.

As the group pointed out in its statement, GDPR does not mean that companies can no longer use customer data, because GDPR explicitly allows any data processing that is strictly necessary for a service. The complaint, in this case, is that using the data additionally for advertisements, or selling it on, needs the users’ free opt-in consent. The group has also pointed out that, if successfully upheld, its complaints could mean an end to the kind of annoying and obtrusive pop-ups which are used to claim a person’s consent, but don’t actually lead to valid consent.

Another benefit (if the complaints against the tech giants are upheld) could be that corporations can’t force users to consent, meaning that monopolies should have no advantage over small businesses in this area. The group seems set to keep the pressure on the tech giants, and has stated that its next round of complaints will centre around the alleged illegal use of user data for advertising purposes, or “fictitious consent”, e.g. when companies assume “consent” to other types of data processing simply because a person uses their web page.

Now You Can Opt-Out Of Having Your Medical Data Shared

The introduction of GDPR on 25th May has brought with it a new national data opt-out service which enables people to use an online tool to opt out of their confidential patient information being used beyond their own individual care for research and planning.


The new ‘Manage Your Choice’ online tool, part of the national data opt-out service, follows recommendations by the National Data Guardian (NDG), Dame Fiona Caldicott, and replaces the previous ‘type 2’ opt-out introduced on 29th April 2016. Under that opt-out, NHS Digital would remove certain patient records from the data it provided where a patient had requested an opt-out.

About The New National Opt-Out Service

The new service applies to patients in England who are aged 13 or over and have an NHS number, e.g. from previous treatment. Opting out using the new service will not apply to your health data where you have accessed health or care services outside of England, such as in Scotland or Wales.

The opt-out service covers data-sharing by any organisation providing publicly-funded care in England. This includes private and voluntary organisations, and only children’s social care services are not covered.

Using The Online Tool

The online tool for opting-out can be accessed at:

To use the online tool, you will (obviously) need access to the Internet, and access to your email or mobile phone to go through the necessary steps.

What Else Is Your Data Used For?

According to the NHS, as well as being used for patient care purposes, confidential patient information is also used to plan and improve health and care services, and to research and develop cures for serious illnesses. The NHS has stressed that, for much of the time, anonymised data is used for research and planning, so your confidential patient information often isn’t needed anyway.
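The difference between confidential patient information and the anonymised form used for research can be illustrated with a toy record. The field names and values below are invented; real NHS anonymisation standards are considerably more involved (covering, for example, rare combinations of fields that could re-identify someone).

```python
# Invented example record -- not a real patient or NHS number.
record = {
    "nhs_number": "943 476 5919",
    "name": "Jane Doe",
    "postcode": "SW1A 1AA",
    "diagnosis": "asthma",
    "year_of_birth": 1980,
}

# Fields that directly identify the patient and must not reach researchers.
DIRECT_IDENTIFIERS = {"nhs_number", "name", "postcode"}

def anonymise(rec: dict) -> dict:
    """Drop direct identifiers, keeping only what research and planning need."""
    return {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}

print(anonymise(record))  # {'diagnosis': 'asthma', 'year_of_birth': 1980}
```

When data is shared in this reduced form, the NHS's point stands: the confidential patient information itself often isn't needed for the research use at all.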

The NHS currently collects health and care data from all NHS organisations, trusts and local authorities. Data is also collected from private organisations e.g. private hospitals providing NHS funded care. Research bodies and organisations can also request access to this data. These bodies and organisations include university researchers, hospital researchers, medical royal colleges, and even pharmaceutical companies researching new treatments.
Past Controversy

The new service is likely to be welcomed after several past data-sharing controversies dented trust in the handling of personal data by the NHS. For example, NHS Digital were criticised after agreeing to share non-clinical information, such as addresses or dates of birth, with the Home Office, and a report highlighted how the Home Office used patient data for immigration enforcement purposes.

Also, there were serious public concerns, and an independent panel found a “lack of clarity” in a data-sharing agreement, after it was announced that the Royal Free Hospital in London had shared the data of 1.6 million people with Google’s DeepMind project without the consent of those data subjects.

What Does This Mean For Your Business?

The introduction of GDPR has been an awareness raising, shake-up exercise for many businesses and organisations, and has driven the message home that data privacy and security for clients / service users is an important issue. Where our medical data is concerned, however, we regard this as being particularly private and sensitive, and the fact that it could be either shared with third-parties without our consent, or stolen / accessed due to poor privacy / security systems and practices is a source of genuine worry. For example, many people fear that whether shared or stolen, their medical data could be used by private companies to deny them services or to charge more for services e.g. insurance companies. Data breaches and sharing scandals in recent times mean that many people have lost trust in how many companies and organisations handle their everyday personal data, let alone their medical data.

The introduction of this new service is likely to be welcomed by many in England, and it is likely that the opt-out tool will prove popular. For the NHS, however, if too many people choose to opt-out, this could have some detrimental effect on its research and planning.

GDPR will continue to make many companies and organisations focus on which third parties they share data with, and how these relationships could affect their own compliance.

Alexa Records and Sends Private Conversation

A US woman has complained of feeling “invaded” after a private home conversation was recorded by Amazon’s Alexa voice assistant and then sent to a random phone contact … who happened to be her husband’s employee.


As first reported by US news outlet KIRO 7, the woman, identified only as ‘Danielle’, had a conversation about hardwood flooring in the privacy of her own home in Portland, Oregon. Unknown to her, however, Amazon’s voice assistant Alexa, via her Amazon Echo, not only recorded the seemingly ‘random’ conversation but then sent the recording to a phone contact without being expressly asked to do so.

The woman was only made aware that she had been recorded when she was contacted by her husband’s employee, who lives over 100 miles away in Seattle and was able to tell her the subject of her recent conversation.

How Could It Have Happened?

Last year Amazon introduced a service whereby Amazon Echo users could sign up to the Alexa Calling and Messaging Service from the Alexa app. This means that all of the contacts saved to your mobile phone are linked to Alexa automatically, and you can call and message them using voice commands via your Echo.

In the case of the woman from Portland, Amazon has reportedly explained the incident as being the result of an “unlikely” string of events which were that:

  • Her Alexa started recording after it registered its name or another “wake word” (chosen by users).
  • In the following conversation (about hardwood floors), Alexa registered part of the conversation as a ‘send message’ request.
  • Alexa should, at that point, have asked out loud, ‘To whom?’
  • It is believed that Alexa then interpreted part of the background conversation as a name in the woman’s phone contact list.
  • The selected contact was then sent a message containing the recording of the private conversation.
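The chain of misrecognitions described above can be sketched as a simple state machine. To be clear, the wake word, intent phrase, and contact-matching logic below are illustrative assumptions for the sake of the sketch, not Amazon’s actual Alexa implementation:

```python
# Illustrative sketch of the reported chain of misrecognitions.
# The wake word, intent phrase and contact matching are assumptions,
# not Amazon's actual implementation.

WAKE_WORDS = {"alexa"}          # user-configurable wake word
SEND_INTENT = "send message"    # phrase (mis)heard as a command

def process_utterances(utterances, contacts):
    """Walk through heard phrases and return the contact a recording
    would be sent to, or None if the sequence never completes."""
    awake = False
    awaiting_recipient = False
    for phrase in utterances:
        words = phrase.lower()
        if not awake:
            if any(w in words for w in WAKE_WORDS):
                awake = True               # step 1: wake word (mis)heard
            continue
        if not awaiting_recipient:
            if SEND_INTENT in words:
                awaiting_recipient = True  # step 2: 'send message' (mis)heard
            continue
        # steps 3-4: background speech matched against the contact list
        for name in contacts:
            if name.lower() in words:
                return name                # step 5: recording sent here
    return None

# Example: background chat that accidentally completes every step
heard = ["alexa the floor guy", "we should send message samples",
         "ask john about hardwood"]
print(process_utterances(heard, ["John", "Mary"]))  # -> John
```

The point of the sketch is that each step in the chain is individually plausible for a speech recogniser, which is why Amazon could describe the full sequence as “unlikely” while each link remains possible.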


The woman requested a refund for her voice assistant device, saying that she felt invaded.

Amazon has reportedly apologised for the incident, investigated what happened, and determined that it was an extremely rare occurrence. Amazon is, however, reported to be “taking steps” to prevent this from happening in the future.

Not The First Time

Amazon’s intelligent voice assistant has made the news in the past for unforeseen situations that helped to perpetuate users’ fears that their home devices could have a more sinister dimension, and / or could malfunction or be used to invade privacy. For example, back in 2016, US researchers found that they could hide commands in white noise played over loudspeakers and through YouTube videos in order to get smart devices to turn on flight mode or open a website. The researchers also found that they could embed commands directly into recordings of music or spoken text.

Also, although Amazon was cleared by an advertising watchdog, there was the case of a television advert for its Echo Dot smart speaker activating a viewer’s device and placing an order for cat food.

What Does This Mean For Your Business?

Although it may have been a series of events resulting in a ‘rare’ occurrence, this appears to be a serious matter relating to the privacy of users, and it is likely to re-ignite fears that home digital assistants could be used as listening devices, or could be hacked and used to gather personal information for crimes such as fraud or burglary.
If the lady in this case had been an EU citizen, Amazon could have fallen foul of the new GDPR and, therefore, potentially been liable to a substantial fine if the ICO thought it right and necessary.

Adding the Alexa Calling and Messaging service to these devices was really just the beginning of Amazon’s plans to add more services until we are using our digital assistants to help with many different and often personal aspects of our lives e.g. from ordering goods and making appointments, to interacting with apps to control the heating in the house, and more. News of this latest incident could, therefore, make some users nervous about taking the next steps to trusting Amazon’s Alexa with more personal details and important aspects of their daily lives.

Amazon may need to be more proactive and overt in explaining how it is addressing the important matters of privacy and security in its digital assistant and devices in order to gain the trust that will enable it to get an even bigger share in the expanding market, and successfully offer a wider range of services via Alexa and Echo devices.

Facial Recognition In The Classroom

A school in Hangzhou, capital of the eastern province of Zhejiang, is reportedly using facial recognition software to monitor pupils and teachers.

Intelligent Classroom Behaviour Management System

The facial recognition software is part of what has been dubbed the “intelligent classroom behaviour management system”. The reason for the use of the system is reported to be to supervise both the students’ learning and the teachers’ teaching.


The system uses cameras to scan classrooms every 30 seconds. These cameras form part of a facial recognition system that is reported to be able to record students’ facial expressions and categorise them as happy, angry, fearful, confused, or upset.

The system, which acts as a kind of ‘virtual teaching assistant’, is also believed to be able to record students’ actions such as writing, reading, raising a hand, and even sleeping at a desk.

The system also measures levels of attendance by using a database of pupils’ faces and names to check who is in the classroom.
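At its core, attendance-by-recognition amounts to matching faces seen in the room against an enrolment database of pupils. The embedding-distance matching below is an illustrative assumption; the internals of the Hangzhou system have not been published:

```python
# Minimal sketch of attendance-by-recognition: faces detected in a
# classroom scan are matched against an enrolment database of pupils.
# The embedding-distance matching and threshold are illustrative
# assumptions, not the real system's published internals.

def take_attendance(detected, enrolled, threshold=0.6):
    """detected: list of face embeddings seen in one scan.
    enrolled: dict mapping pupil name -> enrolled embedding.
    Returns (present, absent) as sets of names."""
    def distance(a, b):
        # Euclidean distance between two embeddings
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    present = set()
    for face in detected:
        # nearest enrolled pupil wins, if close enough
        name, d = min(((n, distance(face, e)) for n, e in enrolled.items()),
                      key=lambda t: t[1])
        if d <= threshold:
            present.add(name)
    absent = set(enrolled) - present
    return present, absent

# Example: one scanned face closely matches Li's enrolled embedding
enrolled = {"Li": (0.1, 0.9), "Chen": (0.8, 0.2)}
present, absent = take_attendance([(0.12, 0.88)], enrolled)
print(sorted(present), sorted(absent))  # ['Li'] ['Chen']
```

The threshold is the sensitive design choice here: set it too loose and lookalike pupils are marked present for each other; too strict and genuine pupils are marked absent, which is exactly the accuracy concern raised later in this piece.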

As well as providing the school with added value monitoring of pupils, it may also prove to be a motivator for pupils to modify their behaviour to suit the rules of the school and the expectations of staff.

Teachers Watched Too

In addition to monitoring pupils, the system has also been designed to monitor the performance of teachers in order to provide pointers on how they could improve their classroom technique.

Safety, Security and Privacy

One other reason why these systems are reported to be increasing in popularity in China is to provide greater safety for pupils by recording and deterring violence and questionable practices at Chinese kindergartens.

In terms of privacy and security, the vice principal of the Hangzhou No.11 High School is reported to have said that the privacy of students is protected because the technology doesn’t save images from the classroom, and stores data on a local server rather than on the cloud. Some critics have, however, said that storing images on a local server does not necessarily make them more secure.


If the experience of facial recognition software used by UK police forces is anything to go by, there may be questions about the accuracy of what the Chinese system records. For example, following an investigation by campaign group Big Brother Watch, the UK’s Information Commissioner, Elizabeth Denham, recently said that the police could face legal action if concerns over accuracy and privacy with facial recognition systems are not addressed.

What Does This Mean For Your Business?

There are several important aspects to this story. Many UK businesses already use their own internal CCTV systems as a softer way of monitoring and recording staff behaviour, and as a way to modify that behaviour, i.e. simply through staff knowing they are being watched. Employees could argue that this is intrusive to an extent, and that a more positive way of encouraging the right kind of behaviour would (also) include a system that rewards positive / good behaviour and good results.

Using intelligent facial recognition software could clearly have a place in many businesses for monitoring customers / service users e.g. in shops and venues. It could be used to enhance security. It could also, as in the school example, be used to monitor staff in any number of situations, particularly those where concentration is required and where positive signals need to be displayed to customers. These systems could arguably increase productivity, improve behaviour and reduce hostility / violence in the workplace, and provide a whole new level of information to management that could be used to add value.

However, it could be argued that using these kinds of systems in the workplace could make people feel as though ‘big brother’ is watching them, could lead to underlying stress, and could have big implications where privacy and security rights are concerned. It remains to be seen how these systems are justified, regulated and deployed in future, and how concerns over accuracy, cost-effectiveness, and personal privacy and security are dealt with.