Data Security

Despite Patches, Researchers Warn That Intel Chips Are Still Vulnerable

The New York Times has reported that, despite Intel issuing patches for the security flaws discovered in its processors last year, security researchers allege that the processors still contain some serious vulnerabilities.

What Flaws?

In January 2018, it was discovered that nearly all computer processors made in the last 20 years contained two flaws, known as ‘Meltdown’ and ‘Spectre’. The two flaws could make it possible for a malicious program to steal data stored in the memory of other running programs.

Meltdown, discovered by researchers from Google’s Project Zero, the Technical University of Graz in Austria and the German security firm Cyberus Technology, affects Intel and some ARM processors that use ‘speculative execution’ to improve their performance, i.e. the optimisation whereby a processor performs work that may not actually be needed in order to reduce overall delays.

Meltdown could, for example, leave passwords and personal data vulnerable to attack, and it affects cloud service providers as well as individual devices. It is believed that Meltdown could affect every Intel processor released since 1995, except for Intel Itanium and Intel Atom processors made before 2013.

Spectre, which affects Intel, AMD and ARM (mainly Cortex-A) processors, allows applications to be fooled into leaking confidential information. Spectre affects almost all systems including desktops, laptops, cloud servers, and smartphones.

8 More Flaws Discovered

Then, in May 2018, eight more security flaws in chips/processors were discovered by several different security teams. The new ‘family’ of bugs was dubbed Spectre Next Generation (Spectre-NG).

September 2018

According to reports by The New York Times, researchers at Vrije Universiteit Amsterdam reported a range of security issues in Intel’s processors to the company in September 2018 and provided Intel with proof-of-concept code to help it develop fixes.

14 Months On – Only Some Fixes

It has been reported that, more than a year after receiving the researchers’ proof-of-concept code, and after an agreed waiting period to give it time to develop fixes (only some of which have been issued), Intel announced a further round of security updates only earlier this week.

More Vulnerabilities

Unfortunately for Intel, just as it announced the new security fixes, the researchers notified it of more unfixed flaws, and it has been alleged that Intel asked the researchers to alter their report about the flaws and, effectively, to stay quiet about them.

MDS

The latest unpatched flaw in Intel processors that the researchers from Amsterdam, Belgium, Germany and Austria have gone public about is exploited by a hacking technique that is a variant of ZombieLoad or RIDL (Rogue In-Flight Data Load). The underlying class of flaws is known as microarchitectural data sampling (MDS), and exploiting it can enable hackers to carry out several different attacks, e.g. running code on the victim’s computer that forces the processor to leak data.

Criticism

The news that there may still be flaws in Intel’s processors after the company appears to have had a long time to fix them has prompted some criticism of Intel online, some of it reported in The New York Times, e.g. allegations that there has been a lack of transparency from Intel, that the company has tried to downplay the problems, and that Intel may not do much to fix the problems until its reputation is at stake.

What Does This Mean For Your Business?

Bearing in mind that these flaws are likely to exist at the architectural level in the majority of processors, this story is bad news for businesses that have been making genuine efforts to be fully compliant with GDPR and as secure as possible from attack.

In the short term, unless processor companies completely redesign their processors to eliminate the flaws, closing hardware flaws with software patches is the only realistic way to tackle the problem, and this can be a big job for manufacturers, software companies, and other organisations that choose to take that step. It is good practice anyway for businesses to install all available patches and to make sure that they are receiving updates for all systems, software and devices.

The hope now is that researchers can put enough pressure on processor manufacturers, e.g. through bad publicity, to make them speed up their efforts to tackle the known security flaws in their products.

Research Says Memes Can Tell Humans and Bots Apart

Researchers from the University of Delaware have concluded that, when it comes to authentication for logins, memes may be one of the strongest techniques for distinguishing between a human and a bot.

The Bot Challenge

One of the great challenges for websites when it comes to login authentication is that software bots can fool relatively simple tests such as ticking a box to say ‘I’m not a robot’ and CAPTCHA (both words and images). Neural networks and machine learning have also helped to train bots to behave more like humans. With more than half of web traffic believed to be made up of bots, correct authentication needs to be based upon a system that can effectively tell humans and bots apart, and thereby stop bots gaining easy access to sensitive data.

Memes Could Be The Answer

According to the University of Delaware researchers, the dynamic nature of memes, the fact that bots don’t get cultural references and online humour, and the fact that humans are familiar with memes and understand them at a greater depth than bots, could mean that memes are the answer to the ‘bot or human’ authentication challenge.

Memes are activities, concepts, catchphrases, or pieces of media, often humorous and/or mimicking, and commonly in the form of an image, gif or video that have cultural meaning and tend to be shared widely on social media platforms.

How Could Memes Work For Authentication?

According to the researchers, after the correct username and password have been verified on login to a website, a meme could be displayed with a question about the meme that relates to something that bots wouldn’t be able to spot.  For example, this question could relate to the facial expression of the person in the meme or to the action taking place in the meme (bots wouldn’t be able to accurately tell what the facial expression is or what it means in relation to that meme). Several possible answers relating to that meme could be given and clicking on the right option will mean that a person is granted entry to the website.

Because there is a vast number of memes available online, the meme and the answer options used for one authentication attempt can simply be deleted from the database afterwards, ensuring that no answers are stored long enough to be learned by bots.
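The flow the researchers describe can be sketched as follows. This is a minimal illustration of the idea rather than the researchers' implementation; the meme pool, questions and answers below are all hypothetical.

```python
import secrets

# Hypothetical meme pool: every image name, question and answer here is
# invented for illustration; a real pool would be far larger and refreshed.
MEME_POOL = [
    {"image": "surprised_cat.jpg",
     "question": "What is the cat's facial expression?",
     "options": ["surprised", "asleep", "angry"],
     "answer": 0},
    {"image": "dog_typing.gif",
     "question": "What is the dog doing?",
     "options": ["swimming", "typing", "barking"],
     "answer": 1},
]


def issue_challenge(pool):
    """After username/password verification, pick a meme at random."""
    index = secrets.randbelow(len(pool))
    return index, pool[index]


def verify_challenge(pool, index, chosen_option):
    """Grant entry only for the right option, then retire the meme so its
    answer is never stored long enough for bots to learn it."""
    correct = pool[index]["answer"] == chosen_option
    del pool[index]  # single-use challenge, as the researchers suggest
    return correct
```

The single-use deletion mirrors the researchers' point: because the supply of memes is vast, each challenge can be discarded after one authentication attempt.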

What Does This Mean For Your Business?

With more than half of web traffic made up of bots, with bots able to fool many existing systems, and with the data security, privacy and fraud risks that bots pose, businesses need to know that their websites have an effective system that can accurately distinguish between humans and bots at the login stage, without making authentication too complicated or lengthy for registered users.

The cultural references, humour, and subtleties in memes could, therefore, make them an effective way to make that distinction, and could keep businesses ahead of the game until AI/machine learning in bots necessitates another change.

Scale of Police Computer Misuse Uncovered

A Freedom of Information (FoI) request made by think tank Parliament Street has revealed that 237 serving officers and members of staff have been disciplined for computer misuse in the last two financial years.

Sackings and Resignations

The FoI request, which was answered by 23 forces, also revealed that six employees resigned and 11 were sacked over failures to adhere to IT best practice, e.g. for disclosing personal information.

Took Photos of Screen and Shared

In Hertfordshire, two incidents out of 16 disciplinary cases involved employees taking photographs of the screen of a (confidential) police computer system and sharing those photos via social media.

Most Cases

Surrey Police recorded the most individual computer misuse incidents, with 50. Second in the misuse ranking was the Metropolitan Police, where 18 people were disciplined (four were accused of misusing social media) and one staff member was sacked for misusing the Crime Reporting Information System.

Greater Manchester Police took third place in the rankings, with 17 incidents of misuse of force systems.

Other Incidents

Other incidents uncovered by the FoI request included three officers being sacked from Gwent Police (for researching the crime database for a named person, disclosing confidential information, and unlawful access to information) and three being sacked from Wiltshire Police for using police databases without lawful access to the information. Also, one member of Nottinghamshire Police was disciplined for using the police computer system to search for information about a civil dispute they were involved in.

Case In July

These incidents were reminiscent of a case from July this year in which a serving Metropolitan Police officer was given 150 hours of community service and ordered to pay £540 after pleading guilty to crimes under the UK’s Computer Misuse Act, which included using a police database to monitor a criminal investigation into his own conduct.

What Does This Mean For Your Business?

We must all adhere to data protection law (GDPR) and best practice to ensure that company computer systems are used responsibly and legally. The irony of the information uncovered by the FoI request is that hundreds of the very people entrusted to uphold and enforce the law appear to be prepared to risk their jobs, break the law and betray public trust. The fact that hundreds of police employees have been caught misusing police systems which contain large amounts of sensitive personal data (there may be many more who haven’t been caught) raises serious questions about privacy and security.

This may indicate that police forces need to offer more education and training to employees about data protection and the correct (and legal) use of police computer systems as well as tightening up on monitoring, access control and validation/authorisation.

Office 365 Voicemail Phishing Scam Warning

Security company McAfee has reported observing a phishing scam which uses a fake voicemail message to lure victims into entering their Office 365 email credentials into a phishing page.

How The Attack Works

According to McAfee’s blog, the first step in the phishing scam is the victim being sent an email informing them that they have missed a phone call. The email includes a request to log in to their account to access their voicemail.

The email message actually contains an HTML attachment which, when loaded, re-directs the victim to a phishing website. Although there are slightly different versions of the attachment, the most recent examples are reported to contain an audio recording which is designed to make the victim believe they are listening to the beginning of a legitimate voicemail.

Once re-directed to the bogus Microsoft account login page, the victim will see that their email address has already been loaded in the login field, thereby helping to create the illusion that this is their real Microsoft login page.

If the victim enters their password, the deception continues as they are shown a page saying that their login has been successful, and they are being re-directed to the home page.

Three Different Phishing Kits

Cybercriminals frequently buy in phishing kits to launch their attacks. These are collections of software tools, created by professional phishers, that can be purchased and downloaded as a set. Phishing kits make it much easier for those with limited technical skills, coding ability or phishing experience to launch an attack.

McAfee reports that as many as three different phishing kits are being used to make the fake websites involved in this scam. These are:

  1. Voicemail Scmpage 2019 – being sold on an ICQ channel, and used to harvest the victim’s email address, password, IP address and location details.
  2. Office 365 Information Hollar – similar to Voicemail Scmpage 2019 and used to harvest the same data.
  3. A third, unnamed kit, which McAfee says is the most prevalent malicious page it has observed while tracking this particular campaign. McAfee says that this kit appears to use code from a 2017 malicious kit that was used to target Adobe users.

File Names For The Attachments

To help you spot this phishing attack, McAfee has listed the file names for the attachments in the phishing email as being:

  • 10-August-2019.wav.html [Format: DD-Month-YYYY.wav.html]
  • 14-August-2019.html [Format: DD-Month-YYYY.html]
  • Voice-17-July2019wav.htm [Format: Voice-DD-MonthYYYYwav.htm]
  • Audio_Telephone_Message15-August-2019.wav.html [Format: Audio_Telephone_MessageDD-Month-YYYY.wav.html]
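One simple way to screen incoming mail for these names is to match the date-stamped formats above. The regular expressions below are inferred from McAfee's published examples and should be treated as a starting point, not a complete signature for the campaign.

```python
import re

# Patterns derived from the attachment name formats McAfee reported,
# e.g. "10-August-2019.wav.html" and "Voice-17-July2019wav.htm".
MONTH = r"(January|February|March|April|May|June|July|August|September|October|November|December)"
SUSPICIOUS_PATTERNS = [
    re.compile(rf"^\d{{1,2}}-{MONTH}-\d{{4}}(\.wav)?\.html?$", re.IGNORECASE),
    re.compile(rf"^Voice-\d{{1,2}}-{MONTH}\d{{4}}wav\.html?$", re.IGNORECASE),
    re.compile(rf"^Audio_Telephone_Message\d{{1,2}}-{MONTH}-\d{{4}}\.wav\.html?$", re.IGNORECASE),
]


def looks_like_voicemail_phish(filename: str) -> bool:
    """Return True if an attachment name matches one of the reported formats."""
    return any(p.match(filename) for p in SUSPICIOUS_PATTERNS)
```

Attackers can trivially vary these names, so such a filter complements, rather than replaces, the broader advice below.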

What Does This Mean For Your Business?

Reports indicate that this phishing attack has proved quite successful up to now, partly because the pages and steps appear authentic (pre-loading the user’s email address just as the real login page does), and because it uses social engineering and urgency (with audio) in a way that may prompt many people to suspend their critical faculties long enough to complete the few short actions that it takes to give their details away.

The advice to businesses is, therefore, to be vigilant and to not open emails from unfamiliar sources or with unfamiliar attachments.  You may also want to use Two-Factor Authentication (2FA) where possible, and enterprise users may wish to block .html and .htm attachments at the email gateway level so that they don’t reach members of staff, some of whom may not be up to speed with their Internet security knowledge.
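The gateway-level check described above can be sketched with Python's standard email library. A real deployment would run inside the mail flow (e.g. as a milter or transport rule); this illustrative sketch only shows the core test, which is an extension check on each MIME part's attachment name.

```python
from email import message_from_string

# Extensions to block at the gateway, per the advice above.
BLOCKED_EXTENSIONS = (".html", ".htm")


def has_blocked_attachment(raw_message: str) -> bool:
    """Walk every MIME part of a raw message and flag .html/.htm attachments."""
    msg = message_from_string(raw_message)
    for part in msg.walk():
        filename = part.get_filename()
        if filename and filename.lower().endswith(BLOCKED_EXTENSIONS):
            return True
    return False
```

A production filter would typically quarantine the message and notify the recipient rather than silently drop it.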

There is also a strong argument for not using the same password for multiple platforms and websites (password reuse). This is because credentials stolen in one breach are likely to be tried on many other websites (credential stuffing) by other cybercriminals who have purchased/acquired them, e.g. on the dark web.

Keeping anti-virus and software patches up to date and making sure that staff receive training and education about cybersecurity risks and what procedures should be followed if suspicious emails or other messages are spotted can also help companies to maintain good levels of cybersecurity.

ICO Warns Police on Facial Recognition

In a recent blog post, Elizabeth Denham, the UK’s Information Commissioner, said that the police need to slow down and justify their use of live facial recognition technology (LFR) in order to strike the right balance between reducing our privacy and keeping us safe.

Serious Concerns Raised

The ICO cited how the results of an investigation into trials of live facial recognition (LFR) by the Metropolitan Police Service (MPS) and South Wales Police (SWP) led to the raising of serious concerns about the use of a technology that relies on a large amount of sensitive personal information.

Examples

In December last year, Elizabeth Denham launched the formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias and privacy. For example, the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 and yet resulting in just one arrest, of a local man, which was unconnected to the event.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the Police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

MPs Also Called To Stop Police Facial Recognition

Back in July this year, following criticism of the Police usage of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee called for a temporary halt in the use of the facial recognition system.

Stop and Take a Breath

In her blog post, Elizabeth Denham urged police not to move too quickly with FRT but to work within the model of policing by consent. She makes the point that “technology moves quickly” and that “it is right that our police forces should explore how new techniques can help keep us safe. But from a regulator’s perspective, I must ensure that everyone working in this developing area stops to take a breath and works to satisfy the full rigour of UK data protection law.”

Commissioner’s Opinion Document Published

The ICO’s investigations have now led her to produce and publish an Opinion document on the subject, as allowed by the Data Protection Act 2018 (DPA 2018), s116(2) in conjunction with Schedule 13(2)(d). The Opinion document has been prepared primarily for police forces and other law enforcement agencies that are using live facial recognition technology (LFR) in public spaces, and offers guidance on how to comply with the provisions of the DPA 2018.

The key conclusions of the Opinion Document (which you can find here: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf) are that the police need to recognise the strict necessity threshold for LFR use, there needs to be more learning within the policing sector about the technology, public debate about LFR needs to be encouraged, and that a statutory binding code of practice needs to be introduced by government at the earliest possibility.

What Does This Mean For Your Business?

Businesses, individuals and the government are all aware of the positive contribution that camera-based monitoring technologies and equipment can make in terms of deterring criminal activity, locating and catching perpetrators (in what should be a faster and more cost-effective way with live FRT), and in providing evidence for arrests and trials.  The UK’s Home Office has also noted that there is general public support for live FRT in order to (for example) identify potential terrorists and people wanted for serious violent crimes.  However, the ICO’s apparently reasonable point is that moving too quickly in using FRT without enough knowledge or a Code of Practice and not respecting the fact that there should be a strict necessity threshold for the use of FRT could reduce public trust in the police and in FRT technology.  Greater public debate about the subject, which the ICO seeks to encourage, could also help in raising awareness about FRT, how a balanced approach to its use can be achieved and could help clarify matters relating to the extent to which FRT could impact upon our privacy and data protection rights.

“Stalkerware” Partner-Spying Software Use Rises By 35% In One Year

Kaspersky researchers have reported a 35 per cent rise in the number of people who encountered so-called ‘stalkerware’ or ‘spouseware’ software in the first eight months of this year.

What is Stalkerware?

Stalkerware (or ‘spouseware’) is surveillance software that can be purchased online and loaded onto a person’s mobile device. From there, the software can record all of a person’s activity on that device, thereby allowing another person to read their messages, see screen activity, track the person through GPS location, access their social media, and even spy on the mobile user through the cameras on their device.

Covert, Without Knowledge or Consent

The difference between parental control apps and stalkerware is that stalkerware programs are promoted as software for spying on partners and they run covertly in the background without a person’s knowledge or consent.

Most stalkerware needs to be installed manually on a victim’s phone, which means that the person who intends to carry out the surveillance, e.g. a partner, needs physical access to the mobile device.

Figures from Kaspersky show that there are now 380 variants of stalkerware ‘in the wild’ this year, which is 31% more than last year.

Most In Russia

Kaspersky’s figures show that this kind of surveillance software is most popular in Russia, with the UK in eighth place in Kaspersky’s study.

What Does This Mean For Your Business?

Unlike parental control apps, which serve the practical purpose of helping parents protect their children from the many risks associated with Internet and mobile phone use, stalkerware appears more linked to abuse because it is added to a device without the user’s consent in order to covertly and completely invade their privacy. This kind of software could also be used for industrial espionage by a determined person who has access to a colleague’s mobile phone.

If you’d like to avoid being tracked by stalkerware or similar software, Kaspersky advises that you block the installation of programs from unknown sources in your smartphone’s settings, never disclose the passwords/passcode for your mobile device, and never store unfamiliar files or apps on your device.  Also, those leaving a relationship may wish to change the security settings on their mobile device.

Kaspersky also suggests that you should check the list of applications on your device to find out if suspicious programs have been installed without your consent.

If, for example, you find out that someone e.g. a partner/ex-partner has installed surveillance software on your devices, and/or does appear to be stalking you, the advice is, of course, to contact the police and any other relevant organisation.

Google Leadership Accused Of Developing Internal Surveillance Tool

Some Google employees have accused the company’s leadership of developing a browser extension for all of Google’s in-house computers that could flag up signs of workers trying to organise meetings and protests.

Google Employees

The story came to light in a memo written by a Google employee that is reported to have been seen and verified by three other anonymous Google employees and Bloomberg News. In the memo, it was alleged that a team within the company had developed a surveillance tool, disguised as a calendar tool, that could be added to the custom Chrome browser used on Google’s computers.

How?

The employee’s memo alleged that the browser extension would be able to report any staff member who booked a calendar event involving more than 10 rooms or scheduled an event with more than 100 people. The alleged reason for flagging up these details was to warn the company’s leadership about any attempt to organise workers for industrial action, e.g. meetings and protests related to labour rights.

Reviewed

Reported employee memos have suggested that work on the tool started in September and that Google’s privacy team approved the tool’s release but also expressed some concerns about the culture at Google.

According to Google, however, the tool was developed over several months and was subject to Google’s standard privacy, security and legal reviews.

Rollout In October

According to reports of a memo posted on an internal staff message board, the surveillance tool is due to be rolled out this month (October), and there is a report of two Google workers in California saying that the tool has already been added to their browsers.

‘Trouble at Mill’

There has been speculation by some commentators that the tool may have been developed in response to recent outbreaks of organised activity by workers concerned about the company’s attitude to their rights, the ethics of some of the company’s projects, and how Google may have handled some complaints.  For example, some workers in the company’s Zurich office held an event about workers’ rights and unionisation, and some Google employees have protested about products such as the ‘Project Dragonfly’ search engine that could allow Google to re-enter the Chinese market by censoring certain terms.  Human rights groups had also been vocal in criticising this idea saying that it appeared to support state censorship.

What Does This Mean For Your Business?

For Google employees, many of whom are used to working in an environment of relative freedom where creativity and collaboration are encouraged, an apparent cultural shift (if indeed that is what is happening) towards a more authoritarian and less trusting approach where ethics could come lower down the list of priorities in the search for profits would be likely to be a shock, and could possibly damage the relationship and the trust between management and workers.  It is unlikely that workers anywhere would respond positively to being subjected to a kind of covert surveillance and internal censorship, particularly if they believed that it was being carried out to curtail certain aspects of their labour rights.  The resulting bad publicity could also damage a company’s brand and therefore, the company’s competitiveness and customer perceptions of the company.

It should be said, however, that the reports of the development of the browser tool in Google rest upon the alleged details of memos, and it is unclear to date how accurate the reports are.

Facebook ‘News’ Tab on Mobile App

Facebook has launched the ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

Large US Cities For Now

The ‘News’ tab on the Facebook mobile app, which will initially only be available to an estimated 200,000 people in select, large US cities, is expected by Facebook to become so popular that it could attract millions of users.

What?

The News tab will attempt to show users stories from local publishers as well as the big national news sources. The full list of publishers who will contribute to the News tab has not yet been confirmed, although online speculation points to the likes of Time, The Washington Post, CBS News, Bloomberg, Fox News and Politico (U.S. publishers initially). It has not yet been announced when the service will be available to UK Facebook users. It has been reported that Facebook is also prepared to pay many millions for some of the content included in the tab.

Why?

Facebook has been working hard to restore some of the trust lost in the company when it was found to be the medium by which influential fake news stories were distributed during the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election. There is also the not-so-small matter of 50 million Facebook profiles being shared/harvested (in conjunction with Cambridge Analytica) back in 2014 in order to build a software program that was used to predict and generate personalised political adverts to influence choices at the ballot box in the last U.S. election.

Facebook CEO, Mark Zuckerberg, was made to appear before the U.S. Congress in April to talk about how Facebook is tackling false reports, and even recently a video that was shared via Facebook (which had 4 million views before being taken down) falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was believed by many even though it was false.

Helping Smaller Publishers Too

Also, Facebook acknowledges that smaller news outlets have struggled to gain exposure with its algorithms, and that there is an opportunity to deliver more local news, personalised news experiences, and more modern digital-age, independent news.  It is also likely that, knowing that young people get most of their news from online sources but have been moving away to other platforms, this could be a good way for Facebook to retain younger users.

Working With Fact-Checkers

Back in January, for example, Facebook tried to help restore trust in its brand and publicly show that it was trying to combat fake news by announcing that it was working with London-based, registered charity ‘Full Fact’ who would be reviewing stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Personalisation

The News tab will also allow users to see a personalised selection of articles, the choice of which is based upon the news they read. This personalisation will also include the ability to hide articles, topics and publishers that users choose not to see.

The Human Element

One of the key aspects of the News tab service that Facebook sees as adding value, keeping quality standards high, and providing a further safeguard against fake news is that many stories will be reviewed and chosen by experienced journalists acting as impartial and independent curators.  For example, Facebook says that “Unlike Google News, which is controlled by algorithms, Facebook News works more like Apple News, with human editors making decisions.”

Not The First Time

This is not the first time that Facebook has tried offering a news section, and it will hopefully be more successful and well-received than the ‘Trending News’ section that was criticised for bias in the 2016 presidential election and has since been phased out.

What Does This Mean For Your Business?

Only last week, Mark Zuckerberg found himself in front of the U.S. Congress answering questions about whether Facebook can be trusted to run a new cryptocurrency, and it is clear that the erosion of trust caused by how Facebook shared user data with Cambridge Analytica and how the platform was used to spread fake news in the U.S. election have cast a long shadow over the company.  Facebook has since tried many ways to regain trust e.g. working with fact-checkers, adding the ‘Why am I seeing this post?’ tool, and launching new rules for political ad transparency.

Users of social networks clearly don’t want to see fake news, the influences of which can have a damaging knock-on effect on the economic and trade environment which, in turn, affects businesses.

The launch of this News service with its human curation and fact-checking could, therefore, help Facebook kill several birds with one stone. For example, as well as going some way to helping to restore trust, it could increase the credibility of Facebook as a go-to trusted source of quality content, enable Facebook to compete with its rivals e.g. Google News, show Facebook to be a company that also cares about smaller news publishers, and act as a means to help retain younger users on its platform.

Amazon Echo and Google Home ‘Smart Spies’

Berlin-based Security Research Labs (SRL) has discovered possible hacking flaws in Amazon Echo (Alexa) and Google Home speakers, and installed its own voice applications on both device platforms to demonstrate hacks that turned the assistants into ‘Smart Spies’.

What Happened?

Research by SRL led to the discovery of two possible hacking scenarios that apply to both Amazon Alexa and Google Home which can enable a hacker to phish for sensitive information in voice content (vishing) and eavesdrop on users.

Knowing that some of the apps offered for use with Amazon Echo and Google Home devices are made by third parties with the intention of extending the capability of the speakers, SRL created its own voice apps designed to demonstrate both hacks on both device platforms. Once approved by both platforms, the apps were shown to successfully compromise the data privacy of users by using certain ‘Skills’ and ‘Actions’ both to request and collect personal data, including user passwords, and to eavesdrop on users after they believed the smart speaker had stopped listening.

Amazon and Google Told

SRL’s results and the details of the vulnerabilities were then shared with Amazon and Google through a responsible disclosure process. Google has since announced that it has removed SRL’s Actions and is putting in place mechanisms to stop something similar happening in future.  Amazon has also said that it has blocked the Skill inserted by SRL and has put in place preventative mechanisms for the future.

What Did SRL’s Apps Do?

The apps that enabled the ‘Smart Spy’ hacks took advantage of the “fallback intent” in a voice app (the bit that says “I’m sorry, I did not understand that. Can you please repeat it?”), the built-in stop intent which reacts to the user saying “stop” (by changing the functionality of that command after the apps were accepted), and a quirk in Alexa’s and Google’s Text-to-Speech engines that allows long pauses to be inserted in the speech output.

Examples of how this was put to work included:

  • Requesting the user’s password through a simple back-end change by creating a password-phishing Skill/Action. For example, a seemingly innocent application, such as a horoscope reader, was created.  When the user asked for it, they were given a false error message e.g. “it’s not available in your country”.  This triggered a minute’s silence, after which the user was told “An important security update is available for your device. Please say start update followed by your password.” Anything the user said after “start” was sent to the hacker, in this case, thankfully, SRL.
  • Faking the Stop Intent to allow eavesdropping on users. For example, when a user gave a ‘stop’ command and heard the ‘Goodbye’ message, the app was able to continue to secretly run and to pick up on certain trigger words like “I” or words indicating that personal information was about to follow, i.e. “email”, “password” or “address”. The subsequent recording was then transcribed and sent back to SRL.
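The two tricks above can be sketched in code. The snippet below is a hypothetical reconstruction (not SRL's actual code): it builds plain Alexa-style response JSON using only the Python standard library. The function names and the number of chained pauses are our own assumptions; Alexa's real SSML `<break>` tag does cap each pause at 10 seconds, which is why several breaks would need to be chained to approximate a minute of silence, and `shouldEndSession` is a genuine field in the Alexa response format that controls whether the skill keeps listening.

```python
def phishing_response(num_breaks=6):
    """Hypothetical sketch of the password-phishing trick: a fake 'error'
    message, a long silence built from chained SSML breaks (each capped
    at 10 seconds), then a bogus 'security update' prompt."""
    silence = '<break time="10s"/>' * num_breaks  # roughly a minute of silence
    ssml = (
        "<speak>This skill is not available in your country."
        + silence
        + "An important security update is available for your device. "
          "Please say start update followed by your password.</speak>"
    )
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            # Keep the session open so whatever the user says next is routed
            # back to the (malicious) skill instead of ending the interaction.
            "shouldEndSession": False,
        },
    }


def fake_stop_response():
    """Hypothetical sketch of the faked stop intent: say 'Goodbye' so the
    user believes the session has ended, but keep it alive so the skill
    can continue receiving transcribed speech."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Goodbye"},
            "shouldEndSession": False,  # an honest skill would set True here
        },
    }
```

The point of the sketch is how small the malicious change is: a single boolean field and some speech markup are enough to turn a "Goodbye" into continued listening, which is why SRL argued that app-store review of Skills and Actions needs to re-check apps after back-end updates, not just at initial approval.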

Not The First Time

This is not the first time that concerns have been raised about the spying potential of home smart speakers.  For example, back in May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact who happened to be her husband’s employee. Also, as far back as 2016, US researchers found that they could hide commands in white noise played over loudspeakers and through YouTube videos in order to get smart devices to turn on flight mode or open a website. The researchers also found that they could embed commands directly into recordings of music or spoken text.

Manual Review Opt-Out

After the controversy over the manual, human review of recordings and transcripts taken via the voice assistants of Google, Apple and Amazon, Google and Apple had to stop the practice, and Amazon has now added an opt-out option for the manual review of voice recordings and their associated transcripts taken through Alexa.

What Does This Mean For Your Business?

Digital voice assistants have become a popular feature in many home and home-business settings because they provide many value-adding functions in personal organisation, as an information point, and for entertainment and leisure.  It is good news that SRL discovered these possible hacking flaws before real hackers did (earning SRL some good PR in the process), but it also highlights a real risk to privacy and security that these devices could pose in the hands of determined hackers with relatively basic programming skills.

Users need to be aware of the listening potential of these devices, and of the possibility of malicious apps being operated through them.  Amazon and Google may also need to pay more attention to the reviewing of third party apps and of the Skills and Actions made available in their voice app stores in order to prevent this kind of thing from happening and to close all loopholes as soon as they are discovered.

Why You May Be Cautious About Installing The Latest Windows 10 Update

Some of Microsoft’s enterprise-based customers may be feeling cautious about installing the latest Windows 10 update because Microsoft warns that it could stop the Microsoft Defender Advanced Threat Protection (ATP) service from running.

The Update and Warning

The update in question is the October 15, 2019 KB4520062 (OS Build 17763.832).  The update contains a long list of improvements and fixes (see here for full details: https://support.microsoft.com/en-us/help/4520062/windows-10-update-kb4520062), but also three known issues, one of which concerns the Microsoft Defender Advanced Threat Protection (ATP) service.

What Is The ATP?

The ATP is a paid-for service, for Microsoft Enterprise customers (not Home or Pro customers), that’s designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. It offers features like endpoint behavioural sensors embedded in Windows 10, cloud security analytics, and access to threat intelligence generated by Microsoft hunters and security teams and augmented by threat intelligence provided by Microsoft’s partners.

What’s The Issue With the Update?

In the update’s release notes Microsoft says, “We suggest that devices in an affected environment do not install this optional non-security update”.

The reason given for the warning is that installing the update may cause the ATP service to stop running and fail to send reporting data.  This could leave certain enterprise customers more exposed to security threats until a solution has been found.

Microsoft also warns that an error (0xc0000409) may be received in MsSense.exe.

Not Fixed Until November

Microsoft says that although it’s working on a resolution it estimates that it won’t have a solution to the problem until November.

One of Several Update Problems Recently

This is one of several updates from Microsoft recently that have come with problems.  For example, an update on the 16th of September was reported to have caused issues with Windows Defender.  Later in September, Microsoft had to issue two emergency Windows updates to protect against some serious vulnerabilities relating to Internet Explorer and Windows Defender (anti-virus software).

Also, the October 3 update is reported to have adversely affected the Start Menu and print spooler, and the Start Menu issues were reported to be still present following the 8 October update.

What Does This Mean For Your Business?

Although Home and Pro customers need not worry about this particular issue, Microsoft’s valued Enterprise customers, who have paid for the ATP service to help them stay ahead of the game in security, may be a little worried and frustrated at having to either wait until November to enjoy the improvements of the new (optional) update in safety, or install it now and risk the loss of their ATP service, along with the associated potential security risks.

Microsoft customers seem to have suffered several problems related to updates in recent months, and Enterprise customers are likely to be those that Microsoft particularly does not want to upset.  It is likely, therefore, that Microsoft will be focusing on getting an appropriate solution to the new update issues out before November if possible.