Software

Tech Tip – Minimise Distractions With ‘Focus Assist’

If you’re using Windows 10 and would like to stay productive by minimising distractions from your operating system (e.g. notifications, sounds and alerts), ‘Focus Assist’ can help, and it can now be turned on automatically for full-screen apps.

With Focus Assist, you can choose which notifications you’d like to see and hear while working, and set automatic rules for these (using on/off toggles) to minimise distractions at certain times and during certain activities. You can also ask Focus Assist (via a simple tick box) to give you a summary of what you missed while it was on.

To use Focus Assist:

– Type ‘Focus Assist’ into the Windows 10 search box (bottom left).

– Select ‘Focus Assist Settings’ or ‘Focus Assist Rules’.

– Make your notification choices: ‘Off’, ‘Priority Only’ or ‘Alarms Only’.

– Use the on/off toggles to set your ‘Automatic Rules’.
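
If you’d prefer to jump straight to the right settings page from a script or a shortcut, Windows exposes Focus Assist under its older internal name, ‘quiethours’, via the ms-settings URI scheme. A minimal Python sketch (Windows only) is shown below.

```python
# Minimal sketch (Windows only): open the Focus Assist settings page directly.
# "quiethours" is Focus Assist's older internal name in the ms-settings URI scheme.
import os

os.startfile("ms-settings:quiethours")
```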

Amazon Echo and Google Home ‘Smart Spies’

Berlin-based Security Research Labs (SRL) has discovered possible hacking flaws in Amazon Echo (Alexa) and Google Home speakers, and installed its own voice applications on both platforms to demonstrate hacks that turned the assistants into ‘Smart Spies’.

What Happened?

Research by SRL led to the discovery of two possible hacking scenarios, applying to both Amazon Alexa and Google Home, which could enable a hacker to phish for sensitive information via voice content (‘vishing’) and to eavesdrop on users.

Since some of the apps offered for use with Amazon Echo and Google Home devices are made by third parties to extend the speakers’ capabilities, SRL was able to create its own voice apps designed to demonstrate both hacks on both platforms. Once approved by each platform, the apps were shown to successfully compromise users’ data privacy, using Skills (Alexa) and Actions (Google Home) to request and collect personal data, including user passwords, and to eavesdrop on users after they believed the smart speaker had stopped listening.

Amazon and Google Told

SRL’s results and the details of the vulnerabilities were then shared with Amazon and Google through a responsible disclosure process. Google has since announced that it has removed SRL’s Actions and is putting mechanisms in place to stop anything similar happening in future. Amazon has also said that it has blocked the Skill submitted by SRL and has put preventative mechanisms in place for the future.

What Did SRL’s Apps Do?

The apps that enabled the ‘Smart Spy’ hacks took advantage of a voice app’s ‘fallback intent’ (the bit that says “I’m sorry, I did not understand that. Can you please repeat it?”), the built-in stop intent that reacts to the user saying “stop” (by changing that command’s functionality after the apps had been approved), and a quirk in Alexa’s and Google’s text-to-speech engines that allows long pauses to be inserted into the speech output.

Examples of how this was put to work included:

  • Requesting the user’s password through a simple back-end change that turned a Skill/Action into a password-phishing tool. For example, a seemingly innocent application, such as a horoscope, was created. When the user asked for it, they were given a false error message, e.g. “it’s not available in your country”. This triggered a minute’s silence, after which the user was told: “An important security update is available for your device. Please say start update followed by your password.” Anything the user said after “start” was sent to the hacker – in this case, thankfully, SRL.
  • Faking the stop intent to allow eavesdropping on users. For example, when a user gave a ‘stop’ command and heard the ‘Goodbye’ message, the app was able to carry on running secretly and to listen out for certain trigger words like “I”, or for words indicating that personal information was about to follow, e.g. “email”, “password” or “address”. The subsequent recording was then transcribed and sent back to SRL.
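
To make the mechanics a little more concrete, here is a small, purely hypothetical Python sketch of the kind of response a custom voice app can return: it uses an SSML break tag to insert silence and keeps the session open so the app carries on listening after the spoken ‘Goodbye’. This illustrates the general technique, not SRL’s actual code (SRL reportedly stretched the pauses much further by exploiting a quirk in the text-to-speech engines).

```python
# Hypothetical sketch of an Alexa-style skill response: a silent pause after
# "Goodbye", with the session deliberately left open so the app keeps listening.
import json

def fake_goodbye_response():
    # SSML break tags are the documented way to insert pauses; SRL reportedly
    # chained together unpronounceable character sequences to make the silence
    # far longer than a standard break allows.
    ssml = '<speak>Goodbye. <break time="10s"/></speak>'
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            # Keeping the session open is what lets the app continue to
            # capture what the user says after they think it has stopped.
            "shouldEndSession": False,
        },
    }

print(json.dumps(fake_goodbye_response(), indent=2))
```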

Not The First Time

This is not the first time that concerns have been raised about the spying potential of home smart speakers. For example, back in May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact, who happened to be her husband’s employee. Also, as far back as 2016, US researchers found that they could hide commands in white noise played over loudspeakers and through YouTube videos in order to get smart devices to turn on flight mode or open a website. The researchers also found that they could embed commands directly into recordings of music or spoken text.

Manual Review Opt-Out

After the controversy over the manual (human) review of recordings and transcripts taken via Google, Apple and Amazon voice assistants, Google and Apple stopped the practice, and Amazon has now added an opt-out option for manual review of voice recordings and their associated transcripts taken through Alexa.

What Does This Mean For Your Business?

Digital voice assistants have become a popular feature in many home and home-business settings because they provide many value-adding functions for personal organisation, information, entertainment and leisure. It is good news that SRL discovered these possible hacking flaws before real hackers did (earning SRL some good PR in the process), but it also highlights the real risk to privacy and security that these devices could pose in the hands of determined hackers with relatively basic programming skills.

Users need to be aware of the listening potential of these devices, and of the possibility of malicious apps being operated through them. Amazon and Google may also need to pay more attention to reviewing the third-party apps, Skills and Actions made available in their voice app stores, to prevent this kind of thing from happening and to close loopholes as soon as they are discovered.

Why You May Be Cautious About Installing The Latest Windows 10 Update

Some of Microsoft’s enterprise-based customers may be feeling cautious about installing the latest Windows 10 update because Microsoft warns that it could stop the Microsoft Defender Advanced Threat Protection (ATP) service from running.

The Update and Warning

The update in question is the October 15, 2019 update KB4520062 (OS Build 17763.832). The update contains a long list of improvements and fixes (see here for full details: https://support.microsoft.com/en-us/help/4520062/windows-10-update-kb4520062), but also three known issues, one of which concerns the Microsoft Defender Advanced Threat Protection (ATP) service.
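
For administrators who want to confirm whether a given machine already has this optional update installed, one quick option (a hedged sketch, assuming both Python and PowerShell are available on the machine) is to call PowerShell’s Get-HotFix cmdlet:

```python
# Hedged sketch: check from Python whether KB4520062 is already installed,
# by shelling out to PowerShell's Get-HotFix cmdlet (Windows only).
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Get-HotFix -Id KB4520062"],
    capture_output=True,
    text=True,
)

# If the update is installed, Get-HotFix prints a table containing the KB ID;
# if not, it writes an error to stderr and nothing useful to stdout.
if "KB4520062" in result.stdout:
    print("KB4520062 is installed on this machine.")
else:
    print("KB4520062 does not appear to be installed.")
```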

What Is The ATP?

The ATP is a paid-for service for Microsoft Enterprise customers (not Home or Pro customers), designed to help enterprise networks prevent, detect, investigate and respond to advanced threats. It offers features such as endpoint behavioural sensors embedded in Windows 10, cloud security analytics, and access to threat intelligence generated by Microsoft hunters and security teams and augmented by threat intelligence from Microsoft’s partners.

What’s The Issue With the Update?

In the update’s release notes Microsoft says, “We suggest that devices in an affected environment do not install this optional non-security update”.

The reason given for the warning is that installing the update may cause the ATP service to stop running and fail to send reporting data. This could leave certain Enterprise customers more exposed to security threats until a solution is found.

Microsoft also warns that an error (0xc0000409) may be received in MsSense.exe.

Not Fixed Until November

Microsoft says that, although it’s working on a resolution, it doesn’t expect to have a solution to the problem until November.

One of Several Update Problems Recently

This is one of several recent Microsoft updates to have come with problems. For example, an update on 16 September was reported to have caused issues with Windows Defender. Later in September, Microsoft had to issue two emergency Windows updates to protect against serious vulnerabilities relating to Internet Explorer and Windows Defender (its anti-virus software).

Also, the 3 October update is reported to have adversely affected the Start Menu and print spooler, and the Start Menu issues were reported to still be present following the 8 October update.

What Does This Mean For Your Business?

Although Home and Pro customers need not worry about this particular issue, Microsoft’s valued Enterprise customers, who have paid for the ATP service to help them stay ahead of the game in security, may be a little worried and frustrated at having either to wait until November to enjoy the improvements of the new (optional) update in safety, or to install it now, risking the loss of their ATP service and facing the associated potential security risks.

Microsoft customers have suffered several update-related problems in recent months, and Enterprise customers are likely to be those that Microsoft particularly does not want to upset. It is likely, therefore, that Microsoft will be focusing on getting an appropriate solution to the new update’s issues before November if possible.

Any Thumbprint Unlocks a Galaxy A10

Samsung’s so-called “revolutionary” fingerprint authentication system for the Galaxy A10 phone appears to be delivering less than satisfactory results after the discovery that any thumbprint can unlock one.

Biometric ‘Fail’

South Korean phone giant Samsung has received some unwanted publicity for its new Galaxy A10 phone after an article in the Sun newspaper highlighted how a British couple discovered that, after fitting a low-priced screen protector (purchased from eBay) to the phone, either person’s thumbprint could unlock it.

The thumbprint scanner, which uses ultrasound to detect the 3D ridges of a fingerprint and is only supposed to recognise thumbprints registered by the user, is reported to have recognised both of user Lisa Neilson’s thumbprints and both of her husband’s.

Patch

Samsung is reported to have acknowledged the fault and to be in the process of preparing a software patch to fix it.

Google Pixel ‘Face Unlock’ Issue

It seems that Samsung isn’t the only company struggling to produce a biometric phone security system that works properly.

The BBC has recently reported that, after testing Google’s Pixel 4 phone’s Face Unlock system, it was discovered that with the normal default settings the phone could be unlocked even if the user’s eyes were closed. The problem with this is that the phone could potentially be unlocked by an unauthorised person while the user is asleep, simply by holding the phone in front of the user’s face.

The phone does, however, offer a ‘lockdown’ mode which users can switch to in order to deactivate the facial recognition system altogether.

Biometrics – The Way Forward?

Even though multi-factor authentication is more secure than relying on a password alone, continued reliance on weak passwords and password sharing by users, coupled with more sophisticated cyber and phone crime techniques, means there is a strong argument for biometric methods of authentication and for a move towards what Microsoft has recently described as a “passwordless future”.

What Does This Mean For Your Business?

Even though biometrics have been shown to make things much more difficult for cyber-criminals, as the A10 and Pixel 4 security systems illustrate, they have not been 100% successful to date and still need some work.

In fact, this is not the first time that a Samsung Galaxy has been in the news for a biometric issue. For example, a Reddit user recently claimed to have used a 3D printer to clone a fingerprint and then used that fake fingerprint to beat the in-display fingerprint reader on the Galaxy S10. There was also the Twitter user who claimed to have fooled the Nokia 9 PureView’s fingerprint scanner, first with somebody else’s finger and then with just a packet of chewing gum, and the incident back in May 2017 in which a BBC reporter said he had been able to fool HSBC’s biometric voice recognition system by passing his brother’s voice off as his own.

There is no doubt that the move away from passwords to biometrics is now underway, but we are still in the relatively early stages.

Tech Tip – Create Calendar Events Directly From the Taskbar

One of the new features added to Windows 10 with the September (1909) update was the ability for Calendar users to create a Calendar event directly from the Calendar flyout on the Taskbar.

To quickly and easily add your Calendar event:

– Click on the date and time at the lower right corner of the Taskbar to open the Calendar flyout.

– Pick your desired date and type in the text box to name your event.

– Use the Inline options to set a time and location.

Ex-Employee Claims Your G Suite Data Is Not Encrypted

A report by a former Google employee on the ‘Freedom of the Press Foundation’ website warns organisations that data stored in Google’s G Suite is not end-to-end encrypted, can be accessed by administrators and can be shared with law enforcement on request.

G Suite

G Suite is Google’s set of cloud-based computing, productivity and collaboration tools including Gmail, Drive (for your company documents) and Calendar.

Privacy Risk

Former Google employee Martin Shelton alleges that files stored within Google’s G Suite have no end-to-end encryption (unlike some other Google services), thereby potentially leaving business data vulnerable to being viewed by Google and by other people such as administrators. Mr Shelton reports that:

  • While Google leverages your G Suite user data for things such as spam filtering, malware detection and targeted-attack detection, it can also scan a user’s Google account for content that is illegal or in violation of Google’s policies.
  • U.S. agencies can compel Google to hand over relevant user data from G Suite accounts to aid in investigations.
  • Business versions of G Suite, such as G Suite Enterprise, give administrators tools to monitor users and search device data within the G Suite domain, giving them a remarkable level of visibility into users’ (employees’) Google activities. For example, administrators can search Gmail and Google Drive content and metadata (e.g. dates, subject lines, recipients), and can log and retain this data.
  • Administrators can monitor Gmail, Calendar, Drive, Sheets, Slides, and more, from desktop and mobile devices and can receive push alerts for certain (suspicious) behaviours.
  • Administrators can use audit logs to see who has looked at or modified each document within the organisation.
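
As a rough illustration of the level of visibility described above, the sketch below uses Google’s Admin SDK Reports API (via the google-api-python-client library) to pull recent Drive audit activity for a domain. This is a minimal sketch, assuming a domain-administrator credential with the reports audit read-only scope has already been set up; the method names follow Google’s published Reports API quickstart.

```python
# Minimal sketch: list recent Google Drive audit events for a G Suite domain
# using the Admin SDK Reports API. Assumes 'creds' is an authorised domain-admin
# credential with the admin.reports.audit.readonly scope.
from googleapiclient.discovery import build

def recent_drive_activity(creds, max_results=10):
    service = build("admin", "reports_v1", credentials=creds)
    response = service.activities().list(
        userKey="all",            # every user in the domain
        applicationName="drive",  # the Drive audit log ("login", "token", etc. also exist)
        maxResults=max_results,
    ).execute()
    return response.get("items", [])
```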

Not The First Time

This is not the first time that Google has made the news over G Suite privacy.  Back in July 2018, The Wall Street Journal highlighted how third-party developers could view Gmail users’ messages.

What Does This Mean For Your Business?

This is clearly some unwanted publicity for Google, particularly when there is fierce competition in the business Cloud services market.

The advice suggested by former Google employee Martin Shelton for those worried about G Suite’s privacy and security is to use G Suite mindfully and to give yourself a G Suite audit (of Gmail, Drive, and Google-connected activity on mobile devices). This way, if you can see certain data, you can assume that your administrator and Google are likely to be able to see it too.

Also, if you are concerned about unknown administrators seeing your G Suite data, you could consider identifying who your G Suite administrators are, which G Suite version you have (e.g. whether your organisation is using G Suite Business or Enterprise), what rules have been set in Google Vault and the audit logs, and what policies exist for administrative data retention and access.

Mr Shelton also suggests that users may wish to find another cloud service provider that offers end-to-end encrypted storage for any particularly sensitive data, or simply keep such data offline or off a computer entirely.

Digital ‘Pressure’ For Accountants

A report by IT company Prism Solutions has highlighted how traditional accountancy firms are having to change rapidly to meet challenges such as Cloud computing, GDPR and HMRC pressing quickly ahead with ‘Making Tax Digital’ (MTD).

MTD

According to the report, the whole accountancy profession is now on the verge of an evolutionary change and accountancy firms will need to develop into digital practices in order to compete and survive.

One of the key change drivers and challenges for accountancy firms is HMRC’s ongoing ‘Making Tax Digital’ (MTD) initiative, which has been designed to eradicate paper from the tax filing process and to make the UK tax system more effective, more efficient and easier for taxpayers to use.

An estimated 1.2 million businesses are subject to the MTD VAT rules (for VAT periods starting on or after 1 April 2019, or 1 October 2019 for more complex organisations) and must now keep VAT records in a digital format and submit their VAT returns to HMRC using MTD-compatible software (they can no longer do so via HMRC’s website). As a result, many are turning to accountancy firms to submit the returns on their behalf. This leaves accountancy firms with new challenges, such as having to adapt quickly to a different type of interaction with clients, who expect their accountants to be experts on the digital process and to provide instant service and issue resolution. Accountancy firms also face possible problems if HMRC doesn’t do enough to communicate MTD to the relevant businesses.
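
For a sense of what ‘MTD-compatible software’ actually does behind the scenes, the sketch below shows the general shape of a VAT return submission to HMRC’s VAT (MTD) REST API using Python’s requests library. This is illustrative only: the endpoint path, headers and nine-box field names should be checked against HMRC’s current API documentation, and a valid OAuth 2.0 access token is assumed.

```python
# Illustrative sketch of an MTD VAT return submission to HMRC's VAT (MTD) API.
# The endpoint, headers and payload shape are assumptions to be verified against
# HMRC's current developer documentation; 'access_token' is an OAuth 2.0 token
# obtained through HMRC's authorisation flow.
import requests

def submit_vat_return(vrn, access_token, nine_box_figures):
    url = f"https://api.service.hmrc.gov.uk/organisations/vat/{vrn}/returns"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/vnd.hmrc.1.0+json",
        "Content-Type": "application/json",
    }
    response = requests.post(url, json=nine_box_figures, headers=headers)
    response.raise_for_status()
    return response.json()  # HMRC acknowledges the submission with a receipt
```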

Always On

The Prism Solutions report highlights how accountancy clients now expect technology to be ‘always on’ 24/7, and that an accountancy firm’s ability to connect with its clients in real time, and to offer access to real-time data that’s always available, is an important way for it to deliver an exceptional client experience.

Other Challenges

The Prism report also notes that, just as Cloud computing, GDPR, and MTD are already having an impact on accountancy, other emerging challenges to the profession include the development of AI technologies, blockchain and crypto-currencies.

What Does This Mean For Your Business?

Having to digitise accounts is providing challenges to both businesses and accountancy firms and looks set to change aspects of the relationship between the two.  Accountancy firms are realising that embracing all forms of ‘digital’ is a key enabler to enhancing productivity, and that becoming part of the digital revolution with their clients will enable them to not just offer a better service, but also to grow as they take advantage of new revenue-generating opportunities and position themselves as the go-to adviser for their clients.

As well as expecting ‘always-on’ service and digital expertise from accountancy firms, business customers will still want to use their accountants as a source of business advice for business planning, strategy, and market development (for example), and getting better at using digitisation to do this could be another way in which accountants could keep delivering value to businesses.

AI and the Fake News War

In a “post-truth” era, AI is one of the many protective tools and weapons involved in the battles that make up the current, ongoing “fake news” war.

Fake News

Fake news has become widespread in recent years, most prominently around the UK Brexit referendum, the 2017 UK general election and the U.S. presidential election, all of which suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters. The Cambridge Analytica scandal, in which over 50 million Facebook profiles were harvested and illegally shared to build a software program that generated personalised political adverts, led to Facebook’s Mark Zuckerberg appearing before the U.S. Congress to discuss how Facebook is tackling false reports. One video shared via Facebook, for example (which had 4 million views before being taken down), falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was believed by many even though it was false.

Government Efforts

The Digital, Culture, Media and Sport Committee published a report in February on disinformation and ‘fake news’, highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”. The UK government has, therefore, been calling for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Fact-Checking

One way that social media companies have sought to tackle the concerns of governments and users is to buy in fact-checking services to weed out fake news from their platforms. For example, back in January, the London-based registered charity ‘Full Fact’ announced that it would be working for Facebook, reviewing stories, images and videos to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Moderation

A moderator-led response to fake news is one option, but its reliance upon humans means that this approach has faced criticism over its vulnerability to personal biases and perspectives.

Automation and AI

Many now consider automation and AI to be an approach and a technology ‘intelligent’, fast and scalable enough to start tackling the vast amount of fake news being produced and circulated. For example, Google and Microsoft have been using AI to automatically assess the truthfulness of articles. Initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) also seek to explore how AI technologies, particularly machine learning and natural language processing, can be leveraged to combat fake news, and support the idea that AI holds promise for automating significant parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
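
To give a flavour of that machine-learning approach, here is a deliberately tiny, hypothetical sketch: a TF-IDF plus logistic regression classifier over headlines, built with scikit-learn. The inline ‘dataset’ and labels are invented purely for illustration; real systems (such as Fake News Challenge entries) train on large labelled corpora and use far richer features, for example the stance of a headline relative to the article body.

```python
# Toy sketch of headline classification with scikit-learn.
# The four headlines and their labels are made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Smart meters emit radiation levels that are harmful to health",
    "Local council confirms new recycling collection dates",
    "Scientists say chocolate cures all known diseases overnight",
    "Bank of England holds interest rates at current level",
]
labels = [1, 0, 1, 0]   # 1 = likely misinformation, 0 = likely genuine (toy labels)

# TF-IDF turns each headline into a weighted word-frequency vector;
# logistic regression then learns a simple decision boundary over those vectors.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Miracle gadget lets your phone run forever without charging"]))
```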

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Deepfake Videos

Deepfake videos are an example of how AI can be used to create fake news in the first place. They use deep learning technology and manipulated images of target individuals (found online) – often celebrities, politicians and other well-known people – to create embarrassing or scandalous videos. Deepfake audio can be manipulated in a similar way. Deepfake videos aren’t just used to create fake news; they can also be used by cyber-criminals for extortion.

AI Voice

There was also a case in March this year in which a group of hackers was able to use AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Does This Mean For Your Business?

Fake news is a real and growing threat, as has been demonstrated in the use of Facebook to disseminate fake news during the UK referendum, the 2017 UK general election, and the U.S. presidential election. State-sponsored politically targeted campaigns can have a massive influence on an entire economy, whereas other fake news campaigns can affect public attitudes to ideas and people and can lead to many other complex problems.

Moderation and automated AI may both suffer from bias, but they are at least two ways in which fake news can be tackled, to an extent. By adding fact-checking services, other monitoring and software-based approaches (e.g. through browsers), social media and other tech companies can take responsibility for weeding out and guarding against fake news.

Governments can also help in the fight by putting pressure on social media companies and by collaborating with them to keep the momentum going and to help develop and monitor ways to keep tackling fake news.

That said, it’s still a big problem and no solution is infallible. All of us, as individuals, would do well to remember that, especially today, you really can’t believe everything you read, and that an eye to the source and bias of a news item, coupled with a degree of scepticism, is often healthy.

Google’s Chrome To Block Mixed Content Pages Without HTTPS

Google has announced that in a series of steps starting in Chrome 79, all mixed content will gradually be blocked by default.

What Is Mixed Content?

Mixed content refers to insecure http:// sub-resources that load within https:// pages, thereby creating a possible way in for attackers to compromise what appears to be a secure web page. For example, this could be any audio, video or images that are loaded insecurely over HTTP but appear as part of an HTTPS page when it loads. Many browsers already block other types of mixed content, such as scripts and iframes, by default.
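
As a rough illustration of what a mixed-content check involves, the sketch below fetches a page over HTTPS and lists sub-resources that are still referenced over plain http://. It uses only the Python standard library and is a simplification: a real audit would also cover CSS, inline styles, scripts that inject resources, and so on. The example URL is a placeholder.

```python
# Minimal sketch: flag http:// sub-resources referenced by an https:// page.
# Only src attributes and <link href> (e.g. stylesheets) are checked; ordinary
# <a href> links are navigation, not mixed content.
from html.parser import HTMLParser
from urllib.request import urlopen

class MixedContentFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        candidate = attrs.get("src") or (attrs.get("href") if tag == "link" else None)
        if candidate and candidate.startswith("http://"):
            self.insecure.append((tag, candidate))

page_url = "https://example.com/"   # placeholder: the page you want to check
html = urlopen(page_url).read().decode("utf-8", errors="replace")

finder = MixedContentFinder()
finder.feed(html)

for tag, url in finder.insecure:
    print(f"Insecure {tag} resource: {url}")
```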

Why Worry?

Mixed content from a non-secure source poses privacy and security risks and could provide a way for attackers to spread misinformation. For example, an attacker could alter a chart to mislead viewers or could hide a tracking cookie in a mixed resource load. Also, the mix of secure and insecure content in a page can confuse the browser’s security UX. Google’s own research shows that mobile devices account for the majority of unencrypted end-user traffic.

What Does HTTPS Do?

HTTPS provides a secure, encrypted channel for web connections that can protect users against issues such as eavesdroppers, man-in-the-middle attacks and hijackers spoofing a trusted website. The kind of encryption offered by HTTPS stops interception of your information and ensures the integrity of the information that you send and receive.

Older hardware and software can pose a privacy and security risk because it often doesn’t support modern encryption technologies.
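
If you want to see for yourself what a given server actually negotiates, a quick sketch using only Python’s standard library is shown below: it reports the agreed TLS protocol version and some certificate details. The hostname is a placeholder.

```python
# Quick sketch: report the negotiated TLS version and certificate details
# for a server, using only the Python standard library.
import socket
import ssl

host = "example.com"   # placeholder: swap in the site you want to inspect
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())   # e.g. TLSv1.3
        cert = tls.getpeercert()
        subject = dict(item[0] for item in cert["subject"])
        print("Certificate issued to:", subject.get("commonName"))
        print("Valid until:", cert["notAfter"])
```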

Progress

Progress has been made to make web browsing more secure with the move towards the full introduction of HTTPS, and Google is keen to point out that Chrome users now spend over 90% of their browsing time on HTTPS on all major platforms.

Google now sees its next task as ensuring that HTTPS configurations across the web are secure and up to date.

Roll-Out In Steps

Google says that the roll-out of its mixed content blocking will happen in a series of steps, starting with the release of Chrome 79 (in December 2019), which introduces a new setting to unblock mixed content on specific sites. Next, Chrome 80 (due for release in January 2020) will auto-upgrade mixed audio and video resources to https:// and will display a “Not Secure” chip in the Omnibox for mixed images.
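
Site owners who want to get ahead of these changes don’t have to wait for Chrome: the standard Content-Security-Policy directive ‘upgrade-insecure-requests’ asks supporting browsers to fetch a page’s http:// sub-resources over https:// instead. Below is a minimal sketch of adding that header, using Flask purely as an illustrative framework; any web server or framework can set the same response header.

```python
# Minimal sketch: send the standard CSP "upgrade-insecure-requests" directive
# so that supporting browsers fetch http:// sub-resources over https:// instead.
# Flask is used here only as an example framework.
from flask import Flask

app = Flask(__name__)

@app.after_request
def upgrade_insecure(response):
    response.headers["Content-Security-Policy"] = "upgrade-insecure-requests"
    return response

@app.route("/")
def index():
    # This image reference would be upgraded to https:// by a supporting browser.
    return '<img src="http://example.com/chart.png">'

if __name__ == "__main__":
    app.run()
```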

What Does This Mean For Your Business?

The introduction of measures to display warnings about and to block mixed content will put pressure on some businesses to clean up their web pages and make it more difficult for cyber-criminals to find a way through browser security.  This is good news for businesses and web users alike.

It should be remembered, however, that secure websites with encrypted connections can still be harmed by certain cryptographic weaknesses e.g. due to external or related-domain hosts, so it’s important for businesses and individuals to keep up to date with software patches and fixes.

Tech Tip – Twobird

New email client app ‘Twobird’ allows you to put all your emails in one place and create notes and reminders on the fly (attaching the notes to emails).

Twobird has been billed as “a new kind of email app” that offers email at the speed of live chat. It brings together all your everyday tools – writing emails, creating notes, setting reminders and assigning to-dos – all in your inbox. If, for example, you’ve scheduled an appointment, it will alert you at just the right time.

Features include:

– Remind: allowing you to schedule an email or note to appear in your inbox later.

– Low Priority: lets you set aside automated messages so they don’t distract you.

– Pinned and Recent: this lets you keep important notes and conversations easily accessible.

– Tidy Up: archives any inactive conversations so your inbox stays fresh.

Twobird is available in the Google Play store.