Security

Crypto-Mining Apps Discovered in Microsoft Store

Security researchers at Symantec claim to have discovered eight apps in the Microsoft Store which, if downloaded, can use the victim’s computer to mine crypto-currency.

Only There For A Short Time Last Year

The suspect apps are reported to have only been on the Microsoft Store for a short time between April and December 2018, but it is thought that they still managed to achieve significant download numbers, as indicated by nearly 1,900 ratings posted for the apps.

Which Apps?

The suspect apps, in this case, are Fast-search Lite, Battery Optimizer (Tutorials), VPN Browsers+, Downloader for YouTube Videos, Clean Master+ (Tutorials), FastTube, Findoo Browser 2019, and Findoo Mobile & Desktop Search.  These apps have now been removed from the Microsoft Store.

What Is Crypto-currency Mining?

‘Crypto-currency mining’ involves installing ‘mining script’ code such as Coin Hive into multiple web pages without the knowledge of the web page visitor, or often the website owner. The computers of the unwitting visitors are then harnessed together so that their combined computing power can be used to solve the mathematical problems that generate units of crypto-currency. The scammer behind the script claims the resulting coins, hence ‘mining’ for crypto-currency.

Crypto-currency mining software tends to be written in JavaScript and sends any coins mined by the browser to the owner of the website. If you visit a website where it is being used (embedded in the web page), you may notice that power consumption and your browser’s CPU usage increase, and your computer will start to lag and become unresponsive. These slowing, lagging symptoms end when you leave the web page.

Mining For Monero

In the case of the eight suspect apps, they had been loaded with a script designed to mine the ‘Monero’ crypto-currency.  Monero, which was created in April 2014, is a decentralised crypto-currency that uses an obfuscated public ledger.  This means that anybody can broadcast or send transactions, but no outside observer can tell the source.

How?

The secret mining element of the eight suspect apps worked by triggering Google Tag Manager (GTM) in their domain servers as soon as they were downloaded.  The GTM, which was shared across all eight apps, enabled them to fetch a coin-mining JavaScript library, and the mining script was then able to use most of the computer’s CPU cycles to mine Monero.

GTM – Legitimate

GTM is a legitimate tool that is designed to enable developers to inject JavaScript dynamically into their applications.  In this case, however, it was used as a cloak to conceal the malicious purpose of the apps.

Not The First Time

This is not the first time that suspect apps have been discovered lurking in popular, legitimate app stores. Back in January, for example, security researchers discovered 36 fake and malicious apps for Android, masquerading as security tools in the trusted Google Play Store, that can harvest a user’s data and track their location. The apps, which had reassuring names such as Security Defender and Security Keeper, were found to be hiding malware, adware and even tracking software.

Also, back in November 2017, a fake version of WhatsApp, the free, cross-platform instant messaging service for smartphones, was downloaded from the Google Play store by more than one million unsuspecting people before it was discovered to be fake.

What Does This Mean For Your Business?

This is not the first time that apps which perform legitimate functions on the surface, and are available from trusted sources such as the Microsoft Store, have been found to contain hidden malicious elements, in this case mining scripts.  The increased CPU usage and slowing down of computers caused by mining scripts waste time and money for businesses.  The increasingly sophisticated activities of crypto-jackers and other cyber-criminals, combined with a global shortage of skilled cyber-security professionals to handle detection and response, have left businesses vulnerable to this kind of hidden app-based threat.

Although the obvious advice is to always check what you are downloading and the source of the download, the difference between fake apps and real apps can be subtle, and even Microsoft and Google don’t always seem to be able to detect the hidden aspects of some apps.

The fact that many of us now store most of our personal and business lives on our smartphones makes reports such as these more alarming. It also undermines our confidence in (and causes potentially costly damage to) the brands that are associated with such incidents e.g. the reputation of Microsoft Store.

Some of the ways that we can try to protect ourselves and our businesses from this kind of threat include checking the publisher of an app, checking which permissions the app requests when you install it, deleting apps from your phone that you no longer use, and contacting your phone’s service provider, or visiting a high street store, if you think you’ve downloaded a malicious/suspect app.

Also, if you are using an ad blocker on your computer, you can set it to block specific JavaScript URLs related to crypto-mining, and some popular browsers also have extensions that can help, e.g. a browser extension called ‘No Coin’ is available for Chrome, Firefox and Opera (to stop Coin Hive mining code being used through your browser).  Maintaining vigilance for unusual computer symptoms, keeping security patches updated, and raising awareness within your company of current crypto-currency mining threats and scams and what to do to prevent them, are just some of the other ways that you can maintain a basic level of protection for your business.
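As a rough illustration of how URL-based blocking works, the sketch below scans a page’s HTML for script tags pointing at domains associated with browser mining. The blocklist entries are illustrative examples only; real blocklists, such as the one maintained by the ‘No Coin’ project, are far longer and actively updated.

```python
import re

# Illustrative blocklist patterns (example entries only, not a complete list).
MINING_SCRIPT_PATTERNS = [
    r"coinhive\.com/lib",
    r"coin-hive\.com",
    r"cryptoloot",
]

def find_mining_scripts(html: str) -> list:
    """Return the src of any <script> tag matching a blocklist pattern."""
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE)
    return [s for s in srcs
            if any(re.search(p, s, re.IGNORECASE) for p in MINING_SCRIPT_PATTERNS)]

page = '<script src="https://coinhive.com/lib/coinhive.min.js"></script>'
print(find_mining_scripts(page))  # the embedded mining script is flagged
```

Ad blockers and extensions apply essentially this kind of pattern matching to every script a page tries to load, refusing to fetch anything that matches.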

Scooter Hack Threat

An investigation by researchers at Zimperium® found a security flaw in the Xiaomi M365 electric scooter (the same model that is used by ridesharing companies) which could allow hackers to take control of the scooter’s acceleration and braking.

Xiaomi M365

The Xiaomi M365 is a folding, lightweight, stand-on ‘smart’ scooter with an electric motor that retails online for around £300 to £400. It is battery-powered, with a maximum speed of 15 mph, and features a “Smart App” that can track a user’s riding habits and speed, as well as the battery life, and more.

What Security Flaw?

The ‘smart’ scooter has a Bluetooth connection so that users can interact with its features, e.g. its Anti-Theft System, or update the scooter’s firmware, via an app. Each scooter is protected by a password, but the security flaw identified by the Zimperium® researchers is that the password is only used for validation and authentication by the app; commands can still be executed on the scooter itself without the password.

The researchers found that they could use the Bluetooth connection as a way in.  Using this kind of hack, it is estimated that an attacker only needs to be within 100 metres of the scooter to be able to launch a denial-of-service attack via Bluetooth which could enable them to install malicious firmware.  This firmware could be used by the attacker to take control of the scooter’s acceleration and braking, meaning the rider could be in danger if an attacker chose to suddenly and remotely cause the scooter to brake or accelerate without warning.  Also, the researchers found that they could use this kind of attack to lock a scooter via its anti-theft feature without authentication or the user’s consent.
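The flaw described above, a password checked only inside the companion app while the device itself accepts commands unauthenticated, is a classic client-side-validation mistake. A minimal sketch of the antipattern (hypothetical names, not Xiaomi’s actual protocol):

```python
class Scooter:
    """Models a device that trusts any command arriving over its radio link."""
    def __init__(self):
        self.locked = False

    def handle_command(self, command: str) -> str:
        # Flawed design: no authentication check here -- the password is
        # only ever verified inside the companion app, not on the device.
        if command == "LOCK":
            self.locked = True
        return "OK"

class CompanionApp:
    """The official app dutifully checks the password before sending anything..."""
    def __init__(self, scooter, password):
        self.scooter, self.password = scooter, password

    def lock(self, password_attempt: str) -> str:
        if password_attempt != self.password:
            return "DENIED"          # ...but this check protects nothing
        return self.scooter.handle_command("LOCK")

scooter = Scooter()
app = CompanionApp(scooter, "secret")
print(app.lock("wrong"))               # the app refuses: DENIED
# An attacker simply bypasses the app and talks to the device directly:
print(scooter.handle_command("LOCK"))  # OK -- the scooter is now locked
```

The fix is for the device itself to verify credentials on every command, so that bypassing the app gains an attacker nothing.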

Told The Company

The researchers made a video of their findings as proof, contacted Xiaomi and informed the company about the nature of the security flaw. It has been reported that Xiaomi confirmed that it is a known issue internally, but no announcement has been made yet about a fix.  The researchers at Zimperium® have stated online that the scooter’s security can’t be fixed by the user and still needs to be updated by Xiaomi or any third parties they work with.

Suggestion From The Researchers

The researchers have suggested that, in the absence of a fix to date, users can stop attackers from connecting to the scooter remotely by connecting to it with Xiaomi’s app from their mobile before riding.  Once the user’s mobile is connected, and stays connected, to the scooter, an attacker can’t remotely flash malicious firmware or lock the scooter.

What Does This Mean For Your Business?

This is another example of how smart/IoT products of all kinds can be vulnerable to attack via their Bluetooth or Internet connections, particularly where there are password issues.  Usually, the risk comes from smart products from the same manufacturer all being given the same default password, which the user doesn’t change.  In this case, the password works with the app, but it appears that the password isn’t being used properly to protect the product itself.

There have been many examples to date of smart products being vulnerable to attack.  For example, back in November 2017, the German telecoms regulator, the Federal Network Agency, banned the sale of smartwatches to children and asked parents to destroy any that they already have over fears that they could be hacked and children could be spied upon.  Also, back in 2016, cyber-criminals were able to take over many thousands of household IoT devices (white goods, CCTV cameras and printers) and use them together as a botnet to launch an online DDoS attack (Mirai) on the DNS service ‘Dyn’, with global consequences, i.e. putting Twitter, Spotify and Reddit temporarily out of action.

Manufacturers of smart products clearly need to take great care in the R&D process to make sure that the online security aspects have been thoroughly examined. Any company deploying IoT devices in any environment should also require the supply chain to provide evidence of adherence to a well-written set of procurement guidelines that relate to specific and measurable criteria.  In the mobile ecosystem and in adjacent industries, for example, the GSMA provides guidelines to help with IoT security.

As buyers of smart products, making sure that we change default passwords, and making sure that we stay up to date with any patches and fixes for smart products can be ways to reduce some of the risks.   Businesses may also want to conduct an audit and risk assessment for known IoT devices that are used in the business.

Potential Jail For Clicking on Terror Links

The new UK Counter-Terrorism and Border Security Act 2019 means that you could face up to 15 years in jail if you visit web pages where you can obtain information that’s deemed to be useful to ‘committing or preparing an act of terrorism’.

Really?

The government states that the Act is needed to “make provision in relation to terrorism; to make provision enabling persons at ports and borders to be questioned for national security and other related purposes; and for connected purposes”.

As shown online at legislation.gov.uk, Chapter 1, Section 3 of the Act amends Section 58 of the Terrorism Act 2000 (collection of information) so that, unless you’re carrying out work as a journalist or for academic research, a person who “views, or otherwise accesses, by means of the internet a document or record containing information of that kind”, i.e. (under the new subsection) information of a kind likely to be useful to a person committing or preparing an act of terrorism, can be punished under the new Act.

Longer Sentences

The new Act increases the sentences from The Terrorism Act 2000, so that a sentence of 15 years is now possible in some circumstances.

The Most Terror Deaths in Europe in 2017

A Europol report showed that the UK suffered more deaths as a result of terror attacks than any other country in Europe in 2017.  The bill which has now become law was first introduced on 6th June 2018 after calls for urgent action to deal with terrorism, following three terrorist attacks on the UK within three months in 2017.

Online Problem

One of the key areas that it is hoped the law will help to tackle is how the internet and particularly social media can be used to recruit, radicalise and raise money.

Criticism

The new Act, which received royal assent on 12th February, has been criticised by some as being inflexible, based too much upon ‘thought crime’, and being likely to affect more of those at the receiving end of information rather than those producing and distributing it.  The new law has also been criticised for infringing upon the privacy and freedom of individuals to freely browse the internet in private without fear of criminal repercussion, as long as that browsing doesn’t contribute to the dissemination of materials that incite violent or intolerant behaviour.

The new Act has been further criticised by MPs for breaching human rights and has been criticised by legal experts such as Max Hill QC, the Independent Reviewer of Terrorism Legislation, who is reported as saying that the new law may catch far too many people, and that a 15-year prison sentence is “difficult to countenance when nothing is to be done with the material, it is not passed to a third party, and it is not being collected for a terrorist purpose.”

What Does This Mean For Your Business?

We may assume that most people will be unlikely to willingly view the kind of material that could result in a prison sentence, and many in the UK are likely to welcome a law that provides greater protection against those who plan and commit terror attacks or who are seeking to use online means to recruit, radicalise and raise money.  The worry is that such a law should not be so stringent and inflexible as to punish those who are not viewing or collecting material for terrorist purposes, and there are clearly many prominent commentators who believe that this law may do this.

Businesses, organisations and venues of all kinds are often caught up in (or are the focus of) terror attacks and/or must ensure that they invest in security and other measures to make sure that their customers, staff and other stakeholders are protected.  A safer environment for all in the UK is, of course, welcome, but many would argue that this should not be at the expense of the levels of freedom and privacy that we currently enjoy.

Tech Tip – Encrypting Documents Stored on Google Drive

If you use Google Drive to store files in the cloud but are worried that Google doesn’t provide a true password protection feature, you may want to encrypt your files before uploading them.  Here’s how:

If you have Microsoft Office on your PC, it has a built-in encryption feature.

– Go to: File > Protect Document > Encrypt with Password.

– Upload the file to Google Docs.

– Google can’t read the file, but it can be downloaded and opened on any PC with Microsoft Office installed (using the password).

– If you don’t have Microsoft Office, you could use Boxcryptor.  This is free for syncing one cloud storage service between two PCs.

– Install Boxcryptor (see boxcryptor.com).

– Enable Google Drive in Boxcryptor’s settings.

– Access Boxcryptor from Windows Explorer’s sidebar.

– Go to: Boxcryptor > Encrypt option, and watch the checkbox turn green.

The encrypted files will then be placed in Google Drive, but won’t be accessible unless you have Boxcryptor installed and logged in.

If you’re looking for a solution that’s free and can be used with any cloud storage service and any device, you may want to try Veracrypt (for Windows, macOS, and Linux).  It creates an encrypted container where you can store the files you want and put it anywhere for safekeeping.

– Install Veracrypt (see veracrypt.fr).

– Create a new encrypted file container within your Google Drive folder.

– Mount that file from Veracrypt’s main window (it will appear as if it were an external hard drive).

– Drag your sensitive files there and unmount the volume.

You will need Veracrypt installed on any PC you use to access the documents inside that container.
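For the scripting-inclined, the same “encrypt before upload” idea can be sketched in a few lines of Python, assuming the third-party `cryptography` package is installed. This is purely an illustration of the principle, not a replacement for the tools above: the key must be kept outside the cloud-synced folder, or the exercise is pointless.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and keep it OUTSIDE the cloud-synced folder.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"Quarterly figures - confidential"

# Encrypt before the file ever reaches the Google Drive folder...
ciphertext = f.encrypt(plaintext)
assert ciphertext != plaintext      # the stored copy is unreadable to Google

# ...and decrypt locally after downloading it again.
assert f.decrypt(ciphertext) == plaintext
```

As with Boxcryptor and Veracrypt, anyone without the key (including the cloud provider) sees only ciphertext.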

Russia Plans Disconnect From Rest of World Internet For Cyber-Defence Test

Russia has set itself a deadline of 1st April to test “unplugging” the entire country from the global Internet for reasons relating to defence and control.

Giant Intranet Dubbed “Runet”

The impending test of a complete ‘pulling up of the drawbridge’ from the rest of the world is being planned in order to ensure compliance with a new (draft) law in Russia called the Digital Economy National Program.  This will require Russia’s ISPs to show that they can operate in the event of any foreign powers acting to isolate the country online with a “targeted large-scale external influence” i.e. a cyber-attack.

The plan, which is being overseen by Natalya Kasperskaya, co-founder of the antivirus company Kaspersky and former wife of its CEO Eugene, will mean that Russia can unplug from the wider Internet and create its own internal ‘Intranet’ (the ‘Runet’), where data can still pass between Russian citizens and organisations inside the nation rather than being routed internationally.

Moving Router Points Inside Russia

A move of this scale involves attempting to move the country’s key router points inside Russia. This means that ISPs will have to show that they can direct all Internet traffic entering and leaving Russia through state-controlled routing points, whereby traffic can be filtered so that, if required, traffic destined for outside Russia is discarded, and attempts to launch cyber-attacks on Russia can be more easily detected and thwarted.

Own Version of DNS

Other measures needed to give Russia the ability to completely unplug include building its own version of the net’s DNS address system. This is currently overseen by 12 organisations outside Russia, but copies of the net’s core address book now exist inside Russia.

Why?

Russia has been implicated in many different international incidents that could provoke cyber-attack reprisals and misinformation interference, for example, the alleged interference in the US presidential election campaign and the UK referendum, and the Novichok attack in Salisbury.  There has also been a deterioration of the relationship between the US and Russia, and widespread criticism of Russia in the western media.

Censorship and Control?

Even though the word from Russia is that the ability to ‘unplug’ is for defence from external aggression, many commentators see it as a move to exert more state control in a way that is perhaps similar to that seen in China with its extensive firewall.

In Russia, control of social media could, for example, thwart attempts from the people to organise mass protests against Putin, such as those seen in 2011-13.

Also, the ability to control what people can see and say online can mean that websites that promote anti-state views and information can be blacklisted. It has been reported that there is already an extensive blacklist of banned websites and that Russia now requires popular bloggers to register with the state.  There have also been reports of Russians facing fines and jail for social media posts that have been judged to have ridiculed the Kremlin or Orthodox Church.

What Does This Mean For Your Business?

Business and trade tend to benefit from open channels of communication, and when states move to shut down communication channels in this way, it prevents the promotion and advertising of products, creates costs and bureaucracy, and damages the prospects and competitiveness of those organisations exporting to and from Russia. This kind of communications shutdown may be useful for the purposes of the state, but it can only really be harmful for international trade, and for those businesses within Russia itself looking to sell overseas.

Anything that portrays the image of a controlling and/or inward-looking state can also damage industries such as tourism and can make companies in those states appear to be risky to deal with.

Man Fined After Hiding From Facial Recognition Cameras

A man was given a public order fine after being stopped by police because he covered his face during a trial of facial recognition cameras in Romford, London.

What Facial Recognition Trial?

A deliberately “overt” trial of live facial recognition technology by the Metropolitan Police took place in the centre of Romford, London, on Thursday 31st January.  This was supposed to be the first day of a two-day trial of the technology, but the second day was cancelled due to concerns that the forecast snow would only bring a low level of footfall in the area.

Live facial recognition trials of this kind use vehicle-mounted cameras linked to a watchlist of selected images from the police database.  Officers are deployed nearby so that they can stop those persons identified as matching suspects on the watchlist.

In the Romford trial, the facial recognition filming was reported to have taken place from a parked police van and, according to the Metropolitan Police, the reason for the use of the technology was to reduce crime in the area, with a specific focus on tackling violent crime.

Why The Fine?

The trial also attracted the attention of human rights groups, such as Liberty and Big Brother Watch, members of which were nearby and were monitoring the trial.

It was reported that the man who was fined, who hasn’t been named by police, was observed pulling his jumper over part of his face and putting his head down while walking past the police cameras, possibly in response to having seen placards warning passers-by that they were being filmed by police automatic facial recognition cameras.

It has been reported that the police then stopped the man to talk to him about what they may have believed was suspicious behaviour and asked to see his identification. According to police reports, it was at this point that the man became aggressive, made threats towards officers and was issued with a penalty notice for disorder as a result.

8 Hours, 8 Arrests – But Only 3 From Technology

Reports indicate that the eight-hour trial of the technology resulted in eight arrests, but only three of those arrests were as a direct result of facial recognition technology.

Criticism

Some commentators have criticised this and other trials for being shambolic, for not providing value for money, and for resulting in mistaken identity.

Research Questions Reliability

Research by Cardiff University examined the use of facial recognition technology across several sporting and entertainment events in Cardiff over the course of a year, including the UEFA Champions League Final and the Autumn Rugby Internationals.  The research found that for 68% of submissions made by police officers in the Identify mode, the image quality was too low for the system to work. Also, the research found that the Locate mode of the FRT system couldn’t correctly identify a person of interest 76% of the time.

Also, in December 2018, ICO head Elizabeth Denham was reported to have launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

What Does This Mean For Your Business?

It has been reported that, despite over £200,000 being spent on six deployments of facial recognition trials between August 2016 and July 2018, no arrests were made.  On the surface, these figures suggest that, although the technology has the potential to add value and save costs, and although businesses in town centres are likely to welcome efforts to reduce crime, the trials to date don’t appear to have delivered value-for-money to taxpayers.

There was also criticism of the facial recognition system used in Soho, Piccadilly Circus and Leicester Square over two days in the run-up to Christmas, where freedom campaigners such as Big Brother Watch and Liberty were concerned about mixed messages from police about how those who turn away from facial recognition cameras mounted in/on police vans because they don’t want to be scanned could be treated.

Despite some valid worries and criticism, most businesses and members of the public would probably agree that CCTV systems have a real value in helping to deter criminal activity, locating and catching perpetrators, and providing evidence for arrests and trials.  There are, however, several concerns, particularly among freedom and privacy groups, about just how facial recognition systems are being (and will be) used as part of policing, e.g. overt or covert use, issues of consent, possible wrongful arrests due to system inaccuracies, and the widening of the scope of its purpose from the police’s stated aims.  Issues of trust where our personal data is concerned are still a problem, as are worries about a ‘big brother’ situation for many people.

$180 Million Password Taken To The Grave

115,000 customers of the Canadian digital platform Quadriga are believed to be owed C$250 million, but C$180 million (US$137.21 million) in cryptocurrencies has been frozen after the platform’s founder, who was the only person with the password to the platform’s stored funds, died in December 2018.

What Is Quadriga?

QuadrigaCX is a Canadian cryptocurrency exchange/platform, which allows the trading of Bitcoin, Litecoin and Ethereum.  QuadrigaCX was founded by Gerald Cotten, was Canada’s largest cryptocurrency exchange until 2019, and has 363,000 registered users.

Cold Storage

As part of QuadrigaCX’s security measures, ‘cold storage’ was used for most of the Bitcoins within their system. Unfortunately for Quadriga, it is this part of the system, where the bulk of their funds are stored, that is ultimately protected by one main password known only to the late founder, Gerald Cotten.

Dead

Mr Cotten died aged 30 from complications related to Crohn’s disease while he was volunteering at an orphanage in India.

Widow Under Pressure

With so much money owed to customers, Mr Cotten’s widow, Jennifer Robertson, is reported to have found herself under pressure to find the password.  It has been reported that Robertson, who was not involved in Cotten’s business while he was alive and does not have business records for QuadrigaCX, has conducted repeated searches for the password.

Although Robertson has Mr Cotten’s laptop, she has (so far) been unable to access the contents because it is encrypted, and no one has the password or recovery key for it. Additional attempts to decrypt the laptop have also been unsuccessful.

It has also been reported that Robertson has consulted an expert to help recover details from Cotten’s other computer and cell phones, although the expert’s attempts have been reported to have had only ‘limited’ success to date.

QuadrigaCX has now filed for “creditor protection” in an attempt to avoid bankruptcy.

Customers Unable to Withdraw Funds

In the meantime, customers have reported online that they have been unable to withdraw their funds from the platform for months, that they have only received limited information, and that the website was also recently taken down for maintenance.

What Does This Mean For Your Business?

This story highlights some of the risks associated with cryptocurrencies, and how a lack of regulation in a market that’s still in its relatively early stages can leave investors in unusual, worrying situations such as this one. In many other types of financial business handling that level of funds, it would be highly unlikely that a single password known only to one person would play such an important role. Some would say it’s ironic that passwords are now often considered much less secure than other security tools, and yet this password-controlled system has confounded even the experts so far.  It is also ironic that the ‘cold storage’ of funds, in this case, was introduced as a security measure to protect customer funds but has ended up being so secure that customers have no access to those funds.
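A standard mitigation for this single-keyholder failure is to split a master secret so that no one person holds it alone. The sketch below shows the simplest form, an XOR-based split where all shares are required; real custodial deployments typically use threshold schemes such as Shamir’s Secret Sharing, so that a quorum (say, 2 of 3 custodians) can recover the key even if one share is lost.

```python
import secrets

def split_secret(secret: bytes, n: int) -> list:
    """Split `secret` into n shares; ALL n are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:                       # XOR the random shares into
        last = bytes(a ^ b for a, b in zip(last, share))  # the final share
    return shares + [last]

def combine_shares(shares: list) -> bytes:
    """XOR all shares together to recover the original secret."""
    result = bytes(len(shares[0]))
    for share in shares:
        result = bytes(a ^ b for a, b in zip(result, share))
    return result

master_key = b"cold-storage-master-key"
shares = split_secret(master_key, 3)         # one share per custodian
assert combine_shares(shares) == master_key  # all three together recover it
```

Any share on its own is indistinguishable from random bytes, so no single custodian (or thief of one share) learns anything about the key.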

Looking at the size of QuadrigaCX and the number of customers it has, cryptocurrencies clearly still provide a useful and valuable opportunity for trading and investment. They have, however, had a turbulent life to date, making the news for many negative reasons.  Bitcoin alone has faced regulations and restrictions in some countries (e.g. China), hacks, volatility, a negative image from its use by international criminals and in scams, and a lack of knowledge about how to use it. The high price of just one bitcoin made it (even more) niche, so that it became a commodity and a fast-buck opportunity rather than an actual, useful currency, and its over-inflated value led to a spectacular fall.  There have also been well-publicised falls in value for crypto-currencies such as Ethereum’s ‘ether’ and Ripple’s XRP, and Tether found itself being investigated by the U.S. Department of Justice over possible manipulation of bitcoin prices at the end of 2017.

All this said, many governments and banks would still like a ‘piece of the action’ of cryptocurrencies, and many market analysts see a future for them as a part of a wider ecosystem.

Apple’s Video-Calling ‘Eavesdropping’ Bug

Apple Inc has found itself at the centre of a security alert after a bug in the group-calling function of its FaceTime video-calling feature was found to allow a caller to eavesdrop on a call’s recipient before the call was answered.

Sound, Video & Broadcasting

As well as allowing the caller to hear audio from the recipient’s phone even if the recipient had not yet picked up the call, the same bug was found to allow callers to see video of the person they were calling if the recipient pressed the power button on the side of the iPhone, e.g. to silence/ignore the incoming call. This was because pressing the power button effectively started a broadcast from the recipient’s phone to the caller’s phone.

Data Privacy Day

Unfortunately for Apple, insult was added to injury as news of the bug was announced on Data Privacy Day, a global event that was introduced by the Council of Europe in 2007 in order to raise awareness about the importance of protecting privacy. Shortly before news of the Apple group FaceTime bug was made public, Apple’s Chief Executive, Tim Cook, had taken to Twitter to highlight the importance of privacy protection.

It Never Rains…But It Pours

To make things even worse, news of the bug was made public on the day before Apple was due to announce its reduced revenue forecast figures as part of its quarterly financial results. Apple has publicly reduced its expected revenue forecast by £3.8bn.  Apple’s chief executive put the blame for the revised lower revenue mainly on the unforeseen “magnitude of the economic deceleration, particularly in Greater China”.  He also blamed several other factors such as a battery replacement programme, problems with foreign exchange fluctuations, and the end of carrier subsidies for new phones.

Feature Disabled

In order to close the security and privacy hole that the bug created, Apple announced online that it had disabled the Group FaceTime feature at 3:16 AM on Tuesday.

Fix On The Way

Apple has announced that a fix for the bug will be available later this week as part of Apple’s iOS 12.2 update.

What Does This Mean For Your Business?

Apple has disabled the Group FaceTime feature with the promise of a fix within days, which should provide protection from any new attempts to exploit the bug. Those users who are especially concerned can also decide to disable FaceTime in the iPhone altogether via the phone’s settings.

Even though the feature has been disabled, the potential seriousness of allowing eavesdropping on private conversations and the broadcasting of video from a call recipient’s phone appears to have been a major threat to the privacy and security of some Apple phone users.  This has caused some tech commentators to express surprise that a bug like this could be discovered in the trusted, trillion-dollar company’s products, and concern that users who, for whatever reason, don’t update their phones to the latest operating system may not be protected.

Research Reveals Top-Selling Car Keyless Theft Risk

Research by consumer group Which? has revealed that hundreds of popular models of car are vulnerable to “keyless theft”.

Keyless Car Theft

Keyless car entry systems enable owners to unlock the doors of their car with the brush of a hand if the key fob is nearby. If the car has keyless start-stop, once inside the car, the keyless system allows the user to simply press a button to start and stop the engine.

These systems work by using an identity chip in the fob that constantly listens out for radio signals broadcast by the car. These radio signals can only travel short distances, usually less than five metres.
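As a rough illustration of why that short radio range matters, the sketch below simulates a car that only accepts an unlock response from a fob it can “hear” within range, and shows how a relay that captures and forwards the signal defeats the distance check. The class names, the five-metre figure and the toy “signing” scheme are illustrative assumptions for this example, not any manufacturer’s real protocol.

```python
# Conceptual simulation of keyless entry and a relay attack.
# All names and the range model are illustrative, not a real car protocol.
from dataclasses import dataclass

RANGE_METRES = 5.0  # short broadcast range, as described in the article


@dataclass
class Fob:
    key_id: str

    def respond(self, challenge: str) -> str:
        # A real fob computes a cryptographic response; a tag suffices here.
        return f"signed({challenge},{self.key_id})"


@dataclass
class Car:
    paired_key: str

    def try_unlock(self, fob: Fob, distance_m: float, relayed: bool = False) -> bool:
        # The car only "hears" a fob within radio range -- unless thieves
        # relay (boost and forward) the signal, defeating the distance check.
        if distance_m > RANGE_METRES and not relayed:
            return False
        response = fob.respond("nonce-1234")
        return response == f"signed(nonce-1234,{self.paired_key})"


car, fob = Car("K1"), Fob("K1")
print(car.try_unlock(fob, 2.0))                 # fob genuinely nearby
print(car.try_unlock(fob, 30.0))                # fob indoors, out of range
print(car.try_unlock(fob, 30.0, relayed=True))  # relay attack succeeds
```

The point of the sketch is that the fob’s response is perfectly valid in all three cases; the only defence being bypassed is proximity, which is why the shielded-case advice later in this article works.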

The Which? Research

The Which? research involved the analysis of data on keyless/relay attacks of tests held by the General German Automobile Club (ADAC), a roadside recovery organisation.

Top-Selling Cars At Risk

The ADAC test highlighted by Which? showed that, of the 237 keyless cars tested, all but three were susceptible to keyless theft.

The vulnerable cars included many of the UK’s top-selling models, such as the Ford Fiesta, Volkswagen Golf, Nissan Qashqai and Ford Focus.  Of the top-selling cars in the UK, only the Vauxhall Corsa was found to be safe, and only because it isn’t available with keyless entry and ignition.

Jaguar Land Rover’s latest models of the Discovery, Range Rover, and 2018 Jaguar i-Pace, were all found to be secure.

Car Theft Figures – Rising

England and Wales police figures show that the highest number of offences of theft of (or unauthorised taking of) a motor vehicle since 1990 were reported in the year to March 2018 (106,000).  This worrying rise in the level of car theft comes despite improvements in vehicle security aided by the use of new technology.

Less Than 0.3% Stolen

Mike Hawes, head of the Society of Motor Manufacturers & Traders (SMMT), is reported as saying that, aided by technology, new cars are more secure than ever with, on average, less than 0.3% of the cars on the roads stolen.

Not The First Time Concerns Raised

This is certainly not the first time that concerns have been raised about keyless security in cars.  For example, as far back as 2011, Zurich-based researchers highlighted how radio signals emitted by a car could be boosted, thereby tricking systems into thinking the key fob was nearby.

Also, in 2014, many Range Rover thefts led to police advising owners to fit a steering wheel lock as the second line of defence, after keyless security had been breached by thieves.

There have also been reports of police investigating cases of criminals blocking the signals from keyless devices so that car doors never lock, and of thieves using blockers in service station car parks in order to steal items from cars.

What Does This Mean For Your Business?

For car manufacturers, there is likely to be an ongoing battle with thieves, and a need for continuous investment to ensure that car entry and ignition systems are as secure as possible. This may even require a move into biometrics.

The SMMT has also been calling for action to stop the open sale of equipment which serves no legal purpose but helps criminals steal cars, e.g. grabbers and jammers, which can be purchased online for as little as £40.

The advice from security experts to owners of cars with keyless systems is to keep keyless entry keys away from doors and windows and in a shielded protection case.  This is because some thieves are known to be able to capture the signal from an owner’s key wirelessly from outside their house and use it to replicate the key.

Millions of Taxpayers’ Voiceprints Added to Controversial HMRC Biometric Database

The fact that the voiceprints of more than 2 million people have been added to HMRC’s Voice ID scheme since June 2018, to add to the 5 million plus other voiceprints already collected, has led to complaints and challenges to the lawfulness of the system by privacy campaigners.

What HMRC Biometric Database System?

Back in January 2017, HMRC introduced a system whereby customers calling the tax credits and Self-Assessment helpline could enrol for voice identification (Voice ID) as a means of speeding up the security steps. The system uses 100 different characteristics to recognise the voice of an individual and can create a voiceprint that is unique to that individual.

When customers call HMRC for the first time, they are asked to repeat a vocal passphrase up to five times before speaking to a human adviser.  The recorded passphrase is stored in an HMRC database and can be used as a means of verification/authentication in future calls.
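A heavily simplified sketch of how such enrolment and verification might work is below: the repeated passphrase recordings are averaged into a single stored voiceprint, and later calls are compared against it with a similarity threshold. The feature vectors, the cosine-similarity comparison and the threshold are illustrative assumptions for this example, not HMRC’s actual system, which is described as using around 100 voice characteristics.

```python
# Illustrative voiceprint enrolment and verification.
# Feature extraction, vector size and threshold are invented for the example.
import math


def enrol(recordings: list) -> list:
    """Average feature vectors from repeated passphrase recordings
    into a single stored voiceprint."""
    n = len(recordings)
    return [sum(vals) / n for vals in zip(*recordings)]


def norm(v: list) -> float:
    return math.sqrt(sum(x * x for x in v))


def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (norm(a) * norm(b))


def verify(voiceprint: list, sample: list, threshold: float = 0.95) -> bool:
    # Accept the caller only if the new sample is close enough
    # to the voiceprint stored at enrolment.
    return cosine(voiceprint, sample) >= threshold


# Caller repeats the passphrase several times at enrolment.
recordings = [[0.9, 0.2, 0.4], [1.0, 0.25, 0.38], [0.95, 0.22, 0.41]]
stored = enrol(recordings)
print(verify(stored, [0.92, 0.21, 0.40]))  # same voice: accepted
print(verify(stored, [0.1, 0.9, 0.2]))     # different voice: rejected
```

Averaging over several repetitions smooths out one-off variation in a single recording, which is one plausible reason callers are asked to repeat the passphrase up to five times.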

Got Voices By The Back Door Said Big Brother Watch

It has been reported that in the 18 months following the introduction of the system, HMRC acquired 5.1 million people’s voiceprints this way.

Back in June 2018, privacy campaigning group ‘Big Brother Watch’ reported that its own investigation had revealed that HMRC had (allegedly) taken 5.1 million taxpayers’ biometric voiceprints without their consent.

Big Brother Watch alleged that the automated system offered callers no choice but to do as instructed and create a biometric voice ID for a Government database.  The only way to avoid creating the voice ID on calling, as identified by Big Brother Watch, was to say “no” three times to the automated questions, whereupon the system would still offer to create a voice ID on the caller’s next call.

Big Brother Watch was concerned that GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a person unless there is a lawful basis under Article 6, and that because voiceprints are sensitive data and are not strictly necessary for dealing with tax issues, HMRC should request the explicit consent of each taxpayer to enrol them in the scheme (Article 9 of GDPR).

This led to Big Brother Watch registering a formal complaint with the ICO, the result of which is still to be announced.

Changes

Big Brother Watch’s complaint may have been the prompt for changes to the Voice ID system. In September 2018, HMRC permanent secretary Jon Thompson said that HMRC felt it had been acting lawfully by relying on the implicit consent of users.  Mr Thompson acknowledged, however, that the original messages played to callers had not explicitly stated that it was possible to opt out of the voice ID system, or how to do so, and that, in the light of this, the message had been updated (in July 2018) to make this clear.

Mass Deletions?

On the point of whether HMRC would consider deleting the 6 million voiceprint profiles of people who registered before the wording was changed to include the opt-out option, Mr Thompson has said that HMRC will wait for the completion of the ICO’s investigation.

Backlash

Big Brother Watch has highlighted a backlash against the Voice ID system as indicated by the 162,185 people who have called HMRC to have their Voice IDs deleted.

What Does This Mean For Your Business?

Even though many businesses and organisations are switching, or planning to switch, to biometric identification/verification systems in place of less secure password-based systems, it is still important to remember that these are subject to GDPR. For example, images and unique voiceprint IDs are personal data that require explicit consent to be given, and people have the right to opt out as well as to opt in.

It remains to be seen whether the outcome of the ICO investigation will require mass deletions of Voice ID profiles.  Big Brother Watch states on its website that people who are not happy about the HMRC system can complain to HMRC directly (via the government website) or file a complaint about the system with the ICO via the ICO website (the ICO is already investigating HMRC over the matter).  HMRC has said that all the voice data is stored securely and that customers can now opt out of Voice ID or delete their records any time they want.