Data Security

Featured Article – Proposed New UK Law To Cover IoT Security

The UK government’s Department for Digital, Culture, Media and Sport (DCMS) has announced that it will soon prepare new legislation to enforce standards that will protect users of IoT devices from known hacking and spying risks.

IoT Household Gadgets

This commitment to legislate leads on from last year’s proposal by then Digital Minister Margot James and follows a seven-month consultation with GCHQ’s National Cyber Security Centre, and with stakeholders including manufacturers, retailers, and academics.

The proposed new legislation will improve digital protection for users of a growing number of smart household devices (devices with an Internet connection) that are broadly grouped together as the ‘Internet of Things’ (IoT).  These gadgets, of which there are an estimated 14 billion+ worldwide (Gartner), include kitchen appliances and gadgets, connected TVs, smart speakers, home security cameras, baby monitors and more.

In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

What Are The Risks?

The risks are that the Internet connection in IoT devices can, if adequate security measures are not in place, provide a way in for hackers to steal personal data, spy on users in their own homes, or remotely take control of devices in order to misuse them.

Default Passwords and Link To Major Utilities

The main security issue with many of these devices is that they come with pre-set default passwords that cannot be changed, and once these passwords have been discovered by cyber-criminals, the IoT devices are wide open to being tampered with and misused.
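The fix implied by the proposed legislation is straightforward in principle: generate a unique credential for each unit at manufacture rather than shipping every device with the same default. A minimal sketch of that idea (the function name, alphabet and serial numbers below are illustrative assumptions, not from any standard or vendor):

```python
import secrets
import string

# Illustrative alphabet for generated credentials
ALPHABET = string.ascii_letters + string.digits

def generate_device_password(length: int = 12) -> str:
    """Generate a cryptographically random, per-device password,
    as an alternative to shipping every unit with the same default."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each manufactured unit gets its own credential (hypothetical serials)
passwords = {serial: generate_device_password() for serial in ("SN001", "SN002")}
```

Using `secrets` rather than `random` matters here: the former is designed for security-sensitive values, so the generated passwords are not predictable from one another.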

Also, IoT devices are deployed in many systems that link to and are supplied by major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

Examples

Real-life examples of the kind of IoT hacking that the new legislation will seek to prevent include:

– Hackers talking to a young girl in her bedroom via a ‘Ring’ home security camera (Mississippi, December 2019).  In the same month, a Florida family were subjected to verbal racial abuse in their own home and to a loud alarm blast after a hacker took over their ‘Ring’ security system.

– In May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact who happened to be her husband’s employee.

– Back in 2017, researchers discovered that a sex toy with an in-built camera could also be hacked.

– In October 2016, the ‘Mirai’ attack used thousands of household IoT devices as a botnet to launch an online distributed denial of service (DDoS) attack (on the DNS service ‘Dyn’) with global consequences.

New Legislation

The proposed new legislation will be intended to put pressure on manufacturers to ensure that:

– All internet-enabled devices have a unique password and not a default one.

– There is a public point of contact for the reporting of any vulnerabilities in IoT products.

– The minimum length of time that a device will receive security updates is clearly stated.

Challenges

Even though legislation could make manufacturers try harder to make IoT devices more secure, technical experts and commentators have pointed out that there are many challenges to making internet-enabled/smart devices secure because:

  • Adding security to household internet-enabled ‘commodity’ items costs money. This cost would have to be passed on to the customer in higher prices, making the product less competitive. It may be, therefore, that security is being sacrificed to keep costs down: sell now and worry about security later.
  • Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update. There are also costs involved in doing so which manufacturers of lower-end devices may not be willing to incur.
  • With devices which are typically infrequent and long-lasting purchases, e.g. white goods, we tend to keep them until they stop working, and we are unlikely to replace them just because they have a security vulnerability that is not fully understood. As such, these devices are likely to remain exploitable by cyber-criminals for a long time.

Looking Ahead

Introducing legislation that only requires manufacturers to make relatively simple changes to make sure that smart devices come with unique passwords and are adequately labelled with safety and contact information sounds as though it shouldn’t be too costly or difficult.  The pressure of having to display a label, by law, that indicates how safe the item is, could provide that extra motivation for manufacturers to make the changes and could be very helpful for security-conscious consumers.

The motivation for manufacturers to make the changes to the IoT devices will be even greater if faced with the prospect of retailers eventually being barred from selling products that don’t have a label, as was originally planned for the proposed legislation.

The hope from cyber-security experts and commentators is that the proposed new legislation won’t be watered down before it becomes law.

Life After End-of-Life For Windows 7 Updates

Pressure from die-hard and disgruntled Windows 7 users may have been a factor in Microsoft issuing a second update to its old Windows 7 Operating System, only two weeks after the official end-of-life date of Wednesday 14 January.

(Almost) No More Support

Microsoft had already made many announcements that support for its Windows 7 Operating System and Windows Server 2008 would (and we thought, did) formally and finally end on 14 January as part of the final push to move users over to the SaaS Windows 10 OS.  There are still opportunities for those with Windows Virtual Desktop to get an extra three years of extended support (of critical and important security updates) as part of that package, and for customers with active Software Assurance to get ‘Extended Security Updates’ for subscription licenses for 75% of the on-premises annual license cost.

First of the ‘Afterlife’ Updates

The first of the two surprise updates, issued just for Extended Security Updates (ESU) users after the end of support, was a patch to fix a wallpaper issue whereby some users who had set their desktop background to the ‘Stretch’ option saw a blank screen after restarting Windows.  Comments by some disgruntled users on social media may have contributed to Microsoft releasing an update to fix the issue.

The Second Update

A second update announced by Microsoft really relates to an extension of the same issue. This time, Microsoft says it’s working on a fix to this issue for all, and not just for those who subscribed to its ESU program.  On Microsoft’s Support pages it says that an update to resolve the issue will be released to all customers running Windows 7 and Windows Server 2008 R2 SP1.

In the meantime, Microsoft suggests that customers can mitigate the issue either by setting their custom image to an option other than Stretch, e.g. Fill, Fit, Tile, or Centre, or customers can choose a custom wallpaper that matches the resolution of their desktop.

What Does This Mean For Your Business?

Even though the widely publicised end of support date for Windows 7 has been and gone, it should be remembered that there are an estimated 40 million people still using Windows 7, which means there is no shortage of people to complain publicly via social media when things go wrong.  Microsoft is, therefore, in that difficult period where users are no longer supported but have not yet switched to Windows 10, and there is likely to be more bad publicity to come for Microsoft as more issues start to affect the remaining Windows 7 users.

There is also now the very real risk that Windows 7 will be targeted more by cybercriminals, leaving those who still use it in a much more vulnerable position.  At least in the case of the recent updates, Microsoft has been seen to do something beyond the call of duty to help users after the date it officially ended support, although it’s unlikely that Microsoft will make a habit of doing so in future.

Police Images of Serious Offenders Reportedly Shared With Private Landlord For Facial Recognition Trial

There have been calls for government intervention after it was alleged that South Yorkshire Police shared its images of serious offenders with a private landlord (Meadowhall shopping centre in Sheffield) as part of a live facial recognition trial.

The Facial Trial

The alleged details of the image-sharing for the trial were brought to the attention of the public by the BBC radio programme File on 4, and by privacy group Big Brother Watch.

It has been reported that the Meadowhall shopping centre’s facial recognition trial ran for four weeks between January and March 2018 and that no signs warning visitors that facial recognition was in use were displayed. The owner of Meadowhall shopping centre is reported as saying (last August) that the data from the facial recognition trial was “deleted immediately” after the trial ended. It has also been reported that the police have confirmed that they supported the trial.

Questions

The disclosure has prompted some commentators to question the ethics and legality not only of holding public facial recognition trials without displaying signs, but also of the police allegedly sharing photos of criminals (presumably from their own records) with a private landlord.

The UK Home Office’s Surveillance Camera Code of Practice, however, does appear to support the use of facial recognition or other biometric characteristic recognition systems if their use is “clearly justified and proportionate.”

Other Shopping Centres

Other facial recognition trials in shopping centres and public shopping areas have been met with a negative response too.  For example, a trial at the Trafford Centre shopping mall in Manchester was halted in 2018, and the Kings Cross facial recognition trial (between May 2016 and March 2018) is still the subject of an ICO investigation.

Met Rolling Out Facial Recognition Anyway

Meanwhile, and despite a warning from Elizabeth Denham, the UK’s Information Commissioner, back in November, the Metropolitan Police has announced it will be going ahead with its plans to use live facial recognition cameras on an operational basis for the first time on London’s streets to find suspects wanted for serious or violent crime. Also, it has been reported that South Wales Police will be going ahead in the Spring with a trial of body-worn facial recognition cameras.

EU – No Ban

Even though many privacy campaigners were hoping that the EC would push for a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place, Reuters has reported that the European Union has now scrapped any possibility of a ban on facial recognition technology in public spaces.

Facebook Pays

Meanwhile, Facebook has just announced that it will pay £421m to a group of Facebook users in Illinois, who argued that its facial recognition tool violated the state’s privacy laws.

What Does This Mean For Your Business?

Most people would accept that facial recognition could be a helpful tool in fighting crime, saving costs, and catching known criminals more quickly and that this would be of benefit to businesses and individuals. The challenge, however, is that despite ICO investigations and calls for caution, and despite problems that the technology is known to have e.g. being inaccurate and showing a bias (being better at identifying white and male faces), not to mention its impact on privacy, the police appear to be pushing ahead with its use anyway.  For privacy campaigners and others, this may give the impression that their real concerns (many of which are shared by the ICO) are being pushed aside in an apparent rush to get the technology rolled out. It appears to many that the use of the technology is happening before any of the major problems with it have been resolved and before there has been a proper debate or the introduction of an up-to-date statutory law and code of practice for the technology.

Avast Anti-Virus Is To Close Subsidiary Jumpshot After Browsing Data Selling Privacy Concerns

Avast, the Anti-virus company, has announced that it will not be providing any more data to, and will be commencing “a wind down” of its subsidiary Jumpshot Inc after a report that it was selling supposedly anonymised data to advertiser third parties that could be linked to individuals.

Jumpshot Inc.

Jumpshot Inc, founded in 2010, purchased by Avast in 2013, and operated as a data company since 2015, essentially organises and sells packaged data, gathered from Avast, to enterprise clients and marketers as marketing intelligence.

Avast anti-virus incorporates a plugin that has, until now, enabled subsidiary Jumpshot to scrape and gain access to user data, which Jumpshot could then sell to (mainly larger) third-party buyers so that they could learn what consumers are buying and where, thereby helping them target their advertising.

Avast is reported to have access to data from 100 million devices, including PCs and phones.

Investigation Findings

The reason why Avast has, very quickly, decided to ‘wind down’ (i.e. close) Jumpshot is that an investigation by Motherboard and PCMag revealed that Avast appeared to be harvesting users’ browser histories with the promise (to those who opted in to data sharing) that the data would be ‘de-identified’ to protect user privacy. What actually appeared to be happening was that the data being sold to third parties could be linked back to people’s real identities, thereby potentially exposing every click and search they made.

When De-Identification Fails

As reported by PCMag, the inclusion of timestamp information and persistent device IDs with the collected URLs of user clicks, in this case, could, in fact, be analysed to expose someone’s identity.  This could, in theory, mean that the data taken from Avast and supplied via subsidiary Jumpshot to third parties may not be de-identified, and could, therefore, pose a privacy risk to those Avast users.
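The re-identification mechanism described above can be illustrated with a small, entirely hypothetical sketch: if ‘anonymised’ clickstream rows keep a persistent device ID and precise timestamps, anyone holding a second dataset containing real names and matching timestamps can join the two. All names, URLs and timestamps below are invented for illustration:

```python
# Hypothetical 'de-identified' clickstream: no names, but a persistent
# device ID and precise timestamps remain attached to every row.
clickstream = [
    {"device_id": "abc123", "ts": "2019-12-01T09:15:02",
     "url": "https://example-shop.test/order/confirm"},
    {"device_id": "abc123", "ts": "2019-12-01T09:40:11",
     "url": "https://example-bank.test/login"},
]

# A party that also sees a retailer's own order log (with real names and
# order timestamps) can join on the timestamp to unmask the device.
order_log = [{"name": "Jane Doe", "ts": "2019-12-01T09:15:02"}]

identified = {
    row["device_id"]: order["name"]
    for row in clickstream
    for order in order_log
    if row["ts"] == order["ts"] and "order/confirm" in row["url"]
}
# Once the device ID is linked to a name, every other click recorded
# against that device ID becomes attributable to the same person.
```

This is why removing names alone does not de-identify data: the combination of a stable identifier and high-resolution timestamps is often enough to single a person out.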

What Does This Mean For Your Business?

As an anti-virus company, security and privacy are essential elements of Avast’s products, and customer trust is vital to its brand and image. Some users may be surprised that their supposedly ‘de-identified’ data was being sold to third parties at all, but with a now widely-reported privacy risk of this kind and the potential damage that it could do to Avast’s brand and reputation, it is perhaps no surprise that it has acted quickly in closing Jumpshot and distancing itself from what was happening. As Avast says in its announcement about the impending closure of Jumpshot (with the loss of many jobs): “The bottom line is that any practices that jeopardize user trust are unacceptable to Avast”.  PCMag has reported that Avast has told it that the company will no longer use any data from the browser extensions for any purpose other than the core security engine.

Featured Article – ‘Snake’ Ransomware, A Threat To Your Whole Network

Over the last couple of weeks, there have been reports of a new type of ransomware known as ‘Snake’ which can encrypt all the files stored on your computer network and on all the connected devices.

Discovered

Snake ransomware is so-called because the ‘ekans’ file marker that it attaches to each file it encrypts is ‘snake’ spelled in reverse.  It was discovered by the MalwareHunterTeam and studied in detail by Vitali Kremez, Head of SentinelLabs, who describes himself as an “Ethical Hacker”, “Reverse Engineer” and “Threat Seeker”.

How Does It Infect Your Network?

Snake can be introduced to a computer network via infected email attachments (e.g. phishing emails carrying macro-enabled Office or PDF documents, RAR or ZIP archives, .exe files, or JavaScript files), as well as via Trojans, torrent websites, unpatched public-facing software and malicious ads.

How Does Snake Operate?

As ransomware, the ultimate goal of the cybercriminals who are targeting (mainly) businesses with Snake is to lock away important files through encryption, forcing the victim to pay a ransom to release them, with the hope of restoring systems to normal as the motivator to pay.

In the case of Snake, which is written in Go (also known as Golang), an open-source programming language that’s syntactically similar to C and provides cross-platform support, the ransomware operates in the following way once it is introduced to an operating system, e.g. after arriving in an email:

– Firstly, Snake removes Shadow Volume Copies (backup copies or snapshots of files) and stops processes related to SCADA Systems (the supervisory control and data acquisition system that’s used for gathering and analysing real-time data). Snake also stops any Virtual Machines, Industrial Control Systems, Remote Management Tools, and Network Management Software.

– Next, Snake (relatively slowly) uses powerful AES-256 and RSA-2048 cryptographic algorithms to encrypt files and folders across the whole network and on all connected devices, while skipping files in the Windows system folders and system files.

– As part of the encryption process, and unlike other ransomware, Snake adds a random five-character string as a suffix to file extension names e.g. myfile.jpg becomes myfile.jpgBGyWl. Also, an “EKANS” file marker is added to each encrypted file.
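Based on the reported behaviour above (an “EKANS” marker appended to each encrypted file), a defender could sketch a simple heuristic scan for affected files. This assumes the marker sits near the end of the file, which matches the reports but is not guaranteed; it is illustrative only and no substitute for proper anti-malware tooling:

```python
import os

MARKER = b"EKANS"  # file marker reportedly appended by Snake to encrypted files

def looks_encrypted_by_snake(path: str) -> bool:
    """Heuristic check: does the tail of the file contain the EKANS marker?
    Reads only the last 64 bytes so large files are checked cheaply."""
    try:
        with open(path, "rb") as f:
            f.seek(max(0, os.path.getsize(path) - 64))
            return MARKER in f.read()
    except OSError:
        # Unreadable or missing files are simply not flagged
        return False
```

A script built around this could walk a file share and report suspect files, giving admins a quick first indication of how far an infection has spread.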

Ransom Note

Lastly, Snake generates a ransom note named Fix-Your-Files.txt which is posted on the desktop of the victim.  This ransom note advises the victim that the only way to restore their files is to purchase a decryption tool which contains a private key that has been created specifically for their network and that, once run on an affected computer, it will decrypt all encrypted files.

The note informs the victim that, in order to purchase the decryption software, they must send an email to bapcocrypt@ctemplar.com attaching up to three of the encrypted files from their computers (up to 3MB each, and not databases or spreadsheets) so that the cybercriminals can send back decrypted versions as proof that the decryption software (and key) works on their files (and to encourage payment and restoration of business).

Timing

Snake allows cybercriminals not only to target a chosen business’s network but also to choose the time of the attack. The time that encryption takes place could, therefore, be after hours, making it more difficult for admins to control the damage caused by the attack. Also, cybercriminals can choose to install additional password-stealing Trojans and malware infections alongside the Snake ransomware infection.

What To Do If Infected

If your network is infected with Snake ransomware there is, of course, no guarantee that paying the ransom will mean that you are sent any decryption software by the cybercriminals, and it appears unlikely that those who targeted your company for your money would do anything other than take that money and disappear.

Some companies on the web are offering Snake removal (for hundreds of dollars), and there are some recommendations that running Spyhunter anti-malware software on your systems may be one way to remove this particularly damaging ransomware.

Ransomware Protection

News of the severity of Snake is a reminder to businesses that protection from malware is vital.  Ways in which companies can protect themselves from falling victim to malware, including ransomware, include:

– Staff education and training e.g. about the risks of and how to deal with phishing and other suspicious and malicious emails, and other threats where social engineering is involved.

– Ensuring that all anti-virus software, updates and patching are up to date.

– Staying up to date with malware and ransomware resources e.g. the ‘No More Ransom’ portal (https://www.nomoreransom.org/), which was originally released in English, is now available in 35 other languages, and thanks to the cooperation between more than 150 partners, provides a one-stop-shop of tools that can help to decrypt ransomware infections – see https://www.nomoreransom.org/en/decryption-tools.html.

– Making sure that there is a regular and secure backup of company data and important business files and folders.

– Developing (and communicating to relevant staff) and updating a Business Continuity and Disaster Recovery Plan.
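The backup step in the list above can be sketched in a few lines. This is a minimal illustration (a dated zip archive of a single folder, with hypothetical paths) and deliberately leaves out scheduling, encryption and off-site copies, all of which a real backup regime against ransomware would need:

```python
import shutil
from datetime import date

def backup_folder(source_dir: str, dest_base: str) -> str:
    """Create a dated zip archive of source_dir and return the archive path.
    A minimal sketch of the 'regular, secure backup' step; a production
    setup would also rotate, encrypt, and store copies off-site/offline."""
    name = f"{dest_base}-{date.today():%Y-%m-%d}"
    return shutil.make_archive(name, "zip", root_dir=source_dir)
```

Keeping at least one copy offline matters particularly for ransomware like Snake, which encrypts files across the network and on all connected devices, so an always-mounted backup drive can be encrypted along with everything else.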

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials have raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and how it is not always accurate.  These issues have been identified and raised in the UK too. For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017, which was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, which was unconnected to the event.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by The National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, the facial recognition algorithms were found to be less accurate at identifying African-American and Asian faces, and were particularly prone to misidentifying African-American females.

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, the head of Google’s parent company, Alphabet.  Mr Pichai (in the Financial Times) called for a sensible approach to regulation and for a set of rules for areas of AI development such as self-driving cars and AI usage in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies, which have known flaws and are of concern to government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk management measures have had time to be properly considered and developed.  It is true that facial recognition could have real benefits (e.g. fighting crime) for many businesses, and that AI offers businesses a vast range of opportunities to save money and time as well as to innovate in products, services and processes.  However, the flaws in these technologies, and their potential to be used improperly, covertly, and in a way that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time is taken and efforts are made now to address the concerns of stakeholders and to develop regulations and measures that could prevent bigger problems involving these technologies further down the line.

£100m Fines Across Europe In The First 18 Months of GDPR

It has been reported that since the EU’s General Data Protection Regulation (GDPR) came into force in May 2018, £100m of data protection fines have been imposed on companies and organisations across Europe.

The Picture In The UK

The research, conducted by law firm DLA Piper, shows that the total of fines imposed in the UK by the ICO stands at £274,000, but this figure is likely to be much higher once the penalties to be imposed on BA and Marriott are finalised.  For example, Marriott could be facing a £99 million fine for a data breach between 2014 and 2018 that reportedly involved up to 383 million guests, and BA (owned by IAG) could be facing a record-breaking £183 million fine for a breach of its data systems last year that could have affected 500,000 customers.

Also, the DLA Piper research shows that although the UK did not rank highly in terms of fines, it ranked third in the number of breach notifications, with 22,181 reports since May 2018.  This equates to a relative ranking of 13th for data breach notifications per 100,000 people in the UK.

Increased Rate of Reporting

On the subject of breach notifications, the research shows a big increase in the rate of reporting, with 247 reports per day over the first eight months of GDPR (between May 2018 and January 2019), rising to 278 per day throughout last year. This rise in reporting is thought to be due to a much greater (and increasing) awareness of GDPR and the issue of data breaches.
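As a sanity check on figures like these, a per-day reporting rate is simply the total number of notifications divided by the days elapsed. Using the UK total of 22,181 reports quoted earlier (the January 2020 cut-off date below is an assumption for illustration, since the research's exact end date is not given here):

```python
from datetime import date

gdpr_start = date(2018, 5, 25)     # GDPR came into force on 25 May 2018
report_date = date(2020, 1, 20)    # assumed cut-off for the research
uk_notifications = 22_181          # UK total from the DLA Piper figures

days = (report_date - gdpr_start).days      # elapsed days in the period
per_day = uk_notifications / days           # average UK reports per day
```

On these assumptions the UK averaged roughly 37 reports per day, well below the Europe-wide daily figures quoted above, which is consistent with the UK ranking third by volume across a much larger combined total.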

France and Germany Hit Hardest With Fines

The fines imposed in the UK under GDPR are very small compared to Germany where fines totalled 51.1 million euros (top of the table for fines in Europe) and France where 24.6 million euros in fines were handed out.  In the case of France, much of the figure of fines collected relates to one penalty handed out to Google last January.

Already Strict Laws & Different Interpretations

It is thought that businesses in the UK having to meet the requirements of the already relatively strict Data Protection Act 1998 (the bones of which proved not to differ greatly from GDPR) is the reason why the UK finds itself (currently) further down the table in terms of fines and data breach notifications per 100,000 people.

Also, the EU’s Data Protection Directive wasn’t adopted until 1995, and GDPR appears to have been interpreted differently across Europe because it is principle-based, and therefore, apparently open to some level of interpretation.

What Does This Mean For Your Business?

These figures show that a greater awareness of data breach issues, greater reporting of breaches, and increased activity and enforcement action by regulators across Europe are likely to contribute to more big fines being imposed over the coming year.  This means that businesses and organisations need to ensure that they stay on top of the issue of data security and GDPR compliance.  Small businesses and SMEs shouldn’t assume that work done to ensure basic compliance on the introduction of GDPR back in 2018 is enough, or that the ICO would only be interested in big companies, as regulators appear to be increasing the number of staff who are able to review reports and cases.  It should also be remembered, however, that the ICO is most likely to want to advise, help and guide businesses to comply where possible.

Featured Article – Windows 7 Deadline Now Passed

Microsoft’s Windows 7 Operating System and Windows Server 2008 formally and finally reached their ‘End of Life’ (the end of support, security updates and fixes) on Wednesday 14 January.

End of Life – What Now?

End of life isn’t quite as final as it sounds because Windows 7 will still run, but support (i.e. security updates, patches and technical support) will no longer be available for it. If you are still running Windows 7 then you are certainly not alone, as it still has a reported 27 per cent market share among Windows users (Statcounter).

For most Windows 7 users, the next action will be to replace (or upgrade) the computers that are running these old operating systems.  Next, there is the move to Windows 10 and if you’re running a licensed and activated copy of Windows 7, Windows 8 or Windows 8.1, Home or Pro, you can get it for free by:

>> going to the Windows 10 download website

>> choosing to ‘Create Windows 10 installation media’

>> clicking ‘Download tool now’ and running the tool

>> choosing ‘Upgrade this PC now’ (if it’s just one PC; for another machine choose ‘Create installation media for another PC’ and save the installation files) and following the instructions

>> After installation, you can see your digital license for Windows 10 by going to Settings > Update & Security > Activation.

Windows Server

Windows Server 2008 and Windows Server 2008 R2 have also now reached their end of life, which means no more free security updates on-premises, no non-security updates, no free support options, and no online technical content updates.

Microsoft is advising customers who use Windows Server 2008 or Windows Server 2008 R2 products and services to migrate to Microsoft Azure.

About Azure

For Azure customers, Windows Virtual Desktop offers the option of an extra three years of extended support (critical and important security updates) as part of that package, but there may be some costs incurred in migrating to the cloud service.

Buying Extended Security Updates

‘Extended Security Updates’ can also be purchased by customers with active Software Assurance for subscription licenses for 75% of the on-premises annual license cost, but this should only really be considered a temporary measure to ease the transition to Windows 10, or if you’ve simply been caught out by the deadline.

Unsupported Devices – Banking & Sensitive Data Risk

One example of the possible risks of running Windows 7 after its ‘end-of-life’ date has been highlighted by the National Cyber Security Centre (NCSC), the public-facing part of GCHQ.  The NCSC has advised Windows 7 users to replace their unsupported devices as soon as possible and to move any sensitive data to a supported device.  Also, the NCSC has advised Windows 7 users to not use unsupported devices for tasks such as accessing bank and other sensitive accounts and to consider accessing email from a different device.

The NCSC has pointed out that cyber-criminals began targeting Windows XP immediately after extended support ended in 2015. It is likely, therefore, that the same thing could happen to Windows 7 users.

Businesses may wish to note that there have already been reports (in December) of attacks on Windows 7 machines in an attempt to exploit the EternalBlue vulnerability which was behind the serious WannaCry attacks.

Windows 7 History

Windows 7 was introduced in 2009 as an upgrade in the wake of the much-disliked Windows Vista.  Looking back, it was an unexpected success in many ways; looking forward, if you’re one of the large percentage of Windows users still running Windows 7 (only 44% are running Windows 10), you may feel that you’ve been left with little choice but to move from the devil you know to the not-so-big-bad Windows 10.

Success For Microsoft

Evolving from early codename versions such as “Blackcomb”, “Longhorn” and then “Vienna” (in early 2006), what was finally named Windows 7 in October 2008 proved an immediate success on its release in 2009.  The new Operating System, which an estimated 1,000 developers worked on, clocked up more than 100 million sales worldwide within the first six months of release. Windows 7 was made available in six editions, the most widely recognised being Home Premium, Professional, and Ultimate.

Improvement

Windows 7 was considered a big improvement on Windows Vista which, although it achieved some impressive usage figures (still lower than XP’s), came in for a lot of criticism over its high system requirements, longer boot times and compatibility problems with pre-Vista hardware and software.

Some of the key improvements that Windows 7 brought were the taskbar and a more intuitive feel, much-improved performance, and fewer annoying User Account Control popups. Some of the reasons for switching to Windows 7 back in 2009 were that it had been coded to support most pieces of software that ran on XP, it could automatically install device drivers, the Aero features provided a much better interface, it offered much better hardware support, the 64-bit version of Windows 7 could handle a bigger system memory, and the whole Operating System had a better look and feel.

Embracing the Positive

It may even be the case that, in worrying about the many complications and potential challenges of migrating to Windows 10, you haven’t allowed yourself to focus on the positive aspects of the OS, such as a faster and more dynamic environment and support for important business software like Office 365 and Windows Server 2016.

What To Do Now

The end-of-support/end-of-life deadline for Windows 7 has now passed, and the key point to remember is that Windows 7 (and any computer running it) is now exposed to every new risk that comes along. If you have been considering possible OS alternatives to Windows 10, these could bring their own challenges and risks, and you may now have very limited time to evaluate them. Bearing in mind how Windows XP was targeted immediately after the end of its extended support (in 2014), we may reasonably expect similar targeting of Windows 7, which makes the decision to migrate more pressing.

For most businesses, the threat of no more support now means that continuing to run Windows 7 presents a real risk to the business e.g. from every new hacking and malware attack, and as the NCSC has highlighted, there is a potentially high risk in using devices running Windows 7 for anything involving sensitive data and banking.

If you choose to upgrade to Windows 10 on your existing computers, you will need to consider factors such as their age and specification, and there are likely to be costs involved in upgrading them.  Depending on the size and nature of your business and your IT budget, you may also be considering the quicker solution of buying new computers with Windows 10 pre-installed; in addition to the cost implications, you may be wondering whether and how you can migrate existing systems, data and programs to that platform.  The challenge now, however, is that time has officially run out in terms of security updates and support, so the time to make the big decisions has arrived.

Facebook Bans Deepfake Videos

In a recent blog post ahead of the forthcoming US election, Monika Bickert, Vice President of Global Policy Management at Facebook, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout from the news that 50 million Facebook profiles were harvested (as early as 2014) to build a software program that used personalised political adverts to predict and influence choices at the ballot box in the last U.S. election includes damaged trust in Facebook, a substantial fine, and a fall in the number of daily users in the United States and Canada for the first time in the company’s history.

Deepfakes

One of the key concerns for Facebook this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants. As well as potentially influencing election results, such videos would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’ and cannot afford to be seen as a platform for the easy distribution of deepfakes.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post statement from Facebook says that as a matter of policy, it will now remove any misleading media from its platform if the media meets two criteria, which are:

  • If it has been synthesised (i.e. edited beyond adjustments for clarity or quality) to the point where the ‘average person’ could be misled into thinking the subject of the video is saying words they did not actually say, and…
  • If the media is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video, in order to make it appear to be authentic.
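For illustration only, the two-part test above can be modelled as a simple predicate. The field names below are invented for this sketch, and the real review process is human-led and far more nuanced than a boolean check.

```python
# A toy model of the stated two-part removal test. The field names are
# invented for this sketch; Facebook's real review process is human-led.
from dataclasses import dataclass

@dataclass
class MediaItem:
    misleading_synthesis: bool   # edited beyond clarity/quality adjustments
    ai_generated_overlay: bool   # AI/ML merged or superimposed content

def should_remove(item: MediaItem) -> bool:
    # Under the stated policy, BOTH criteria must hold for removal.
    return item.misleading_synthesis and item.ai_generated_overlay

print(should_remove(MediaItem(True, True)))   # True
print(should_remove(MediaItem(True, False)))  # False
```

The second case (misleading but not AI-generated) illustrates why simple word-order edits fall outside the policy, which is exactly the satire/edit exemption described below.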

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photos or videos rated as false or partly false (by a fact-checker) will have their distribution “significantly” reduced in News Feed and will be rejected if run as ads. Also, those who see, try to share, or have already shared such content will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to ensure that it is not seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deep Fake Detection Challenge, with $10 million in grants and with a cross-sector coalition of organisations in order to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent announcement of a policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow, in certain situations apparently of its choosing, videos that could be described as misinformation.  For example, Facebook has said that content violating its policies could be allowed if deemed newsworthy, which presumably covers the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi.

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and against the deliberate spread of misinformation and, bearing in mind Facebook’s position of influence, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that advertise on Facebook also need to know that users trust (and will continue to use) the platform and see their adverts, so it’s important to them that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly in the US at the time of an election (and bearing in mind what happened last time), it is in Facebook’s own interest to protect its brand against accusations of allowing political influence through media on its platform, and against any further loss of public trust. This change of policy also shows that Facebook is trying to demonstrate readiness to deal with the latest threat of deepfakes (even though they are still relatively rare).

That said, Google and Twitter (the latter with its new restrictions on micro-targeting, for example) have both been very public about trying to stop lies in political advertising on their platforms, whereas Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.

Featured Article – Email Security (Part 2)

Following on from last month’s featured article about email security (part 1), in part 2 we focus on the main email security and threat predictions for this year and the foreseeable future.

Looking Forward

In part 1 of this ‘Email Security’ snapshot, we looked at how most breaches involve email, the different types of email attacks, and how businesses can defend themselves against a variety of known email-based threats. Unfortunately, businesses and organisations now operate in an environment where cyber-attackers are using more sophisticated methods across multi-vectors and where threats are constantly evolving.

With this in mind, and with businesses seeking to be as secure as possible against the latest threats, here are some of the prevailing predictions based around email security for the coming year.

Ransomware Still a Danger

As highlighted by a recent Malwarebytes report, and a report by Forbes, the ransomware threat is by no means over: having shown a 195 per cent increase in the first quarter of 2019 over the previous year’s figures, it is still predicted to be a major threat in 2020. Tech and security commentators have noted that although ransomware attacks on consumers have declined by 33 per cent since last year, attacks against organisations have worsened.  In December, for example, a ransomware attack was reported to have taken a US Coast Guard (USCG) maritime base offline for more than 30 hours.

At the time of writing this article, it has been reported that, following an attack discovered on New Year’s Day, hackers using ransomware have compromised Travelex’s computers to such a degree that company staff have been forced to use pen and paper to record transactions!

Information Age, for example, predicts that softer targets such as healthcare services (with outdated software, inadequate cybersecurity resources, and a motivation to pay the ransom) will be hit more in the coming year by ransomware carried by email.

Phishing

The already prevalent email phishing threat looks likely to continue and evolve this year, with cybercriminals set to try new methods beyond standard phishing emails, e.g. using SMS, and even spear phishing (highly targeted phishing) using deepfake videos to pose as company authority figures.
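A very simple example of the kind of signal anti-phishing filters build on is a mismatch between the brand claimed in a sender’s display name and the sender’s actual domain. The sketch below is a toy heuristic with an invented brand list, not how the Office 365 or G Suite protections mentioned below actually work.

```python
# Toy heuristic only: flag a possible phishing email when the display name
# claims a trusted brand but the sender's domain doesn't match. The brand
# list is invented; real filters use many more signals than this.

TRUSTED_DOMAINS = {"paypal": "paypal.com", "microsoft": "microsoft.com"}

def looks_like_phish(display_name: str, from_address: str) -> bool:
    domain = from_address.rsplit("@", 1)[-1].lower()
    for brand, real_domain in TRUSTED_DOMAINS.items():
        if brand in display_name.lower() and domain != real_domain:
            return True
    return False

print(looks_like_phish("PayPal Support", "help@paypa1-secure.net"))  # True
print(looks_like_phish("PayPal Support", "support@paypal.com"))      # False
```

A rule this crude would produce false positives and misses in practice, which is precisely why the commercial tools layer many such signals together.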

As mentioned in part 1 of the email security articles, big tech companies are responding to help combat phishing with new services e.g. the “campaign views” tool in Office 365 and Google’s advanced security settings for G Suite administrators.

BEC & VEC

Whereas Business Email Compromise (BEC) attacks have successfully used email fraud combined with social engineering to bait one staff member at a time and extract money from a targeted organisation, security experts say that this kind of attack is morphing into the much wider threat of ‘VEC’ (Vendor Email Compromise): a larger, more sophisticated version which, using email as a key component, seeks to leverage organisations against their own suppliers.

Remote Access Trojans

Remote Access Trojans (RATs) are malicious programs that can arrive as email attachments.  RATs provide cybercriminals with a back door for administrative control over the target computer, and they can be adapted to help them to avoid detection and to carry out a number of different malicious activities including disabling anti-malware solutions and enabling man-in-the-middle attacks.  Security experts predict that more sophisticated versions of these malware programs will be coming our way via email this year.

The AI Threat

Many technology and security experts agree that AI is likely to be used in cyberattacks in the near future, and its ability to learn and to keep trying to reach its target, e.g. in the form of malware, makes it a formidable threat. Email is the most likely means by which malware can reach and attack networks and systems, so there has never been a better time to step up email security and to train and educate staff about malicious email threats, how to spot them and how to deal with them. The addition of AI to the mix may make malicious emails harder to spot.

The good news for businesses, however, is that AI and machine learning are already used in some anti-virus software, e.g. Avast, and the use of AI in security solutions to counter AI-driven threats is likely to continue.

One Vision of the Email Security Future

The evolving nature of email threats means that businesses and organisations may need to look at their email security differently in the future.

One example of an envisaged approach to email security comes from Mimecast’s CEO Peter Bauer.  He suggests that in order to truly eliminate the threats that can abuse the trust in their brands “out in the wild”, companies need to “move from perimeter to pervasive email security”.  This will mean focusing on the threats:

– To the perimeter (which he calls Zone 1).  This involves protecting users’ email and data from spam and viruses, malware and impersonation attempts, and data leaks – in fact, protecting the whole customer, partner and vendor ecosystem.

– From inside the perimeter (Zone 2).  This involves being prepared to be able to effectively tackle internal threats like compromised user accounts, lateral movement from credential harvesting links, social engineering, and employee error threats.

– From beyond the perimeter (Zone 3).  These could be threats to brands and domains from spoofed or hijacked sites that could be used to defraud customers and partners.

As well as recognising and looking to deal with threats in these three zones, Bauer suggests an API-led approach to help deliver pervasive security throughout all of them.  This could involve businesses monitoring and observing email attacks with, for example, SOARs, SIEMs, endpoints, firewalls and broader threat intelligence platforms, and feeding this information to security teams to help keep email security as up to date and as tight as possible.
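As a sketch of that API-led idea, an email gateway detection might be normalised into a common event format before being forwarded to a SIEM or SOAR. The schema, field names and zone numbering below are hypothetical illustrations of the three-zone model, not Mimecast’s actual API.

```python
# Hypothetical sketch: normalise an email threat detection into a common
# JSON event that could be forwarded to a SIEM or SOAR. The schema and
# zone numbering illustrate the three-zone idea; this is not a real
# vendor API.
import json
from datetime import datetime, timezone

def to_siem_event(zone: int, threat_type: str, detail: str) -> str:
    """Serialise one detection as a JSON event string."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "email-gateway",
        "zone": zone,           # 1 = perimeter, 2 = inside, 3 = beyond
        "threat_type": threat_type,
        "detail": detail,
    }
    return json.dumps(event)

print(to_siem_event(3, "brand_spoof", "lookalike domain registered"))
```

The value of a common format like this is that events from many tools (gateway, endpoint, firewall) can be correlated in one place, which is the essence of the pervasive approach Bauer describes.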

Into 2020 and Beyond

Looking ahead to email security in 2020 and beyond, companies will be facing plenty more of the same threats (phishing, ransomware, RATs) which rely on email combined with human error and social engineering to find their way into company systems and networks. Tech companies are responding with updated anti-phishing and other solutions.

SMEs (rather than just bigger companies) are also likely to find themselves targeted with more attacks involving email, and companies will need to, at the very least, make sure they have the basic automated, technical and human elements in place (training, education, policies and procedures) to provide adequate protection (see the end of part 1 for a list of email security suggestions).

The threat of AI-powered attacks, however, is causing some concern and the race is on to make sure that AI-powered protection is up to the level of any AI-powered attacks.

Taking a leaf out of companies like Mimecast’s book, and looking at email security in much wider scope and context (outside the perimeter, inside the perimeter, and beyond) may bring a more comprehensive kind of email security that can keep up with the many threats that are now arriving across a much wider attack surface.