Autonomous AI Cyber Weapons Inevitable Says Security Research Expert

Speaking at a recent CloudSec event in London, Trend Micro’s vice-president of security research, Rik Ferguson, said that autonomously operated AI cyberattacks are an inevitable threat that security professionals must adapt to tackle.

If Leveraged By Cybercriminals

Mr Ferguson said that when cybercriminals manage to leverage the power of AI, organisations may find themselves facing attacks that happen very quickly, contain malicious code, and can even adapt themselves to target specific people in an organisation, e.g. impersonating senior company personnel in order to get payments authorised, pretending to be a penetration-testing tool, or finding ways to motivate targeted persons to fall victim to a phishing scam.

AI Vs AI

Mr Ferguson suggested that the inevitability of cybercriminals developing autonomous AI-driven attack weapons means that it may be time to start thinking in terms of a world of AI versus AI.

Example of Attack

One close example given by Ferguson is the Emotet Trojan. This malware, which obtains financial information by injecting computer code into the networking stack of an infected Microsoft Windows computer, was introduced five years ago but has managed to adapt and cover its tracks even though it is not AI-driven.

AI Launching Own Attacks Without Human Intervention

Theresa Payton, who was the first woman to be a White House CIO (under President George W. Bush) and is now CEO of security consultancy Fortalice, has been reported as saying that the advent of genuine AI has posed serious questions, that the cybersecurity industry is falling behind, and that we may even be facing a situation where AI will be able to launch its own attacks without human intervention.

Challenge

One challenge to responding effectively to AI cyber-attacks is likely to be that cybersecurity and law enforcement agencies must move at the speed of the law, particularly where procedures must be followed to request help from, and arrange coordination between, foreign agencies. The speed of the law, unfortunately, is likely to be much slower than the speed of an AI-powered attack.

What Does This Mean For Your Business?

It is a good thing for all businesses that the cybersecurity industry recognises the inevitability of AI-powered attacks, and although it fears that it risks falling behind, it is talking about the issue, taking it seriously, and looking at ways in which it needs to change in order to respond.

Adopting AI-versus-AI thinking now may be a sensible way to help security professionals, and those in charge of national security, focus thinking and resources on finding ways to innovate and create their own AI-based detection and defensive systems and tools, and the necessary strategies and alliances, in readiness for a new kind of attack.

Report Shows That 99% of Cyber Attacks Now Involve Social Engineering

The Human Factor report from Proofpoint shows that almost all cyber-attacks, at some stage, involve the exploitation of human error in the form of social engineering.

What Are Social Engineering Attacks?

Social engineering attacks involve the manipulation and deception of people into performing actions such as transferring money to criminal accounts or divulging confidential information.

What Kind of Attacks?

The Proofpoint Human Factor report makes the point that as many as 99% of cyber-attacks now involve social engineering through cloud applications, email or social media.  Social engineering attacks can also involve cybercriminals making phone calls to key persons in an organisation.

Easier and More Profitable

These attacks are designed to trick people into enabling a macro, opening a malicious file, or following a malicious link through human error, rather than the cyber-attacker having to face the considerable and time-consuming challenge of trying to hack into the (often well-defended) systems and infrastructure of enterprises and other organisations. Social engineering attacks are, therefore, easier, less costly, more profitable, and more likely to succeed than creating an exploit to try to gain access to company systems.

Targets – “Very Attacked People”

Cybercriminals are looking for money and valuable data and information. The Proofpoint report, which was based on 18 months of data analysis collated from across the company’s global customer base, highlights the fact that the gatekeepers of money and data in target organisations become the “very attacked people” (VAP) i.e. the most often approached targets. These VAPs are often identified by attackers using information from sources such as corporate websites, social media, trade publications, and search engines.

Patterns & Routines

The report also revealed how attacks involving email messages can be made to mimic standard business routines and legitimate email traffic patterns e.g. downtime at weekends and spikes on Mondays.  Also, malware tends to be evenly distributed over the first three days of the working week, and attacks in the Middle East and Europe appear to be more likely to succeed after lunch.

What Does This Mean For Your Business?

The fact that many businesses and organisations are taking cyber defence seriously and have improved their system defences means that cybercriminals are moving into social engineering attacks.

Businesses and organisations can protect themselves against such attacks through staff training (particularly for guardians of funds and data), keeping anti-virus and online filtering up to date, using encryption e.g. VPNs for certain employees, having clear policies and procedures in place with built-in verification and authorisation for money and data requests, and being careful about publicly-visible employee information that could be used to target key staff members.

AI Mimics CEO’s Voice To Steal Over £200,000

A recent Wall Street Journal report has highlighted how, in March this year, a group of hackers were able to use AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Happened?

Reports indicate that the CEO of an unnamed UK-based energy company received a phone call from someone that he believed to be the German chief executive of the parent company.  The person on the end of the phone ordered the CEO of the UK-based energy company to immediately transfer €220,000 (£201,000) into the bank account of a Hungarian supplier.

The voice was reported to have been so accurate in its sound that the CEO of the energy company even recognised what he thought were the subtleties of his boss’s German accent, and even the “melody” of the accent.

The call was so convincing that the energy company made the transfer of funds as requested.

Fraudster Using AI Software

The caller, who was later discovered to have been a fraudster using AI-based voice-altering software to simulate the voice of the German boss, called three times. In the first call, the fraudster requested the transfer; in the second call, they (falsely) claimed that the transfer had been reimbursed; and in the third call, the fraudster requested an additional payment. It was this third call that aroused suspicion, partly because the telephone number appeared to indicate that the caller was in Austria and not Hungary.

Money To Hungary, Mexico and Beyond

Unfortunately, the money had already been transferred to a Hungarian account after the first call, and it has since been discovered that money was immediately transferred from the alleged supplier’s Hungarian bank account to an account in Mexico, and then further disbursed to accounts in other locations, thereby making it very difficult for authorities to follow the trail.

What Sort of Software?

The kind of software used in this attack may have been similar in its output to that demonstrated by researchers from Dessa, an AI company based in Toronto.  Dessa has produced a video of how this kind of software has been able to produce a relatively accurate simulation of the voice of popular podcaster and comedian Joe Rogan – see: https://www.youtube.com/watch?time_continue=1&v=DWK_iYBl8cA

What Does This Mean For Your Business?

It is known that cybercriminals, deterred by improved and more robust enterprise security practices, have decided to look for human error and concentrate more on social engineering attacks, a category that this voice-simulation attack (via phone calls) fits into. The fact that this attack has taken place and been successful shows that some cybercriminals are already equipped with the computing power and the most up-to-date machine-learning AI technology, and are clearly capable of using them.

This means that companies and organisations (particularly larger ones) may now be at risk of facing more sophisticated deception and phishing attacks. The AI company Dessa has suggested that organisations and even individuals could expect to face future threats such as spam callers impersonating relatives or spouses to obtain personal information, impersonations intended to bully or harass, persons trying to gain entrance to high-security-clearance areas by impersonating government officials, and even an ‘audio deepfake’ of a politician being used to manipulate election results or cause a social uprising.

Companies should try to guard against social engineering attacks by educating all staff about the risks and by having clear verification procedures (not just relying on phone calls), tests, and chain-of-command authorisation in place for any requests for funds.
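To make that advice concrete, the sketch below shows, in TypeScript, one way a finance team could encode such a dual-control rule: no transfer executes unless it has been verified through a separate, trusted channel and approved by two people other than the requester. All names, fields and thresholds here are hypothetical illustrations, not any company’s actual procedure.

// Minimal sketch of a dual-control payment-approval check.
// All names and thresholds are hypothetical illustrations.

interface PaymentRequest {
  requestedBy: string;        // who asked for the transfer
  channel: "phone" | "email" | "in-person";
  amountGBP: number;
  approvals: string[];        // staff who independently approved it
  verifiedOutOfBand: boolean; // confirmed via a trusted, separate channel
}

function mayExecute(req: PaymentRequest): boolean {
  // Never act on a phone or email request alone: require a call-back
  // or face-to-face confirmation via a known, trusted contact route.
  if (!req.verifiedOutOfBand) return false;

  // Require two distinct approvers, neither being the requester.
  const approvers = new Set(req.approvals.filter(a => a !== req.requestedBy));
  return approvers.size >= 2;
}

// Example: a convincing phone request, not yet verified out of band.
const call: PaymentRequest = {
  requestedBy: "parent-company CEO (caller)",
  channel: "phone",
  amountGBP: 201000,
  approvals: ["finance.manager"],
  verifiedOutOfBand: false,
};
console.log(mayExecute(call)); // false - blocked until verified and dual-approved

In this sketch, even a highly convincing phone call fails the check on its own, because it has not been verified out of band and lacks a second independent approver.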

Your Password Can Be Guessed By An App Listening To Your Keystrokes

Researchers from Southern Methodist University’s (SMU) Darwin Deason Institute for Cybersecurity have found that the sound waves produced when we type on a computer keyboard can be picked up by a nearby smartphone, and that a skilled hacker could decipher which keys were struck.

Why?

The research was carried out to test whether the ‘always-on’ sensors in devices such as smartphones could be used to eavesdrop on people who use laptops in public places such as coffee shops and libraries (if the phones were on the same table as the laptop), and whether there was a way to successfully decipher what was being typed from the acoustic signals alone.

Where?

The experiment took place in a simulated noisy conference room at SMU, where the researchers arranged for several people to talk to each other while taking notes on a laptop. As many as eight mobile phones were placed on the same table as the laptops or computers, anywhere from three inches to several feet away. The study participants were not given scripts of what to say, could use shorthand or full sentences when typing, and could either correct typing errors or leave them.

What Happened?

Eric C. Larson, one of the two lead authors and an assistant professor in SMU Lyle School’s Department of Computer Science, reported that the researchers were able to pick up what people were typing at an amazing 41 per cent word accuracy rate, and that this could probably be pushed above 41 per cent if researchers figured out what the top 10 most commonly typed words might be.

Sensors In Smart Phones

The researchers highlighted the fact that smartphones contain several sensors used for orientation; although some require permission to be switched on, others are always on. It was for these always-on sensors that the researchers were able to develop a specialised app that could process the sensor output and thereby predict which key a typist had pressed.
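The researchers’ app itself has not been published, but the general idea — converting sensor readings into feature vectors and matching them against per-key profiles — can be sketched. The TypeScript below is a toy nearest-centroid classifier with invented training data, intended purely to illustrate the principle; it is not the SMU team’s actual method.

// Toy sketch: classify a keystroke from a sensor-derived feature vector
// by finding the nearest per-key centroid. All feature values are invented.

type FeatureVector = number[]; // e.g. vibration amplitudes per time window

const trainedCentroids: Record<string, FeatureVector> = {
  a: [0.9, 0.2, 0.1],
  s: [0.7, 0.5, 0.2],
  d: [0.4, 0.8, 0.3],
};

// Euclidean distance between two feature vectors.
function distance(x: FeatureVector, y: FeatureVector): number {
  return Math.sqrt(x.reduce((sum, xi, i) => sum + (xi - y[i]) ** 2, 0));
}

// Predict the key whose trained centroid is closest to the sample.
function predictKey(sample: FeatureVector): string {
  let best = "";
  let bestDist = Infinity;
  for (const [key, centroid] of Object.entries(trainedCentroids)) {
    const d = distance(sample, centroid);
    if (d < bestDist) { bestDist = d; best = key; }
  }
  return best;
}

console.log(predictKey([0.85, 0.25, 0.1])); // "a" for this invented sample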

What Does This Mean For Your Business?

Most of us may be aware of the dangers of using public Wi-Fi and how to take precautions such as using a VPN.  It is much less well-known, however, that smartphones have sensors that are always on and could potentially be used (with a special app) to eavesdrop.

Mobile device manufacturers may want to take note of this research and how their products may need to be modified to prevent this kind of hack.

Also, users of laptops may wish to consider the benefits of using a password manager for auto-filling instead of typing in passwords and potentially giving those passwords away.

Over A Million Fingerprints Exposed In Data Breach

It has been reported that more than one million fingerprints have been exposed online by biometric security firm Suprema, which appears to have installed its standard Biostar 2 product on an open network.

Suprema and Biostar 2

Suprema is a South Korea-based biometric technology company and is one of the world’s top 50 security manufacturers.  Suprema offers products including biometric access control systems, time and attendance solutions, fingerprint live scanners, mobile authentication solutions and embedded fingerprint modules.

Biostar 2 is a web-based, open, and integrated security platform that provides access control and time-and-attendance management, manages user permissions, integrates with third-party security apps, and records activity logs. Biostar 2 is used by many thousands of companies and organisations worldwide, including the UK’s Metropolitan Police, as a tool to control access to parts of secure facilities. Biostar 2 uses fingerprint scanning and recognition as part of this access control system.

What Happened?

Researchers working with cyber-security firm VPNMentor have reported that they were able to access data from Biostar 2 from 5 August until it was made private again on 13 August (Suprema were contacted by VPNMentor about the problem on 7 August). It is not clear how long before 5 August the data had been exposed online. The exposure of personal data to public access is believed to have been caused by the Biostar 2 product being placed on an open network.

In addition to more than one million fingerprint records being exposed, the VPNMentor researchers also claim to have found photographs of people, facial recognition data, names, addresses, unencrypted usernames and passwords, employment history details, mobile device and OS information, and even records of when employees had accessed secure areas.

VPNMentor claims that its team was able to access over 27.8 million records, a total of 23 gigabytes of data.

Affected

VPNMentor claims that many businesses worldwide were affected. In the UK, for example, VPNMentor claims that Associated Polymer Resources (a plastics recycling company), Tile Mountain (a home decor and DIY supplier), and medical supply store Farla Medical were among those affected.

It has been reported that the UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has said that it was aware of reports about Biostar 2 and would be making enquiries.

What Does This Mean For Your Business?

For companies and organisations using Biostar 2, this is very worrying and is a reminder of how data breaches can occur through third-party routes.

In this case, fingerprint records were exposed, and the worry is that this kind of data can never be secured again once it has been stolen. Also, the large amount of other personal employee data that was taken could not only affect individual businesses but could also mean that employees and clients could be targeted for fraud and other crimes e.g. phishing campaigns and even blackmail and extortion.

The breach might have been avoided had Suprema secured its servers with better protection measures, saved not actual fingerprints but a version that couldn’t be reverse-engineered, implemented better rules on its databases, and not left a system that didn’t require authentication open to the internet. Companies that are still using Biostar 2 and have concerns may now wish to contact Suprema for assurances about security.
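On the narrower point of the unencrypted usernames and passwords found in the database: credentials should only ever be stored as salted, slow hashes, so that a leak does not expose them directly. Below is a minimal TypeScript sketch using Node’s built-in crypto module; it illustrates the general practice, not Suprema’s system, and note that fingerprint templates need specialised protection schemes beyond ordinary hashing.

import { scryptSync, randomBytes, timingSafeEqual } from "crypto";

// Store a salted scrypt hash instead of the plain password.
function hashPassword(password: string): { salt: string; hash: string } {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return { salt, hash };
}

// Verify by re-deriving the hash and comparing in constant time.
function verifyPassword(password: string, salt: string, hash: string): boolean {
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}

const stored = hashPassword("correct horse battery staple");
console.log(verifyPassword("correct horse battery staple", stored.salt, stored.hash)); // true
console.log(verifyPassword("wrong guess", stored.salt, stored.hash)); // false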

Facial Recognition at King’s Cross Prompts ICO Investigation

The UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has said that it will be investigating the use of facial recognition cameras at King’s Cross by property development company Argent.

What Happened?

Following reports in the Financial Times newspaper, the ICO says that it is launching an investigation into the use of live facial recognition in the King’s Cross area of central London. It appears that the property development company Argent had been using the technology for an as-yet-undisclosed period, using an as-yet-undisclosed number of cameras. A reported statement by Argent (in the Financial Times) says that it had been using the system to “ensure public safety”, and that facial recognition is one of several methods the company employs to this end.

ICO

The ICO has said that, as part of its enquiry, as well as requiring detailed information from the relevant organisations (Argent in this case) about how the technology is used, it will also inspect the system and its operation on-site to assess whether or not it complies with data protection law.

The data protection watchdog has made it clear in a statement on its website that if organisations want to use facial recognition technology, they must comply with the law and do so in a fair, transparent and accountable way. The ICO will also require those companies to document how and why they believe their use of the technology is legal, proportionate and justified.

Privacy

The main concern for the ICO, and for privacy groups such as Big Brother Watch, is that people’s faces are being scanned to identify them as they lawfully go about their daily lives, all without their knowledge or understanding; this could be considered a threat to their privacy. Also, with GDPR in force, it is important to remember that a person’s face (if filmed, e.g. with CCTV) is part of their personal data, so the handling, sharing, and security of that data also become an issue.

Private Companies

An important area of concern to the ICO, in this case, is the fact that a private company is using facial recognition, because the use of this technology by private companies is difficult to monitor and control.

Problems With Police Use

Following criticism of police use of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee has recently called for a temporary halt in the use of facial recognition systems. This follows an announcement in December 2018 by the ICO’s head, Elizabeth Denham, that a formal investigation was being launched into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

What Does This Mean For Your Business?

The use of facial recognition technology is being investigated by the ICO and a government committee has even called for a halt in its use over several concerns. The fact that a private company (Argent) was found, in this case, to be using the technology has therefore caused even more concern and has highlighted the possible need for more regulation and control in this area.

Companies and organisations that want to use facial recognition technology should, therefore, take note that the ICO will require them to document how and why they believe their use of the technology is legal, proportionate and justified, and make sure that they comply with the law in a fair, transparent and accountable way.

Using GDPR To Get Partner’s Personal Data

A University of Oxford researcher, James Pavur, has explained how (with the consent of his partner) he was able to exploit rights granted under GDPR to obtain a large amount of his partner’s personal data from a variety of companies.

Right of Access

Mr Pavur reported that he sent out 75 Right of Access Requests/Subject Access Requests (SARs) to obtain the first pieces of information from companies, such as his partner’s full name, some email addresses and phone numbers. Mr Pavur reported using a fake email address to make the SARs.

SAR

A Subject Access Request (SAR), which is a legal right for everyone in the UK, is where an individual can ask a company or organisation, verbally or in writing, to confirm whether it is processing their personal data and, if so, can ask for a copy of that data, e.g. as a paper copy or spreadsheet. With a SAR, individuals have the legal right to know the specific purpose of any processing of their data, what type of data is being processed, who the recipients of that processed data are, how long that data is stored, how the data was obtained from them in the first place, and how that processed and stored data is being safeguarded. Under GDPR, individuals can make a SAR for free, although companies and organisations can charge “reasonable fees” if requests are unfounded or excessive (in scope), or where additional copies of data are requested beyond the original request.

Another 75 Requests

Mr Pavur reported that he was able to use the information he received back from the first 75 requests to send out another 75 requests. From the second batch of requests, Mr Pavur was able to obtain a large amount of personal data about his partner, including her social security number, date of birth, mother’s maiden name, previous home addresses, travel and hotel logs, her high school grades, passwords, partial credit card numbers, and some details about her online dating.

The Results

Of the targeted firms, 72% responded, and Mr Pavur reported that 24% of those that responded accepted an email address (a false one) and a phone number as proof of identity, and revealed his partner’s personal details on the strength of these. One company even revealed the results of a historic criminal background check.

Who?

According to Mr Pavur, the prevailing pattern was that large (technology) companies responded well to the requests, small companies ignored them, and mid-sized companies showed a lack of knowledge about how to handle and verify them.

What Does This Mean For Your Business?

The ICO recognises on its website that GDPR does not specify how to make a valid request: individuals can make a SAR to a company verbally or in writing, to any part of an organisation (including by social media), and it does not have to be made to a specific person or contact point. Such a request also doesn’t have to include the phrase ‘subject access request’ or cite Article 15 of the GDPR, but any request must make clear that the individual is asking for their own personal data. This means that although there may be some confusion about whether a request has actually been made, companies should at least ensure that they have identity verification and checking procedures in place before they send out personal data to anyone. Sadly, in the case of this experiment, the researcher was able to obtain a large amount of personal and sensitive data about his (very understanding) partner using a fake email address.

Businesses may benefit from looking at which members of staff regularly interact with individuals and offering specific training to help those staff members identify requests.

Also, the ICO points out that it is good practice to have a policy for recording details of the requests a business receives, particularly those made by telephone or in person, so that the business can check with the requester that its understanding of the request is correct. Businesses should also keep a log of verbal requests.
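As a simple illustration of that good practice, a request log needs only a few fields: who asked, when, through which channel, what they asked for, and whether identity has been verified. The TypeScript shape below is hypothetical, not an ICO-prescribed format.

// Hypothetical shape for a subject access request (SAR) log entry.
interface SarLogEntry {
  receivedOn: string;        // ISO date the request arrived
  requester: string;         // name the requester gave
  channel: "verbal" | "email" | "letter" | "social-media";
  summary: string;           // what the requester asked for, in our words
  identityVerified: boolean; // verified via documents, not just email/phone
  dueBy: string;             // statutory deadline (one calendar month)
}

const sarLog: SarLogEntry[] = [];

sarLog.push({
  receivedOn: "2019-09-02",
  requester: "J. Smith",
  channel: "verbal",
  summary: "Copy of all personal data we hold, asked at front desk",
  identityVerified: false, // must be verified before any data is released
  dueBy: "2019-10-02",
});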

Opting Out of People Reviewing Your Alexa Recordings

Amazon has now added an opt-out option for manual review of voice recordings (and their associated transcripts) taken through Amazon’s Alexa, but it has not stopped the practice of taking voice recordings to help develop new Alexa features.

Opt-Out Toggle

The opt-out toggle can be found in the ‘Manage How Your Data Improves Alexa’ section of your privacy settings, which you will have to sign in to Amazon to see. This section contains a “Help Improve Amazon Services and Develop New Features” setting with a toggle switch to its right-hand side; moving the toggle from the default ‘yes’ to the ‘no’ position will stop humans from reviewing your voice recordings.

Echo owners can see the transcripts and hear what Alexa has recorded of their voices by visiting the ‘Review Voice History’ part of the privacy section.

Why Take Recordings?

Amazon argues that training its Alexa digital voice assistant using recordings from a diverse range of customers can help to ensure that Alexa works well for all users, and those voice recordings may be used to help develop new features.

Why Manually Review?

Amazon says that manually reviewing recordings and transcripts is another method the company uses to help improve its services, and that only “an extremely small fraction” of the voice recordings taken are manually reviewed.

Google and Apple Have Stopped

Google has recently been forced to stop the practice of manually reviewing its audio snippets (in Europe) by the Hamburg data protection authority, which threatened to use Article 66 powers of the General Data Protection Regulation (GDPR) to stop Google from doing so. This followed a leak of more than 1,000 recordings to the Belgian news site VRT by a contractor working as a Dutch-language reviewer. It has been reported that VRT was even able to identify some of the people in the recorded clips.

Apple has also stopped the practice of manual, human reviewing of recordings and transcripts taken via Siri after a (Guardian) report revealed that contractors used by Apple had heard private medical information and even recordings of people having sex in the clips.  This was thought to be the result of the digital assistant mistaking another word for its wake word.

What Does This Mean For Your Business?

If you have an Amazon Echo and you visit the ‘Review Voice History’ section of your privacy page, you may be very surprised to see just how many recordings have been taken; the dates, times, and what has been said could even be a source of problems for those who have been recorded. Even though we understand that AI/machine-learning technology needs training in order to improve its recognition of, and response to, our questions, the fact that mistakes with wake words could lead to sensitive discussions being recorded and listened to by third-party contractors, and that voices could even be identified from those recordings, highlights a real threat to privacy and security, and a trade-off that many users may not be prepared to accept.

It’s a shame that mistakes and legal threats were the catalysts for stopping Google and Apple from using manual reviewing, and it is surprising that, in the light of their cases, Amazon is not stopping the practice as a default altogether but is merely including an opt-out toggle switch deep within the Privacy section of its platform.

This story is a reminder that although smart speakers and the AI behind them bring many benefits, attention needs to be paid by all companies to privacy and security when dealing with what can be very personal data.

Google Plugs Incognito Mode Detection Loophole With Chrome 76

Google has announced that with the introduction of Chrome 76 (at the end of July), it has plugged a loophole that enabled websites to tell when you were browsing in Incognito mode.

Incognito

Incognito mode in Chrome (private browsing) is really designed to protect the privacy of those using shared or borrowed devices, and to exclude certain activities from being recorded in their browsing histories. Also, less commonly, private browsing can be very important for people suffering political oppression or domestic abuse, for example, where there may be important safety reasons for concealing web activity.

Loophole Plugged

The loophole being plugged with the introduction of Chrome 76 relates to the FileSystem API. Chrome’s FileSystem API is disabled in Incognito mode to avoid leaving traces of activity on someone’s device, but websites checking for Incognito mode have still been able to detect that it is being used, because the disabled API returns an error message that confirms it. This has meant that Incognito browsing has not been technically incognito.

In Chrome 76, which has just been introduced, the behaviour of the FileSystem API has been modified to ensure that Incognito mode use can no longer be detected this way, and Google has stated that it will work to remedy any other means of detecting Incognito mode usage in Chrome that may emerge in future.
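For context, the loophole itself was simple. Before Chrome 76, a page could call the non-standard webkitRequestFileSystem API; in Incognito mode the call failed, so the error callback amounted to a confession. The browser-side TypeScript sketch below illustrates that pre-Chrome-76 detection technique, for illustration only; from Chrome 76 onwards the API behaves the same way in both modes, so the check no longer works.

// Pre-Chrome-76 Incognito check: the FileSystem API was disabled in
// Incognito, so hitting the error callback implied private browsing.
const requestFs =
  (window as any).webkitRequestFileSystem || (window as any).RequestFileSystem;

if (requestFs) {
  requestFs(
    (window as any).TEMPORARY, // temporary storage type
    1,                         // ask for a single byte of quota
    () => console.log("FileSystem available: probably not Incognito"),
    () => console.log("FileSystem blocked: probably Incognito")
  );
} else {
  console.log("FileSystem API not exposed in this browser at all");
}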

Metered Paywalls Affected

While this change may be a good thing for Chrome users, it is more bad news for web publishers with ‘metered paywalls’. These are web publishers that offer a certain number of free articles to view before a visitor must register and log in. These websites have already suffered from the ability of users to use Incognito mode to circumvent this system, and as a result, many of these publishers resorted to Incognito detection to stop people from circumventing their publishing system.  Stopping the ability to detect Incognito browsing with the introduction of Chrome 76 will, therefore, cause more problems for metered paywall publishers.

Google has said that although its News teams support sites with meter strategies and understand their need to reduce meter circumvention, any approach that’s based on private browsing detection undermines the principles of its Incognito Mode.

What Does This Mean For Your Business?

Plugging this loophole with the new, improved Chrome 76 is good news for users, many of whom may not have realised that Incognito mode was not as incognito as they had thought. Using Incognito mode on your browser, however, will only provide privacy on the devices you browse on and won’t stop many sites from still being able to track you.  If you’d like greater privacy, it may be a case of using another browser e.g. Tor or Brave, or a VPN.

For metered paywall publishers, however, the plugged loophole in Chrome 76 is not good news as, unless these publishers make changes to their current system and/or decide to go through the process of exploring other solutions with Google, they will be open to more meter circumvention.

Lancaster University Hit By “Sophisticated and Malicious Phishing Attack”

Lancaster University, which offers a GCHQ-accredited cyber-security course and has its own Cyber Security Research Centre, has been hit by what it has described as a “sophisticated and malicious phishing attack”, resulting in the leak of the personal data of new university applicants.

12,000+ Affected?

On the University’s website, it is stated that only “a very small number of students” actually had their records and ID documents accessed as a result of the attack; however, other estimates published by IT news commentators online, based on statistics compiled by UCAS, suggest that possibly over 12,000 people may have been affected.

Who?

The attack appears to have been focused on the new student applicant data records for 2019 and 2020.

What?

According to the university, the new applicant information which may have been accessed includes names, addresses, telephone numbers, and email addresses.

There have also been reports that, following the attack, fraudulent invoices have been sent to some undergraduate applicants.

Why?

Although very little information has been divulged about the exact nature of the attack, universities are known to be particularly attractive targets for phishing emails i.e. emails designed to trick the recipient into clicking on malicious links or transferring funds.  This is because educational institutions tend to have large numbers of users spread across many different departments, different facilities and faculties, and data is moved between these, thereby making admin and IT security very complicated.  Also, universities have a lot of valuable intellectual property as well as student and staff personal data within their systems which are tempting targets for hackers.

When?

Lancaster University says that it became aware of the breach on Friday 19th July, whereupon it established an incident team to handle the situation and immediately reported the incident to the Information Commissioner’s Office (ICO).

A criminal investigation led by the National Crime Agency’s (NCA) National Cyber Crime Unit (NCU) is now believed to be under way, and the university has been focusing efforts on safeguarding its IT systems and identifying and advising any students and applicants who have been affected.

US Universities & Colleges Hit Days Before

Just days before the attack on Lancaster University came to light, the U.S. Department of Education reported that a vulnerability in the Ellucian Banner System authentication software had led to 62 colleges or universities being affected.

What Does This Mean For Your Business?

For reasons already mentioned (see the ‘Why?’ section), schools, colleges and universities are prime targets for hackers, which is why many IT and security commentators think that the higher education sector should take cyber-security risks very seriously and make sure that training and software are put in place to enable a more proactive approach to attack prevention. Users, both students and staff, need to be educated about threats, and about how to spot suspicious communications by email or social media and what to do with them. Students, for example, need to be aware that during the summer months, when they are more stressed and awaiting news of applications, they may be more vulnerable to phishing attacks, and that they should only contact universities through a trusted, previously tried method, rather than relying on the contact information and links given in emails.

For Lancaster University, which has its own Cyber Security Research Centre and offers a GCHQ-accredited cybersecurity course, this attack, which has generated some bad publicity and may adversely affect some victims, is likely to be very embarrassing and may even deter some future applicants.

Lancaster University has advised applicants, students and staff to make contact (via email or phone) if they receive any suspicious communications.