Data Security

£500,000 Fine For Facebook Data Breaches

Sixteen months after the Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of the personal details of users with the political consulting firm Cambridge Analytica, the ICO has announced that Facebook will be fined £500,000 for data breaches.

Maximum

The amount of the fine is the maximum that can be imposed under the Data Protection Act 1998, the law in force at the time of the breaches (which pre-dated GDPR). Although it sounds like a lot, for a corporation valued at around $500 billion, and with $11.97 billion in advertising revenue and $4.98 billion in profit for the past quarter (mostly from mobile advertising), it remains to be seen how much of an effect it will have on Facebook.

Time Before Responding

Facebook has now been given time to respond to the ICO’s verdict before a final decision is made by the ICO.

Facebook has said, however, that it acknowledges that it should have done more to investigate claims about Cambridge Analytica and taken action back in 2015.

Reminder of What Happened

The fine relates to the harvesting of the personal details of 87 million Facebook users without their explicit consent, and the sharing of that personal data with London-based political consulting firm Cambridge Analytica, which is alleged to have used that data to target political messages and advertising during the 2016 US presidential election campaign.

Also, harvested Facebook user data was shared with AggregateIQ, a data company which worked with the ‘Vote Leave’ campaign in the run-up to the Brexit referendum.

The sharing of personal user data with those companies was exposed by former Cambridge Analytica employee and whistleblower Christopher Wylie. The resulting publicity caused public outrage, saw big falls in Facebook’s share value, brought apologies from its founder / owner, and saw insolvency proceedings (back in May) for Cambridge Analytica and its parent SCL Elections.

What About Cambridge Analytica?

Although Facebook has been given a £500,000 fine, Cambridge Analytica no longer exists as a company. The ICO has indicated, however, that it is still considering taking legal action against the company’s directors. If successful, a prosecution of this kind could result in convictions and an unlimited fine.

AggregateIQ

As for Canadian data analytics firm AggregateIQ, the ICO is reported to still be investigating whether UK voters’ personal data provided by the Brexit referendum’s Vote Leave campaign was transferred and accessed outside the UK, and whether this amounted to a breach of the Data Protection Act. The ICO is also reported to be investigating the degree to which AIQ and SCL Elections shared UK personal data, and to have served an enforcement notice forbidding AIQ from continuing to make use of a list of UK citizens’ email addresses and names that it still holds.

Worries About 11 Main Political Parties

The ICO is also reported to have written to the UK’s 11 main political parties, asking them to have their data protection practices audited because it is concerned that the parties may have purchased certain information about members of the public from data brokers, who might not have obtained consent.

What Does This Mean For Your Business?

When this story originally broke, it was a wake-up call about what can happen to the personal data that we trust companies / corporations with, and it undoubtedly damaged trust between Facebook and its users to a degree. It’s a good job that the ICO is there to follow things up on our behalf because, for example, a Reuters/Ipsos survey conducted back in April found that, even after all the publicity surrounding the Facebook and Cambridge Analytica scandal, most users remained loyal to the social media giant.

Also, the case has raised questions about how our data is shared and used for political purposes, and how the use and sharing of our data to target messages can influence the outcome of elections and, therefore, the whole economic and business landscape. This has led to a call for the UK government to step in and introduce a code of practice limiting how personal information can be used by political campaigns before the next general election.
Facebook has recently been waging a campaign, including heavy television advertising, to convince us that it has changed and is now more focused on protecting our privacy. Unfortunately, this idea has been challenged by the recent ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council, which accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not actually benefit their privacy.

Tech Giant GDPR Privacy Settings ‘Unethical’ Says Council

The ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council has accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not benefit their privacy.

Illusion of Control

The report alleges that, far from actually giving users more control over their personal data (as laid out by GDPR), the tech giants may simply be giving users the illusion that this is happening. The report points to the possible presence of practices such as:

– Facebook and Google making users who want the privacy-friendly option go through a significantly longer process (privacy intrusive defaults).

– Facebook, Google and Windows 10 using pop-ups that direct users away from the privacy-friendly choices.

– Google presenting users with a hard-to-use dashboard with a maze of options for their privacy and security settings. For example, on Facebook it takes 13 clicks to opt out of authorising data collection (opting in can take just one).

– Making it difficult to delete data that’s already been collected. For example, deleting data about location history requires clicking through 30 to 40 pages.

– Google not warning users about the downside of personalisation e.g. telling users they would simply see less useful ads, rather than mentioning the potential to be opted in to receive unbalanced political ad messages.

– Facebook and Google pushing consumers to accept data collection e.g. with Facebook stating how, if users keep face recognition turned off, Facebook won’t be able to stop a stranger from using the user’s photo to impersonate them, while not stating how Facebook will use the information collected.

Dark Patterns

In general, the report criticised how the use of “dark patterns” (misleading wording, default settings that are intrusive to privacy, settings that give users an illusion of control, hiding privacy-friendly options, and presenting “take-it-or-leave-it” choices) could be leading users to make choices that actually stop them from exercising all of their privacy rights.

Big Accept Button

The report, by Norway’s consumer protection watchdog, also notes how the GDPR-related notifications have a large button for consumers to accept the company’s current practices, which could appear to many users to be far more convenient than searching for the detail to read through.

Response

Google, Facebook and Microsoft are all reported to have responded to the report’s findings by issuing statements focusing on the progress and improvements they’ve made towards meeting the requirements of the GDPR to date.

What Does This Mean For Your Business?

GDPR was supposed to give EU citizens much more control over their data, and the perhaps naive expectation was that companies with a lot to lose (in fines for non-compliance, and in reputation), such as the big tech and social media companies, would simply fall into line and afford us all of those new rights straight away.

The report by the Norwegian consumer watchdog appears to be more of a reality check. It shows how our personal data is a valuable commodity to the big tech companies and that, according to the report, those companies are willing to manipulate users and give the illusion that they are following the rules without actually doing so. The report appears to indicate that these large corporations are willing to make consumers fight for rights that GDPR has already granted them.

New, Improved Wi-Fi Security Standard WPA3 Starts Rollout

The non-profit, global trade group the Wi-Fi Alliance has announced the commencement of the rollout of the new Wi-Fi Protected Access (WPA) protocol, WPA3, which should bring improvements in authentication and data protection.

What’s Been The Problem?

There are estimated to be around 9 billion Wi-Fi devices in use in the world, but the current security protocol, WPA2, dates back to 2004. The rapidly changing security landscape has, therefore, left many Wi-Fi devices vulnerable to new methods of attack, fuelling the calls for the fast introduction of a new, more secure standard.

WPA2 Vulnerabilities

For example, WPA2, which is mandatory for Wi-Fi Certified devices, is known to be vulnerable to offline dictionary attacks to guess passwords. This is where an attacker can have as many attempts as they like at guessing Wi-Fi credentials without being on the same network: the perpetrator can passively capture an exchange, or interact with a user just once, and then try candidate passwords at leisure. Using Wi-Fi on public networks with the current protocol has also left people vulnerable to ‘man-in-the-middle’ attacks and ‘traffic sniffing’.

One key contributor to the vulnerability of Wi-Fi under the WPA2 standard is the use of obvious or simple passwords by homes and businesses.
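To see why weak passphrases are so dangerous under WPA2, note that WPA2-PSK derives its master key from the passphrase and the network name using PBKDF2. A minimal Python sketch follows; in a real attack the candidates are verified against a captured four-way handshake rather than compared to the key directly, and the passphrase, SSID and word list here are made up for illustration:

```python
import hashlib

def wpa2_pmk(passphrase, ssid):
    # WPA2-PSK derives its Pairwise Master Key (PMK) with
    # PBKDF2-HMAC-SHA1: 4096 iterations, salted only by the SSID.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(captured_pmk, ssid, wordlist):
    # Offline guessing: every candidate can be tried at leisure,
    # with no further contact with the target network.
    for candidate in wordlist:
        if wpa2_pmk(candidate, ssid) == captured_pmk:
            return candidate
    return None

# A simple passphrase falls to even a tiny word list.
target = wpa2_pmk("password123", "HomeWiFi")
print(dictionary_attack(target, "HomeWiFi", ["letmein", "qwerty", "password123"]))
```

Because the salt is just the (public) SSID, an attacker can even pre-compute keys for common network names, which is why a strong, unusual passphrase matters so much under WPA2.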

What’s So Good About The New Standard?

The new WPA3 standard has several advantages. These include:

  • The fact that it has been designed with the security challenges of businesses in mind, and has two modes of operation: Personal and Enterprise.
  • In Enterprise mode, the equivalent of 192-bit cryptographic strength, thereby offering a higher level of security than WPA2.
  • The addition of Easy Connect, which allows a user to add any device to a Wi-Fi network using a secondary device already on the network via a QR code. This makes the connection more secure and helps simplify IoT device protection.
  • WPA3-Personal mode offers enhanced protection against offline dictionary attacks and password guessing attempts through the introduction of a feature called Simultaneous Authentication of Equals (SAE). Some commentators have suggested that it ‘saves users from themselves’ by offering improved security even if a user chooses a simpler password. It also offers ‘forward secrecy’ to protect communications even if a password has been compromised.
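The ‘forward secrecy’ in the last point comes from negotiating a fresh key for every session rather than deriving keys from the long-term password alone. A toy Python sketch of an ephemeral Diffie-Hellman exchange illustrates the idea; the parameters are deliberately tiny and this is not SAE itself (whose Dragonfly handshake is password-authenticated), just a sketch of the forward-secrecy principle:

```python
import secrets

# Toy Diffie-Hellman parameters: far too small for real use.
P = 2**127 - 1  # a Mersenne prime
G = 3

def session_key():
    # Each side draws a fresh ephemeral secret for every session.
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    A = pow(G, a, P)          # sent over the air by the client
    B = pow(G, b, P)          # sent over the air by the access point
    k_client = pow(B, a, P)   # client's view of the shared key
    k_ap = pow(A, b, P)       # access point's view of the shared key
    return k_client, k_ap

k1_client, k1_ap = session_key()
k2_client, k2_ap = session_key()
assert k1_client == k1_ap and k2_client == k2_ap  # both sides agree each session
assert k1_client != k2_client  # sessions use unrelated keys, so compromising
                               # one later does not expose earlier traffic
```

Because the ephemeral secrets are discarded after each session, recording today’s traffic and learning the Wi-Fi password tomorrow does not let an attacker decrypt what was recorded.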

In Tandem For The Time Being

The current standard, WPA2, will run in tandem with the new WPA3 standard until WPA3 becomes more widely adopted.

Protection Against Passive Eavesdropping

In June, the Wi-Fi Alliance also announced the rollout of Wi-Fi Enhanced Open, a certification program. This provides protection on unauthenticated networks e.g. in coffee shops, hotels and airports, guarding connections against passive eavesdropping without needing a password by providing each user with unique, individual encryption that secures the traffic between their device and the Wi-Fi network.

What Does This Mean For Your Business?

Wi-Fi security and the security of a growing number of IoT devices has long been a source of worry to individuals and businesses, particularly as the nature and variety of attack methods have evolved while the current security standard is 14 years old.

The introduction of a new, up-to-date standard / protocol which offers greater security, has been designed with businesses in mind, offers more features, and protects the user from their own slack approach to security is very welcome. WPA3 will be particularly welcomed by those who use networks to send and receive very sensitive data, such as the public sector or financial industry.

Samsung Phones Sending Photos Without Permission

The Samsung Galaxy S9, Galaxy S9+ and Note 8 are all reported to have been recently affected by a bug in the Samsung Messages app that sends out photos from the user’s gallery without their permission … to random contacts.

What Happens?

According to Samsung phone users on social media and the company’s forum, some users have been affected by a bug in the default texting app on Galaxy, Samsung Messages. Reports indicate that the bug causes Samsung Messages to text photos stored in a user’s gallery to a random person listed as a contact. The user is not informed that the pictures have been sent, or to whom, and there has even been one reported complaint that a person’s whole gallery was sent to a contact in the middle of the night!

Why?

Although there is no conclusive evidence concerning the cause, online speculation has centred on the bug being related to the interaction between Samsung Messages and recent RCS (Rich Communication Services) profile updates that have rolled out on carriers including T-Mobile. These updates have been rolled out to add updated and new features to the outdated SMS protocol e.g. better media sharing and typing indicators.

Acknowledged

Samsung is reported to have acknowledged the reports of problems and is said to be looking into them. Samsung is also reported to have urged concerned customers to contact it directly on 1-800-SAMSUNG, and the company has reportedly been in contact with T-Mobile about the issue. T-Mobile is recorded as saying that it is not its issue.

What Can You Do?

As well as contacting Samsung, and in the absence of any definitive news of a fix as yet, there are two main possible fixes that Samsung owners can pursue. These are:

  1. To go into the phone’s app settings and revoke Samsung Messages’ ability to access storage. This should stop Messages from sending photos or anything else stored on the device.
  2. Switch to a different texting app e.g. Android Messages or Textra. There are no (known) reports of these being affected by the same bug.

What Does This Mean For Your Business?

People pay a lot of money to get the latest phones and to get the right contracts to allow for the high volume of communications associated with business use. It is (at the very least) annoying, but more generally scary and potentially damaging that personal, private image files can be randomly sent. These photos could, for example, contain commercially sensitive information that could put a company’s competitive advantage at risk if sent to the wrong person. Also, some photos could cause embarrassment for the user and / or the subject of the photo, and could damage business and personal relationships if they fell into the wrong hands. Some photos sent to the wrong person, as well as compromising privacy, could pose serious security risks.

At a time when we acknowledge that photos of ourselves / our faces stored by e.g. CCTV cameras are our personal data, Samsung could find itself on the wrong end of GDPR-related and other lawsuits if found to be directly responsible for the bug and its results.

Tesla Traps Tripp

California-based vehicle tech corporation Tesla is suing a former employee, whom some saw simply as a whistleblower, over alleged acts of industrial espionage.

Named

The former Tesla technician who stands accused by Tesla boss Elon Musk of industrial espionage has been named as Martin Tripp. The allegations made against Mr Tripp include that he was hacking and stealing company secrets, and that he wrote software that was designed to aid in the theft of photos and videos.

Tesla has also alleged that Mr Tripp was partly motivated to commit malicious acts against the company after he failed to get a promotion. Tesla has filed a federal lawsuit against him.

Tesla is also reported as saying that 40-year-old forces veteran Tripp made false claims to the media about the information he (allegedly) stole, particularly where claims about punctured battery cells, excess scrap material and manufacturing delays are concerned.

Whistleblower?

Far from being an alleged criminal who meant the company harm, Mr Tripp claims that he is simply a whistleblower whom the company is trying to get rid of in order to cover up details about products / components that could damage the company’s reputation if they were known.

For example, Mr Tripp claims that he has simply been trying to expose “some really scary things” at Tesla, including punctured batteries being used in vehicles. Mr Tripp has also alleged that he became disillusioned with Tesla when (as he alleges) he saw how Elon Musk was lying to investors about how many cars they were making.

Mr Tripp has also been reported as saying that he didn’t write any software to aid the theft of photos and videos because he has no patience for coding, and that he didn’t care about failing to get a promotion.

Mr Tripp is seeking legal protection as a whistleblower.

Silencing a Scapegoat?

Mr Tripp has been reported as saying that he is being made a scapegoat because he provided information that was true, that Tesla is doing everything it can to silence him, and that he feels he has had no rights as a whistleblower.

The local Sheriff’s office is reported as announcing that there is no credible threat to Tesla’s lithium-ion battery factory, known as the Gigafactory.

Mr Tripp has been reported as saying that he turned whistleblower after his concerns were not taken seriously by anyone in the company.

What Does This Mean For Your Business?

It would certainly not be unheard-of for a disgruntled employee / former employee to pose a security risk or commit acts of sabotage. For example, back in 2014, Andrew Skelton, who was an auditor at the head office of Morrisons (supermarket chain) in Bradford, leaked the personal details of almost 100,000 staff. Mr Skelton is believed to have deliberately stolen and leaked the data in a move to get back at the company after he was accused of dealing in legal highs at work.

We are also familiar with how difficult companies / organisations and other interested parties can make it for ‘whistleblowers’. For example, media reports describe how Dr Hayley Dare received poison-pen letters and was dismissed from a 20-year unblemished career with a three-line email after raising concerns over a patient’s safety with her employer, an NHS Trust.

In the case of Tesla, it is currently not possible to say whether Mr Tripp is a whistleblower or a disgruntled former employee with malicious intent. What it does remind us, though, is that corporate / company culture should be such that employees feel able to express their concerns and are listened to, and that this is viewed as a positive way to find areas for improvements and modifications that could actually help a company in the long run.

The Tesla story should also remind companies to plug some basic security loopholes in IT systems when employees leave or are dismissed. This includes simply changing passwords and access rights, and monitoring systems to ensure that nothing untoward is happening.

GDPR Exemption Sought

It has been reported that financial market regulators from the US, the UK and Asia are pressing for an exemption from GDPR.

Growing Calls For Exemption

Even though GDPR only came into force a little over a month ago (May 25th), financial regulators from several countries, most notably the US, have been pressing for several years for an exemption to be built in, and have hosted multiple meetings about the matter on both sides of the Atlantic.

What’s The Problem?

Before GDPR, financial regulators could freely share vital information e.g. bank and trading account data, to advance misconduct probes. Now that GDPR is in force, regulators argue that the lack of an exemption means that international probes and enforcement actions in cases involving market manipulation and fraud could be hampered.

Regulators say that they are particularly concerned that U.S. investigations into crypto-currency fraud and market manipulation (where many of the actors are based overseas) could be at risk. Without an exemption, regulators say, cross-border information sharing could be challenged because some countries’ privacy safeguards now fall short of those offered by the EU under GDPR.

Seeking An “Administrative Arrangement”

The form of exemption that regulators are reported to be seeking is a formal “administrative arrangement” with the Brussels-based European Data Protection Board (EDPB), headed by Andrea Jelinek. The written arrangement would clarify if and how the public interest exemption can be applied to their cross-border information sharing.

Which Regulators?

Reports indicate that the regulators involved in discussions about getting an exemption include the EU’s European Securities and Markets Authority (ESMA), the U.S. Commodity Futures Trading Commission (CFTC), the Securities and Exchange Commission (SEC), the Ontario Securities Commission (OSC), the Japan Financial Services Agency (FSA), Britain’s Financial Conduct Authority (FCA), and the Hong Kong Securities and Futures Commission (SFC).

Why Not?

The worry from the EDPB is that granting exemptions could lead to the illegitimate circumventing and watering down of the new GDPR privacy safeguards, now among the toughest in the world. This, in turn, could lead to harm to EU citizens, which is exactly the opposite of the reason for introducing GDPR.

The matter has, however, been complicated by the fact that regulators’ slow response to the 2007-2009 global financial crisis was partly blamed on poor cross-border coordination, which has since been improved. Better information sharing after the crisis is reported to have led to billions of dollars in fines for banks e.g. for trying to rig Libor interest rate benchmarks.

What Does This Mean For Your Business?

A financial crisis (e.g. involving bad behaviour by banks) can create serious knock-on costs and problems for businesses worldwide, and it is, therefore, possible to see why financial regulators feel they need an exemption so that they can continue to share information which will ultimately be in the interest of business and the public. It is likely, therefore, that discussions will continue for some time yet to try to find a way to grant exemptions in certain circumstances.

The contrary view is that granting exemptions will water down legislation that was designed to offer stronger protection to us all, potentially putting EU citizens at risk, and allowing organisations that we can’t effectively monitor to simply circumvent the new law and behave how they like. This could undermine the privacy and rights of EU citizens.

Calls to Stop Storing of Personal Communications Data and Voiceprints

Privacy groups have led calls to halt the blanket collection and storing of communications data in the EU area, and the creation and storing of the “audio signatures” of 5.1 million people by HM Revenue and Customs (HMRC).

Collection of Communications Data

The privacy groups Privacy International, Liberty, and Open Rights Group, have filed complaints to the European Commission which call for EU governments to stop making companies collect and store all communications data. Their complaints have also been echoed by dozens of community groups, non-governmental organisations (NGOs), and academics.

What’s The Problem?

The main complaint is that communications companies in EU states indiscriminately collect and retain all of our communications data. This includes the details of all calls, texts and so forth (i.e. who with, dates, times etc).

The privacy groups and their supporters argue that not only does this amount to a form of intrusive surveillance, but that the practice was actually ruled unlawful by the Court of Justice of the European Union (CJEU) in two judgments in 2014 and 2016.

Privacy groups have expressed concern that some companies in some EU states have tried to circumvent the CJEU judgements, and the CJEU have clearly stated that general and indiscriminate retention of communications data is disproportionate and can’t be justified.

In the UK, for example, the intelligence agencies collect details of thousands of calls daily, but under the CJEU judgements, this amounts to breaking the law.

HMRC Collecting Recordings of Voices

Perhaps even more shocking is the news this week that, according to privacy group Big Brother Watch, the UK HM Revenue and Customs (HMRC) has a Voice ID system that has collected 5.1 million audio signatures.

The accusation is that HMRC is creating biometric ID cards or voiceprints by the back door. These voiceprints could conceivably be used by government agencies to identify UK citizens across other areas of their private lives.

Big Brother Watch has also expressed concern that customers are not given the choice to opt out of the use of this system.

Helpful and Secure

HMRC, which launched the Voice ID scheme last year, asks callers to repeat the phrase “my voice is my password” to register and access their tax details, and says that the system has been very popular with customers. HMRC has also said that the 5 million+ voice recordings that it already has are stored securely.

Privacy campaigners are calling for the deletion of the voiceprints that are currently stored, and for a different system to be implemented, or to at least allow customers to opt out of Voice ID and to be able to use an alternative method.

What Does This Mean For Your Business?

Businesses may be very aware, after having to adjust their own systems to be compliant with the recently introduced GDPR, that all EU citizens should now have more rights over what happens to their personal data. The term ‘personal data’ in the GDPR sense now covers things like our images on CCTV footage, and should, therefore, cover recordings of our personal conversations and biometric data such as recordings of our voices / voiceprints / audio signatures.

While we may accept that there are arguments for monitoring our communications data e.g. fighting terrorism, many people clearly feel that the blanket collection of all communications data, not just that of suspects, is a step too far, is an invasion of privacy, and has echoes of ‘big brother’.

Biometrics e.g. using a fingerprint or face-print to access a phone, or as part of security to access a bank account, is now becoming more commonplace, and can be a helpful, more secure way of validating / authenticating access. Again, images of our faces, our fingerprints, and our audio signatures (in the case of HMRC) are our personal data, and it is right that we would want them to be secure and, as with GDPR, used only for the one purpose for which we have given consent, not passed secretly among states and unknown agencies. Also, the idea that we can opt in or out of such systems, and be given a choice of which system we use i.e. not be forced to submit a voice recording, is an important issue, and one that many thought GDPR would address.

As more and more biometric systems come into use in the future, legislation will, no doubt, need to be updated again to take account of the changes.

Appeal Dismissed After Asylum Seeker Data Breach

An appeal by the UK Home Office seeking to limit the number of potential claimants from a 2013 data breach, in which an accidentally uploaded spreadsheet exposed the confidential information and personal data of asylum applicants and their family members, has been dismissed.

What Happened?

Back in 2013, the Home Office is reported to have uploaded a spreadsheet to their website. The spreadsheet should have simply contained general statistics about the system by which children who have no legal right to remain in the UK are returned to their country of origin (known as ‘the family returns process’).

Unfortunately, this spreadsheet also contained a link to a different downloadable spreadsheet that displayed the actual names of 1,598 lead applicants for asylum or leave to remain. It also contained personal details such as the applicants’ ages, nationality, the stage they had reached in the process and the office that dealt with their case. This information could also potentially be used to infer where they lived.

The spreadsheet is reported to have been available online for almost two weeks during which time the page containing the link was accessed from 22 different IP addresses and the spreadsheet was downloaded at least once. The spreadsheet was also republished to a US website, and from there it was accessed 86 times during a period of almost one month before it was finally taken down.

For those claiming asylum e.g. because of persecution in the home country that they had escaped from, this was clearly a very distressing and worrying situation.

Damages

In the court case that followed in June 2016, the Home Office was ordered to pay six claimants a combined total of £39,500 for the misuse of private information and breaches of the Data Protection Act (“DPA”). The defendants conceded that their actions amounted to a misuse of private information (“MPI”) and breaches of the DPA.

The Home Office did, however, lodge an appeal in an apparent attempt to limit the number of other potential claims for damages.

Appeal Dismissed

The appeal by the Home Office was dismissed by the three Appeal Court judges, meaning that both the named applicants and their wives (if proof of ‘distress’ could be shown) could sue for both the common law and statutory torts. This was because the judges said that the processing of data in the name of a claimant about his family members was just as much the processing of their personal data as his, meaning that their personal and confidential information had also been misused.

Not The First Time

The Home Office appears to have been the subject of similar incidents in the past. For example, back in January the Home Office paid £15,500 in compensation after admitting handing over sensitive information about an asylum seeker to the government of his Middle East home country, thereby possibly endangering his life and that of his family.

The handling of the ‘Windrush’ cases, which has recently made the headlines, has also raised questions about the quality of decision-making and the processes in place when it comes to matters of immigration.

What Does This Mean For Your Business?

In this case, it is possible that those individuals whose personal details were exposed would have experienced distress, and that the safety of them and their families could have been compromised as well as their privacy. This story indicates the importance of organisations and businesses being able to correctly and securely handle the personal data of service users, clients and other stakeholders. This is particularly relevant since the introduction of GDPR.

It is tempting to say that this case illustrates that no organisation is above the law when it comes to data protection. However, it was announced in April that the Home Office will be granted data protection exemptions via a new data protection bill. The exemptions could deprive applicants of a reliable means of obtaining files about themselves from the department through ‘subject access requests’. It has also been claimed that the new bill will mean that data could be shared secretly between public services, such as the NHS, and the Home Office, more easily. Some critics have said that the bill effectively exempts immigration matters from data protection. If this is so, it goes against the principles of accountability and transparency that GDPR is based upon. It remains to be seen how this bill will progress and be challenged.

AI Creates Phishing URLs That Can Beat Auto-Detection

A group of computer scientists from Florida-based cyber security company, Cyxtera Technologies, are reported to have built machine-learning software that can generate phishing URLs that can beat popular security tools.

Look Legitimate

Using the Phishtank database (a free community site where anyone can submit, verify, track and share phishing data), the scientists built DeepPhish, machine-learning software that is able to create URLs for web pages that appear to be, but are not, legitimate login pages for real websites.

In fact, the URLs, which can fool security tools, lead to web pages that collect any entered usernames and passwords for malicious purposes, e.g. to hijack accounts at a later date.

DeepPhish

The so-called ‘DeepPhish’ machine-learning software that produced the fake but convincing URLs is an AI algorithm. It generated the URLs by learning the effective patterns used by threat actors and using those patterns to create new, previously unseen, and effective attacks.

Can Increase The Effectiveness of Phishing Attacks

Using Phishtank and the DeepPhish AI algorithm in tests, the scientists found that two previously identified attackers could increase the effectiveness of their phishing attacks from 0.69% to 20.9%, and from 4.91% to 36.28%, respectively.

Training The AI Algorithm

The effectiveness of AI algorithms is improved by ‘training’ them. In this case, the training involved the team of scientists first inspecting more than a million URLs on Phishtank. From these, the team were able to identify three different phishing attacks that had generated web pages to steal people’s credentials. These web addresses were then fed into an AI phishing-detection algorithm to measure how effective the URLs were at bypassing a detection system.

The team then fed all the text from the effective, malicious URLs into a Long Short-Term Memory (LSTM) network so that the algorithm could learn the general structure of effective URLs and extract relevant features.

All of this enabled the algorithm to learn how to generate the kind of phishing URLs that could beat popular security tools.
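The reported DeepPhish system used an LSTM, but the underlying idea — learn the character-level structure of known-effective URLs, then sample new URLs from that learned structure — can be illustrated with a far simpler stand-in. The toy sketch below uses a character-level Markov chain instead of an LSTM, and the training URLs are invented examples, not real Phishtank data:

```python
import random
from collections import defaultdict

# Toy illustration of the generative approach described above: learn which
# characters tend to follow which in a corpus of "effective" URLs, then
# sample new URL-like strings from those learned transitions. (DeepPhish
# reportedly used an LSTM; a Markov chain is a much simpler stand-in.)

def train(urls):
    """Record, for each character, the characters that follow it."""
    transitions = defaultdict(list)
    for url in urls:
        for current, nxt in zip(url, url[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start="h", max_len=40, rng=None):
    """Sample a new URL-like string one character at a time."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    out = [start]
    while len(out) < max_len:
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

# Invented, illustrative "training" URLs (not real phishing data).
corpus = [
    "http://secure-login.example-bank.com/verify",
    "http://account-update.example-pay.com/signin",
    "http://login-confirm.example-mail.com/reset",
]
model = train(corpus)
generated = generate(model)
print(generated)
```

A real attack generator would need a sequence model such as an LSTM to capture longer-range URL structure; the point here is only the train-then-sample loop.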

What Does This Mean For Your Business?

AI offers some exciting opportunities for businesses to save time and money, and improve the effectiveness of their services. Where cyber-security is concerned, AI-enhanced detection systems are more accurate than traditional manual classification, and the use of intelligent detection systems has enabled the identification of threat patterns and the detection of phishing URLs with 98.7% accuracy, thereby giving the battle advantage to defensive teams.

However, it has been feared for some time that if cyber-criminals were able to use well-trained and sophisticated AI systems to defeat both traditional and AI-based cyber-defence systems, this could pose a major threat to Internet and data security, and could put many businesses in danger.

The tests by the Florida-based cyber-security scientists show only modest success rates in generating defence-beating phishing URLs. This is a good thing for now, because it suggests that most cyber-criminals, with fewer resources than the researchers, may not yet be able to launch fully effective AI-based attacks. The hope is that the makers of detection and security systems will be able to use AI to stay one step ahead of attackers.

State-sponsored attackers, however, may have many more resources at their disposal, and it is highly likely that AI-based attack methods are already being used by state-sponsored players. Unfortunately, state-sponsored attacks can cause a lot of damage in the business and civilian worlds.

Domain Names & GDPR

A recent German court ruling confirms that GDPR also applies to personal information held in the worldwide whois service, and could mean that domain name admin and tech contact details no longer need to be collected because of the GDPR ‘data minimisation’ principle.

Up Until Now

Until now, ICANN, the Internet Corporation for Assigned Names and Numbers, has required its accredited domain registrars to collect and store certain details of people who register / purchase domain names. These details include the owner’s name and address, and the name, postal address, e-mail address, telephone number, and (where available) fax number of the domain’s technical and administrative contacts. Many of these may, in fact, be the same person.

No More Collecting and Storing of Contact Details

The recent German court ruling came about because the German registrar EPAG Domain Services invoked one important aspect of GDPR, which came into force on May 25th: the principle of data minimisation.

Under this key GDPR principle, personal data collected by companies should be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. In other words, under GDPR, companies should only collect the personal data that is absolutely necessary to provide the service.
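In code terms, data minimisation amounts to keeping only the fields actually needed for the stated purpose and discarding the rest at the point of collection. The sketch below illustrates the idea; the field names are invented for illustration and are not ICANN’s or EPAG’s actual data schema:

```python
# Hypothetical registration form submission; field names are invented
# for illustration, not a real registrar schema.
submitted = {
    "owner_name": "A. Example",
    "owner_address": "1 Example Street",
    "owner_email": "owner@example.com",
    "admin_contact_name": "A. Example",
    "admin_contact_phone": "+49 30 000000",
    "tech_contact_name": "A. Example",
    "tech_contact_fax": "+49 30 000001",
}

# Under the data-minimisation principle, only the fields necessary to
# register and operate the domain would be retained.
NECESSARY_FIELDS = {"owner_name", "owner_address", "owner_email"}

def minimise(record, necessary):
    """Keep only the personal data needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in necessary}

stored = minimise(submitted, NECESSARY_FIELDS)
print(stored)  # admin/tech contact details are never stored
```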

EPAG used this GDPR principle to argue that it no longer needed or wanted to collect the personal details of the technical and administrative contacts of domains, although it would still be happy to collect the personal details of the actual domain name owners.

ICANN Still Wanted Details Collected

ICANN didn’t agree with EPAG, and sought an injunction to force EPAG either to continue collecting administrative and technical contact details, or to pay a €250,000 (US$291,000) fine.

The court came down on EPAG’s side, and refused to grant the injunction on the grounds that there was no evidence that the extra information was needed, especially since the same person could be listed as the owner, technical, and administrative contact.

ICANN’s Own Policy Proposal

ICANN had already published its own temporary policy to cover how information gathered by registrars should be made publicly available through the global whois service. ICANN’s policy was for tiered / layered access to personal information, limiting it to users with a legitimate and proportionate purpose e.g. law enforcement, competition regulation, consumer protection or rights protection.
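A tiered / layered access model of the kind described can be pictured as a lookup that redacts personal fields for the public tier and returns the full record only to vetted requesters with a legitimate purpose. The tier names and fields below are invented for illustration, not ICANN’s actual policy mechanics:

```python
# Sketch of tiered / layered whois access along the lines ICANN proposed:
# public queries see a redacted record; accredited requesters with a
# legitimate, proportionate purpose (e.g. law enforcement) see the full one.
# Tier names and fields are invented for illustration.

FULL_RECORD = {
    "domain": "example.com",
    "owner_name": "A. Example",
    "owner_email": "owner@example.com",
}

PUBLIC_FIELDS = {"domain"}  # non-personal data only

def whois_lookup(record, requester_tier):
    """Return the record filtered according to the requester's access tier."""
    if requester_tier == "accredited":
        return dict(record)
    # Default public tier: personal details redacted.
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

print(whois_lookup(FULL_RECORD, "public"))
print(whois_lookup(FULL_RECORD, "accredited"))
```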

Irony

One ironic aspect of the court’s ruling is that ICANN itself doesn’t register any personal details for administrative and technical contacts, and only lists a single number for both contacts’ phone and fax, which turns out to be the main number for its network operations centre. It could be argued that this is data minimisation in action from a company that appears to have argued against it.

What Does This Mean For Your Business?

This story is a practical example of how GDPR could affect aspects of company operations that may not really have been considered until now. It shows how current ways of doing things can be challenged relatively easily in some courts, with results that could spread across a whole industry.

If the ruling in this case is taken on board in other European countries, e.g. most other EU countries, it could save domain registrars some time, and could cut through bureaucracy while protecting privacy at the same time.

It is still early days for GDPR, and there are likely to be many different challenges and changes to come across many industries as a result.