GDPR

Fake News Crowding Threat Outlined

UK MPs in the Digital, Culture, Media and Sport Committee (DCMSC) have been investigating the challenges and the potential threat to democracy posed by ‘fake news’ crowding out real news, and have published their findings in “Disinformation and ‘fake news’: Interim Report”.

Difficult To Identify & ‘Crowding Out’ Real News

Tory MP Damian Collins made the news this week by highlighting one of the main challenges: that people struggle to identify ‘fake news’. The DCMSC report focuses on how this challenge has been capitalised on by those seeking to influence elections.

The government is also concerned that the sheer volume of disseminated misinformation / fake news is beginning to crowd out real news.

UK Legal Framework Not Fit To Cope

The main points of the report are that fake news poses a threat to democracy, that the UK legal framework is not currently fit to cope with it, and that action needs to be taken by the Government and other regulatory agencies to build resilience against misinformation and disinformation.

The DCMSC Report

The 89-page report, which has been published online at https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/363.pdf, covers the definition, role and legal responsibilities of tech companies; data targeting, based around the Facebook, GSR and Cambridge Analytica allegations; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy (and how it should be made the fourth pillar of education alongside reading, writing and maths).

Background

Some of the more worrying examples of the influence of fake news and the interests of some of the players considered by the government committee included:

– Facebook and Cambridge Analytica’s harvesting and sharing of the personal data of 87 million people to influence the outcome of the US 2016 presidential election and the UK Brexit referendum.

– Political donor Arron Banks being accused of misleading MPs about his meetings with the Russian Embassy, and of walking out of an evidence session to avoid scrutiny on the topic.

– Facebook’s deployment of its ‘Free Basics’ service (data-free Facebook access) in Burma, which was found by the United Nations to have played a key role in stirring up hatred against the Rohingya Muslim minority in Rakhine State, partly because people could only access news and content via Facebook.

Social Media Companies Made Liable?

The report also contains a recommendation that social media companies should be defined by a new category, i.e. neither simply a ‘platform’ nor a ‘publisher’, and should be made liable for harmful or illegal content appearing on their platforms, with a duty to act against it.

Other Recommendations

Other recommendations made in the report include the need to update electoral law, a new tax on social networks to pay for digital literacy programmes in schools, the setting up of a code for political advertising on social media, greater transparency around online advertising, and a “digital Atlantic charter” to protect personal information and rights.

What Does This Mean For Your Business?

The business world is influenced by the political world, and vice versa. It is in the interests of businesses and governments that truly fake news is kept to a minimum and that certain parties (e.g. other nation states) aren’t allowed to exert significant influence on elections and referendums.

That said, states / governments around the world have for many years seen social media as a threat. Some governments have opted for blanket blocking of social media, whereas others have sought ways to gain some control over it by focusing on its negative aspects and / or by seeking regulation or even back-door access to users. It seems, however, that some international actors have seen social media as an opportunity for influence (e.g. alleged Russian use of Facebook to influence the US election), and this, in turn, has now helped those governments who feel threatened by it, e.g. by enabling them to discredit it as a legitimate news source and thereby boost the credibility of their own state media.

Facebook has, after its involvement in the Cambridge Analytica scandal and the ‘Vote Leave’ campaign, played into the hands of those who would like to see it operated with greater regulation and control. Scandals like these have even helped the cause of world leaders such as President Trump, who appears able to simply say the phrase ‘fake news’ to counter any stories that could show him in a bad light, whether true or not.

Even our ‘real’ news is slanted to reflect the views and allegiances of a newspaper’s owner, and it is commonplace, but accepted, for newspapers to print stories that are false or contain false information, later simply issue an apology, and carry on as normal.

Truth and trust are the victims of fake news, and just as governments are happy to focus on it as a threat and as a means of applying pressure to popular media that they can’t overtly control, they can also now see what a powerful tool and opportunity for influence it can be.

Facebook Favours Free Speech Over Fake News Removal

In a recent Facebook media presentation in Manhattan, and despite the threat of social media regulation e.g. from Ofcom, Facebook said that removing fabricated posts would be “contrary to the basic principles of free speech”.

Fake News

The term ‘fake news’ has become synonymous with the 2016 US presidential election and accusations that Facebook was used as a platform to spread fake political news, e.g. by Russia. The term has also become synonymous with President Trump, who frequently uses it, often (some would say) as a catch-all to discredit or counter critical stories in the media.

In essence, fake news refers to deliberate misinformation or hoaxes, manipulated to resemble credible journalism and attract maximum attention, and it is spread mainly by social media. Facebook has tried to be seen to flag up and clean up obvious fake news ever since its reputation was tarnished by the election news scandals.

What About InfoWars?

At the media presentation, a CNN reporter put it to Facebook that allowing InfoWars, a site known to have published false information and conspiracy theories, to remain on the platform may be evidence that Facebook is not tackling fake news as well as it could.

A Matter of Perspective

To counter this and other similar accusations, Facebook has stated that it sees pages on both the left and the right side of politics distributing what they consider to be opinion or analysis but what others, from a different perspective, may call fake news.

Facebook also tweeted that banning those kinds of pages e.g. InfoWars, would be contrary to the basic principles of free speech.

A Matter of Trust

Ofcom research has suggested that people have relatively little trust in what they read in social media content anyway. The research showed that only 39% consider social media to be a trustworthy news source, compared to 63% for newspapers and 70% for TV.

Age Plays A Part

Other research from Stanford’s Graduate School of Education, involving more than 7,800 responses from middle school, high school and college students in 12 US states focused on their ability to assess information sources. The results showed a shocking lack of ability to evaluate information at even as basic a level as distinguishing advertisements from articles. When you consider that many young people get their news from social media, this shows that they may be more vulnerable and receptive to fake stories, and their wide networks of friends could mean that fake stories could be quickly and widely spread among other potentially vulnerable recipients.

Although Facebook is known to have an older demographic now, many young people still use it. Facebook has also tried to launch a kind of Facebook for children to attract more young users, and it owns Instagram, partly as a means of mopping up young users who leave Facebook. It could be argued, therefore, that Facebook and other social media platforms have a responsibility to regulate some content in order to protect users.

What Does This Mean For Your Business?

Fake news stories are not exclusive to social media platforms, as the many retractions and apologies in newspapers over the years testify. The real concern has arisen about social media, and Facebook in particular, because actors from a foreign power appear (allegedly) to have been able to use fake news on Facebook to actually influence the election of a President. Which party and President is in power in the US can, in turn, have a dramatic effect on businesses and markets around the world, and on the opportunities that other foreign powers think they have.

Facebook is also busy fighting another crisis in trust that has arisen from news of its sharing of users’ personal data with Cambridge Analytica, and the company is focusing much of its PR effort not on talking specifically about fake news, but about how Facebook has changed, why we should trust it again, and how much it cares about our privacy.

Meanwhile in the UK, Ofcom chief executive Sharon White, has clearly stated that she believes that media platforms need to be “more accountable” in their policing of content. While this may be understandable, many rights and privacy campaigners would not like the idea that free speech could be influenced and curbed by governments, perhaps to suit their own agenda. The arguments continue.

Google Chrome’s New ‘Site Location’ Security Feature Activated

The new ‘Site Isolation’ security feature for Google’s Chrome browser has been switched on, and could protect users from log-in credentials theft.

Decade-Long History

The newly switched-on feature has actually been a decade in the making. Google is reported to have invested many engineer-years, mostly in the last six years, and a lot of money in producing a defence-in-depth (DiD) feature that is now an essential defence against a prolific class of attack.

What Does Site Isolation Do?

It has recently been discovered that all modern chips / processors have security vulnerabilities that can contribute to the success of ‘data leakage’ attacks. These vulnerabilities, dubbed Spectre and Meltdown (Meltdown affecting only Intel chips), can be used by hackers to steal passwords or other confidential data from computers and mobile devices through popular web browsers like Chrome, Internet Explorer, Firefox, and Safari on macOS and iOS.

With Site Isolation enabled, each renderer process contains documents from a maximum of one site, which means that all navigations to cross-site documents cause a switch of processes, and all cross-site iframes are put into a different process from their parent frame. This ‘isolation’ of processes provides effective protection against data-leakage attacks like Spectre, which means that the vast majority of Chrome users are now theoretically safer from this one kind of attack. It has also been reported that work is underway to protect against attacks from compromised renderers.
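For readers who want to verify or force this behaviour themselves, Site Isolation can also be controlled from the command line. As a sketch (the switches below are Chrome's documented `--site-per-process` and `--isolate-origins` flags; the origins shown are placeholders):

```shell
# Force a dedicated renderer process for every site (full Site Isolation):
google-chrome --site-per-process

# Alternatively, isolate only specific sensitive origins (placeholders shown):
google-chrome --isolate-origins=https://bank.example,https://mail.example
```

On desktop platforms the feature is now on by default, so these flags mainly matter for older versions or for narrowing isolation to chosen origins to save memory.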

It Does Sap Some Memory

One of the trade-offs that Google has had to make in order to make this feature effective is greater resource consumption. With Site Isolation on, there is a 10-13% total memory overhead in real workloads due to the larger number of processes. Google is reported to be working on reducing this memory burden.

Even 10-13% is good compared to the 20% memory overhead incurred when Chrome 63 debuted with Site Isolation.
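As a rough illustration of what those percentages mean in practice (the 1,200 MB baseline below is an assumed figure for a browser session, not one from Google's data):

```python
# Illustrative arithmetic on the reported Site Isolation memory overheads.
baseline_mb = 1200  # assumed Chrome memory use without Site Isolation

extra_now_low = baseline_mb * 10 // 100   # 10% overhead -> 120 MB extra
extra_now_high = baseline_mb * 13 // 100  # 13% overhead -> 156 MB extra
extra_chrome63 = baseline_mb * 20 // 100  # 20% overhead at Chrome 63 -> 240 MB

print(extra_now_low, extra_now_high, extra_chrome63)  # 120 156 240
```

So, under this assumption, the optimisation work since Chrome 63 has roughly halved the extra memory the feature consumes.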

Not Android Yet – But Soon

Site Isolation is scheduled to be included in Chrome 68 for Android but reports indicate that Google is still working on resource consumption issues before that can be rolled out.

What Does This Mean For Your Business?

The switching on of this feature is, of course, good news for businesses, as it is an additional, free way to strengthen cyber resilience against a popular kind of attack that could have serious consequences. This is of particular importance when businesses are trying to do everything possible to achieve and maintain compliance with GDPR.

Up until now, all businesses have heard is that all modern processors have security flaws in them, and that software patching is the only real answer. Back in May, another 8 flaws, in addition to Spectre and Meltdown, were discovered in processors, dubbed Spectre Next Generation (Spectre-NG). At least the switching-on of this Chrome feature is one tangible step in the journey to mitigate these vulnerabilities before cyber-criminals manage to exploit them. Hopefully, more, similar features will be introduced across other browsers in the near future.

Cambridge Analytica Re-Born

A new offshoot of Cambridge Analytica, the disgraced data analysis company at the heart of the Facebook personal data sharing scandal, has been set up by former members of staff under the name ‘Auspex’.

Old Version Shut Down

After news of the scandal, which saw the details of an estimated 87 million Facebook users (mostly in the US) being shared with CA, and then used by CA to target people with political messages in relation to the last US presidential elections, CA was shut down by its parent company SCL Elections. CA is widely reported to have ceased operations and filed for bankruptcy in the wake of the scandal.

Ethical This Time

Auspex, which its founders stress is not just another version of CA, although it is likely to carry on the same kind of data analysis work, has been set up by Ahmed Al-Khatib, a former director of Emerdata, which was also set up after the Cambridge Analytica scandal. Mr Al-Khatib has been reported as saying that Auspex will use ethically based, data-driven communications with a focus on improving the lives of people in the developing world.

Middle East and Africa

The markets in the developing world that Auspex will initially be focusing on are the Middle East and Africa, and the kinds of ethical work that it will be doing, according to Auspex’s own communications, are health campaigning and tackling the spread of extremist ideology among a disenfranchised youth.

Compliant

Auspex has been quick to state that it has made changes and that it will be fully compliant from the outset, thereby hoping to further distance itself from its murky origins in CA.

Personnel

One thing likely to attract the attention of critics is that not only is Mark Turnbull, the former head of CA’s political division, the new Auspex Managing Director, but the listed directors of the new company include Alastair Harris, who is reported to have worked at CA, and Omar Al-Khatib, who is listed as a citizen of the Seychelles.

What Does This Mean For Your Business?

The Cambridge Analytica and Facebook scandal is relatively recent, and the ICO has only just presented its report about the incident. For many people, it may not feel right that personnel from Cambridge Analytica can appear to simply set up under another name and start again. Critics can be forgiven for perhaps not trusting statements about a new ethical approach, especially since Mark Turnbull appeared alongside former CA chief executive Alexander Nix in an undercover film by Channel 4, in which Nix gave examples of how his company could discredit politicians, e.g. by setting up encounters with prostitutes.

The introduction of GDPR has brought the matters of data security and privacy into sharp focus for businesses in the UK, and businesses will be all too aware of the possible penalties if they get on the wrong side of the ICO.

In the case of the Facebook / Cambridge Analytica scandal, the ICO has recently announced that Facebook will be fined £500,000 for data breaches, and that it is still considering taking legal action against CA’s directors. If successful, a prosecution of this kind could result in convictions and an unlimited fine.

12 Russian Intelligence Officers Charged With Election Hacking

Even though, in an interview this week, President Trump appeared to absolve Russia of election interference (since retracted), the US Department of Justice has now charged 12 Russian intelligence officers with hacking Democratic officials in the 2016 US elections.

The Allegations

It is alleged by the US Justice Department that, back in March 2016, on the run-up to the presidential election campaign which saw Republican Donald Trump elected as president, the Russian intelligence officers were responsible for cyber-attacks on the email accounts of staff for Hillary Clinton’s Democrat presidential campaign.

Also, the Justice Department alleges that the accused Russians corresponded with several Americans (but not in a conspiratorial way), used fictitious online personas, released thousands of stolen emails (beginning in June 2016), and even plotted to hack into the computers of state boards of elections, secretaries of state, and voter software.

No Evidence Says Kremlin

The Kremlin is reported to have said that it believes there is no evidence for the US allegations, describing the story as an “old duck” and a conspiracy theory.

32, So Far

The latest allegations are all part of the investigation, led by Special Counsel Robert Mueller, into US intelligence findings that the Russians allegedly conspired in favour of Trump, and that some of his campaign aides may have colluded.
So far, 32 people (mostly Russians) have been indicted. Three companies and four former Trump advisers have also been implicated.

Trump Says…

President Trump has dismissed allegations that the Russians helped put him in the White House as a “rigged witch hunt” and “pure stupidity”.

In a press conference after his meeting with Russian President Vladimir Putin in Helsinki, however, President Trump caused shock and disbelief when, asked whether he thought Russia had been involved in US election interference, he said, “I don’t see any reason why it would be”.

He has since appeared to backtrack by saying that he meant to say “wouldn’t” rather than “would”, and that he accepts his own intelligence agency’s findings that Russia interfered in the 2016 election, and that other players may have been involved too.

What Does This Mean For Your Business?

Part of the fallout of the constant struggle between states and super-powers is the cyber attacks that end up affecting many businesses in the UK. Also, if there has been interference in an election favouring one party, this, in turn, affects the political and economic decisions made in that country, and its foreign policy. These have a knock-on effect on markets, businesses and trade around the world, particularly for those businesses that export to, import from, or have other business interests in the US. Even though, in the US, one of the main results of the alleged electoral interference scandal appears to have been damaged reputations and disrupted politics, the wider effects have been felt in businesses around the world.

These matters and the links to Facebook and Cambridge Analytica have also raised awareness among the public about their data security and privacy, whether they can actually trust corporations with it, and how they could be targeted with political messages which could influence their own beliefs.

£500,000 Fine For Facebook Data Breaches

Sixteen months after the Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of the personal details of users with political consulting firm Cambridge Analytica, the ICO has announced that Facebook will be fined £500,000 for data breaches.

Maximum

The amount of the fine is the maximum that could be imposed under the Data Protection Act 1998, the law in force at the time of the breaches (GDPR’s far higher maximums do not apply retrospectively). Although it sounds like a lot, for a corporation valued at around $500 billion, and with $11.97 billion in advertising revenue and $4.98 billion in profit for the past quarter (mostly from mobile advertising), it remains to be seen how much of an effect it will have on Facebook.
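To put the figure in context, a quick back-of-the-envelope calculation (the exchange rate and 91-day quarter below are assumptions for illustration, not figures from the ICO) shows how quickly the reported quarterly ad revenue would cover the fine:

```python
# Minutes of Facebook's reported quarterly ad revenue needed to cover the fine.
fine_gbp = 500_000
gbp_to_usd = 1.30                   # assumed mid-2018 exchange rate
quarterly_ad_revenue_usd = 11.97e9  # figure reported in the article
minutes_per_quarter = 91 * 24 * 60  # assuming a 91-day quarter

revenue_per_minute = quarterly_ad_revenue_usd / minutes_per_quarter
minutes_to_cover_fine = (fine_gbp * gbp_to_usd) / revenue_per_minute
print(round(minutes_to_cover_fine, 1))  # roughly 7 minutes of ad revenue
```

On these assumptions, the maximum fine available to the ICO amounts to only a few minutes of Facebook's advertising income.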

Time Before Responding

Facebook has now been given time to respond to the ICO’s verdict before a final decision is made by the ICO.

Facebook has said, however, that it acknowledges that it should have done more to investigate claims about Cambridge Analytica and taken action back in 2015.

Reminder of What Happened

The fine relates to the harvesting of the personal details of 87 million Facebook users without their explicit consent, and the sharing of that personal data with London-based political consulting firm Cambridge Analytica, which is alleged to have used that data to target political messages and advertising in the last US presidential election campaign.

Also, harvested Facebook user data was shared with AggregateIQ, a data company which worked with the ‘Vote Leave’ campaign in the run-up to the Brexit referendum.

The sharing of personal user data with those companies was exposed by former Cambridge Analytica employee and whistleblower Christopher Wylie. The resulting publicity caused public outrage, saw big falls in Facebook’s share value, brought apologies from its founder / owner, and saw insolvency proceedings (back in May) for Cambridge Analytica and its parent SCL Elections.

What About Cambridge Analytica?

Although Facebook has been given a £500,000 fine, Cambridge Analytica no longer exists as a company. The ICO has indicated, however, that it is still considering taking legal action against the company’s directors. If successful, a prosecution of this kind could result in convictions and an unlimited fine.

AggregateIQ

As for Canadian data analytics firm AggregateIQ, the ICO is reported to still be investigating whether UK voters’ personal data provided by the Brexit referendum’s Vote Leave campaign had been transferred and accessed outside the UK and whether this amounted to a breach of the Data Protection Act. Also, the ICO is reported to be investigating to what degree AIQ and SCL Elections had shared UK personal data, and the ICO is reported to have served an enforcement notice forbidding AIQ from continuing to make use of a list of UK citizens’ email addresses and names that it still holds.

Worries About 11 Main Political Parties

The ICO is also reported to have written to the UK’s 11 main political parties, asking them to have their data protection practices audited because it is concerned that the parties may have purchased certain information about members of the public from data brokers, who might not have obtained consent.

What Does This Mean For Your Business?

When this story originally broke, it was a wake-up call about what can happen to the personal data that we trust companies / corporations with, and it undoubtedly damaged trust between Facebook and its users to a degree. It’s a good job that the ICO is there to follow things up on our behalf because, for example, a Reuters/Ipsos survey conducted back in April found that, even after all the publicity surrounding the Facebook and Cambridge Analytica scandal, most users remained loyal to the social media giant.

Also, the case has raised questions about how our data is shared and used for political purposes, and how the use and sharing of our data to target messages can influence the outcome of elections and, therefore, the whole economic and business landscape. This has led to calls for the UK government to step in and introduce a code of practice limiting how personal information can be used by political campaigns before the next general election.

Facebook has recently been waging a campaign, including heavy television advertising, to convince us that it has changed and is now more focused on protecting our privacy. Unfortunately, this idea has been challenged by the recent ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council, which accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not actually benefit their privacy.

Tech Giant GDPR Privacy Settings ‘Unethical’ Says Council

The ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council has accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not benefit their privacy.

Illusion of Control

The report alleges that, far from actually giving users more control over their personal data (as laid out by GDPR), the tech giants may simply be giving users the illusion that this is happening. The report points to the possible presence of practices such as:

– Facebook and Google making users who want the privacy-friendly option go through a significantly longer process (privacy intrusive defaults).

– Facebook, Google and Windows 10 using pop-ups that direct users away from the privacy-friendly choices.

– Google presenting users with a hard-to-use dashboard with a maze of options for their privacy and security settings. For example, on Facebook it takes 13 clicks to opt out of authorising data collection (opting in can take just one).

– Making it difficult to delete data that’s already been collected. For example, deleting data about location history requires clicking through 30 to 40 pages.

– Google not warning users about the downside of personalisation e.g. telling users they would simply see less useful ads, rather than mentioning the potential to be opted in to receive unbalanced political ad messages.

– Facebook and Google pushing consumers to accept data collection e.g. with Facebook stating how, if users keep face recognition turned off, Facebook won’t be able to stop a stranger from using the user’s photo to impersonate them, while not stating how Facebook will use the information collected.

Dark Patterns

In general, the report criticised how the use of “dark patterns”, such as misleading wording, default settings that are intrusive to privacy, settings that give users an illusion of control, hiding privacy-friendly options, and presenting “take-it-or-leave-it” choices, could be leading users to make choices that actually stop them from exercising all of their privacy rights.

Big Accept Button

The report, by Norway’s consumer protection watchdog, also notes how the GDPR-related notifications have a large button for consumers to accept the company’s current practices, which could appear to many users to be far more convenient than searching for the detail to read through.

Response

Google, Facebook and Microsoft are all reported to have responded to the report’s findings by issuing statements focusing on the progress and improvements they’ve made towards meeting the requirements of the GDPR to date.

What Does This Mean For Your Business?

GDPR was supposed to give EU citizens much more control over their data, and the perhaps naive expectation was that companies with a lot to lose (in fines for non-compliance and in reputation), such as the big tech and social media companies, would simply fall into line and afford us all of those new rights straight away.

The report by the Norwegian consumer watchdog appears to be more of a reality check that shows how our personal data is a valuable commodity to the big tech companies, and that, according to the report, the big tech companies are willing to manipulate users and give the illusion that they are following the rules without actually doing so. The report appears to indicate that these large corporations are willing to leave consumers to fight for rights that GDPR has already granted them.

GDPR Exemption Sought

It has been reported that financial market regulators from the US, the UK and Asia are pressing for an exemption from GDPR.

Growing Calls For Exemption

Even though GDPR only came into force a little over a month ago (May 25th), financial regulators from several countries, most notably the US, have been pressing for several years for an exemption to be built in, and have hosted multiple meetings about the matter on both sides of the Atlantic.

What’s The Problem?

Before GDPR, financial regulators could use their exemption to share vital information e.g. bank and trading account data, to advance misconduct probes. Now that GDPR is in force, regulators are, therefore, arguing that no exemption means that international probes and enforcement actions in cases involving market manipulation and fraud could be hampered.

Regulators say they are particularly concerned that U.S. investigations into crypto-currency fraud and market manipulation (for which many actors are based overseas) could be at risk. Without an exemption, regulators say that cross-border information sharing could be challenged because some countries’ privacy safeguards now fall short of those offered by the EU under GDPR.

Seeking An “Administrative Arrangement”

The form of exemption that regulators are reported to be seeking is a formal “administrative arrangement” with the Brussels-based European Data Protection Board (EDPB), headed by Andrea Jelinek. The written arrangement would clarify if and how the public interest exemption can be applied to their cross-border information sharing.

Which Regulators?

Reports indicate that the regulators involved in discussions about getting an exemption include the EU’s European Securities and Markets Authority (ESMA), the U.S. Commodity Futures Trading Commission (CFTC), the Securities and Exchange Commission (SEC), the Ontario Securities Commission (OSC), the Japan Financial Services Agency (FSA), Britain’s Financial Conduct Authority (FCA), and the Hong Kong Securities and Futures Commission (SFC).

Why Not?

The worry from the EDPB is that granting exemptions could lead to the illegitimate circumventing and watering down of the new GDPR privacy safeguards, now among the toughest in the world. This, in turn, could end up harming EU citizens, which is exactly the opposite of what GDPR was introduced to achieve.

The matter has, however, been complicated by the fact that regulators’ slow response to the 2007-2009 global financial crisis was partly blamed on poor cross-border coordination, which has since been improved, and better information sharing after the crisis is reported to have led to billions of dollars in fines for banks, e.g. for trying to rig Libor interest rate benchmarks.

What Does This Mean For Your Business?

A financial crisis (e.g. involving bad behaviour by banks) can create serious knock-on costs and problems for businesses worldwide, and it is, therefore, possible to see why financial regulators feel they need an exemption so that they can continue to share information which will ultimately be in the interest of business and the public. It is likely, therefore, that discussions will continue for some time yet to try to find a way to grant exemptions in certain circumstances.

The contrary view is that granting exemptions will water down legislation that was designed to offer stronger protection to us all, potentially putting EU citizens at risk, and allowing organisations that we can’t effectively monitor to simply circumvent the new law and behave how they like. This could undermine the privacy and rights of EU citizens.

Appeal Dismissed After Asylum Seeker Data Breach

An appeal by the UK Home Office seeking to limit the number of potential claimants from a 2013 data breach, in which an accidentally uploaded spreadsheet exposed the confidential information and personal data of asylum applicants and their family members, has been dismissed.

What Happened?

Back in 2013, the Home Office is reported to have uploaded a spreadsheet to their website. The spreadsheet should have simply contained general statistics about the system by which children who have no legal right to remain in the UK are returned to their country of origin (known as ‘the family returns process’).

Unfortunately, this spreadsheet also contained a link to a different downloadable spreadsheet that displayed the actual names of 1,598 lead applicants for asylum or leave to remain. It also contained personal details such as the applicants’ ages, nationality, the stage they had reached in the process and the office that dealt with their case. This information could also potentially be used to infer where they lived.

The spreadsheet is reported to have been available online for almost two weeks during which time the page containing the link was accessed from 22 different IP addresses and the spreadsheet was downloaded at least once. The spreadsheet was also republished to a US website, and from there it was accessed 86 times during a period of almost one month before it was finally taken down.

For those claiming asylum e.g. because of persecution in the home country that they had escaped from, this was clearly a very distressing and worrying situation.

Damages

In the court case that followed in June 2016, the Home Office was ordered to pay six claimants a combined total of £39,500 for the misuse of private information and breaches of the Data Protection Act (“DPA”). The Home Office conceded that its actions amounted to a misuse of private information (“MPI”) and breaches of the DPA.

The Home Office did, however, lodge an appeal in an apparent attempt to limit the number of other potential claims for damages.

Appeal Dismissed

The appeal by the Home Office was dismissed by the three Appeal Court judges, meaning that both the named applicants and their wives (if proof of ‘distress’ could be shown) could sue under both the common law and statutory torts. The judges held that processing data about a claimant’s family members in the claimant’s name was just as much the processing of the family members’ personal data as it was the claimant’s, meaning that their personal and confidential information had also been misused.

Not The First Time

The Home Office appears to have been the subject of similar incidents in the past. For example, back in January the Home Office paid £15,500 in compensation after admitting handing over sensitive information about an asylum seeker to the government of his Middle East home country, thereby possibly endangering his life and that of his family.

The handling of the ‘Windrush’ cases, which has recently made the headlines, has also raised questions about the quality of decision-making and the processes in place when it comes to matters of immigration.

What Does This Mean For Your Business?

In this case, it is possible that those individuals whose personal details were exposed would have experienced distress, and that their safety and that of their families could have been compromised, as well as their privacy. This story indicates the importance of organisations and businesses being able to correctly and securely handle the personal data of service users, clients and other stakeholders. This is particularly relevant since the introduction of GDPR.

It is tempting to say that this case illustrates that no organisation is above the law when it comes to data protection. However, it was announced in April that the Home Office will be granted data protection exemptions via a new data protection bill. The exemptions could deprive applicants of a reliable means of obtaining files about themselves from the department through ‘subject access requests’. It has also been claimed that the new bill will mean that data could be shared secretly between public services, such as the NHS, and the Home Office, more easily. Some critics have said that the bill effectively exempts immigration matters from data protection. If this is so, it goes against the principles of accountability and transparency that GDPR is based upon. It remains to be seen how this bill will progress and be challenged.

AI Creates Phishing URLs That Can Beat Auto-Detection

A group of computer scientists from Florida-based cyber security company, Cyxtera Technologies, are reported to have built machine-learning software that can generate phishing URLs that can beat popular security tools.

Look Legitimate

Using the Phishtank database (a free community site where anyone can submit, verify, track and share phishing data) the scientists built the DeepPhish machine-learning software that is able to create URLs for web pages that appear to be legitimate (but are not) login pages for real websites.

In fact, the URLs, which can fool security tools, lead to web pages that collect the usernames and passwords entered, for malicious purposes, e.g. hijacking accounts at a later date.

DeepPhish

The so-called ‘DeepPhish’ machine-learning software that produced the fake but convincing URLs is an AI algorithm. It was able to produce the URLs by learning the effective patterns used by threat actors and using them to generate new, previously unseen, and effective attacks based on that attacker data.

Can Increase The Effectiveness of Phishing Attacks

Using Phishtank and the DeepPhish AI algorithm in tests, the scientists found that two identified attackers could increase the effectiveness of their phishing attacks from 0.69% to 20.9%, and from 4.91% to 36.28%, respectively.

Training The AI Algorithm

The effectiveness of AI algorithms is improved by ‘training’ them. In this case, the training involved the team of scientists first inspecting more than a million URLs on Phishtank. From this, the team were able to identify three different phishing attacks that had generated web pages to steal people’s credentials. These web addresses were then fed into the AI phishing detection algorithm to measure how effective the URLs were at bypassing a detection system.
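The report does not describe the detection system’s internals. As a hedged illustration of the kind of lexical features a URL-based phishing classifier typically inspects, here is a toy Python sketch; every feature choice and the scoring threshold are assumptions for illustration, not Cyxtera’s actual model:

```python
import re
from urllib.parse import urlparse

# Hypothetical lexical features often used by URL-based phishing
# classifiers; illustrative assumptions only, not the features used
# by Cyxtera's actual detection system.
def url_features(url: str) -> dict:
    host = urlparse(url).netloc
    return {
        "length": len(url),                      # phishing URLs tend to be long
        "num_dots": url.count("."),              # many subdomains look suspicious
        "has_at": "@" in url,                    # '@' can disguise the real host
        "has_ip_host": bool(re.match(r"^\d{1,3}(\.\d{1,3}){3}$", host)),
        "num_hyphens": host.count("-"),          # e.g. 'paypal-secure-login'
    }

def toy_phishing_score(url: str) -> float:
    """Crude rule-based score in [0, 1]; higher means more suspicious."""
    f = url_features(url)
    score = 0.0
    if f["length"] > 60:
        score += 0.3
    if f["num_dots"] > 3:
        score += 0.2
    if f["has_at"]:
        score += 0.2
    if f["has_ip_host"]:
        score += 0.2
    if f["num_hyphens"] > 1:
        score += 0.1
    return min(score, 1.0)
```

A real detection system would feed features like these (plus many more) into a trained classifier rather than hand-tuned rules, which is what makes it vulnerable to adversarially generated URLs that mimic legitimate feature distributions.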

The team then fed all the text from effective, malicious URLs into a Long Short-Term Memory (LSTM) network so that the algorithm could learn the general structure of effective URLs and extract relevant features.

All of this enabled the algorithm to learn how to generate the kind of phishing URLs that could beat popular security tools.
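DeepPhish itself used an LSTM. As a much simpler stand-in for the same idea, learning the character-level structure of known phishing URLs and then sampling new, similar-looking strings, here is a toy first-order Markov-chain sketch; the training URLs are invented examples, not real attacker data:

```python
import random
from collections import defaultdict

def train_char_model(urls):
    """Count character-to-character transitions across the training URLs.
    A real system (e.g. an LSTM) learns far richer, longer-range structure;
    this first-order Markov chain only captures pairwise patterns."""
    transitions = defaultdict(list)
    for url in urls:
        for a, b in zip(url, url[1:]):
            transitions[a].append(b)
    return transitions

def generate_url(transitions, seed_char="h", max_len=40, rng=None):
    """Sample a new URL-like string one character at a time."""
    rng = rng or random.Random(0)
    out = [seed_char]
    for _ in range(max_len - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return "".join(out)

# Invented example 'phishing' URLs, used purely for illustration.
training = [
    "http://login-secure.example.com/verify",
    "http://account-update.example.net/login",
]
model = train_char_model(training)
candidate = generate_url(model, rng=random.Random(42))
```

The point of the sketch is the workflow, not the quality of its output: train a generative model on URLs known to evade detection, then sample candidates that share their statistical structure, which is exactly why such output can slip past classifiers trained on older attack patterns.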

What Does This Mean For Your Business?

AI offers some exciting opportunities for businesses to save time and money, and improve the effectiveness of their services. Where cyber-security is concerned, AI-enhanced detection systems are more accurate than traditional manual classification, and the use of intelligent detection systems has enabled the identification of threat patterns and the detection of phishing URLs with 98.7% accuracy, thereby giving the battle advantage to defensive teams.

However, it has been feared for some time that if cyber-criminals were able to use well-trained and sophisticated AI systems to defeat both traditional and AI-based cyber-defence systems, this could pose a major threat to Internet and data security, and could put many businesses in danger.

The tests by the Florida-based cyber-security scientists did not produce very high success rates for generating defence-beating phishing URLs. For now, this is a good thing, because it suggests that most cyber-criminals, who have fewer resources, may not yet be able to launch fully effective AI-based attacks. The hope is that the makers of detection and security systems will be able to use AI to stay one step ahead of attackers.

State-sponsored attackers, however, may have many more resources at their disposal, and it is highly likely that AI-based attack methods are already being used by state-sponsored players. Unfortunately, state-sponsored attacks can cause a lot of damage in the business and civilian worlds.