News

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials in the UK first raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and is not always accurate. For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT following high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017, which was criticised for costing £177,000 yet resulting in only one arrest, of a local man, and one that was unconnected to the event.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.
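
The kind of demographic error-rate comparison NIST reported can be sketched in a few lines of Python. The trial data and group labels below are invented purely for illustration; NIST’s actual evaluation used millions of labelled image pairs.

```python
# Sketch: measuring false-match rates per demographic group.
# The tuples are (group, predicted_match, actual_match); all data invented.

from collections import defaultdict

def false_match_rate_by_group(results):
    """Return, per group, the share of true non-match pairs that the
    algorithm wrongly flagged as a match."""
    errors = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted, actual in results:
        if not actual:                 # a true non-match pair
            non_matches[group] += 1
            if predicted:              # ...wrongly flagged as a match
                errors[group] += 1
    return {g: errors[g] / non_matches[g] for g in non_matches}

trial = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_match_rate_by_group(trial)
# In this toy data, group B is misidentified twice as often as group A.
```

A large gap between groups in this metric is exactly the kind of disparity the NIST report flagged.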

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, the head of Google’s parent company, Alphabet.  Writing in the Financial Times, Mr Pichai called for regulation with a sensible approach and for a set of rules for areas of AI development such as self-driving cars and AI usage in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies which have known flaws, and which concern government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk management measures have been properly considered and developed.  It is true that facial recognition could deliver real benefits (e.g. fighting crime) for many businesses, and that AI offers businesses a vast range of opportunities to save money and time and to innovate in products, services and processes.  However, the flaws in these technologies, and their potential to be used improperly, covertly, and in ways that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time is taken and efforts are made now to address stakeholders’ concerns and to develop regulations and measures that could prevent bigger problems involving these technologies further down the line.

£100m Fines Across Europe In The First 18 Months of GDPR

It has been reported that since the EU’s General Data Protection Regulation (GDPR) came into force in May 2018, £100m of data protection fines have been imposed on companies and organisations across Europe.

The Picture In The UK

The research, conducted by law firm DLA Piper, shows that the total fines imposed in the UK by the ICO stand at £274,000, but this figure is likely to be much higher once the penalties to be imposed on BA and Marriott are finalised.  For example, Marriott could be facing a £99 million fine for a data breach between 2014 and 2018 that reportedly involved up to 383 million guests, and BA (owned by IAG) could be facing a record-breaking £183 million fine for a breach of its data systems last year that could have affected 500,000 customers.

Also, the DLA Piper research shows that although the UK did not rank highly in terms of fines, it ranked third in the number of breach notifications, with 22,181 reports since May 2018.  This equates to a relative ranking of 13th for data breach notifications per 100,000 people in the UK.

Increased Rate of Reporting

On the subject of breach notifications, the research shows a big increase in the rate of reporting, from 247 reports per day in the period between May 2018 and January 2019 to 278 per day throughout last year. This rise in reporting is thought to be due to a much greater (and increasing) awareness of GDPR and of the issue of data breaches.
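
As a quick sanity check on the figures quoted above, the rise from 247 to 278 daily reports works out at roughly a 12.6% increase:

```python
# Rough check of the DLA Piper reporting figures quoted above.
early_rate, later_rate = 247, 278   # breach reports per day

increase = (later_rate - early_rate) / early_rate
print(f"Daily reporting rate up by {increase:.1%}")  # about 12.6%
```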

France and Germany Hit Hardest With Fines

The fines imposed in the UK under GDPR are very small compared to those in Germany, where fines totalled 51.1 million euros (top of the table for fines in Europe), and France, where 24.6 million euros in fines were handed out.  In the case of France, much of that total relates to a single penalty handed to Google last January.

Already Strict Laws & Different Interpretations

It is thought that the reason the UK (currently) sits further down the table for fines and for data breach notifications per 100,000 people is that UK businesses already had to meet the requirements of the relatively strict Data Protection Act 1998, the bones of which proved not to differ greatly from GDPR.

Also, the EU’s previous Data Protection Directive was only adopted in 1995, and GDPR appears to have been interpreted differently across Europe because it is principle-based and, therefore, open to some level of interpretation.

What Does This Mean For Your Business?

These figures show that a greater awareness of data breach issues, greater reporting of breaches, and increased activity and enforcement action by regulators across Europe are likely to contribute to more big fines being imposed over the coming year.  This means that businesses and organisations need to ensure that they stay on top of data security and GDPR compliance.  Small businesses and SMEs shouldn’t assume that the work done to ensure basic compliance when GDPR was introduced back in 2018 is enough, or that the ICO is only interested in big companies, as regulators appear to be increasing the number of staff able to review reports and cases.  It should also be remembered, however, that the ICO is most likely to want to advise, help and guide businesses to comply where possible.

Tech Tip – Clipboard History

If you’d like to see the history of everything you’ve copied to your clipboard in Windows 10, there’s a fast and easy way to do it. To see and manage your clipboard items:

– Press the Windows key + V.  This brings up the scrollable clipboard panel listing all the items you’ve copied.

– Click on an item to paste it into your current document.

– Click on the cross symbol to permanently delete an item from the clipboard.

– Click on the pin symbol to keep an item even when you clear your clipboard history (there is a link to clear the history) or when you restart your PC.

– This feature also allows syncing across other devices so you can paste items from your clipboard to your other devices when you sign in with a Microsoft or work account.
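
The pin-and-clear behaviour described above can be modelled as a simple data structure. This is a conceptual sketch, not the Windows clipboard API:

```python
# A toy model of the Windows 10 clipboard-history behaviour described
# above: copied items accumulate, and pinned items survive a clear.

class ClipboardHistory:
    def __init__(self):
        self.items = []        # copied items, oldest first
        self.pinned = set()

    def copy(self, text):
        self.items.append(text)

    def pin(self, text):
        self.pinned.add(text)

    def clear(self):
        # Clearing keeps pinned items, as the Windows 10 panel does.
        self.items = [i for i in self.items if i in self.pinned]

history = ClipboardHistory()
history.copy("invoice #123")
history.copy("meeting notes")
history.pin("invoice #123")
history.clear()
assert history.items == ["invoice #123"]   # only the pinned item survives
```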

Featured Article – Windows 7 Deadline Now Passed

Microsoft’s Windows 7 operating system and Windows Server 2008 formally and finally reached their ‘End of Life’ (end of support, security updates and fixes) on Wednesday 14 January.

End of Life – What Now?

End of life isn’t quite as final as it sounds because Windows 7 will still run, but support (i.e. security updates, patches and technical support) will no longer be available for it. If you are still running Windows 7 you are certainly not alone, as it still has a reported 27 per cent market share among Windows users (StatCounter).

For most Windows 7 users, the next action will be to replace (or upgrade) the computers that are running these old operating systems.  Next, there is the move to Windows 10, and if you’re running a licensed and activated copy of Windows 7, Windows 8 or Windows 8.1, Home or Pro, you can get it for free by:

>> Going to the Windows 10 download website.

>> Choosing ‘Create Windows 10 installation media’.

>> Clicking ‘Download tool now’ and then ‘Run’.

>> Choosing ‘Upgrade this PC now’ (if it’s just one PC – for another machine choose ‘Create installation media for another PC’ and save the installation files) and following the instructions.

>> After installation, you can see your digital licence for Windows 10 by going to Settings > Update & Security > Activation.
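
If you manage several machines, a short script can flag which ones are still on Windows 7. The helper below is an illustrative sketch: the (major, minor) pairs are the documented Windows NT version numbers, and on a real Windows machine you would feed in the values from `sys.getwindowsversion()`.

```python
# Hypothetical helper: map Windows NT (major, minor) version numbers to
# product names and flag machines that need an upgrade.

def windows_name(major, minor):
    names = {
        (6, 1): "Windows 7",
        (6, 2): "Windows 8",
        (6, 3): "Windows 8.1",
        (10, 0): "Windows 10",
    }
    return names.get((major, minor), "unknown")

def needs_upgrade(major, minor):
    # Anything older than Windows 10 is now out of free support.
    return (major, minor) < (10, 0)

assert windows_name(6, 1) == "Windows 7"
assert needs_upgrade(6, 1)          # Windows 7: upgrade needed
assert not needs_upgrade(10, 0)     # Windows 10: fine
```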

Windows Server

Windows Server 2008 and Windows Server 2008 R2 have also now reached their end of life, which means no more free security updates on-premises, no non-security updates, no free support options, and no online technical content updates.

Microsoft is advising that customers who use Windows Server 2008 or Windows Server 2008 R2 products and services should migrate to Microsoft Azure.

About Azure

For Azure customers, Windows Virtual Desktop offers the option of an extra three years of extended support (critical and important security updates) as part of that package, but there may be some costs incurred in migrating to the cloud service.

Buying Extended Security Updates

‘Extended Security Updates’ can also be purchased by customers with active Software Assurance or subscription licences, for 75% of the on-premises annual licence cost, but this should only really be considered a temporary measure to ease the transition to Windows 10, or an option if you’ve simply been caught out by the deadline.
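
As a rough illustration of that pricing rule (the licence cost below is a hypothetical placeholder, not a Microsoft price):

```python
# Extended Security Updates: quoted at 75% of the on-premises annual
# licence cost. Illustrative only.

def esu_annual_cost(annual_licence_cost, rate=0.75):
    return annual_licence_cost * rate

print(esu_annual_cost(1000))   # a £1,000/year licence -> £750/year for ESU
```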

Unsupported Devices – Banking & Sensitive Data Risk

One example of the possible risks of running Windows 7 after its ‘end-of-life’ date has been highlighted by the National Cyber Security Centre (NCSC), the public-facing part of GCHQ.  The NCSC has advised Windows 7 users to replace their unsupported devices as soon as possible and to move any sensitive data to a supported device.  Also, the NCSC has advised Windows 7 users to not use unsupported devices for tasks such as accessing bank and other sensitive accounts and to consider accessing email from a different device.

The NCSC has pointed out that cyber-criminals began targeting Windows XP soon after its extended support ended in 2014. It is likely, therefore, that the same could happen to Windows 7 users.

Businesses may wish to note that there have already been reports (in December) of attacks on Windows 7 machines attempting to exploit the EternalBlue vulnerability, which was behind the serious WannaCry attacks.

Windows 7 History

Windows 7 was introduced in 2009 as an upgrade in the wake of the much-disliked Windows Vista.  Looking back, it was an unexpected success in many ways, and looking forward, if you’re among the large percentage of Windows users still running Windows 7 (only 44% are running Windows 10), you may feel that you’ve been left with little choice but to move from the devil you know to the not-so-big-bad Windows 10.

Success For Microsoft

Evolving from early codename versions such as “Blackcomb”, “Longhorn” and then “Vienna” (in early 2006), what was finally named Windows 7 in October 2008 proved an immediate success on its release in 2009.  The operating system, worked on by an estimated 1,000 developers, clocked up more than 100 million sales worldwide within the first six months of its release. Windows 7 was made available in six different editions, the most popularly recognised being Home Premium, Professional, and Ultimate.

Improvement

Windows 7 was considered a big improvement on Windows Vista which, although achieving some impressive usage figures (still lower than XP’s, though), came in for a lot of criticism for its high system requirements, longer boot time and compatibility problems with pre-Vista hardware and software.

Some of the key improvements that Windows 7 brought were a redesigned taskbar and a more intuitive feel, much-improved performance, and fewer annoying User Account Control popups. Some of the reasons for switching to Windows 7 back in 2009 were that it had been coded to support most pieces of software that ran on XP, it could automatically install device drivers, the Aero features provided a much better interface, it offered much better hardware support, the 64-bit version could handle a bigger system memory, and the whole operating system had a better look and feel.

Embracing the Positive

It may even be the case that, in worrying about the many complications and potential challenges of migrating to Windows 10, you haven’t allowed yourself to focus on the positive aspects of the OS, such as a faster and more dynamic environment and support for important business software like Office 365 and Windows Server 2016.

What To Do Now

The deadline for the end of support/end of life for Windows 7 has now passed, and the key factor to remember is that Windows 7 (and any computer running it) is now exposed to any new risks that come along. If you have been considering possible OS alternatives to Windows 10, these could bring their own challenges and risks, and you may now have very limited time to think about them. Bearing in mind the targeting of Windows XP immediately after the end of its extended support (in 2014), we may reasonably expect similar targeting of Windows 7, which makes the decision to migrate more pressing.

For most businesses, the threat of no more support now means that continuing to run Windows 7 presents a real risk to the business e.g. from every new hacking and malware attack, and as the NCSC has highlighted, there is a potentially high risk in using devices running Windows 7 for anything involving sensitive data and banking.

If you choose to upgrade to Windows 10 on your existing computers, you will need to consider factors such as the age and specification of those computers, and there are likely to be costs involved in upgrading them.  You may also be considering (depending on the size and nature of your business and your IT budget) the quick solution of buying new computers with Windows 10 installed, and in addition to the cost implications, you may be wondering how and whether you can keep using existing business systems or migrate important existing data and programs to the new platform.  The challenge now, however, is that time has officially run out in terms of security updates and support, so the time to make the big decisions has arrived.

Want A Walkie-Talkie? Now You Can Use Your Phone and MS Teams

Microsoft has announced that it is introducing a “push-to-talk experience” to its ‘Teams’ collaborative platform that turns employee or company-owned smartphones and tablets into walkie-talkies.

No Crosstalk or Eavesdropping

The new ‘Walkie Talkie’ feature will offer clear, instant and secure voice communication over the cloud.  This means that it will not be at risk from traditional analogue (unsecured network) walkie-talkie problems such as crosstalk or eavesdropping, and Microsoft says that because Walkie Talkie works over Wi-Fi or cellular data, it can also be used across geographic locations.

Teams Mobile App

The Walkie Talkie feature will be available in private preview in Teams in the first half of this year, via the Teams mobile app.  Microsoft says that Walkie Talkie will also integrate with Samsung’s new Galaxy XCover Pro enterprise-ready smartphone for business.

Benefits

The main benefits of Walkie Talkie are that it makes it easier for firstline workers to communicate and manage tasks, reduces the number of devices employees must carry, and lowers IT costs.

One Better Than Slack

Walkie Talkie also gives Teams another advantage over its increasingly distant rival Slack, which doesn’t currently have its own Walkie Talkie-style feature, although things like spontaneous voice chat can be added to Slack with Switchboard.

Last month, Microsoft announced that its Teams product had reached the 20 million daily active users (and growing) mark, thereby sending Slack’s share price downwards.

Slack, which has 12 million users (a number that has increased by 2 million since January 2019), appears to be falling well into second place behind Teams in the $3.5 billion chat-based collaborative working software market.  However, some tech commentators have noted that Slack has stickiness and strong user engagement, and that its main challenge is that although large companies in the US use it and like it, many currently use the free version, so Slack will have to convince them to upgrade to the paid-for version if it wants to start catching up with Teams.

Apple Watch Walkie-Talkie App

Apple Watch users (Series 1 or later with watchOS 5.3 or later, though not in all countries) have been able to use a ‘Walkie-Talkie’ app since October last year.

What Does This Mean For Your Business?

For businesses using Microsoft Teams, the new Walkie Talkie feature could be a cost-saving and convenient tool for firstline workers, and the fact that it integrates with Samsung’s new Galaxy XCover Pro will give it even more value for businesses.

For Microsoft, the new Walkie Talkie feature, along with seven other recently announced Teams tools focused firmly on communication and task management for firstline workers, is another way for Teams to gain a competitive advantage over rival Slack and to increase the value of Office 365 to valuable business customers.

Facebook Bans Deepfake Videos

In a recent blog post ahead of the forthcoming US election, Monika Bickert, Facebook’s Vice President of Global Policy Management, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout from the news that 50 million Facebook profiles were harvested, as early as 2014, to build a software program that could predict and use personalised political adverts to influence choices at the ballot box in the last US election included damaged trust in Facebook, a substantial fine, and a fall in the number of daily users in the United States and Canada for the first time in the company’s history.

Deepfakes

One of the key concerns for Facebook this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants them to. Such videos could obviously be used to influence public thinking about political candidates, and as well as influencing election results, it would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’, to be seen as a platform for the easy distribution of deepfakes.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post from Facebook says that, as a matter of policy, it will now remove any misleading media from its platform if the media meets two criteria:

  • It has been synthesised, i.e. edited beyond mere adjustments for clarity or quality, to the point where the ‘average person’ could be misled into thinking the subject of the video is saying words that they did not actually say, and
  • It is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video in order to make it appear authentic.
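
Reduced to booleans, the two criteria (and the satire exemption Facebook describes) combine as a simple conjunction. The field names below are invented for illustration; Facebook’s actual review process combines human and machine judgement rather than a two-line rule:

```python
# Boolean sketch of the stated removal policy (field names invented).

def should_remove(misleadingly_synthesised, ai_generated, parody_or_satire=False):
    # Parody and satire are explicitly exempt from the policy.
    if parody_or_satire:
        return False
    # Both criteria must hold for removal.
    return misleadingly_synthesised and ai_generated

assert should_remove(True, True)
assert not should_remove(True, False)    # edited, but not AI-generated
assert not should_remove(True, True, parody_or_satire=True)
```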

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photos or videos rated false or partly false by a fact-checker will have their distribution “significantly” reduced in News Feed and will be rejected if they are being run as ads. Also, those who see such content and try to share it, or have already shared it, will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to ensure that it is not seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deepfake Detection Challenge, with $10 million in grants and a cross-sector coalition of organisations, in order to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow, in certain situations (apparently of its choosing), videos that some would describe as misinformation.  For example, Facebook has said that content that violates its policies could be allowed if it is deemed newsworthy (e.g. presumably the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi).

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and against the deliberate spread of misinformation, and, bearing in mind Facebook’s position of influence, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that use Facebook as an advertising platform also need Facebook’s users to trust (and keep using) the platform and see their adverts, so it’s important to businesses that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly in the US at the time of an election (and bearing in mind what happened last time), Facebook is serving its own interest in protecting its brand against accusations of allowing political influence through media on its platform, and against any further loss of public trust. This change of policy also shows that Facebook is trying to demonstrate readiness to deal with the up-to-date threat of deepfakes (even though they are relatively rare).

That said, Google and Twitter (the latter with its new restrictions on micro-targeting, for example) have both been very public about trying to stop lies in political advertising on their platforms, whereas Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.

.ORG Silence Continues After ICANN Imposes Temporary Sale Halt

Internet companies are still none the wiser about the details of the proposed sale of the .org registry to private equity firm Ethos Capital, after DNS overseer ICANN put a temporary halt on the sale back on 9 December.

What Sale?

The rights to the .org domain registry, one of the largest internet registries in the world with over 10 million names, are due to be sold for an as-yet-undisclosed sum to Ethos Capital by ISOC (the Internet Society), the parent of PIR, the organisation that currently runs the registry.

Always Not For Profit

The relatively sudden announcement of the sale caused shock and some dismay within the industry at the thought that a registry that has held non-profit status since 2003 would end up in private hands. Historically, .org domains have been the outward sign of non-profit organisations.

About Ethos

Some industry commentators have also expressed concern about how little is known about Ethos Capital within the industry, and consequently about how qualified and able it may be to manage the .org registry.

Other Criticism

Other criticisms about the sale, which have been voiced online include:

– Suspicion about possible conflicts of interest e.g. around Fadi Chehade, a former CEO of ICANN who is credited by some with encouraging a free-market approach to internet addresses, and who some appear to believe is connected to Ethos Capital.

– After ICANN lifted the price caps on .org domains for the next 10 years (allowing unlimited price increases on the millions of .org domain names), many high-profile non-profit organisations rejected ICANN’s claim that the move was simply to make the process consistent with the base form registry agreement, and accused ICANN of disregarding the public interest in favour of its own administrative convenience.

– Worries that ICANN’s decision to approve the proposed sale may have been subject to bias and may not have reflected the true strength of feeling against the sale.

– Concerns were even expressed by those who supported the proposal e.g. ICANN’s At Large Advisory Committee (ALAC) and Non-Commercial Stakeholder Group (NCSG).

– Anger that ICANN appeared to move ahead with the decision to lift caps without any explanation, and that there still appears to be a level of secrecy surrounding the sale.

– Suspicion by some that the deal has long been the subject of informal discussion among key players.

Temporary Halt

A temporary halt was placed on the proposed sale of the .org registry rights to Ethos Capital in early December. Since then, the Packet Clearing House (PCH) has argued, in a letter to ICANN, that the sale and the move away from non-profit status would mean less money being spent on .org’s operational costs, which could affect stability and disrupt the “critical real-time functions” of organisations using .org domains.

Silence

There is now a sense of frustration from many parties in the industry over the apparent silence, and the distinct lack of information since the temporary halt was placed on the sale.

What Does This Mean For Your Business?

Many important organisations use .org domains, e.g. air traffic control, and these, along with the 10 million other .org registrants, will be concerned not just about possible price rises following the lifting of the price cap, but also about the disruption and instability that a sale of this kind could cause.

There also appears to be a good deal of anger, concern, and unanswered questions in the internet market about the decision to sell and the details of the sale, along with a perceived lack of transparency and a feeling that things may have been rushed through without important arguments against the sale being adequately addressed. That said, ICANN must have seen good enough reason to put a temporary halt on the sale for the time being.

It remains to be seen exactly what happens next but, in the interests of the industry and .org owners, the hope is that there will be more communication, information and transparency very soon.

Featured Article – Email Security (Part 2)

Following on from last month’s featured article about email security (part 1), in part 2 we focus on email security and threat predictions for this year and the foreseeable future.

Looking Forward

In part 1 of this ‘Email Security’ snapshot, we looked at how most breaches involve email, the different types of email attacks, and how businesses can defend themselves against a variety of known email-based threats. Unfortunately, businesses and organisations now operate in an environment where cyber-attackers are using more sophisticated methods across multi-vectors and where threats are constantly evolving.

With this in mind, and with businesses seeking to be as secure as possible against the latest threats, here are some of the prevailing predictions based around email security for the coming year.

Ransomware Still a Danger

As highlighted by a recent Malwarebytes report, and a report by Forbes, the ransomware threat is by no means over: after rising 195 per cent in the first quarter of 2019 on the previous year’s figures, it is still predicted to be a major threat in 2020. Tech and security commentators have noted that although ransomware attacks on consumers have declined by 33 per cent since last year, attacks against organisations have worsened.  In December, for example, a ransomware attack was reported to have taken a US Coast Guard (USCG) maritime base offline for more than 30 hours.

At the time of writing, it has been reported that, following an attack discovered on New Year’s Day, hackers using ransomware are holding Travelex’s computers to ransom, to such a degree that company staff have been forced to use pen and paper to record transactions.

Information Age, for example, predicts that softer targets such as healthcare services (with outdated software, inadequate cybersecurity resources, and a motivation to pay the ransom) will be targeted more in the coming year with ransomware carried by email.

Phishing

The already prevalent email phishing threat looks likely to continue and evolve this year, with cybercriminals set to try new methods beyond email, e.g. phishing by SMS, and even spear phishing (highly targeted phishing) that uses deepfake videos to pose as company authority figures.
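One common spear-phishing trick is sending from a “lookalike” domain that differs from a trusted one by a character or two. As a small illustration of the kind of automated check that can help, here is a minimal sketch in Python; the trusted-domain list, threshold and function names are illustrative assumptions, not any product’s API:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Assumed list of domains the organisation trusts (illustrative only).
TRUSTED = {"example.com", "mycompany.co.uk"}

def is_suspicious(sender_domain: str, max_dist: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    if sender_domain in TRUSTED:
        return False
    return any(edit_distance(sender_domain, t) <= max_dist for t in TRUSTED)
```

In practice a mail filter would combine a check like this with SPF/DKIM/DMARC results rather than rely on string similarity alone.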

As mentioned in part 1 of the email security articles, big tech companies are responding to help combat phishing with new services e.g. the “campaign views” tool in Office 365 and Google’s advanced security settings for G Suite administrators.

BEC & VEC

Whereas Business Email Compromise (BEC) attacks have succeeded by using email fraud combined with social engineering to bait one staff member at a time and extract money from a targeted organisation, security experts say this kind of attack is morphing into the much wider threat of ‘VEC’ (Vendor Email Compromise): a larger, more sophisticated version which, with email as a key component, seeks to leverage organisations against their own suppliers.

Remote Access Trojans

Remote Access Trojans (RATs) are malicious programs that can arrive as email attachments.  RATs provide cybercriminals with a back door for administrative control over the target computer, and they can be adapted to help them to avoid detection and to carry out a number of different malicious activities including disabling anti-malware solutions and enabling man-in-the-middle attacks.  Security experts predict that more sophisticated versions of these malware programs will be coming our way via email this year.

The AI Threat

Many technology and security experts agree that AI is likely to be used in cyberattacks in the near future, and its ability to learn and keep trying to reach its target, e.g. in the form of malware, makes it a formidable threat. Email is the most likely means by which malware reaches and attacks networks and systems, so there has never been a better time to step up email security and to train and educate staff about malicious email threats: how to spot them and how to deal with them. The addition of AI to the mix may make malicious emails harder to spot.

The good news for businesses, however, is that AI and machine learning are already used in some anti-virus software, e.g. Avast, and the use of AI in security solutions to counter AI security threats is likely to continue.

One Vision of the Email Security Future

The evolving nature of email threats means that businesses and organisations may need to look at their email security differently in the future.

One example of an envisaged approach to email security comes from Mimecast’s CEO Peter Bauer.  He suggests that, in order to truly eliminate the threats that can abuse the trust in their brands “out in the wild”, companies need to “move from perimeter to pervasive email security”.  This will mean focusing on the threats:

– To the perimeter (which he calls Zone 1).  This involves protecting users’ email and data from spam, viruses, malware, impersonation attempts and data leaks – in fact, protecting the whole customer, partner and vendor ecosystem.

– From inside the perimeter (Zone 2).  This involves being prepared to be able to effectively tackle internal threats like compromised user accounts, lateral movement from credential harvesting links, social engineering, and employee error threats.

– From beyond the perimeter (Zone 3).  These could be threats to brands and domains from spoofed or hijacked sites that could be used to defraud customers and partners.

As well as recognising and looking to deal with threats in these three zones, Bauer also suggests an API-led approach to help deliver pervasive security throughout all zones.  This could involve businesses monitoring and observing email attacks with, for example, SOARs, SIEMs, endpoints, firewalls and broader threat intelligence platforms, and feeding this information and intelligence to security teams to help keep email security as up to date and as tight as possible.

Into 2020 and Beyond

Looking ahead to email security in 2020 and beyond, companies will be facing plenty more of the same threats (phishing, ransomware, RATs) which rely on email combined with human error and social engineering to find their way into company systems and networks. Tech companies are responding with updated anti-phishing and other solutions.

SMEs (rather than just bigger companies) are also likely to find themselves being targeted with more attacks involving email, and companies will need to, at the very least, make sure they have the basic automated, tech and human elements in place (training, education, policies and procedures) to help provide adequate protection (see the end of part 1 for a list of email security suggestions).

The threat of AI-powered attacks, however, is causing some concern and the race is on to make sure that AI-powered protection is up to the level of any AI-powered attacks.

Taking a leaf out of the book of companies like Mimecast, and looking at email security in a much wider scope and context (outside the perimeter, inside the perimeter, and beyond) may bring a more comprehensive kind of email security that can keep up with the many threats now arriving across a much wider attack surface.

Glimpse of the Future of Tech at CES Expo Show

This week, at the giant CES expo in Las Vegas, the latest technology from around the world is on display, and here are just a few of the glimpses into the future that are being demonstrated there, with regards to business-tech.

Cyberlink FaceMe®

Leading facial recognition company Cyberlink will be demonstrating the power of its highly accurate FaceMe® AI engine. The FaceMe® system, which Cyberlink claims achieves a True Acceptance Rate (TAR) of 99.5% at a False Acceptance Rate (FAR) of 10^-4, is so advanced that it can recognise the age, gender and even the emotional state of passers-by, and can use this information to display appropriate adverts.
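For readers wondering what a “TAR at 10^-4 FAR” claim actually measures, the figure comes from similarity scores for genuine pairs (same person) and impostor pairs (different people): pick the acceptance threshold whose false-accept rate stays within the target FAR, then report the fraction of genuine pairs accepted. A hedged sketch of that calculation, with invented scores rather than any vendor’s data:

```python
def tar_at_far(genuine, impostor, target_far):
    """Best True Acceptance Rate achievable while FAR <= target_far.

    genuine:  similarity scores for same-person comparisons
    impostor: similarity scores for different-person comparisons
    """
    best_tar = 0.0
    # Try every observed score as an acceptance threshold (accept if score >= t).
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        if far <= target_far:
            tar = sum(s >= t for s in genuine) / len(genuine)
            best_tar = max(best_tar, tar)
    return best_tar
```

Real evaluations use millions of impostor comparisons, which is why a FAR as low as 10^-4 can be estimated at all.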

D-ID

In a world where facial recognition technology is becoming more prevalent, D-ID recognises the need to protect the sensitive biometric data that makes up our faces. On display at the CES expo is D-ID’s anti-facial-recognition solution, which uses an algorithm, advanced image processing and deep learning techniques to re-synthesise any given photo into a protected version that is unrecognisable to facial recognition algorithms, yet looks unchanged to humans.

Hour One

Another interesting contribution to the Las Vegas CES expo is Hour One’s AI-powered system for creating premium-quality synthetic characters based on real-life people. The idea is that these very realistic characters can be used to promote products without companies having to hire expensive stars and actors, and that companies using Hour One can save time and money and get a close match to their brief, thanks to the capabilities, scale/scope and fast turnaround that Hour One offers.

Mirriad

Also adding to the intriguing and engaging tech innovations at the expo, albeit at private meetings there, is Mirriad’s AI-powered solution for analysing videos, TV programmes and movies for brand/product insertion opportunities, enabling retrospective brand placements in visual content. For example, different adverts can be inserted into the roadside billboards and bus-stop advertising boards shown in pre-shot videos and films.

What Does This Mean For Your Business?

AI is clearly emerging as an engine that is driving change and creating a wide range of opportunities for business marketing as well as for security purposes. The realism, accuracy, flexibility, scope, scale and potential cost savings that AI offers could provide many beneficial business opportunities. The flip side for us as individuals and consumers is that, while biometric systems such as facial recognition offer some convenience and protection from cybercrime, they can also threaten our privacy and security. It is ironic, and probably inevitable, that we may therefore come to need and value AI-powered protection solutions such as D-ID’s.

AI Better at Breast Cancer Detection Than Doctors

Researchers at Google Health have created an AI program which, in tests, has proven to be more accurate at detecting and diagnosing breast cancer than expert human radiologists.

Trained

The AI software, which was developed by Google Health researchers in conjunction with DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, was ‘trained’ to detect the presence of breast cancer using X-ray images (from mammograms) of nearly 29,000 women.

Results

In the UK tests, compared to a single radiologist, the AI program delivered a 1.2% reduction in false positives (where a mammogram is incorrectly diagnosed as abnormal) and a 2.7% reduction in false negatives (where a cancer is missed). These positive results were even greater in the US tests.

In a separate test, the program was trained only on UK data and then tested against US data (to determine its wider effectiveness); even so, there was a very respectable 3.5% reduction in false positives and an 8.1% reduction in false negatives.

In short, these results appear to show that the AI program, which outperformed six radiologists in the reading of mammograms despite only having mammograms to go on (human radiologists also have access to medical history), is better at spotting cancer than a single doctor, and as good at spotting cancer as the current double-reading system of two doctors.
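For readers unfamiliar with the terminology, the false-positive and false-negative figures behind results like these come straight from a confusion matrix of screening outcomes. A minimal sketch of the two rates, using invented counts for illustration (these are not the study’s data):

```python
def screening_rates(tp: int, fp: int, tn: int, fn: int):
    """Return (false_positive_rate, false_negative_rate) from screening counts.

    tp: cancers correctly flagged       fp: healthy scans wrongly flagged
    tn: healthy scans correctly cleared fn: cancers missed
    """
    fpr = fp / (fp + tn)  # fraction of healthy mammograms flagged as abnormal
    fnr = fn / (fn + tp)  # fraction of cancers that were missed
    return fpr, fnr

# Illustrative example: 1,100 screens, 100 of which involve a cancer.
fpr, fnr = screening_rates(tp=90, fp=50, tn=950, fn=10)
```

A “reduction of 1.2% in false positives” in the study means the AI’s false-positive rate was 1.2 percentage points lower than the comparison radiologist’s.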

Promising

Even though these initial test results have received a lot of publicity and appear very positive, and bearing in mind the seriousness of the condition, AI-based systems of this kind still have some way to go: more research, clinical studies and regulatory approval are needed before they enter mainstream healthcare services.

What Does This Mean For Your Business?

This faster and more accurate way of spotting and diagnosing breast cancer by harnessing the power of AI could bring many benefits. These include reducing stress for patients by shortening diagnosis time, easing the workload pressure on already stretched radiologists, and going some way towards bridging the UK’s current shortage of radiologists (who normally need more than ten years of training to read mammograms). All this could mean earlier diagnosis and greater survival rates.

For businesses, this serves as an example of how AI can be trained and used to study some of the most complex pieces of information and produce results that are more accurate, faster and cheaper than humans doing the same job – remembering, of course, that AI programs work 24/7 without a day off.