Technology

Tracking For People Who Lose Things

Google Assistant now supports Tile’s Bluetooth trackers, which means that Tile customers can use a simple voice command to enlist the help of Google Assistant in finding their lost keys, wallet, TV remote control and more.

What Is Tile?

Tile uses Bluetooth and a phone app to locate physical ‘Tile’ tracking devices of different sizes, which can be attached to keyrings or bags, slipped into wallets, or even fixed to a dog’s collar.  The Tile app on the user’s smartphone can then be used to ring a Tile (the physical tracker attached to, e.g., your keyring) if it’s nearby: the Tile gives off a tone so it can be found. By tapping the ‘Find’ button in the app, the item that has the Tile tracker attached can then be located.
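
To make the general mechanism concrete, here is a minimal illustrative sketch of finding a nearby Bluetooth tracker by scanning for its advertisements. It is not Tile’s actual protocol (Tile’s identifiers are proprietary), and the advertised name used here is an assumption; it uses the Python ‘bleak’ library.

    # Illustrative only: scan for BLE advertisements and use signal strength
    # (RSSI) as a rough proximity hint. NOT Tile's real protocol;
    # TRACKER_NAME is a made-up advertised name for this sketch.
    import asyncio
    from bleak import BleakScanner

    TRACKER_NAME = "MyTracker"  # assumption: a tracker advertising a readable name

    async def find_tracker():
        # return_adv=True yields (device, advertisement_data) pairs
        results = await BleakScanner.discover(timeout=5.0, return_adv=True)
        for device, adv in results.values():
            if adv.local_name and TRACKER_NAME in adv.local_name:
                # Higher (less negative) RSSI roughly means 'closer'
                print(f"Found {adv.local_name} at {device.address}, RSSI {adv.rssi} dBm")

    asyncio.run(find_tracker())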

If an item has genuinely been lost outside of the house, Tile can also be used to locate the item on a map, which shows the last time and place that the item was with the user. Users who can’t locate their item this way can also ask the wider Tile community to anonymously help them find it.

Tile also has partnerships with manufacturers so that its technology is already built into items, e.g. Sennheiser earphones.

Tile is reported to have already sold more than 22 million devices worldwide in 195 countries with its system being used to find 6 million items every day.

Google Assistant

The support from Google Assistant (via Nest devices – the Nest Mini or Nest Hub) means that, rather than opening a Tile app on their phone to locate their missing items, users can simply ask the Google Assistant where their item is, and/or ask the Google Assistant to ring their missing item. This adds an extra layer of convenience for Tile and Google Assistant users.

Competition From Apple

The move to partner with Google gives Tile a better opportunity to fend off likely competition from Apple, which is reported to be on the verge of releasing its own item location tracking system.

What Does This Mean For Your Business?

For Tile, teaming up with Google is a very important strategic move, helping it to add extra convenience and a powerful brand endorsement to its services, to strengthen its current competitive edge, and to give itself more of a chance to fight off competition from Apple when it enters the market (soon) with a similar service.

For Google, this is a chance to add another value-adding feature to its digital assistant’s services, thereby helping it compete in another small way with competitors like Amazon.

For users of Tile, and future users of Tile who have a Google Nest device, this offers an even more convenient and fast way of using Tile’s services.

WhatsApp Ceases Support For More Old Phone Operating Systems

WhatsApp has announced that its messaging app will no longer work on outdated operating systems, which is a change that could affect millions of smartphone users.

Android versions 2.3.7 and Older, iOS 8 and Older

The change, which took place on February 1, means that WhatsApp has ended support for Android operating system versions 2.3.7 and older and for iOS 8 and older, meaning that users of WhatsApp who have those operating systems on their smartphones will no longer be able to create new accounts or re-verify existing accounts.  Although these users will still be able to use WhatsApp on their phones, WhatsApp has warned that, because it has no plans to continue developing for the old operating systems, some features may stop functioning at any time.
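
As a purely illustrative sketch (not WhatsApp’s actual code), gating a feature by minimum OS version usually comes down to a simple version comparison; the minimum versions below are assumptions inferred from the cut-off described above.

    # Hypothetical sketch: decide whether a platform/version pair is still
    # supported. Minimums are assumptions based on the cut-off above
    # ("newer than Android 2.3.7, newer than iOS 8").
    MINIMUMS = {
        "android": (2, 3, 8),  # anything above 2.3.7
        "ios": (9, 0),         # anything above iOS 8
    }

    def parse_version(version_string):
        return tuple(int(part) for part in version_string.split("."))

    def is_supported(platform, version_string):
        return parse_version(version_string) >= MINIMUMS[platform]

    # Example: a device on Android 2.3.7 can no longer register
    print(is_supported("android", "2.3.7"))  # False
    print(is_supported("ios", "12.1"))       # True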

Why?

The change is consistent with the Facebook-owned app’s strategy of withdrawing support for older systems and older devices, as it did back in 2016 (dropping smartphones running Android 2.2 Froyo and older, Windows Phone 7 and older, and iOS 6 and older), and when it withdrew support for Windows phones on 31 December 2019.

For several years now, WhatsApp has made no secret of wanting to maintain the integrity of its end-to-end encrypted messaging service, making changes that ensure new features can be added to keep the service competitive, maintain feature parity across different systems and devices, and focus on the operating systems that it believes the majority of its customers in its main markets now use.

Security & Privacy?

Since there will no longer be updates for older operating systems, continuing to use them could also expose users to privacy and security risks.

What Now?

Users who have a smartphone with an older operating system can update the operating system, or upgrade to a newer smartphone model, in order to ensure that they can continue using WhatsApp.

The WhatsApp messaging service can also now be accessed through the desktop by syncing with a user’s phone.

What Does This Mean For Your Business?

WhatsApp is used by many businesses for general communication and chat, groups and sending pictures, and for those business users who still have an older smartphone operating system, this change may be another reminder that the (perhaps overdue) time to upgrade is at hand.  Some critics, however, have pointed out that the move may have more of a negative effect on WhatsApp users in growth markets, e.g. Asia and Africa, where many older devices and operating systems are still in use.

For WhatsApp, this move is a way to stay current and competitive in its core markets and to give itself the scope to offer new features that will keep users loyal, engaged and committed to the app.

Business Leaders Lack Vital Digital Skills Says OU Survey

The Open University’s new ‘Leading in a Digital Age’ report highlights a link between improved business performance and leaders who are equipped, through technology training, to manage digital change.

Investing In Digital Skills Training

The latest version of the annual report, which bases its findings on a survey of 950 CTOs and senior leaders within UK organisations, concludes that leaders who invested in digital skills training are experiencing improved productivity (56 per cent), greater employee engagement (55 per cent), enhanced agility and, vitally, increased profit.

The flipside, highlighted in the same survey, is that almost half (47 per cent) of those business leaders surveyed thought they lacked the tech skills to manage in the digital age, and more than three-quarters of them acknowledge that they could benefit from more digital training.

Key Point

The key point revealed by the OU survey and report is that the development of digital skills in businesses is led from the top, and that those businesses that invest in learning and development of digital skills are likely to be better able to take advantage of opportunities in what could now be described as a ‘digital age’.

Skills Shortages

The report acknowledges the digital skills shortages that UK businesses and organisations face (63 per cent of senior business leaders report a skills shortage for their organisation) and identifies a regional divide among companies reporting skills shortages: more employers in the South, and particularly the South West, are finding that skills are in short supply and reporting that recruitment for digital roles takes longer.

One likely contributing factor to some geographical/regional divides in skills shortages, and to the difficulty in recruiting for tech roles in those areas, may be the spending per area on addressing those shortages.  For example, London is reported to have spent £1.4 billion in 2019 (the equivalent of £30,470 per organisation), while the North East spent the least (£172.2 million) and the South East spent only £10,260 per organisation.

Factors Affecting The Skills Shortage

The OU report identifies several key factors that appear to be affecting the skills shortage and the investment that may be needed to address those skills shortages. These include the uncertainty over Brexit, increased competition, an ageing population, the speed and scope of the current ‘digital revolution’, and a lack of diversity.

What Does This Mean For Your Business?

Bearing in mind that the OU, whose survey and report this was, is a supplier of skills training, the report nonetheless makes some relevant and important points.  In many businesses, for example, managers and owners are most likely to be the ones with the most integrated picture of the business and its aims; if they had better digital skills and awareness, they may be more likely to identify opportunities, and more likely to promote and invest in the digital skills training within their organisation that could be integral to taking advantage of those opportunities.

The tech skills shortage in the UK is, unfortunately, not new, and it is not down to businesses alone to solve the skills gap challenge. The government, the education system and businesses need to find ways to work together to develop a base of digital skills in the UK population and to make sure that the whole tech ecosystem finds effective ways to address the skills gap and keep the UK’s tech industries and businesses attractive and competitive.  As highlighted in the OU report, apprenticeships may be one more integrated way to help bridge skills shortages.

Featured Article – Proposed New UK Law To Cover IoT Security

The UK government’s Department for Digital, Culture, Media and Sport (DCMS) has announced that it will soon be preparing new legislation to enforce standards that will protect users of IoT devices from known hacking and spying risks.

IoT Household Gadgets

This commitment to legislate leads on from last year’s proposal by then Digital Minister Margot James and follows a seven-month consultation with GCHQ’s National Cyber Security Centre, and with stakeholders including manufacturers, retailers, and academics.

The proposed new legislation will improve digital protection for users of a growing number of smart household devices (devices with an Internet connection) that are broadly grouped together as the ‘Internet of Things’ (IoT).  These gadgets, of which there are an estimated 14 billion+ worldwide (Gartner), include kitchen appliances and gadgets, connected TVs, smart speakers, home security cameras, baby monitors and more.

In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

What Are The Risks?

The risks are that the Internet connection in IoT devices can, if adequate security measures are not in place, provide a way in for hackers to steal personal data, spy on users in their own homes, or remotely take control of devices in order to misuse them.

Default Passwords and Link To Major Utilities

The main security issue with many of these devices is that they have pre-set, unchangeable default passwords, and once these passwords have been discovered by cyber-criminals, the IoT devices are wide open to being tampered with and misused.
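
As a hedged illustration of why default passwords matter, the short sketch below flags devices that still use credentials from a list of known factory defaults; the credential pairs and field names are invented for the example.

    # Illustrative sketch: flag devices still using known factory-default
    # credentials. The defaults and device records here are made up.
    KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "12345")}

    def audit_devices(devices):
        flagged = []
        for device in devices:
            if (device["username"], device["password"]) in KNOWN_DEFAULTS:
                flagged.append(device["id"])
        return flagged

    inventory = [
        {"id": "camera-01", "username": "admin", "password": "admin"},
        {"id": "doorbell-02", "username": "owner", "password": "s3cure-and-unique"},
    ]
    print(audit_devices(inventory))  # ['camera-01']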

Also, IoT devices are deployed in many systems that link to and are supplied by major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

Examples

Real-life examples of the kind of IoT hacking that the new legislation will seek to prevent include:

– Hackers talking to a young girl in her bedroom via a ‘Ring’ home security camera (Mississippi, December 2019).  In the same month, a Florida family were subjected to vocal racial abuse in their own home, and to a loud alarm blast, after a hacker took over their ‘Ring’ security system.

– In May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact, who happened to be her husband’s employee.

– Back in 2017, researchers discovered that a sex toy with an in-built camera could also be hacked.

– In October 2016, the ‘Mirai’ attack used thousands of household IoT devices as a botnet to launch an online distributed denial of service (DDoS) attack (on the DNS service ‘Dyn’) with global consequences.

New Legislation

The proposed new legislation will be intended to put pressure on manufacturers to ensure that:

– All internet-enabled devices have a unique password and not a default one.

– There is a public point of contact for the reporting of any vulnerabilities in IoT products.

– The minimum length of time that a device will receive security updates is clearly stated.
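
To illustrate how straightforward these three requirements are to check, here is a hypothetical sketch that validates a device’s metadata record against them; all field names are assumptions for illustration, not part of the proposed legislation.

    # Hypothetical sketch: check a device record against the three proposed
    # requirements listed above. Field names are invented for illustration.
    def complies_with_proposals(device):
        has_unique_password = (
            bool(device.get("password"))
            and device.get("password") != device.get("factory_default_password")
        )
        has_vulnerability_contact = bool(device.get("vulnerability_contact"))
        states_update_period = device.get("security_update_period_years") is not None
        return has_unique_password and has_vulnerability_contact and states_update_period

    smart_bulb = {
        "password": "correct-horse-battery",
        "factory_default_password": "admin",
        "vulnerability_contact": "security@example.com",
        "security_update_period_years": 3,
    }
    print(complies_with_proposals(smart_bulb))  # True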

Challenges

Even though legislation could make manufacturers try harder to make IoT devices more secure, technical experts and commentators have pointed out that there are many challenges to making internet-enabled/smart devices secure because:

  • Adding security to household internet-enabled ‘commodity’ items costs money. This would have to be passed on to the customer in higher prices, which would make the price uncompetitive. It may therefore be that security is being sacrificed to keep costs down: sell now and worry about security later.
  • Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update. There are also costs involved in doing so which manufacturers of lower-end devices may not be willing to incur.
  • Devices such as white goods are typically infrequent, long-lasting purchases: we tend to keep them until they stop working, and we are unlikely to replace them because of a security vulnerability that is not fully understood. As such, these devices are likely to remain available to be exploited by cyber-criminals for a long time.

Looking Ahead

Introducing legislation that only requires manufacturers to make relatively simple changes to make sure that smart devices come with unique passwords and are adequately labelled with safety and contact information sounds as though it shouldn’t be too costly or difficult.  The pressure of having to display a label, by law, that indicates how safe the item is, could provide that extra motivation for manufacturers to make the changes and could be very helpful for security-conscious consumers.

The motivation for manufacturers to make the changes to the IoT devices will be even greater if faced with the prospect of retailers eventually being barred from selling products that don’t have a label, as was originally planned for the proposed legislation.

The hope from cyber-security experts and commentators is that the proposed new legislation won’t be watered down before it becomes law.

Police Images of Serious Offenders Reportedly Shared With Private Landlord For Facial Recognition Trial

There have been calls for government intervention after it was alleged that South Yorkshire Police shared its images of serious offenders with a private landlord (Meadowhall shopping centre in Sheffield) as part of a live facial recognition trial.

The Facial Recognition Trial

The alleged details of the image-sharing for the trial were brought to the attention of the public by the BBC radio programme File on 4, and by privacy group Big Brother Watch.

It has been reported that the Meadowhall shopping centre’s facial recognition trial ran for four weeks between January and March 2018 and that no signs warning visitors that facial recognition was in use were displayed. The owner of Meadowhall shopping centre is reported as saying (last August) that the data from the facial recognition trial was “deleted immediately” after the trial ended. It has also been reported that the police have confirmed that they supported the trial.

Questions

The disclosure has prompted some commentators to question the ethics and legality not just of holding public facial recognition trials without displaying signs, but also of the police allegedly sharing photos of criminals (presumably from their own records) with a private landlord.

The UK Home Office’s Surveillance Camera Code of Practice, however, does appear to support the use of facial recognition or other biometric characteristic recognition systems if their use is “clearly justified and proportionate.”

Other Shopping Centres

Other facial recognition trials in shopping centres and public shopping areas have been met with a negative response too.  For example, a trial at the Trafford Centre shopping mall in Manchester was halted in 2018, and the King’s Cross facial recognition trial (which ran between May 2016 and March 2018) is still the subject of an ICO investigation.

Met Rolling Out Facial Recognition Anyway

Meanwhile, and despite a warning from Elizabeth Denham, the UK’s Information Commissioner, back in November, the Metropolitan Police has announced it will be going ahead with its plans to use live facial recognition cameras on an operational basis for the first time on London’s streets to find suspects wanted for serious or violent crime. Also, it has been reported that South Wales Police will be going ahead in the Spring with a trial of body-worn facial recognition cameras.

EU – No Ban

Even though many privacy campaigners were hoping that the EC would push for a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place, Reuters has reported that the European Union has now scrapped any possibility of a ban on facial recognition technology in public spaces.

Facebook Pays

Meanwhile, Facebook has just announced that it will pay £421m to a group of Facebook users in Illinois, who argued that its facial recognition tool violated the state’s privacy laws.

What Does This Mean For Your Business?

Most people would accept that facial recognition could be a helpful tool in fighting crime, saving costs, and catching known criminals more quickly, and that this would benefit businesses and individuals. The challenge, however, is that despite ICO investigations and calls for caution, despite problems that the technology is known to have, e.g. being inaccurate and showing a bias (being better at identifying white and male faces), and despite its impact on privacy, the police appear to be pushing ahead with its use anyway.  For privacy campaigners and others, this may give the impression that their real concerns (many of which are shared by the ICO) are being pushed aside in an apparent rush to get the technology rolled out. To many, it appears that the technology is being used before any of the major problems with it have been resolved, before there has been a proper debate, and before the introduction of an up-to-date statutory law and code of practice for the technology.

Eating Lunch At Your Desk Brings Health Risks

Recent research by BUPA has highlighted how many UK workers don’t take a proper lunch break each day and end up risking their health and happiness and reducing their productivity by eating at their desks.

The Number

The research, which involved studying the habits of 2,000 full-time workers, revealed that almost two-thirds (64 per cent) claim they are not always able to take their legally required 20-minute break when working six hours or more.  Also, only 29 per cent of employees said they take a full hour for lunch every day, while 28 per cent of workers said they never take a breather of any kind during the working day.

Working Lunch & Eating At The Desk

According to the research, 45 per cent of employees do not leave the workplace during what should be lunchtime, and almost one-third (31 per cent) usually eat at their desk. The result is what is essentially just a working lunch, as they respond to work calls (42 per cent) and emails (40 per cent) while eating at their desk.

Health (and Happiness) Risks

There are many health risks associated with not taking a proper lunch break and with having a ‘working lunch’ at the desk.  These include:

– Overeating due to distraction.  The ‘working lunch’ at the desk means that you don’t feel as full, which leads to feeling hungry in the afternoon and then eating more.  This behaviour and its effects were studied and identified by researchers from the University of Surrey in 2012.

– Negative effects on health from sitting down most of the day.  Not taking a break, and not moving from your desk, let alone the workplace, can contribute to some serious health problems.  For example, a University of Leicester study (2012) showed that sitting for long periods increases your risk of diabetes, heart disease and death, and that this can be the case even for people who meet typical physical activity guidelines.

– Staying seated at the desk for long periods during the day can cause tension in muscles, pain in joints, and can weaken hip and core muscles, which can, in turn, lead to other problems with muscles and joints.

– Increased stress levels can come from not having a break and from interruptions during eating.

– Risks from bacteria on the desk and on the keyboard (and phone) that can be exacerbated during eating and by dropping food particles from lunch at the desk.  For example, a Printerland survey (March 2018) showed that the average desk contains 400 times more germs than a toilet seat and that only a third of staff members follow guidelines about cleaning up their workplace, and one in 10 never clean their desks.

Productivity Affected

Not having a proper lunch break, with real detachment from work, also affects the brain’s ability to effectively ‘reset’ and boost our attention, and the body’s ability to refresh our energy.  This can lead to reduced productivity in the afternoon. It can also mean that we miss out on the inspiration, ideas and clarity of thought (potentially the solution to a work problem) that a break can deliver.

Happiness

The reduced productivity, increased stress and physical problems that come from staying at a desk to eat can also bring lower levels of satisfaction and happiness at work, and a faster route to ‘burnout’.

Why?

It is thought that some of the reasons for these unhealthy break (or no-break) patterns are a UK work culture in which people feel obliged to eat at the desk, a fear of appearing absent or uncommitted to the company if not seen at one’s desk, and/or simply feeling too busy or overloaded with work.

What Does This Mean For Your Business?

It is understandable that businesses, particularly where customers come in, frequently phone, or where service is particularly urgent, always need to have staff available to deal with customers and enquiries during business hours.  This, however, can still be achieved by the planning of rotas and by encouraging staff to make arrangements to ensure that communications are covered fairly while allowing for fixed breaks for all staff members.

Some ways that businesses and organisations can help staff to look after themselves, and in doing so look after the company and its productivity, include encouraging employees to take lunches away from their desks, creating a physical environment where employees can take themselves away from their desks, and managers leading the way in the behaviour they want to see and encouraging a healthy break-taking culture.  Workers can also help to improve their own health at work by walking around more (perhaps placing a laptop on a filing cabinet so they have to stand), having standing meetings, reducing TV viewing time when not at work (to help offset any continuing unhealthy behaviours at work), and scheduling lunches with friends or alone to ensure that they actually leave the office and are more productive on their return.

That said, the workload, management style and values and the work culture can have a strong influence on whether workers feel able and safe to take breaks, and managers need to authorise, endorse, and be seen to reward a break-taking culture for it to succeed and hopefully, benefit the business in the process.

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials in the UK have raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and is not always accurate.  For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on Champions League final day in June 2017 in Cardiff, which was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, which was unconnected to the event.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by The National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, the facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and was particularly prone to misidentifying African-American females.

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before the regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, the head of Google’s parent company, Alphabet.  Mr Pichai (in the Financial Times) called for a sensible approach to regulation and for a set of rules for areas of AI development such as self-driving cars and AI usage in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies, which have known flaws and are of concern to government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk management measures have had time to be properly considered and developed.  It is true that facial recognition could have real benefits (e.g. fighting crime) for many businesses, and that AI offers businesses a vast range of opportunities to save money and time, as well as to innovate in products, services and processes.  However, the flaws in these technologies, and their potential to be used improperly, covertly, and in a way that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time is taken and efforts are made now to address stakeholders’ concerns and to develop regulations and measures that could prevent bigger problems involving these technologies further down the line.

Want A Walkie-Talkie? Now You Can Use Your Phone and MS Teams

Microsoft has announced that it is introducing a “push-to-talk experience” to its ‘Teams’ collaborative platform that turns employee or company-owned smartphones and tablets into walkie-talkies.

No Crosstalk or Eavesdropping

The new ‘Walkie Talkie’ feature will offer clear, instant and secure voice communication over the cloud.  This means that it will not be at risk from traditional analogue (unsecured network) walkie-talkie problems such as crosstalk or eavesdropping, and Microsoft says that because Walkie Talkie works over Wi-Fi or cellular data, it can also be used across geographic locations.

Teams Mobile App

The Walkie Talkie feature will be available in private preview in Teams in the first half of this year and will be accessible in the Teams mobile app.  Microsoft says that Walkie Talkie will also integrate with Samsung’s new Galaxy XCover Pro enterprise-ready smartphone for business.

Benefits

The main benefits of Walkie Talkie are that it makes it easier for firstline workers to communicate and manage tasks, reduces the number of devices employees must carry, and lowers IT costs.

One Better Than Slack

Walkie Talkie also gives Teams another advantage over its increasingly distant rival Slack, which doesn’t currently have its own Walkie Talkie-style feature, although things like spontaneous voice chat can be added to Slack with Switchboard.

Last month, Microsoft announced that its Teams product had reached the 20 million daily active users (and growing) mark, thereby sending Slack’s share price downwards.

Slack, which has 12 million users (a number which has increased by 2 million since January 2019), appears to be falling well into second place behind Teams in terms of user numbers in the $3.5 billion chat-based collaborative working software market.  However, some tech commentators have noted that Slack has stickiness and strong user engagement, and that its main challenge is that although large companies in the US use it and like it, many currently have the free version, so Slack will have to convince them to upgrade to the paid-for version if it wants to start catching up with Teams.

Apple Watch Walkie-Talkie App

Apple Watch users (Series 1 or later with watchOS 5.3 or later, though not in all countries) have been able to use a ‘Walkie-Talkie’ app since October last year.

What Does This Mean For Your Business?

For businesses using Microsoft Teams, the new Walkie Talkie feature could be a cost-saving and convenient tool for firstline workers, and the fact that it integrates with Samsung’s new Galaxy XCover Pro will give it even more value for businesses.

For Microsoft, the new Walkie Talkie feature, along with seven other recently announced tools for Teams focused firmly on communication and task management for firstline workers, is another way for Teams to gain a competitive advantage over rival Slack and to increase the value of Office 365 to valuable business customers.

Facebook Bans Deepfake Videos

In a recent blog post, ahead of the forthcoming US election, Monika Bickert, Vice President of Facebook’s Global Policy Management, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout of the news that 50 million Facebook profiles were harvested as early as 2014 in order to build a software program that could predict and use personalised political adverts to influence choices at the ballot box in the last U.S. election includes damaged trust in Facebook, a substantial fine, plus a fall in the number of daily users in the United States and Canada for the first time in its history.

Deepfakes

One of the key concerns for Facebook this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants them to. Such videos could obviously be used to influence public thinking about political candidates and election results, and being seen as a platform for the easy distribution of deepfake videos would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post statement from Facebook says that as a matter of policy, it will now remove any misleading media from its platform if the media meets two criteria, which are:

  • If it has been synthesised, i.e. edited beyond mere adjustments for clarity or quality, to the point where the ‘average person’ could be misled into thinking the subject of the media/video is saying words that they did not actually say, and…
  • If the media is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video, in order to make it appear to be authentic.
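
Expressed as a hedged, purely illustrative predicate (the field names below are invented and are not Facebook’s API), the two-part test above might look like this:

    # Illustrative only: both criteria described above must hold for removal.
    # Field names are invented for this sketch, not Facebook's actual API.
    def should_remove(media):
        misleadingly_synthesised = (
            media["edited_beyond_clarity_or_quality"]
            and media["average_person_would_be_misled"]
        )
        ai_manipulated = (
            media["produced_by_ai_or_ml"]
            and media["merges_replaces_or_superimposes_content"]
        )
        return misleadingly_synthesised and ai_manipulated

    suspect_clip = {
        "edited_beyond_clarity_or_quality": True,
        "average_person_would_be_misled": True,
        "produced_by_ai_or_ml": True,
        "merges_replaces_or_superimposes_content": True,
    }
    print(should_remove(suspect_clip))  # True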

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photos or videos rated as false or partly false (by a fact-checker) will have their distribution “significantly” reduced in News Feed and will be rejected if they are being run as ads. Also, those who see such content and try to share it, or have already shared it, will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to ensure that it is not seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deep Fake Detection Challenge, with $10 million in grants and with a cross-sector coalition of organisations in order to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent announcement of a policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow certain videos that could be described as misinformation in situations apparently of its choosing.  For example, Facebook has said that content that violates its policies could be allowed if it is deemed newsworthy, e.g., presumably, the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi.

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and the deliberate spread of misinformation, and bearing in mind the position of influence that Facebook has, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that use Facebook as an advertising platform also need to know that Facebook users trust (and will continue to use) that platform (and see their adverts), so it’s important to businesses that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly in the US at the time of an election (and bearing in mind what happened last time), it is in Facebook’s own interest to protect its brand against any accusations of allowing political influence through a variety of media on its platform, and against any further loss of public trust. This change of policy also shows that Facebook is trying to demonstrate readiness to deal with the up-to-date threat of deepfakes (even though they are still relatively rare).

That said, Google and Twitter (the latter with its new restrictions on micro-targeting, for example) have both been very public about trying to stop lies in political advertising on their platforms, but Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.

Featured Article – Email Security (Part 2)

Following on from last month’s featured article about email security (part 1), in part 2 we focus on many of the email security and threat predictions for this year and the foreseeable future.

Looking Forward

In part 1 of this ‘Email Security’ snapshot, we looked at how most breaches involve email, the different types of email attacks, and how businesses can defend themselves against a variety of known email-based threats. Unfortunately, businesses and organisations now operate in an environment where cyber-attackers are using more sophisticated methods across multi-vectors and where threats are constantly evolving.

With this in mind, and with businesses seeking to be as secure as possible against the latest threats, here are some of the prevailing predictions based around email security for the coming year.

Ransomware Still a Danger

As highlighted by a recent Malwarebytes report, and a report by Forbes, the ransomware threat is by no means over: after an increase in the first quarter of 2019 of 195 per cent on the previous year’s figures, it is still predicted to be a major threat in 2020. Tech and security commentators have noted that although ransomware attacks on consumers have declined by 33 per cent since last year, attacks against organisations have worsened.  In December, for example, a ransomware attack was reported to have taken a US Coast Guard (USCG) maritime base offline for more than 30 hours.

At the time of writing this article, it has been reported that following an attack discovered on New Year’s Day, hackers using ransomware are holding Travelex’s computers for ransom to such a degree that company staff have been forced to use pen and paper to record transactions!

Information Age, for example, predicts that softer targets such as healthcare services (with outdated software, inadequate cybersecurity resources, and a motivation to pay the ransom) will be targeted more in the coming year with ransomware that is carried by email.

Phishing

The already prevalent email phishing threat looks likely to continue and evolve this year, with cybercriminals set to try new methods in addition to sending phishing emails, e.g. using SMS, and even spear phishing (highly targeted phishing) that uses deepfake videos to pose as company authority figures.

As mentioned in part 1 of the email security articles, big tech companies are responding to help combat phishing with new services e.g. the “campaign views” tool in Office 365 and Google’s advanced security settings for G Suite administrators.

BEC & VEC

Whereas Business Email Compromise (BEC) attacks have been successful at using email fraud combined with social engineering to bait one staff member at a time to extract money from a targeted organisation, security experts say that this kind of attack is morphing into a much wider threat: ‘VEC’ (Vendor Email Compromise). This is a larger and more sophisticated version which, using email as a key component, seeks to leverage organisations against their own suppliers.

Remote Access Trojans

Remote Access Trojans (RATs) are malicious programs that can arrive as email attachments.  RATs provide cybercriminals with a back door for administrative control over the target computer, and they can be adapted to help them to avoid detection and to carry out a number of different malicious activities including disabling anti-malware solutions and enabling man-in-the-middle attacks.  Security experts predict that more sophisticated versions of these malware programs will be coming our way via email this year.

The AI Threat

Many technology and security experts agree that AI is likely to be used in cyberattacks in the near future, and its ability to learn and to keep trying to reach its target, e.g. in the form of malware, makes it a formidable threat. Email is the most likely means by which malware can reach and attack networks and systems, so there has never been a better time to step up email security and to train and educate staff about malicious email threats, how to spot them and how to deal with them. The addition of AI to the mix may make it more difficult for malicious emails to be spotted.

The good news for businesses, however, is that AI and machine learning are already used in some anti-virus software, e.g. Avast, and using AI in security solutions to counter AI security threats is a trend that is likely to continue.
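
As a toy illustration of the machine-learning idea (real security products use far richer signals and training data than this), a text classifier can be trained to score emails as likely phishing; the sketch below uses scikit-learn and a tiny, made-up training set.

    # Toy sketch: score emails as phishing (1) or legitimate (0) with a
    # TF-IDF + logistic regression pipeline. Training examples are invented
    # for illustration; this is nowhere near production quality.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Your account is locked, verify your password now",
        "Urgent: invoice attached, payment required today",
        "Lunch at 1pm? The usual place",
        "Minutes from yesterday's project meeting attached",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(emails, labels)

    print(model.predict(["Please verify your password immediately"]))  # likely [1]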

One Vision of the Email Security Future

The evolving nature of email threats means that businesses and organisations may need to look at their email security differently in the future.

One example of an envisaged approach to email security comes from Mimecast’s CEO Peter Bauer.  He suggests that, in order to truly eliminate the threats that can abuse the trust in their brands “out in the wild”, companies need to “move from perimeter to pervasive email security”.  This will mean focusing on the threats:

– To the perimeter (which he calls Zone 1).  This involves protecting users’ email and data from spam and viruses, malware and impersonation attempts, and data leaks – in fact, protecting the whole customer, partner and vendor ecosystem.

– From inside the perimeter (Zone 2).  This involves being prepared to be able to effectively tackle internal threats like compromised user accounts, lateral movement from credential harvesting links, social engineering, and employee error threats.

– From beyond the perimeter (Zone 3).  These could be threats to brands and domains from spoofed or hijacked sites that could be used to defraud customers and partners.

As well as recognising and looking to deal with threats in these three zones, Bauer also suggests an API-led approach to help deliver pervasive security throughout all zones.  This could involve businesses monitoring and observing email attacks with, e.g., SOARs, SIEMs, endpoints, firewalls and broader threat intelligence platforms, and feeding this information and intelligence to security teams to help keep email security as up to date and as tight as possible.
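
To make the API-led idea concrete, here is a minimal hypothetical sketch that pulls sender indicators from several security tools’ APIs and merges them into one blocklist that an email gateway could consume; the endpoint URLs and response format are assumptions for illustration.

    # Hypothetical sketch: merge sender indicators from several security
    # tools into one blocklist. URLs and the JSON shape are invented.
    import requests

    FEEDS = [
        "https://siem.example.com/api/malicious-senders",
        "https://firewall.example.com/api/blocked-senders",
    ]

    def merged_blocklist():
        indicators = set()
        for url in FEEDS:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            indicators.update(response.json())  # assumes each feed returns a JSON list
        return indicators

    # An email gateway could then refuse mail from any sender in merged_blocklist()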

Into 2020 and Beyond

Looking ahead to email security in 2020 and beyond, companies will be facing plenty more of the same threats (phishing, ransomware, RATs) which rely on email combined with human error and social engineering to find their way into company systems and networks. Tech companies are responding with updated anti-phishing and other solutions.

SMEs (rather than just bigger companies) are also likely to find themselves being targeted with more attacks involving email, and companies will need to, at the very least, make sure they have the basic automated, technical and human elements in place (training, education, policies and procedures) to provide adequate protection (see the end of part 1 for a list of email security suggestions).

The threat of AI-powered attacks, however, is causing some concern and the race is on to make sure that AI-powered protection is up to the level of any AI-powered attacks.

Taking a leaf out of the book of companies like Mimecast, and looking at email security in a much wider scope and context (outside the perimeter, inside the perimeter, and beyond), may bring a more comprehensive kind of email security that can keep up with the many threats that are now arriving across a much wider attack surface.