AI

Robot Dog Maintains Social Distancing

A remotely controlled robot called ‘SPOT’, being trialled in a Singapore park, warns visitors to observe safe social distancing measures.

Sign

The 2-week trial in Singapore’s Bishan-Ang Mo Kio Park is a collaboration between Singapore’s National Parks Board (NParks) and GovTech.  A sign tells visitors about SPOT’s presence and explains that the robot will move autonomously through the park and gardens to help ensure safe distancing.

Sensors and Cameras

SPOT, the four-legged robot made by Boston Dynamics, uses sensors to prevent collisions with objects or people, and a person is on hand to help if there are any unforeseen issues.

Although SPOT is fitted with cameras which can help to estimate the number of visitors to the park, it has been reported that the cameras are not being used to collect personal data or to identify individuals.
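
As a rough illustration of how visitors might be counted without being identified, a stock pedestrian detector can tally figures in a frame with no face matching involved. The sketch below is our own illustration (not GovTech’s actual system) and assumes OpenCV plus a hypothetical saved frame, park_frame.jpg:

```python
import cv2

# Anonymous people-counting sketch: a stock HOG pedestrian detector counts
# figures in a frame without performing any identification or face matching.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("park_frame.jpg")  # hypothetical camera frame (path is an assumption)
boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
print(f"Estimated visitors in frame: {len(boxes)}")
```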

Message

As SPOT proceeds around the park, it broadcasts a pre-recorded message that reminds visitors to observe social distancing.

Singapore Laws

People in Singapore are used to complying with a wide variety of laws governing behaviour in public spaces, so it is likely that even commands delivered by a robot will be observed by most people.  In Singapore, for example, on-the-spot fines are common for littering and for smoking in some public places, e-cigarettes can be confiscated, chewing gum is banned, and not flushing a public toilet can also result in a fine.

Used in Hospital

Robot delivery services are already a common sight in many hospitals, but the SPOT robot is also being used at Brigham and Women’s Hospital, a Harvard teaching hospital, for remote triage of patients suspected of having COVID-19.

Drones

In cities in other countries, e.g. China, the US, Spain and Israel, drones have been used to deliver social distancing and dispersal instructions where people have gathered outdoors, and (in Jerusalem) to hover outside apartment windows and balconies to check whether people who have been ordered to self-isolate are doing so.

What Does This Mean For Your Business?

For drone and robot companies such as Boston Dynamics, demand has increased during the pandemic because the flexibility, manoeuvrability, and safety (from cross-contamination) provided by these devices have proven to have real value.  Robots and drones, using cameras, sensors and other tools, can safely and quickly carry out a wide variety of tasks, as and when required, 24/7.  Delivery robots and commercial drones have also seen a boost in demand at a time when human movement has been restricted but monitoring of property and premises, and delivery of food and other important items, are still required.

Automation is becoming an important cost-saving, time-saving and value-adding element of many businesses and organisations.  The success of robots and drones, and the pandemic’s highlighting of the benefits they offer, can only boost the market further and lead many businesses, organisations and sectors to see new opportunities for robots and drones.

Featured Article – Facial Recognition and Super Computers Help in COVID-19 Fight

Technology is playing an important role in fighting the COVID-19 pandemic with adapted facial recognition cameras and super-computers now joining the battle to help beat the virus.

Adapted Facial Recognition

Facial recognition camera systems have been trialled and deployed in many different locations in the UK, famously including the 2016 and 2017 Notting Hill Carnivals, Champions League final day in Cardiff in June 2017, the Kings Cross Estate in 2019, and a deliberately “overt” trial of live facial recognition technology by the Metropolitan Police in the centre of Romford, London, in January 2019.  Although it would be hard to deny that facial recognition technology (FRT) could prove to be a very valuable tool in the fight against crime, issues around its accuracy, bias and privacy have led to criticism in the UK from the Information Commissioner about some of the ways it has been used, while (in January) the European Commission was considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place.

However, one way that some facial recognition systems have been adapted to help in the fight against COVID-19 is through the incorporation of temperature screening.

Thermographic Temperature-Screening

Early news reports of the initial spread of COVID-19 in China focused on how thermographic, temperature-screening cameras backed by AI could pick out people in crowds who displayed a key symptom, namely a raised temperature.
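
As a rough sketch of the underlying principle (not any vendor’s actual system), a calibrated thermographic frame can be treated as a grid of temperature readings and thresholded to flag a possible fever; the 38°C cut-off and the flag_fever helper below are illustrative assumptions:

```python
import numpy as np

FEVER_THRESHOLD_C = 38.0  # illustrative cut-off; real systems calibrate per camera

def flag_fever(frame_c: np.ndarray, threshold: float = FEVER_THRESHOLD_C):
    """Return pixel coordinates whose temperature reading exceeds the threshold.

    `frame_c` is a 2-D array of temperatures in Celsius, one value per pixel,
    as a thermographic camera might report after calibration.
    """
    return np.argwhere(frame_c > threshold)

# Usage: a synthetic 4x4 frame with one 'hot' reading
frame = np.full((4, 4), 36.5)
frame[1, 2] = 38.4
print(flag_fever(frame))  # -> [[1 2]]
```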

These systems are also likely to play a role in our post-lockdown, pre-vaccine world as one of many tools, systems, and procedures to improve safety as countries try to re-start their economies on the long road back.

In the UK – Facial Recognition Combined With ‘Fever Detection System’

In the UK, an AI-powered facial recognition system at Bristol Airport is reported to have been adapted to incorporate a ‘fever detection system’ developed by British technology company SCC. This means that the existing FRT system has been augmented with thermographic cameras that can quickly spot people, even in large moving groups (as is normal in airports), who have the kind of raised temperature associated with COVID-19.

In Russia – Facial Recognition Combined With Digital Passes on Phones

It has also been reported that, as far back as March, officials in Moscow have been using the city’s network of tens of thousands of security cameras, which can offer instant, real-time facial recognition of citizens in combination with digital passes on mobile phones. It has been reported that the sheer number of cameras in Moscow, which can also be used to measure social distancing and detect crowds, coupled with the sophisticated FRT at the back-end is enough to ensure that those who are supposed to be in isolation can be detected even if they come outside their front door for a few seconds.  Moscow’s facial recognition system is also reported to be able to identify a person correctly, even if they are wearing a face mask.

Supercomputers

One of the great advantages of supercomputers is that they can carry out staggering numbers of calculations per second, allowing them to solve complicated problems in a fraction of the time that other computers would take.  Supercomputers are, therefore, now being used in the fight against coronavirus. For example:

– Scientists at the University of Texas at Austin’s Texas Advanced Computing Center (TACC) in the U.S. are using the Frontera supercomputer and a huge computer model of the coronavirus to help researchers design new drugs and vaccines.

– University College London (UCL) researchers, as part of a consortium of over a hundred researchers from across the US and Europe, are using some of the world’s most powerful supercomputers (including the biggest in Europe and the most powerful in the world) to study the COVID-19 virus and thereby help develop effective treatments and, hopefully, a vaccine.  The researchers have been using the Summit supercomputer at Oak Ridge National Lab, USA (ranked 1st in the world) and SuperMUC-NG at GCS@LRZ, Germany (ranked 9th) to quickly search through existing libraries of compounds that could attach themselves to the surface of the novel coronavirus (a toy sketch of this kind of parallel screening follows this list).

– In the U.S., the COVID-19 High-Performance Computing (HPC) Consortium, a public-private effort involving the White House Office of Science and Technology Policy, U.S. government departments and IBM, is bringing together federal government, industry, and academics who are offering free computing time and resources on their supercomputers to help understand and beat the coronavirus.
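
As a toy illustration of the compound-library screening mentioned in the second item above, work can be fanned out across parallel workers and the best-scoring candidates kept. The binding_score function below is a hypothetical stand-in for a real docking calculation, and the whole sketch is ours, at laptop scale rather than supercomputer scale:

```python
from multiprocessing import Pool

def binding_score(compound: str) -> float:
    """Hypothetical stand-in for a docking calculation that scores how well
    a compound binds to a viral surface protein (higher = better fit)."""
    return (sum(ord(c) for c in compound) % 100) / 100.0  # placeholder arithmetic

def screen(compounds, top_n=3, workers=4):
    """Score every compound in parallel and return the best candidates,
    mimicking (at toy scale) how a library is fanned out across many nodes."""
    with Pool(workers) as pool:
        scores = pool.map(binding_score, compounds)
    ranked = sorted(zip(compounds, scores), key=lambda cs: cs[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    library = ["remdesivir", "compound-a", "compound-b", "compound-c"]
    print(screen(library))
```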

Looking Ahead

Facial recognition cameras used by police and government agencies have been the focus of some bad press and questions over a variety of issues, but the arrival of the pandemic has turned many things on their heads. The fact is that there are existing facial recognition camera systems which, when combined with other technologies, could help to stop the spread of a potentially deadly disease.

With vaccines normally taking years to develop, and with the pandemic being a serious, shared global threat, it makes sense that the world’s most powerful computing resources should be (and are being) deployed to speed up the process of understanding the virus and of quickly sorting through existing data and knowledge that could help.

AI Skills Course Available – Free of Charge

A free, basic AI skills course, funded by Finland’s Ministry of Economic Affairs and Employment (MEAE), is being made available to citizens across the EU’s 27 member states.

Success in Finland

The decision by the Finnish government to make the course available online across the EU to an estimated five million Europeans (1% of the total population of EU states) in the 2020-2021 academic year was boosted by the popularity of a test run of the course in Finland back in 2018.

The Course

The six-chapter ‘Elements of AI’ course, which is still open to UK citizens, aims to de-mystify AI and provide a critical, customised understanding of it: a basic grasp of what AI is, how it can be used to boost business productivity, and how it will affect jobs and society in the future. The six chapters can be studied in a structured way or at the student’s own pace and cover: What is AI?, AI problem solving, real-world AI, machine learning, neural networks, and implications.

The course is available in six languages – English, German, Swedish, Estonian, Norwegian and Finnish.

Run by the University of Helsinki, the course represents a way in which a university can play a role in reaching a Europe-wide, cross-border audience and build important competencies for the future across that area.

Gift

The provision of the online course, which is funded by the MEAE at an estimated cost of €1.7m a year, is essentially a gift from Finland, not just to the leaders of fellow EU states but to the people of EU countries, to mark the end of Finland’s six-month rotating Presidency of the Council of the EU.  The hope, therefore, is that Finland’s gift will have real-world value in helping to develop digital literacy in the EU.

You can sign up for the course here: https://www.elementsofai.com/

170 Countries

It’s claimed that, to date, the free online AI course has been completed by students from over 170 countries and that around 40% of course participants are women, more than double the average for computer science courses.

What Does This Mean For Your Business?

With a tech skills shortage in the UK, with AI becoming a component in an increasing number of products and services, and with the fact that you can very rarely expect to get something of value for nothing, this free online course could be of some value to businesses across Europe.  The fact that the course is delivered online with just a few details needed to enrol makes it accessible, and the fact that it can be tackled in a structured way or at your own pace makes it convenient.  It’s also refreshing to see a country giving a gift to millions of citizens rather than just to other EU leaders and the fact that more women are taking the course must be good news for the tech and science sectors. Anything that can effectively, quickly and cheaply make a positive difference to digital literacy in the EU is likely to end up benefitting businesses across Europe.  Also, even though the UK’s now out of the EU, it’s a good job that we’re still able to access the course.

Amazon Offering Custom ‘Brand Voice’ to Replace Default Alexa Voice

Amazon’s AWS is offering a new ‘Brand Voice’ capability to companies which enables them to create their own custom voice for Alexa that replaces the default voice with one that reflects their “persona”, such as the voice of Colonel Sanders for KFC.

Amazon Polly

The capability is being offered through ‘Amazon Polly’, the Amazon Web Services (AWS) cloud service that converts text into lifelike speech.  The name ‘Polly’ is a reference to parrots, which are well-known for being able to mimic human voices.

Amazon says that companies can work with the Amazon Polly team of AI research scientists and linguists to build an exclusive, high-quality, Neural Text-to-Speech (NTTS) voice that will represent the “persona” of a brand.

Why?

According to Amazon, the ‘Brand Voice’ will give companies another way to differentiate their brand by incorporating a unique vocal identity into their products and services. Hearing a company’s ‘Brand Voice’ also helps create an experience for customers that strengthens the brand, triggers the brand messages and attitudes that a customer has already assimilated through advertising, and provides another element of consistency in brand messages, communications and interactions.

How?

The capability uses deep learning technology that can learn the intonation patterns of natural speech data and reproduce from them a voice in a similar style or tone. For example, in September, Alexa users were given the option to use the voice of Samuel L. Jackson for their Alexa; to produce the voice, the NTTS models were ‘trained’ on hours of recorded dialogue rather than requiring the actor to read new dialogue for the system.
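
For context, Amazon Polly’s public API already exposes the same neural (NTTS) engine through stock voices; a minimal sketch of a text-to-speech call is below. An exclusive ‘Brand Voice’ would use a custom VoiceId provisioned with the Polly team (not something we can show), so the stock ‘Joanna’ voice stands in here:

```python
import boto3

# Assumes AWS credentials are configured; 'Joanna' is a stock neural voice,
# standing in for an exclusive Brand Voice ID provisioned with the Polly team.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Engine="neural",            # the NTTS engine referenced above
    VoiceId="Joanna",
    OutputFormat="mp3",
    Text="Welcome back! How can I help you today?",
)

# The audio arrives as a stream; write it out as an MP3 file.
with open("greeting.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```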

Who?

Amazon Polly says on its website that it has already been working with Kentucky Fried Chicken (KFC) Canada (for a Colonel Sanders-style brand voice) and with National Australia Bank (NAB), using “the same deep learning technology that powers the voice of Alexa”.

Uses

The ‘Brand Voice’ created for companies can, for example, be used for call centre systems (as with NAB).

What Does This Mean For Your Business?

The almost inevitable ‘Brand Voice’ move sees Amazon taking another step towards monetising Alexa and moving further into the business market, where there is huge potential for modified, targeted and customised versions of Alexa and digital assistants.  Back in April last year, for example, Amazon launched its Alexa for Business Blueprints, a platform that enables businesses to make their own Alexa-powered applications for their organisation and incorporate their own customised, private ‘skills’. The announcement of ‘Brand Voice’, therefore, is really an extension of this programme.  For businesses and organisations, Alexa for Business and ‘Brand Voice’ offer the opportunity to customise powerful, flexible technology relatively easily, in a way that can closely meet their individual needs and provide a new marketing and communications tool that adds value in a unique way.

Featured Article – Innovations/Gamechangers to Expect in 2020

This is the time of year for looking ahead to how technology could be affecting and, hopefully, enhancing our lives over the coming year. Here is a selection of just some of the possible game-changing technological innovations that could be making an impact in 2020.

5G Technologies

Technology and communications commentators are saying that 5G’s increased bandwidth and speed, along with its other benefits, could start to improve file sharing and other communication capabilities for businesses this year (in the geographical areas where it’s deployed).

Quantum Technologies

Back in October, we heard about a paper published in the journal Nature reporting that scientists may have reached quantum supremacy, whereby a quantum computer can now do something significant that a classical computer can’t.  Using Google’s Sycamore chip (a 54-qubit processor), an algorithm output that would take 10,000 years on a classical computer took only 200 seconds, heralding potentially game-changing developments this year and beyond. With computing power of this kind, many hitherto extremely challenging problems could be solved quickly across a range of industries, and this is likely to attract much more investment in quantum technologies in 2020.

AI and Health

The possibilities for AI are still being explored but, thanks to start-ups like Imagen, which builds AI software for the medical field (e.g. OsteoDetect, which uses algorithms to scan X-ray images for common wrist bone fractures), and to the AI software developed by Google Health researchers (in conjunction with other key partners) that has proven more accurate at detecting and diagnosing breast cancer than expert human radiologists, AI could find more positive ways to impact upon healthcare in 2020 and beyond.

Although AI has promise in so many areas, including health, one of the predicted downsides of AI developments for workers is that the automation that it brings could really start to replace many more human jobs in 2020.

Neural Interfaces

There are many predictions of how commercial applications of neural interfaces may bridge the gap between humans and computers, perhaps allowing people to think instructions to computers.  One of the key challenges is, of course, that neural communications are both chemical and electrical, but this didn’t stop Elon Musk, head of SpaceX and Tesla, announcing in July last year that brain implants from his company Neuralink, which can link directly to devices, could be a reality within a year, i.e. by the end of 2020.  It remains to be seen how much progress is made this year, but the idea is that near-instantaneous, wireless communication between brain and computer via an implant could offer human brains a kind of ‘upgrade’ to enable them to keep up with and compete with AI.

Electric Vehicle Explosion

The many technologies (and government subsidies in some countries) that have led to a commitment by big car manufacturers to the production of electric vehicles mean that sales are predicted to rise 35 per cent in the first nine months of 2020.  More electric cars being produced and purchased in developed countries could herald game-changing results e.g. lessening the negative environmental impact of cars.

One other innovation that could help boost the growth of electric cars is a breakthrough in battery technology, such as that announced by Jeff Dahn, the university academic who heads Tesla’s battery research partnership and who has published a paper about a battery that could last a million miles without losing capacity.

Display Screen Technology

Advances in display-screen technologies, e.g. for phones, are likely to prove game-changers in their industries. With new ultra-thin LED screens able to turn many different surfaces and objects (e.g. walls and mirrors) into computational surfaces, and with advances like foldable screens (e.g. Microsoft’s Surface Neo), our environment and communications tools could see some real changes in 2020.

Translation

Technology for mobile devices, AI and language processing have converged to create translation apps such as Google’s ‘interpreter mode’, a real-time translator that has just been rolled out for Assistant-enabled Android and iOS phones worldwide.  Having a reliable tool to hand that enables back-and-forth conversation with someone speaking a foreign language (and is loaded with 44 languages) could be a game-changer for business and personal travel in 2020.

Augmented Reality

Several tech commentators are predicting (perhaps optimistically) that 2020 could be the year that reliable augmented reality glasses, perhaps from Apple, find their way onto the market and see large-scale adoption.

Looking Ahead

2020, therefore, holds a great deal of promise in terms of how existing and new technologies, combined in new products and services, could become game-changers that drive positive benefits for businesses and individual users alike.

Police Images of Serious Offenders Reportedly Shared With Private Landlord For Facial Recognition Trial

There have been calls for government intervention after it was alleged that South Yorkshire Police shared its images of serious offenders with a private landlord (Meadowhall shopping centre in Sheffield) as part of a live facial recognition trial.

The Facial Trial

The alleged details of the image-sharing for the trial were brought to the attention of the public by the BBC radio programme File on 4, and by privacy group Big Brother Watch.

It has been reported that the Meadowhall shopping centre’s facial recognition trial ran for four weeks between January and March 2018 and that no signs warning visitors that facial recognition was in use were displayed. The owner of Meadowhall shopping centre is reported as saying (last August) that the data from the facial recognition trial was “deleted immediately” after the trial ended. It has also been reported that the police have confirmed that they supported the trial.

Questions

The disclosure has prompted some commentators to question the ethics and legality not only of holding public facial recognition trials without displaying signs, but also of the police allegedly sharing photos of criminals (presumably from their own records) with a private landlord.

The UK Home Office’s Surveillance Camera Code of Practice, however, does appear to support the use of facial recognition or other biometric characteristic recognition systems if their use is “clearly justified and proportionate.”

Other Shopping Centres

Other facial recognition trials in shopping centres and public shopping areas have also met with a negative response.  Examples include the halting of a trial at the Trafford Centre shopping mall in Manchester in 2018, and the Kings Cross facial recognition trial (between May 2016 and March 2018), which is still the subject of an ICO investigation.

Met Rolling Out Facial Recognition Anyway

Meanwhile, despite a warning back in November from Elizabeth Denham, the UK’s Information Commissioner, the Metropolitan Police has announced that it will be going ahead with its plans to use live facial recognition cameras on an operational basis for the first time on London’s streets to find suspects wanted for serious or violent crime. It has also been reported that South Wales Police will go ahead in the spring with a trial of body-worn facial recognition cameras.

EU – No Ban

Even though many privacy campaigners were hoping that the EC would push for a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place, Reuters has reported that the European Union has now scrapped any possibility of such a ban.

Facebook Pays

Meanwhile, Facebook has just announced that it will pay £421m to a group of Facebook users in Illinois, who argued that its facial recognition tool violated the state’s privacy laws.

What Does This Mean For Your Business?

Most people would accept that facial recognition could be a helpful tool in fighting crime, saving costs, and catching known criminals more quickly, and that this would benefit businesses and individuals. The challenge, however, is that despite ICO investigations and calls for caution, and despite problems the technology is known to have, e.g. inaccuracy and bias (it is better at identifying white, male faces), not to mention its impact on privacy, the police appear to be pushing ahead with its use anyway.  For privacy campaigners and others, this may give the impression that their real concerns (many of which are shared by the ICO) are being pushed aside in an apparent rush to get the technology rolled out. To many, the technology appears to be being used before any of the major problems with it have been resolved, before there has been a proper debate, and before the introduction of an up-to-date statutory law and code of practice for the technology.

EU Considers Ban on Facial Recognition

It has been reported that the European Commission is considering a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

Document

The reports of a possible three to five-year ban come from an 18-page EC report, which has been seen by some major news distributors.

Why?

Facial recognition trials in the UK have raised the issues of how the technology can be intrusive, can infringe upon a person’s privacy and data rights, and is not always accurate.  For example:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017, which was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, which was unconnected to the trial.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In December 2019, a US report showed that, after tests by the National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, facial recognition technology was found to be less accurate at identifying African-American and Asian faces, and particularly prone to misidentifying African-American women.

Impact Assessment

The 18-page EC report is said to contain the recommendation that a three to five-year ban on the public use of facial recognition technology would allow time to develop a methodology for assessing the impacts of (and developing risk management measures for) the use of facial recognition technology.

Google Calls For AI To Be Regulated

The way in which artificial intelligence (AI) is being widely and quickly deployed before regulation of the technology has had a chance to catch up is the subject of recent comments by Sundar Pichai, the head of Google’s parent company, Alphabet.  Mr Pichai (in the Financial Times) called for a sensible approach to regulation and for a set of rules for areas of AI development such as self-driving cars and AI usage in health.

What Does This Mean For Your Business?

It seems that there is some discomfort in the UK, Europe and beyond that relatively new technologies, which have known flaws and are of concern to government representatives, interest groups and the public, are being rolled out before the necessary regulations and risk-management measures have had time to be properly considered and developed.  It is true that facial recognition could have real benefits (e.g. fighting crime) for many businesses, and that AI offers businesses a vast range of opportunities to save money and time and to innovate in products, services and processes.  However, the flaws in these technologies, and their potential to be used improperly, covertly, and in ways that could infringe the rights of the public, cannot be ignored. It is likely to be a good thing in the long term that time and effort are invested now in addressing stakeholders’ concerns and developing regulations and measures that could prevent bigger problems involving these technologies further down the line.

Facebook Bans Deepfake Videos

In a recent blog post ahead of the forthcoming US election, Monika Bickert, Vice President of Global Policy Management at Facebook, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout from the news that 50 million Facebook profiles were harvested as early as 2014 to build a software program that could predict and, using personalised political adverts, influence choices at the ballot box in the last U.S. election includes damaged trust in Facebook, a substantial fine, and a fall in the number of daily users in the United States and Canada for the first time in the company’s history.

Deepfakes

One of the key concerns for Facebook this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants. These videos could obviously be used to influence public thinking about political candidates and election results, and it would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’, to be seen as a platform for the easy distribution of deepfake videos.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post statement from Facebook says that, as a matter of policy, it will now remove misleading media from its platform if the media meets both of the following criteria (restated as a simple decision sketch after the list):

  • It has been synthesised, i.e. edited beyond mere adjustments for clarity or quality, to the point where the ‘average person’ could be misled into thinking the subject of the media/video is saying words that they did not actually say, and…
  • It is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video in order to make it appear authentic.
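
Restated as a toy decision rule (our own illustration, not Facebook’s actual implementation), the policy combines the two criteria above with the parody/satire carve-out noted in the next section:

```python
def should_remove(is_synthesised_beyond_clarity: bool,
                  is_ai_merged_replaced_or_superimposed: bool,
                  is_parody_or_satire: bool = False) -> bool:
    """Toy restatement of Facebook's two stated criteria: media is removed
    only when it is misleadingly synthesised AND produced by AI/ML editing,
    with a carve-out for clear parody or satire."""
    if is_parody_or_satire:
        return False
    return is_synthesised_beyond_clarity and is_ai_merged_replaced_or_superimposed

# A deepfake that superimposes words the subject never said:
print(should_remove(True, True))    # True  -> removed
# A video merely re-ordered by conventional editing:
print(should_remove(False, False))  # False -> stays, but may still be fact-checked
```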

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photos or videos rated as false or partly false by a fact-checker will have their distribution “significantly” reduced in News Feed and will be rejected if run as ads. Also, those who see such content and try to share it, or have already shared it, will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to avoid being seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deep Fake Detection Challenge, with $10 million in grants and with a cross-sector coalition of organisations in order to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent announcement of a policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow, in certain situations (apparently of its choosing), some videos that could be described as misinformation.  For example, Facebook has said that content that violates its policies could be allowed if it is deemed newsworthy, which presumably covers the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi.

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and the deliberate spread of misinformation, and bearing in mind the position of influence that Facebook has, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that use Facebook as an advertising platform also need to know that Facebook users trust (and will continue to use) the platform (and see their adverts), so it’s important to businesses that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly in the US at the time of an election (and bearing in mind what happened last time), Facebook protects its own brand against accusations of allowing political influence through a variety of media on its platform, and against any further loss of public trust. This change of policy also shows Facebook’s readiness to deal with the very current threat of deepfakes (even though they are relatively rare).

That said, Google and Twitter (the latter with its new restrictions on micro-targeting, for example) have both been very public about trying to stop all lies in political advertising on their platforms, whereas Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.

Glimpse of the Future of Tech at CES Expo Show

This week, at the giant CES expo in Las Vegas, the latest technology from around the world is on display. Here are just a few of the glimpses into the future of business-tech being demonstrated there.

Cyberlink FaceMe®

Leading facial recognition company Cyberlink will be demonstrating the power of its highly accurate FaceMe® AI engine. The FaceMe® system, which Cyberlink claims has an accuracy rate (TAR, True Acceptance Rate) of 99.5% at 10⁻⁴ FAR, is so advanced that it can recognise the age, gender and even the emotional state of passers-by and can use this information to display appropriate adverts.
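
To unpack that metric: 99.5% TAR at 10⁻⁴ FAR means 99.5% of genuine matches are accepted at a decision threshold where roughly 1 in 10,000 impostor comparisons is wrongly accepted. Below is a minimal sketch of computing TAR at a fixed FAR from raw similarity scores, using synthetic data (it illustrates the metric only, not Cyberlink’s engine):

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, target_far=1e-4):
    """Find the score threshold at which the false accept rate on impostor
    pairs is `target_far`, then report the true accept rate on genuine pairs
    at that same threshold."""
    impostor = np.sort(impostor_scores)
    # Index of the (1 - target_far) quantile: scores above it are false accepts
    idx = int(np.ceil(len(impostor) * (1 - target_far))) - 1
    threshold = impostor[min(idx, len(impostor) - 1)]
    tar = float(np.mean(np.asarray(genuine_scores) > threshold))
    return tar, threshold

# Synthetic similarity scores purely for illustration
rng = np.random.default_rng(0)
genuine = rng.normal(0.9, 0.05, 10_000)
impostor = rng.normal(0.3, 0.10, 100_000)
tar, thr = tar_at_far(genuine, impostor)
print(f"TAR {tar:.4f} at FAR 1e-4 (threshold {thr:.3f})")
```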

D-ID

In a world where facial recognition technology is becoming more prevalent, D-ID recognises the need to protect the sensitive biometric data that makes up our faces. On display at the CES expo is D-ID’s anti-facial-recognition solution, which uses an algorithm, advanced image processing and deep learning techniques to re-synthesise any given photo into a protected version, so that photos are unrecognisable to face recognition algorithms while humans will not notice any difference.

Hour One

Another interesting contribution to the Las Vegas CES expo is Hour One’s AI-powered system for creating premium-quality synthetic characters based on real-life people. The idea is that these very realistic characters can be used to promote products without companies having to hire expensive stars and actors, and that companies using Hour One can save time and money and get a close match to their brief thanks to the capabilities, scale/scope and fast turnaround that Hour One offers.

Mirriad

Also adding to the intriguing and engaging tech innovations at the expo, albeit at private meetings there, is Mirriad’s AI-powered solution for analysing videos, TV programmes and movies for brand/product insertion opportunities and enabling retrospective brand placements in the visual content. For example, different adverts can be inserted in roadside billboards and bus stop advertising boards that are shown in pre-shot videos and films.

What Does This Mean For Your Business?

AI is clearly emerging as an engine that’s driving change and creating a wide range of opportunities for business marketing as well as for security purposes. The realism, accuracy, flexibility, scope, scale, and potential cost savings that AI offers could provide many beneficial business opportunities. The flipside for us as individuals and consumers is that, for example, while biometric systems (such as facial recognition) offer us some convenience and protection from cyber-crime, they can also threaten our privacy and security. It is ironic, and probably inevitable, that we may therefore come to need and value AI-powered protection solutions such as D-ID.

AI Better at Breast Cancer Detection Than Doctors

Researchers at Google Health have created an AI program which, in tests, has proven to be more accurate at detecting and diagnosing breast cancer than expert human radiologists.

Trained

The AI software, which was developed by Google Health researchers in conjunction with DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, was ‘trained’ to detect the presence of breast cancer using X-ray images (from mammograms) from nearly 29,000 women.

Results

In the UK tests, compared to one radiologist, the AI program delivered a reduction of 1.2% in false positives (where a mammogram is incorrectly diagnosed as abnormal) and a reduction of 2.7% in false negatives (where a cancer is missed). These positive results were even greater in the US tests.

In a separate test, which used the program trained only on UK data and tested it against US data (to determine its wider effectiveness), there was a very respectable 3.5% reduction in false positives and an 8.1% reduction in false negatives.

In short, these results appear to show that the AI program, which outperformed six radiologists in the reading of mammograms despite having only the mammograms to go on (human radiologists also have access to medical history), is better at spotting cancer than a single doctor and as good at spotting cancer as the current double-reading system of two doctors.
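
For readers who want to ground those percentages, false positive and false negative rates fall straight out of a comparison of predictions against ground truth. The sketch below uses synthetic labels of our own, not the study’s data:

```python
import numpy as np

def fp_fn_rates(y_true, y_pred):
    """False positive rate = healthy cases flagged as cancer / all healthy cases;
    false negative rate = cancers missed / all actual cancers."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fpr = fp / max(np.sum(y_true == 0), 1)
    fnr = fn / max(np.sum(y_true == 1), 1)
    return fpr, fnr

# Toy example: 1 = cancer present, 0 = healthy
truth      = [1, 0, 0, 1, 0, 1, 0, 0]
ai_reading = [1, 0, 1, 1, 0, 0, 0, 0]
print(fp_fn_rates(truth, ai_reading))  # -> FPR 0.2, FNR ~0.33
```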

Promising

Even though these initial test results have received a lot of publicity and appear to be very positive, bearing in mind the seriousness of the condition, AI-based systems of this kind still have some way to go before more research, clinical studies and regulatory approval brings them into the mainstream of healthcare services.

What Does This Mean For Your Business?

This faster and more accurate way of spotting and diagnosing breast cancer by harnessing the power of AI could bring many benefits. These include reducing stress for patients by shortening diagnosis time, easing the workload pressure on already-stretched radiologists, and going some way towards addressing the UK’s current shortage of radiologists, who need more than 10 years of training to read mammograms. All this could mean greater early diagnosis and higher survival rates.

For businesses, this serves as an example of how AI can be trained and used to study some of the most complex pieces of information and produce results that can be more accurate, faster, and cheaper than humans doing the same job, remembering that, of course, AI programs work 24/7 without a day off.