AI

Google Announces New ‘Teachable Machine 2.0’ No-Code Machine Learning Model Generator

Two years on from its first incarnation, Google has announced ‘Teachable Machine 2.0’, a no-code, browser-based platform that lets anyone generate custom machine learning models without any coding experience.

First Version

Back in 2017, Google introduced the first version of Teachable Machine, which let anyone teach their computer to recognise images using a webcam. It gave many children and young people their first experience of training their own machine learning model i.e. teaching a computer to recognise patterns in data (images) and assign new data to categories.

Teachable Machine 2.0

Google’s new ‘Teachable Machine 2.0’ is a browser-based system that records from the user’s webcam and microphone and, with the click of a ‘train’ button (no coding required), can be trained to recognise images, sounds or poses. This enables users to quickly and easily create their own custom machine learning models, which they can download and use on their own device or upload and host online.
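
A downloaded model can then be used from ordinary code. As a minimal illustrative sketch (the file names, 224x224 input size and pixel scaling below are assumptions based on typical export conventions, not confirmed specifics), classifying an image with a model exported in TensorFlow’s Keras format might look like this:

```python
# Minimal sketch: load a Teachable Machine image model exported in Keras
# (.h5) format and classify a photo. File names, input size and [-1, 1]
# pixel scaling are assumed export conventions, not confirmed specifics.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("keras_model.h5")    # assumed export file name
labels = [line.strip() for line in open("labels.txt")]  # assumed labels file name

img = Image.open("photo.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32)[None, ...] / 127.5 - 1.0  # scale to [-1, 1]

probs = model.predict(x)[0]
best = int(np.argmax(probs))
print(labels[best], float(probs[best]))
```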

Fear-Busting and Confidence

One of the key points that Google wants to emphasise is that the no-code, click-of-a-button nature of this machine learning model generator can give young users the confidence that they can use advanced computer technology creatively without coding experience.  This, as Google notes on its blog, has been identified as particularly important by parents of girls, since girls can face challenges both in becoming interested in computer science and in finding jobs in the field.

What Can It Be Used For?

In addition to being used as a teaching aid, examples of how Teachable Machine 2.0 has been used include:

  • Improving communication for people with impaired speech. For example, recorded voice samples have been turned into spectrograms that can be used to “train” a computer system to better recognise less common types of speech (see the sketch after this list).
  • Helping with game design.
  • Making physical sorting machines. For example, Google’s own project has used Teachable Machine to create a model that can classify and sort objects.
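
To make the speech example above concrete, here is a minimal sketch of turning a recorded voice sample into a spectrogram, the image-like representation a classifier can then be trained on. The file name and parameters are illustrative assumptions:

```python
# Minimal sketch: convert a voice recording into a log-scaled spectrogram
# suitable as training input for an image-style classifier.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("voice_sample.wav")  # assumes a mono WAV file
freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=512)

# Log-scale the power so quiet speech detail isn't swamped by loud peaks
log_sxx = 10 * np.log10(sxx + 1e-10)
print(log_sxx.shape)  # (frequency bins, time frames) -- the "image" to train on
```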

What Does This Mean For Your Business?

The UK has a tech skills shortage that has been putting pressure on UK businesses unable to find the skilled people needed to drive innovation and tech product and service development forward.  A platform that helps young people feel confident and creative with the latest technologies, without being thwarted by the need to code, could lead more of them to choose computer science in further and higher education and to seek careers in IT.  This, in turn, could help UK businesses.

No-code solutions such as Teachable Machine 2.0 represent a way of democratising app and software development and of harnessing ideas and creativity that may previously have been suppressed by a lack of coding experience.  Businesses always need creativity and innovation to create new opportunities and competitive advantage, and Teachable Machine 2.0 may be one small step in helping that happen further down the line.

ICO Warns Police on Facial Recognition

In a recent blog post, Elizabeth Denham, the UK’s Information Commissioner, said that the police need to slow down and justify their use of live facial recognition technology (LFR) in order to strike the right balance between reducing our privacy and keeping us safe.

Serious Concerns Raised

The ICO cited an investigation into trials of live facial recognition (LFR) by the Metropolitan Police Service (MPS) and South Wales Police (SWP), the results of which raised serious concerns about the use of a technology that relies on large amounts of sensitive personal information.

Examples

In December last year, Elizabeth Denham launched the formal investigation into how police forces use facial recognition technology (FRT), following high failure rates, misidentifications and worries about legality, bias, and privacy.  For example, the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, whose arrest was unconnected to the event.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

MPs Also Called To Stop Police Facial Recognition

Back in July this year, following criticism of police use of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee called for a temporary halt to the use of facial recognition systems.

Stop and Take a Breath

In her blog post, Elizabeth Denham urged police not to move too quickly with FRT but to work within the model of policing by consent. She made the point that “technology moves quickly” and that “it is right that our police forces should explore how new techniques can help keep us safe. But from a regulator’s perspective, I must ensure that everyone working in this developing area stops to take a breath and works to satisfy the full rigour of UK data protection law.”

Commissioner’s Opinion Document Published

The ICO’s investigations have now led the Commissioner to produce and publish an Opinion document on the subject, as allowed by the Data Protection Act 2018 (DPA 2018), s116(2) in conjunction with Schedule 13(2)(d).  The Opinion has been prepared primarily for police forces and other law enforcement agencies using live facial recognition technology (LFR) in public spaces, and offers guidance on how to comply with the provisions of the DPA 2018.

The key conclusions of the Opinion document (which you can find here: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf) are that the police need to recognise the strict necessity threshold for LFR use, that there needs to be more learning within the policing sector about the technology, that public debate about LFR should be encouraged, and that a statutory binding code of practice needs to be introduced by government at the earliest opportunity.

What Does This Mean For Your Business?

Businesses, individuals and the government are all aware of the positive contribution that camera-based monitoring technologies and equipment can make in deterring criminal activity, locating and catching perpetrators (in what should be a faster and more cost-effective way with live FRT), and providing evidence for arrests and trials.  The UK’s Home Office has also noted that there is general public support for live FRT to identify, for example, potential terrorists and people wanted for serious violent crimes.  However, the ICO’s apparently reasonable point is that moving too quickly with FRT, without enough knowledge or a code of practice, and without respecting the strict necessity threshold for its use, could reduce public trust in the police and in the technology itself.  Greater public debate about the subject, which the ICO seeks to encourage, could also raise awareness of FRT, show how a balanced approach to its use can be achieved, and help clarify the extent to which FRT could impact upon our privacy and data protection rights.

Microsoft Beats Amazon to $10 Billion AI Defence Contract for ‘Jedi’

After a long and difficult bidding process, Amazon has lost out to Microsoft in the battle to win a $10bn (£8bn) US Defence Department AI and Cloud computing contract.

For ‘Jedi’

The contract was for the Joint Enterprise Defence Infrastructure (Jedi).  This infrastructure will be designed to enable US forces to get fast access to important Cloud-held data from whichever battlefield they are on. The project will also see AI being used to enhance and speed up the delivery of data to US forces, thereby potentially giving them an advantage.

Amazon Was Thought To Be In Front…Before Trump Comments

Amazon, led by Jeff Bezos, was believed by many tech commentators to have been the front-runner of the two tech giants in the battle for the contract as it is the biggest provider of cloud-computing services.  Also, Amazon had already won an important computing services contract with the CIA in 2013 and is already a supplier of cloud services and technologies to thousands of U.S. agencies.

Unfortunately for Amazon, in August the Pentagon appeared to put the brakes on the final decision-making process following concerns expressed by President Trump.

The President is reported to have said back in July that he was concerned about the contract not being “competitively bid” and that he had heard “complaints” about the contract between Amazon and the Pentagon.

The President, however, was not the only one with concerns as tech giant Oracle (which was also in the running for the contract at one point) had gone to the federal court earlier in the year with allegations (which were dismissed) that the bidding process had been rigged in Amazon’s favour.

Difficult Relationship

Many media reports have suggested that a difficult past relationship between President Trump and Jeff Bezos has possibly had some influence on the outcome of the Pentagon’s decision about the project.  For example, Mr Bezos, who owns the Washington Post, has been criticised before by President Trump, and the President has been critical of several news outlets, such as CNN, the New York Times, and The Washington Post.  Indeed, the Wall Street Journal has reported that President Trump has now instructed federal agencies not to renew their subscriptions to those newspapers.

Great News For Microsoft

Winning the contract is, of course, good news for Microsoft, which will receive a large amount of U.S. Defence funds for the Jedi contract, and possibly for another defence-related multi-billion-dollar contract (‘Deos’) to supply cloud-based Office 365.

What Does This Mean For Your Business?

With a contract of this value up for grabs and the possibility of further lucrative contracts too, this was never going to be a clean and uncomplicated fight between the tech giants.  In this case, however, it being a defence contract, one of the key influencers was the U.S. President, and it appears that his relationship with Amazon’s Jeff Bezos, along with other factors, may have played a part in Microsoft coming out on top.  The size and complexity of the contract meant that it was only ever going to be something for the big, established tech names, and winning it was undoubtedly an important victory for Microsoft against its competitor Amazon: it will add value to the Microsoft brand, bring in a sizeable source of revenue at a time when the company has already seen a 21 per cent rise in profits on last year, and put Microsoft in a much closer second place behind Amazon’s AWS in the cloud computing services market.

AI and the Fake News War

In a “post-truth” era, AI is one of the many protective tools and weapons involved in the battles that make up the current, ongoing “fake news” war.

Fake News

Fake news has become widespread in recent years, most prominently around the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election, all of which suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters. The Cambridge Analytica scandal, in which over 50 million Facebook profiles were illegally shared and harvested to build a software program that generated personalised political adverts, led to Facebook’s Mark Zuckerberg appearing before the U.S. Congress to discuss how Facebook is tackling false reports. One video shared via Facebook (which had 4 million views before being taken down) falsely suggested that smart meters emit radiation levels that are harmful to health; the claim was widely believed even though it was false.

Government Efforts

In February, the Digital, Culture, Media and Sport Committee published a report on disinformation and ‘fake news’, highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The UK government has, therefore, been calling for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Fact-Checking

One way that social media companies have sought to tackle the concerns of governments and users is to buy in fact-checking services to weed out fake news from their platforms.  For example, back in January, London-based registered charity ‘Full Fact’ announced that it would be working for Facebook, reviewing stories, images and videos to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Moderation

A moderator-led response to fake news is one option, but its reliance upon humans means that this approach has faced criticism over its vulnerability to personal biases and perspectives.

Automation and AI

Many now see automation and AI as ‘intelligent’, fast, and scalable enough to start tackling the vast amount of fake news being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truthfulness of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be leveraged to combat fake news, and support the idea that AI holds promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
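
As a flavour of the machine learning and natural language processing approaches such initiatives explore, the sketch below trains a toy ‘fake vs real’ headline classifier using TF-IDF features and logistic regression. The tiny inline dataset is purely illustrative; real systems train on large labelled corpora and use far richer features:

```python
# Toy sketch of ML-based fake news classification: TF-IDF + logistic
# regression. The four headlines and labels are invented illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Smart meters emit radiation levels harmful to health",
    "Official watchdog publishes annual energy safety report",
    "Miracle cure suppressed by doctors, insiders claim",
    "University study finds no link between masts and illness",
]
labels = ["fake", "real", "fake", "real"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(headlines, labels)
print(clf.predict(["Insiders claim radiation cure suppressed by officials"]))
```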

However, the human-written rules underpinning AI, and the way AI is ‘trained’, can also lead to bias.

Deepfake Videos

Deepfake videos are an example of how AI can be used to create fake news in the first place.  Deepfake videos use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create an embarrassing or scandalous video. Audio can be manipulated in a similar way to produce deepfake recordings.  Deepfake videos aren’t just used to create fake news; they can also be used by cyber-criminals for extortion.

AI Voice

There was also a case in March this year where a group of hackers used AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Does This Mean For Your Business?

Fake news is a real and growing threat, as has been demonstrated in the use of Facebook to disseminate fake news during the UK referendum, the 2017 UK general election, and the U.S. presidential election. State-sponsored politically targeted campaigns can have a massive influence on an entire economy, whereas other fake news campaigns can affect public attitudes to ideas and people and can lead to many other complex problems.

Moderation and automated AI may both suffer from bias, but they are at least both ways in which fake news can be tackled, to an extent.  By adding fact-checking services, other monitoring, and software-based approaches (e.g. in browsers), social media and other tech companies can take responsibility for weeding out and guarding against fake news.

Governments can also help in the fight by putting pressure on social media companies and by collaborating with them to keep the momentum going and to help develop and monitor ways to keep tackling fake news.

That said, it’s still a big problem and no solution is infallible.  All of us as individuals would do well to remember that, especially today, you really can’t believe everything you read, and that attention to the source and bias of news, coupled with a degree of scepticism, is often healthy.

AI and Facial Analysis Job Interviews

Reports of the first job interviews conducted in the UK using Artificial Intelligence and facial analysis technology have been met with mixed reactions.

The Software

The AI and facial analysis technology used for the interviews comes from US firm HireVue. The main products available from HireVue for interviewing are Pre-Employment Assessments and Video Interviewing.

For the Pre-Employment Assessments, the software uses AI, video game technology, and game-based and coding challenges to collect candidate insights related to work style, how the candidate works with people, and general cognitive ability. The Assessments are customisable to specific hiring objectives or ready to deploy based on pre-validated models. The data points are analysed by HireVue’s proprietary machine learning algorithms, and the insights gained are intended to enable businesses to save time and use recruitment resources more effectively by enabling businesses to quickly prioritise which candidates to shortlist for interviews.

The Video Interviewing product uses real-time evaluation tools and can assess around 25,000 data points in one interview.  During interviews, candidates answer pre-scripted questions, with HireVue Live offering a real-time collaborative video interview that can involve a whole recruitment team. The benefits of on-demand video-based assessments, which can be conducted in less than 30 minutes, are that recruiters and managers don’t have to synchronise candidates and calendars, and can evaluate more candidates, freeing their time to decide between the best of them.

Who Is Using The Software?

According to HireVue, 700+ companies use the software (not all in the UK) including Vodafone, Urban Outfitters, Intel, Ikea, Hilton, Unilever, Singapore Airlines, JP Morgan and Goldman Sachs. It has been reported, however, that the technology has already been used for 100,000 interviews in the UK.

Concerns

Even though there are obvious benefits for companies in terms of on-demand expertise and time and cost savings, and HireVue displays case studies from satisfied customers on its website, the use of AI and facial analysis technology in interviews has been met with criticism from privacy and rights groups.

For example, it has been reported that Big Brother Watch representatives have voiced concerns about the ethics of using this method, possible bias and discrimination (if the AI hasn’t been trained on a diverse-enough range of people), and that unconventional but still good potential candidates could fall foul of algorithms that can’t take account of the complexities of human speech, body language and expression.

Robot Interviewer

Back in March, it was reported that TNG and Furhat Robotics in Sweden have developed a social, unbiased recruitment robot called “Tengai” that can be used to conduct job interviews with human candidates. The basic robot was developed several years ago and looks like an internally projected human face on a white head sitting on top of a speaker (with camera and microphone built-in).  The robot comes with pre-built expressions and gestures as part of a pre-loaded OS, which can be further customised to fit any character, and the HR-tech application software that Tengai uses means it can conduct situation- and skill-based interviews in a way that is as close as possible to a human interviewer. This includes humming in acknowledgement, nodding its head, and asking follow-up questions.

What Does This Mean For Your Business?

Like the Swedish Tengai robot interviews, the HireVue Pre-Employment Assessments (and possibly the Video Interviewing product) appear to have been designed for the early part of the recruitment process, as a way of enabling big companies to quickly create a shortlist of candidates to focus on. As businesses become used to, and realise the value of, outsourcing as a way of making better use of resources and buying in scalable, on-demand skills, it appears that bigger companies are also willing to trust new technology to the point where they outsource expertise and human judgement in exchange for the promise of better, more cost-effective recruitment management.

AI, facial recognition, and other related new technologies and algorithms are being trusted and adopted more by big businesses, which also need to remember, for their own benefit and that of their customers and job candidates, that bias must be minimised and that technology is unlikely to pick up on every (potentially important) nuance of human behaviour and speech.  It should never be forgotten that we each have the most powerful, amazing and perceptive ‘computer’ available in the form of our own brain, and for the vast number of medium and small businesses that probably can’t afford or don’t want to use AI to choose recruits, experienced human interviewers can also make good recruitment decisions.

That said, as the technology progresses, AI-based recruitment systems are likely to improve with experience, be augmented, and become more widely available and affordable, to the point that they become a standard first hurdle for job applicants in many situations.

Deepfake Ransomware Threat Highlighted 

Multinational IT security company Trend Micro has highlighted the future threat of cybercriminals making and posting, or threatening to post, malicious ‘deepfake’ videos online in order to damage reputations and/or extract ransoms from their target victims.

What Are Deepfake Videos?

Deepfake videos use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create an embarrassing or scandalous video, such as pornography or violent behaviour. The AI aspect of the technology means that even the facial expressions of the individuals featured in the video can be eerily accurate, and on first viewing the videos can be very convincing.

An example of the power of deepfake videos can be seen in the Mojo top 10 (US) deepfake video compilation here: https://www.youtube.com/watch?v=-QvIX3cY4lc

Audio Too

Deepfake ‘ransomware’ can also involve using AI to manipulate audio in order to create a damaging or embarrassing recording of someone, or to mimic someone for fraud or extortion purposes.

A recent example was outlined in March this year, when a group of hackers used AI software to mimic (i.e. create a deepfake of) an energy company CEO’s voice in order to successfully steal £201,000.

Little Fact-Checking

Rik Ferguson, VP of security research, and Robert McArdle, director of forward-looking threat research, at Trend Micro recently told delegates at Cloudsec 2019 that deepfake videos have the potential to be very effective not just because of their apparent accuracy, but also because we live in an age when few people carry out their own fact-checking.  This means that simply uploading such a video can do the damage to the reputation and public standing of its target.

Scalable & Damaging

Two of the main threats of deepfake ransomware videos are that they are very flexible in terms of subject matter (anyone can be targeted, from teenagers for bullying to politicians and celebrities for money) and that they give cybercriminals a very scalable way to launch potentially lucrative attacks.

Positive Use Too

It should be said that deepfakes don’t have only negative uses; they can also help filmmakers reduce costs and speed up work, be used to make humorous videos and advertisements, and even help in corporate training.

What Does This Mean For Your Business?

The speed at which AI is advancing has meant that deepfake videos are becoming more convincing, and more people have the resources and skills to make them.  This, coupled with the flexibility and scalability of the medium, and the fact that it is already being used for dishonest purposes means that it may soon become a real threat when used by cybercriminals e.g. to target specific business owners or members of staff.

In the wider environment, deepfake videos targeted at politicians in (state-sponsored) political campaigns could help influence public opinion at the ballot box, which in turn could affect the economic environment in which businesses must operate.

IBM To Offer Largest Quantum Computer Available For External Access Via Cloud

IBM has announced that it is opening a Quantum Computation Centre in New York which will bring the world’s largest fleet of quantum computing systems online, including the new 53-Qubit Quantum System for broad use in the cloud.

Largest Universal Quantum System For External Access

The new 53-qubit model is the 14th system that IBM offers, and IBM says that it is the single largest universal quantum system made available for external access in the industry to date. The new system will (within one month) give users the ability to run more complex entanglement and connectivity experiments.
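
IBM’s cloud quantum systems are typically programmed through its open-source Qiskit framework. As a minimal sketch of the kind of entanglement experiment users can run, the snippet below prepares a two-qubit Bell state on a local simulator (an IBM Q cloud backend could be swapped in to run on real hardware); the imports follow the classic Qiskit interface of the time, which newer releases restructure:

```python
# Minimal sketch: prepare and measure a two-qubit Bell state with Qiskit.
# Runs on a local simulator; an IBM Q cloud backend could be swapped in.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11' -- the signature of entanglement
```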

IBM Q

It was back in March 2017 that IBM announced IBM Q, the first commercially available universal quantum computing service, giving access to (and use of) a powerful universal quantum computer via the cloud.

Since then, a fleet of five 20-qubit systems, one 14-qubit system, and four 5-qubit systems has been made available, and IBM says that since 2016 a global community of users has run more than 14 million experiments on its quantum computers through the cloud, leading to the publication of more than 200 scientific papers.

Who?

Although most uses of quantum computers have been for isolated lab experiments, IBM is keen to make quantum computing widely available in the cloud to tens of thousands of users, thereby empowering what it calls “an emerging quantum community of educators, researchers, and software developers that share a passion for revolutionising computing”.

Why?

The hope is that making quantum computing more widely available could lead to greater innovation and more scientific discoveries (e.g. new medicines and materials), improvements in the optimisation of supply chains, and even better ways to model financial data, leading to better investments, all of which could have an important and positive knock-on effect on businesses and economies.

Partners

Some of the partners and clients that IBM says it has already worked with on its quantum computers include:

  • J.P. Morgan Chase, for ‘Option Pricing’ – a way to price financial options and portfolios. The method devised using the quantum computer has sped things up dramatically, so that financial analysts can now perform option pricing and risk analysis in near real-time.
  • Mitsubishi Chemical and Keio University (with IBM), on a simulation of reactions in lithium-air batteries, which could lead to more efficient batteries for mobile devices or vehicles.

Quantum Risk?

Back in November 2018, however, security architect for Benelux at IBM, Christiane Peters, warned of the possible threat of commercially available quantum computers being used by criminals to try and crack encrypted business data.

As far back as 2015 in the US, the National Security Agency (NSA) warned that progress in quantum computing was at such a point that organisations should deploy encryption algorithms that can withstand such attacks from quantum computers.

The encryption algorithms that can stand up to attacks from quantum computers are known by several names including post-quantum cryptography / quantum-proof cryptography, and quantum-safe / quantum-resistant cryptographic (usually public-key) algorithms.

What Does This Mean For Your Business?

The ability to use a commercially available quantum computer via the cloud will give businesses and organisations an unprecedented opportunity to solve many of their most complex problems, develop new, innovative and potentially industry-leading products and services, and perhaps discover hitherto unthought-of business opportunities, all without needing to invest in prohibitively expensive hardware themselves. The 14 hugely powerful systems now available to the wider computing and business community could offer the chance to develop products that provide a real competitive advantage in a much shorter time and at much less cost than traditional computer architecture and R&D practices previously allowed.

As with AI, just as new technologies and innovative services can be used for good, their availability could also mean that, in the wrong hands, they pose a new threat that’s very difficult for most businesses to defend against. Quantum computing service providers, such as IBM, need to ensure that the relevant checks, monitoring and safeguards are in place to protect the wider business community and economy against a potentially new and powerful threat.

Autonomous AI Cyber Weapons Inevitable Says Security Research Expert

Speaking at a recent CloudSec event in London, Trend Micro’s vice-president of security research, Rik Ferguson, said that autonomously operated AI cyberattacks are an inevitable threat that security professionals must adapt to tackle.

If Leveraged By Cybercriminals

Mr Ferguson said that when cybercriminals manage to leverage the power of AI, organisations may find themselves experiencing attacks that happen very quickly, contain malicious code, and can even adapt themselves to target specific people in an organisation e.g. impersonating senior company personnel in order to get payments authorised, pretending to be a penetration testing tool, or finding ways to motivate targeted persons to fall victim to a phishing scam.

AI Vs AI

Mr Ferguson suggested that the inevitability of cybercriminals developing autonomous AI-driven attack weapons means it may be time to start thinking in terms of a world of AI versus AI.

Example of Attack

One close example given by Ferguson is the Emotet Trojan.  This malware, which obtains financial information by injecting computer code into the networking stack of an infected Microsoft Windows computer, was introduced five years ago but has managed to adapt and cover its tracks even though it is not AI-driven.

AI Launching Own Attacks Without Human Intervention

Theresa Payton, who was the first woman to be White House CIO (under President George W. Bush) and is now CEO of security consultancy Fortalice, has been reported as saying that the advent of genuine AI poses serious questions, that the cybersecurity industry is falling behind, and that we may even be facing a situation where AI can launch its own attacks without human intervention.

Challenge

One challenge to responding effectively to AI cyber-attacks is likely to be that cybersecurity and law enforcement agencies must move at the speed of law, particularly where procedures must be followed to request help from and arrange coordination between foreign agencies.  The speed of the law, unfortunately, is likely to be much slower than the speed of an AI-powered attack.

What Does This Mean For Your Business?

It is a good thing for all businesses that the cybersecurity industry recognises the inevitability of AI-powered attacks, and although it fears that it risks falling behind, it is talking about the issue, taking it seriously, and looking at ways in which it needs to change in order to respond.

Adopting AI Vs AI thinking now may be a sensible way to help security professionals, and those in charge of national security to focus thinking and resources on finding ways to innovate and create their own AI-based detection and defensive systems and tools, and the necessary strategies and alliances in readiness for a new kind of attack.

AI Destined For McDonald’s Drive-Throughs

The acquisition of AI voice recognition start-up Apprente by McDonald’s gives the restaurant chain its own Silicon Valley technology division and promises an automated ordering system for drive-throughs, self-order interfaces and the mobile app.

Apprente

Apprente is a Silicon Valley-based start-up (founded 2017, Mountain View, California) that specialises in making customer service chatbots.  Its acquisition by McDonald’s gives the restaurant chain its own AI-powered voice-based conversational system that can handle human-level interactions, thereby helping improve the speed and accuracy of orders.

It is thought that the Apprente system will not completely replace traditional front-of-house staff, but may be used in mobile ordering or kiosks i.e. added to drive-through kiosks or sited nearby (and added to the mobile app) so that food can be ordered by the customer’s voice, and transcripts of the order can be given to staff to ensure that the order is correct.  The transcript may also be presented or read to the customer when they pick the order up minutes later.  The technology may, therefore, provide time-saving, accuracy, and convenience benefits to both customers and staff.

Why?

There are a few key reasons why McDonald’s has gone down the tech route with its order taking.  These include:

  1. Competition from home delivery companies.
  2. 70 per cent of the company’s orders come through its drive-throughs, but some reports show that McDonald’s may be relatively slow in getting its drive-through food orders out.  For example, a recent report (Oches’ 2019) shows that while the average wait in a Burger King drive-through is over 193 seconds, the waiting time in McDonald’s is considerably longer at 273 seconds.  McDonald’s ranked tenth and slowest among the fast-food companies in that report, but the addition of the voice-based conversational system could help speed things up.
  3. To give McDonald’s a technology development centre, the McD Tech Labs, in Silicon Valley, so that the restaurant chain can keep adding value through new technology and stay ahead in the market.

Other Acquisitions

McDonald’s has also recently acquired customer services personalisation company and AI start-up ‘Dynamic Yield’. With this deal, worth more than £240 million, McDonald’s can use the decision-logic technology to create drive-through menus tailored to its customers based on the time of the day, trends, previous choices and other factors.
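
Dynamic Yield’s technology is proprietary, but the general idea of menu ‘decision logic’ can be illustrated with a deliberately simple sketch: pick a menu by time of day, then promote items the customer has chosen before. Everything here (menu items, time slots, the promotion rule) is invented for illustration, not Dynamic Yield’s actual system:

```python
# Purely illustrative decision logic for a personalised drive-through menu.
from datetime import datetime

MENUS = {
    "breakfast": ["Egg McMuffin", "Porridge", "Coffee"],
    "lunch":     ["Big Mac", "Fries", "Shake"],
    "evening":   ["Chicken Wrap", "Salad", "Tea"],
}

def menu_for(now, previous_choices):
    # Choose the menu slot from the time of day...
    slot = "breakfast" if now.hour < 11 else "lunch" if now.hour < 17 else "evening"
    # ...then move items this customer has ordered before to the top
    return sorted(MENUS[slot], key=lambda item: item not in previous_choices)

print(menu_for(datetime.now(), previous_choices=["Fries"]))
```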

What Does This Mean For Your Business?

For customers, the deployment of the new voice-recognition technology, in addition to the Dynamic Yield technology (already deployed in 8,000 US drive-throughs), should make ordering food a faster and better experience.

For McDonald’s, the addition of the new technology, and of a tech base in Silicon Valley to develop more of the same, should help it compete in a market that’s getting busier with companies using technology to reach customers and satisfy the same need for fast gratification.  The value-adding technology (combined with the fact that McDonald’s has a restaurant in most towns, with a standardised and trusted product and brand) means that McDonald’s is taking steps to ensure that it stays ahead in a future where technology is an important competitive advantage in fast food delivery.  The new technology may also help McDonald’s address its current need to get orders ready more quickly and accurately, while adding a novelty factor, talking point, and perceived advantage among customers.

Major Workforce Changes Over The Next Five Years

A new global Forrester Consulting study predicts major changes in companies’ service workforces over the next five years, including the replacement of call centre and customer service centre staff with automated dispatch notifications.

The Study

The “From Grease To Code: What Drives Digital Service Transformation” study, commissioned by ServiceMax, a provider of cloud-based software for service execution management, highlights the opinions of 675 digital transformation decision-makers across North America, Europe, the Middle East and Asia Pacific whose organisations are undergoing, or have completed much of, their digital transformation.

The Findings

The report predicts disruptive and major changes to service workforces globally in the next five years, as companies complete their digital transformations and actionable intelligence becomes an important factor in competition and growth.

Changes

Predictions of the kinds of key changes that will take place with digital service transformation according to those surveyed for the study include:

  • Within just five years, asset equipment will outlive the working life of the engineers who service it (72 per cent of respondents).
  • Technology will completely automate service technician dispatch, thereby replacing call centre and customer service centre staff (62 per cent of respondents). This means that as soon as customer service systems identify a fault, the nearest appropriate field technician can be sent the job details directly, cutting out the need for call centre staff (see the sketch after this list).
  • Self-healing equipment and remote monitoring will mean that field service technicians can focus on more complex, specialist tasks (85 per cent of respondents). Accordingly, just over half of firms are already investing, or planning to invest, in condition-based maintenance within the next two to three years.
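
The core of the automated dispatch prediction above is simple to sketch: when a fault is identified, select the nearest available technician with the right skill and send them the job directly. The data structures and selection rule below are illustrative assumptions, not any vendor’s actual implementation:

```python
# Illustrative sketch of automated dispatch: the nearest available
# technician with the required skill gets the job details directly.
import math

technicians = [
    {"name": "Asha",  "skill": "hvac",    "lat": 51.51, "lon": -0.13, "available": True},
    {"name": "Tomas", "skill": "boilers", "lat": 51.48, "lon": -0.20, "available": True},
    {"name": "Lena",  "skill": "hvac",    "lat": 52.49, "lon": -1.90, "available": False},
]

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def dispatch(fault):
    candidates = [t for t in technicians
                  if t["available"] and t["skill"] == fault["skill"]]
    if not candidates:
        return None  # no match -- escalate to a human planner
    return min(candidates, key=lambda t: distance_km(
        t["lat"], t["lon"], fault["lat"], fault["lon"]))

job = {"skill": "hvac", "lat": 51.50, "lon": -0.12}
print(dispatch(job)["name"])  # nearest available, suitably skilled technician
```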

Digital Transformation A Challenge To Many Companies

However, as John Meacock, Global Chief Strategy Officer at Deloitte, has noted on the World Economic Forum website, many companies have found reaping the true benefits of digital transformation a real challenge, not least because becoming a digital enterprise requires comprehensive, systemic change and not just a new website or mobile strategy.

What Does This Mean For Your Business?

The rapidly evolving business environment has put a lot of pressure on businesses to innovate and to prioritise digital transformation in order to compete.  The results of this survey predict big changes in a relatively short period of time, and should alert businesses to the need to examine how they must change and to ensure that they can incorporate digital solutions that help them deliver the best levels of service to customers i.e. making sure that organisational workforce strategy maps to the service data strategy.

If companies can do a good job of their digital transformation, it may bring them the benefits of being able to use their service data to make better operational decisions around predictive maintenance and customer service, and to extend the working life of capital equipment.  Also, getting to grips with the kind of systemic changes that enable a shift to as-a-service delivery models can help businesses dramatically improve how they schedule, dispatch and maximise the value from their technical service talent.

Predictions of automation at the expense of jobs, and the introduction of AI into more aspects of business do appear to be becoming reality, and organisations need to consider how automation and AI could bring them new strengths and opportunities.