AI

Tech Tip – Free, Online AI Business School

If you’d like to get an understanding of what AI is and its implications for business strategy, corporate culture and business ethics, Microsoft, in partnership with global business school INSEAD, has established a free, online business school.

The AI course offers a series of 10-minute lecture videos as well as academic lectures, case studies, executive perspective videos and technology talks, which combined provide a grounding in AI and its possible applications in your business.

The online school doesn’t require registration, and the course material can be accessed on demand via mobile devices or the desktop.

Access Microsoft’s AI Business School resources here: https://www.microsoft.com/en-us/ai/business

AI Used To Tackle Money Laundering

Banks and financial institutions are experimenting with AI technology to tackle the multi-trillion-pound problem of money laundering, thereby hitting the traditional funding sources of major criminal gangs.

Money Laundering

Money laundering is the process of concealing the origins of illegally obtained money by passing it through legitimate business or a sequence of banking transfers.

According to figures from the UN’s Office on Drugs and Crime, money laundering accounts for up to 5% of global GDP – the equivalent of £1.5 trillion per year.  In the UK, National Crime Agency figures show that financial crime suspicious activity reports increased by 10% in 2018.

Also, in the UK for example, Companies House and estate agents (setting up new companies and investing in property) have been criticised by the government’s Treasury Committee as being key ways in which money laundering can take place in the UK.

The law in the UK (from 2017) on tackling money laundering requires businesses or sole traders who operate as “high-value dealers” (i.e. those who accept or make high-value cash payments of €10,000 or more, or the equivalent in any currency, in exchange for goods) to register with HMRC.
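As a rough, hypothetical illustration of the threshold rule described above (the exchange rates and payment records below are invented for the example, not official figures), a simple check might look like this:

```python
# Hypothetical sketch: flag cash payments that meet the HMRC
# "high-value dealer" threshold of €10,000 (or equivalent).
# Exchange rates and payment records here are illustrative assumptions.

HIGH_VALUE_THRESHOLD_EUR = 10_000

# Illustrative conversion rates into euros (not live data)
TO_EUR = {"EUR": 1.0, "GBP": 1.15, "USD": 0.90}

def requires_registration(payments):
    """Return True if any single cash payment is >= €10,000 equivalent."""
    for amount, currency in payments:
        if amount * TO_EUR[currency] >= HIGH_VALUE_THRESHOLD_EUR:
            return True
    return False

payments = [(8_500, "GBP"), (3_000, "EUR")]
print(requires_registration(payments))  # 8,500 GBP ≈ €9,775 → False
```

In practice the registration duty turns on individual high-value cash payments, which is why the sketch tests each payment rather than a running total.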

Money Laundering In The News

Some recent high-profile cases of alleged money laundering involving banks include:

  • Swiss bank UBS being fined a staggering £3.2 billion for helping wealthy clients based in France to hide money from tax and launder the proceeds (the bank has lodged an appeal).
  • In September 2018, Dutch bank ING Groep NV being fined €775 million after failing to spot that criminals had been laundering money through its accounts.
  • In December 2018, 10 former employees of the local branch of Danske Bank in Estonia being arrested as part of an international investigation into (alleged) money laundering.

How AI Can Help

AI technology is being tested in the fight against money laundering because AI can crunch vast amounts of data (i.e. the data from millions of bank transactions) very quickly and accurately, making it very good at detecting patterns and deviations from them.  AI can, therefore, quickly detect patterns of unusual activity, e.g. behaviour consistent with money laundering, and it learns with experience.  It can also spot smurfing attempts (breaking a transaction down into smaller transactions to avoid being spotted), accounts that are set up remotely by bots rather than humans, and suspicious behaviour by corrupt insiders (known to be an important element in many successful money laundering operations).
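The “smurfing” idea can be sketched in a few lines. This is an invented, simplified example rather than any bank’s actual AML system; the threshold, time window and sample transactions are all assumptions:

```python
# Illustrative sketch (not a production AML system): flag possible
# "smurfing" -- many sub-threshold transfers into one account within
# a short window that together exceed a reporting threshold.
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # reporting threshold (assumed)
WINDOW = timedelta(days=2)  # look-back window (assumed)

def flag_smurfing(transactions):
    """transactions: list of (account, amount, timestamp) tuples."""
    by_account = defaultdict(list)
    for account, amount, ts in transactions:
        by_account[account].append((ts, amount))
    flagged = set()
    for account, entries in by_account.items():
        entries.sort()
        for i, (start, _) in enumerate(entries):
            # Sum everything within WINDOW of this transaction
            total = sum(a for ts, a in entries[i:] if ts - start <= WINDOW)
            # Suspicious: large combined total, but every piece is small
            if total >= THRESHOLD and all(a < THRESHOLD for _, a in entries):
                flagged.add(account)
    return flagged

txns = [
    ("acc1", 4_000, datetime(2019, 3, 1)),
    ("acc1", 3_500, datetime(2019, 3, 2)),
    ("acc1", 3_000, datetime(2019, 3, 2)),
    ("acc2", 2_000, datetime(2019, 3, 1)),
]
print(flag_smurfing(txns))  # {'acc1'}
```

A real system would learn such patterns from labelled transaction data rather than hard-coding a rule, but the underlying signal, many small transfers that add up just past a threshold, is the same.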

What Does This Mean For Your Business?

Money laundering is often used to help organised criminals / criminal gangs continue to finance many kinds of other serious crimes which have a negative impact on society and the economy. It is, therefore, good news for businesses (particularly in the financial and property sectors) that an accurate and reliable technology-based early detection system that works independently of human influence and error is being set to work to crack an old problem using the very latest means.

Critics have said, however, that even though AI may be excellent at spotting unusual transaction patterns, it will only be as effective as the data it is fed, and banks, financial institutions, governments and law enforcement agencies therefore need to share more information to get the best results from the AI tools.

Some have also been sceptical of how effective an ‘off-the-shelf’ AI-based money laundering detection tool (of which there are several on the market) could be.

Robot Programmed to Carry Out Unbiased Job Interviews

TNG and Furhat Robotics in Sweden have developed a social, unbiased recruitment robot called “Tengai” that can be used to conduct job interviews with human candidates.

Existing Robot, Modified

The robot, ‘Furhat’, was developed several years ago by Stockholm-based start-up Furhat Robotics. The Furhat robot, which looks like an internally projected human face on a white head sitting on top of a speaker (with camera and microphone built-in), comes with pre-built expressions and gestures as part of a pre-loaded OS which can be further customised to fit any character.

In conjunction with Swedish recruitment company TNG, the Furhat robot was modified by developing and adding a software HR-tech application to Furhat’s OS, and the recruitment version of Furhat has been named “Tengai”.

Talks, Listens and Transcribes

In a typical interview, the Tengai recruitment robot first shares information in dialogue form about the interview and how it will be conducted.  It can then ask questions and understand what a candidate is saying, regardless of the number of words and sentences used.  During the interview, Tengai records candidates’ speech, which it converts into text in real time.

The HR-tech application software that Tengai uses means that it can conduct situation and skill-based interviews in a way that is as close as possible to a human interviewer. This includes using “hum”, nodding its head, and asking follow-up questions.

Although the robot is currently only able to use the Swedish language, an English-speaking version is likely to be available by the end of 2019 / beginning of 2020.

Most Useful at The Beginning of the Process

The recruitment robot is designed to be used at the beginning of the candidate selection process where it can help by being very objective and skill-focused in order to find the competencies in candidates that are needed for the job.

Unbiased

According to TNG, one of the big advantages of the Tengai recruitment robot is that it is unbiased in its assessment of candidates.  For example, Tengai only records candidates’ speech and converts this into text in real time. The robot does not consider any other variables such as a person’s accent or the pitch of their voice, their looks or gender, and Tengai is not given any information about any candidate other than their name and email address.

Also, Tengai asks questions in the same way, in the same tone and typically in the same order for each candidate, thereby making it fairer and more objective.
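The structured-interview idea behind Tengai can be illustrated with a toy sketch: every candidate answers the same fixed questions, and only the transcribed text is scored against role competencies. The questions, competencies and keywords below are invented examples, not TNG’s actual assessment model:

```python
# Hypothetical sketch of the structured-interview idea: all candidates
# get the same questions in the same order, and only the text of their
# answers is scored -- no voice, looks, gender or other variables.
# Question list and competency keywords are invented examples.

QUESTIONS = [
    "Describe a situation where you solved a difficult problem.",
    "How do you prioritise competing deadlines?",
]

COMPETENCY_KEYWORDS = {
    "problem_solving": {"analysed", "solved", "root", "cause"},
    "planning": {"prioritise", "deadline", "plan", "schedule"},
}

def score_answers(answers):
    """Score transcribed answers by competency keyword matches only."""
    # Strip basic punctuation so 'deadline.' still matches 'deadline'
    text = " ".join(answers).lower().replace(".", "").split()
    scores = {}
    for competency, keywords in COMPETENCY_KEYWORDS.items():
        scores[competency] = sum(1 for word in text if word in keywords)
    return scores

answers = ["I analysed the root cause and solved it.",
           "I plan each week and prioritise by deadline."]
print(score_answers(answers))  # {'problem_solving': 4, 'planning': 3}
```

Because the scoring sees only the transcript, two candidates who give the same answer receive the same score, which is the fairness property the article describes.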

Creepy or Not?

TNG conducted 80 interviews to find out about people’s perceptions of the robot.  TNG reports that most were surprised by how ‘natural’ it felt talking to the robot, which is adept at social codes.

What Does This Mean For Your Business?

It is vital that businesses can find and recruit the best possible candidate for a role. The big advantage of this kind of robot is that it can be very effective in the first part of the candidate selection process because it is very objective and skill-focused. An in-depth assessment by an experienced recruiter can then be used later on with the candidates that the robot has shortlisted in order to get the necessary detail and personalisation, giving a complete picture of a candidate’s suitability for a position.

Using an unbiased, objective and structured robot like Tengai can mean that recruiters/employers can shift the subjectivity further along the process where it is less damaging. Also, a robot interviewer can mean that more candidates can be invited to participate in the early stages of recruitment drive, allowing for greater diversity by ensuring a better and broader selection of talents. This can give a business a better chance of finding the right person to fit the role available.

New Smart App Converts Your Sketches Into Works Of Art In Seconds

A new smart drawing app that uses deep learning can convert simple sketches and doodles into photo-realistic landscape artworks in the style of famous artists.

GauGAN

The “GauGAN” app from Nvidia, which is a play on the name of French post-Impressionist painter Paul Gauguin, is described as a “smart paintbrush” that works through the interplay of two generative adversarial networks, a generator and a discriminator, powered by deep learning.

How Does It Work?

When running the app, users can draw a simple sketch/doodle outline of a landscape in an on-screen grid/segmentation map. Users can label each segment (e.g. with sea, sky, trees etc). Using its deep learning training on a million images, GauGAN can then fill in labelled areas with photo-realistic images to create detailed artworks.

In creating the pictures, the generator network of GauGAN creates images that it presents to the discriminator. The discriminator, which is the part that has been trained on the one million real images, coaches the generator with pixel-by-pixel feedback on how to improve the realism of its synthetic images, thereby enabling GauGAN to arrive at a stunning final image.
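The generator/discriminator interplay can be sketched in miniature (this is a toy illustration of the adversarial structure, not Nvidia’s GauGAN code; the dimensions and functions are invented for the example):

```python
# Toy sketch of the generator/discriminator interplay behind a GAN
# (not Nvidia's actual GauGAN implementation). The generator proposes
# samples, the discriminator scores how "real" they look, and that
# feedback drives the generator's next attempt. Dimensions illustrative.
import numpy as np

rng = np.random.default_rng(0)

def generator(noise, weights):
    """Map random noise to a synthetic 'image' (here a flat vector)."""
    return np.tanh(noise @ weights)

def discriminator(sample, weights):
    """Return a realism score in (0, 1): 1 = looks real, 0 = fake."""
    return 1.0 / (1.0 + np.exp(-(sample @ weights)))

g_weights = rng.normal(size=(8, 16))   # noise dim -> sample dim
d_weights = rng.normal(size=16)        # sample dim -> score

noise = rng.normal(size=8)
fake = generator(noise, g_weights)
score = discriminator(fake, d_weights)

# In real training, the discriminator's per-pixel feedback (its
# gradient) is backpropagated into the generator, nudging it
# toward producing more realistic output on each round.
print(fake.shape, 0.0 < score < 1.0)
```

Training alternates between improving the discriminator at telling real from fake and improving the generator at fooling it, which is what lets GauGAN fill a labelled segment with convincing texture.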

Not Just Landscapes

GauGAN can also add features such as buildings, roads and people, as well as style filters.  Some filters enable users to produce an original artwork in the style of a famous artist or change the lighting of an artwork, e.g. from day to night.

Like A Colouring Book Picture…

Nvidia’s blog describes it as being “like a colouring book picture that describes where a tree is, where the sun is, where the sky is,” and then “the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colours, based on what it has learned about real images”.

What Does This Mean For Your Business?

The GauGAN app is a tool that can offer time and cost-saving benefits, and new creative benefits to those who need to create virtual worlds as part of their work e.g. games developers, architects, urban planners and landscape designers.  The app offers them the chance to generate better prototype ideas and make rapid changes to synthetic scenes. This could prove to be an effective and time-saving tool when it comes to taking simple brainstormed ideas to the more detailed stage quickly.

The GauGAN app may also prove to be an interesting new, experimental tool for artists and graphic designers.

Chatbot Supports Students

Lancaster University has announced that it has launched a chatbot “companion” for students which allows them to ask almost any question about their university experience, from student life and welfare to academic studies and more.

Ask L.U.

The chatbot service, called ‘Ask L.U.’, was built on Amazon Web Services and delivers a voice interface that interacts with users.

The chatbot companion was designed and built by Lancaster University’s Information Systems Services (ISS) and enhances the existing iLancaster mobile app with a range of student-focused voice services.

The chatbot project also includes special facilities for disabled students, developed in conjunction with the University’s Disability Service.

Asked Students

In order to make the chatbot as relevant as possible to students, the University’s developers surveyed Lancaster University students to gauge which questions they were most likely to ask. From this information, they were able to compile a list of more than 300 queries that could be divided into categories such as ‘learning & teaching’ and ‘campus activities & social’.  All of these could then be put to Ask L.U.
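Matching a spoken question to one of a set of pre-compiled queries is the core of this kind of chatbot. The sketch below is an invented, much-simplified illustration using keyword overlap (services like Amazon Lex use trained language models rather than keyword sets; the intents and categories here are made up):

```python
# Minimal illustration (not the Ask L.U. implementation): match a
# student's question to one of a set of pre-compiled query intents,
# grouped into categories, using simple keyword overlap.
# The intents, categories and keywords are invented examples.
INTENTS = {
    ("learning & teaching", "timetable"): {"timetable", "lecture", "when"},
    ("learning & teaching", "deadlines"): {"deadline", "essay", "submit"},
    ("campus activities & social", "events"): {"event", "society", "club"},
}

def match_intent(question):
    """Return the (category, intent) with the most keyword overlap."""
    words = set(question.lower().replace("?", "").split())
    best = max(INTENTS, key=lambda key: len(words & INTENTS[key]))
    # No overlap at all means no confident match
    return best if words & INTENTS[best] else None

print(match_intent("When is my next lecture?"))
```

In a production system each intent would also carry a fulfilment action (look up the timetable, query the welfare pages, and so on), which is the part the survey of 300+ real student queries would have informed.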

Access

The chatbot can be accessed via the iLancaster App on mobile phones and tablets, or by asking “Alexa, Ask L.U.” on any Amazon Echo device.  Amazon Cognito is used to authenticate user data via the Echo, providing a completely personalised experience.

Whole Suite of AWS Used

The chatbot project uses a whole suite of AWS services, including Amazon CloudWatch, Amazon Virtual Private Cloud and Amazon Elasticsearch Service.  The natural speech is provided by Amazon Lex and Amazon Alexa.

Fast and Convenient

The chatbot companion is intended to enable students to get information in a fast, easy and convenient way, and delivering information via voice activation fits in well with the packed academic and social lives of students.

Chatbots

Chatbots are now used by many organisations, in conjunction with AI, to help deal with common enquiries, to save costs and resources, to free up time for human staff to work on other aspects of the business, and to enable businesses to offer 24-hour customer service.

There has been criticism of bots where transparency is lacking and where they may possibly lead users to believe that they are talking to a human.  This is why the state of California passed laws to make AI bots ‘introduce themselves’ (i.e. identify themselves as bots).

What Does This Mean For Your Business?

Many of us are now used to encountering chatbots on websites and voice-activated digital assistants, and this innovative new chatbot from Lancaster University shows how these technologies can be put together in a value-adding, easy-to-access way that is compatible with its target market.  It may also enable the university to save time and money, free up valuable resources, and offer 24/7 help to student users.

Bearing in mind that it was made at a university, it is also a good way of showcasing the institution’s technology skills, and the voice activation aspect means that it has been built with an eye on the future.

This kind of chatbot could also have applications in many other businesses, organisations, venues, events, and experiences, and could help improve and support services where there are large numbers of users whose experiences could be enhanced by being able to get on-the-spot spoken answers to popular questions.

Kellogg’s Uses Virtual Reality To Sell More Cornflakes

Breakfast cereal manufacturer Kellogg’s has been working with third-party VR companies to help it determine the best way to display its new products in stores.

Who?

Kellogg’s is reported to have been working on a pilot scheme with Accenture and Qualcomm.  Accenture is a Dublin-based global management consulting and professional services firm with a strong digital skill-set, and Qualcomm Inc is a US-based world leader in 3G and next-generation mobile technologies.

What?

The pilot’s aim was to determine the best in-store placement for Kellogg’s new Pop Tart Bites.  This involved the use of Accenture’s Extended Reality (XR) software and Qualcomm’s VR headsets.  This combination gave test subjects an immersive and 360-degree experience of a simulated store environment in which they were able to ‘virtually’ pick products, place items in shopping trolleys and make purchases.

Monitoring

The VR headsets and XR software enabled Kellogg’s to closely and precisely monitor users’ eye movements.  The analytics meant that the test was also able to yield data such as which new products the test subjects looked at and how long they looked at them.

New Insights Reveal Surprising Result

Whereas the traditional understanding of in-store product placement points towards eye-level (or close to it) as the ideal spot, the new insights that the technology provided in this pilot suggested that positioning the new product on a lower shelf could increase its sales by 18%.
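The kind of analysis behind a finding like this can be sketched simply: aggregate how long each test shopper dwelt on each shelf position, then compare positions. The sketch below is an invented illustration, not Accenture’s XR analytics, and the gaze data is made up:

```python
# Illustrative sketch of gaze analytics (not Accenture's XR software):
# aggregate how long test shoppers dwelt on each shelf position,
# then compare the positions. All data is invented sample data.
from collections import defaultdict

# (subject, shelf_position, dwell_seconds) -- invented sample log
gaze_log = [
    ("s1", "eye_level", 1.2), ("s1", "lower", 2.8),
    ("s2", "eye_level", 0.9), ("s2", "lower", 3.1),
    ("s3", "eye_level", 1.5), ("s3", "lower", 2.2),
]

def mean_dwell_by_shelf(log):
    """Return the mean dwell time (seconds) per shelf position."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, shelf, seconds in log:
        totals[shelf] += seconds
        counts[shelf] += 1
    return {shelf: totals[shelf] / counts[shelf] for shelf in totals}

dwell = mean_dwell_by_shelf(gaze_log)
best = max(dwell, key=dwell.get)
print(best)  # in this invented sample, 'lower' wins on mean dwell time
```

A real study would of course also link dwell time to actual (virtual) purchases before concluding anything about placement, but the aggregation step looks much like this.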

Growing Trend

The use of a combination of VR, AR and analytics in retail environments has been a growing trend among big brands in recent times.

Brick-and-mortar retail chains have, however, been criticised for reacting slowly to the introduction of technology that could help them and have found themselves at a disadvantage to online retailers who have been able to use digital technology to hyper-personalise retail experiences for their customers. The brick-and-mortar retailers have also been faced with challenges caused by economic and cultural shifts, e.g. customers moving more towards online shopping.

Change In The Landscape

It’s not just manufacturer brands that are now able to take advantage of the technological change in the landscape to benefit sales.

Retailers now have access to many affordable and relatively easy-to-use AI development tools, such as those offered by big tech vendors like Google, Microsoft and Amazon. This means that building an AI/machine learning system has never been easier.  Retail chains, for example, also have the advantage of access to massive amounts of data which can be used in a value-adding way with analytics and AI.

What Does This Mean For Your Business?

This story illustrates how the combination of new technologies such as VR, AI and advanced analytics has yielded new insights which could make a greater contribution to sales than more traditional methods.

The portable nature of the technology (and the AI aspect) mean that they are also able to deliver these value-adding insights more quickly and cheaply than before, thereby contributing to faster and more effective product launches and more successful product strategies.  The superior insights gained from combining new technologies such as these mean that it is now possible for business product placement decisions to be made that could positively impact total brand sales, versus only single product sales.

Report Says 90% of NHS Jobs Will Need Digital Skills, But AI and Robotics Could Enhance Services

A report, commissioned by health secretary Matt Hancock and led by US academic Eric Topol, has found that even though AI and robotics will enhance healthcare services, 90% of NHS staff will require fresh digital skills within 20 years.

Robotics and AI Enhancements

According to the report, although there has been fear that introducing AI and robotics into the NHS could be a step towards replacing human practitioners, they will in fact enhance services.

Smart Speakers Could Help

For example, the use of smart digital assistants such as Alexa and Siri could free up more time for doctors to spend with patients. It is anticipated that smart speakers could reduce time spent on paperwork, possibly saving 5.7 million hours of GPs’ time across the country per year.

Mental Health Triage Bots?  

The suggestion that smart speakers could somehow be used as effective “mental health triage bots” by engaging in conversations while analysing text and audio for any suicidal ideas and emotions has been dismissed by mental health professionals. A smart speaker may be capable of listening and talking but as mental health professionals point out, smart speakers can’t pick up many of the visual cues that a skilled human professional can, they can’t quickly develop a relationship with a patient (as is needed in mental health assessment situations), and they may not be particularly useful in a situation where a patient is disordered.

Patient Records

The report indicates that smart speakers could also enhance the capabilities of NHS workers to update patient records.

Three Main Changes

In the report, Topol predicts that over the next 20 years there will be three main developments that will change patients’ lives, and that training should begin now to ensure that NHS staff have the skills to make the most of those changes going forward.  According to Topol, who is a cardiologist, geneticist and digital medicine researcher, the three main changes will be:

  1. Patients having their genome sequenced.  This can help determine things like a person’s predisposition to certain diseases and how they will respond to medication or treatment.
  2. Patients being able to generate and interpret much more of their own health data at home.
  3. AI helping to exponentially increase the speed, accuracy and scalability of medical data interpretation.

Digital Appointments

Health Secretary Matt Hancock, who commissioned the report, has also called on GP practices in the UK to be able to offer digital appointments within five years e.g. using Skype and Google.

What Does This Mean For Your Business?

According to this report, AI, robotics and other new technologies could provide enhancements that may enable patients to be ultimately better informed about their own medical position and may help NHS staff to deliver a better quality of service while freeing them from spending too much time on paperwork and spending that time instead with patients.

There is, however, a challenge to be met in terms of making sure that NHS staff receive training that will enable them to make the best use of new digital technologies, and this will need planning and will have cost implications.

It is also important to consider, however, that the amount of data gathered about patients (e.g. genomic information) could be intrusive and carries security and privacy risks.  Also, if AI bots are used to handle some communications with patients, those patients need to be informed that they are communicating with a bot and not a person.  Too much reliance on technological innovation could also bring some inequalities. For example, poorer people and ethnic minorities have been shown to have a lower uptake of things like digital health records.

Man Fined After Hiding From Facial Recognition Cameras

A man was given a public order fine after being stopped by police because he covered his face during a trial of facial recognition cameras in Romford, London.

What Facial Recognition Trial?

A deliberately “overt” trial of live facial recognition technology by the Metropolitan Police took place in the centre of Romford, London, on Thursday 31st January.  This was supposed to be the first day of a two-day trial of the technology, but the second day was cancelled over concerns that the forecast snow would result in low footfall in the area.

Live facial recognition trials of this kind use vehicle-mounted cameras linked to a watchlist of selected images from the police database.  Officers are deployed nearby so that they can stop those persons identified and matched with suspects on the database.

In the Romford trial, the facial recognition filming was reported to have taken place from a parked police van and, according to the Metropolitan Police, the reason for the use of the technology was to reduce crime in the area, with a specific focus on tackling violent crime.

Why The Fine?

The trial also attracted the attention of human rights groups, such as Liberty and Big Brother Watch, members of which were nearby and were monitoring the trial.

It was reported that the man who was fined, who hasn’t been named by police, was observed pulling his jumper over part of his face and putting his head down while walking past the police cameras, possibly in response to having seen placards warning that passers-by were the subjects of filming by police automatic facial recognition cameras.

It has been reported that the police then stopped the man to talk to him about what they may have believed was suspicious behaviour and asked to see his identification. According to police reports, it was at this point that the man became aggressive, made threats towards officers and was issued with a penalty notice for disorder as a result.

8 Hours, 8 Arrests – But Only 3 From Technology

Reports indicate that the eight-hour trial of the technology resulted in eight arrests, but only three of those arrests were as a direct result of facial recognition technology.

Criticism

Some commentators have criticised this and other trials for being shambolic, for not providing value for money, and for resulting in mistaken identity.

Research Questions Reliability

Research by Cardiff University examined the use of facial recognition technology across several sporting and entertainment events in Cardiff over more than a year, including the UEFA Champions League Final and the Autumn Rugby Internationals.  The research found that for 68% of submissions made by police officers in the Identify mode, the image quality was too low for the system to work. Also, the research found that the Locate mode of the FRT system couldn’t correctly identify a person of interest 76% of the time.
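Figures like these are simple proportions over trial logs: the share of submissions rejected for image quality, and the share of system “matches” that turned out to be wrong. The counts below are invented purely to show the arithmetic; they are not the Cardiff study’s actual data:

```python
# Simple sketch of how trial accuracy figures like those above are
# derived: the share of submissions rejected for image quality, and
# the share of alerts that were incorrect. All counts are invented.
def rate(part, whole):
    """Percentage of `part` within `whole`, rounded to a whole number."""
    return round(100 * part / whole)

submissions_total = 50
too_low_quality = 34          # rejected before matching could run
print(rate(too_low_quality, submissions_total), "% too low quality")

alerts_total = 25
false_matches = 19            # flagged people who were not the suspect
print(rate(false_matches, alerts_total), "% incorrect identifications")
```

Note that the two rates have different denominators (all submissions vs. alerts generated), which is worth keeping in mind when trial statistics are compared.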

Also, in December 2018, ICO head Elizabeth Denham was reported to have launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

What Does This Mean For Your Business?

It has been reported that, despite over £200,000 being spent on six deployments of facial recognition technology between August 2016 and July 2018, no arrests were made.  On the surface, these figures suggest that, although the technology has the potential to add value and save costs, and although businesses in town centres are likely to welcome efforts to reduce crime, the trials to date don’t appear to have delivered value-for-money to taxpayers.

There was also criticism of the facial recognition system used in Soho, Piccadilly Circus and Leicester Square over two days in the run-up to Christmas, where freedom campaigners such as Big Brother Watch and Liberty were concerned about mixed messages from police about how those who turn away from facial recognition cameras mounted in/on police vans because they don’t want to be scanned could be treated.

Despite some valid worries and criticism, most businesses and members of the public would probably agree that CCTV systems have a real value in helping to deter criminal activity, locating and catching perpetrators, and providing evidence for arrests and trials.  There are, however, several concerns, particularly among freedom and privacy groups, about just how facial recognition systems are being (and will be) used as part of policing, e.g. overt or covert use, issues of consent, possible wrongful arrests due to system inaccuracies, and the widening of the scope of its purpose from the police’s stated aims.  Issues of trust where our personal data is concerned are still a problem, as are worries about a ‘big brother’ situation for many people.

London Police Facial Recognition Trial

It has been reported that the police are conducting a trial of a facial recognition system in Soho, Piccadilly Circus and Leicester Square over two days in the run-up to Christmas in a bid to identify people among the Christmas shoppers who are wanted by the police or the courts.

Overt

Far from being used secretly, the Metropolitan Police are reported to be publicly announcing the use of the system using knee-height signs on pavements leading up to the surveillance areas, along with A4 posters on lamp posts and leaflets handed-out to members of the public by uniformed officers.

The actual surveillance using the facial recognition link-up to the police database of wanted offenders is reported to have been carried out (on Monday and Tuesday) by a green van with cameras mounted on the top. It has also been reported that for this London trial of facial recognition, the Metropolitan Police will have been studying the crowds for 8 hours per day over the two-day period, specifically using a target list of 1,600 wanted people in the hope that crime and violence can be more effectively tackled.

Criticism

Criticism from privacy and freedom campaigners such as Big Brother Watch and Liberty has focused on mixed messages from police about how those who turn away from the van because they don’t want to be scanned will be treated.  For example, it has been claimed that some officers have said that this will be treated as a trigger for suspicion, whereas a Metropolitan Police press release has stated that those who decline to be scanned (as is their right) during the deployment will not be viewed as suspicious by police officers.

Concern has also been expressed by Big Brother Watch that, although the police may believe that the deployment of the system is overt and well publicised, the already prevalent signs and advertisements in the busy central London areas where it is being deployed could mean that people may not notice, thereby allowing the police to blur the line between overt and covert policing.  It has also been pointed out by privacy groups that the deployment involves an unmarked van and plainclothes officers, which are normally associated with covert activity.

Doesn’t Work?

Big Brother Watch and Liberty are currently taking legal action against the use of live facial recognition in South Wales (the site of previous trials) and London, and ICO head Elizabeth Denham is reported to have launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

Serious questions have been raised about how effective current facial recognition systems are.  For example, research by Cardiff University, which examined the use of the technology across a number of sporting and entertainment events in Cardiff over more than a year, including the UEFA Champions League Final and the Autumn Rugby Internationals, found that for 68% of submissions made by police officers in the Identify mode, the image quality was too low for the system to work. The research also found that the Locate mode of the FRT system couldn’t correctly identify a person of interest 76% of the time.

Google Not Convinced

Even Google (Cloud) has announced recently that it won’t be selling general-purpose AI-driven facial recognition technology until it is sure that any concerns over data protection and privacy have been addressed in law, and that the software is accurate.

Fooled With A Printed 3D Head!

The vulnerability of facial recognition software to errors and inaccuracy has been further exposed by a journalist, Thomas Brewster, from Forbes, who claimed that he was able to fool the facial recognition on four Android phones by using a model 3D head with his own face printed on it!

What Does This Mean For Your Business?

For the retail businesses in the physical area of the trial, anything that may deter criminal activities like theft and violence and may also catch known criminals is likely to be a good thing.

Most businesses and members of the public would probably agree that CCTV systems have a real value in helping to deter criminal activity, locating and catching perpetrators, and providing evidence for arrests and trials.  There are, however, several concerns, particularly among freedom and privacy groups, about just how facial recognition systems are being and will be used as part of policing, e.g. overt or covert use, issues of consent, possible wrongful arrest due to system inaccuracies, and the widening of the scope of its purpose from the police’s stated aims.  Issues of trust where our personal data is concerned are still a problem, as are worries about a ‘big brother’ situation for many people, although the police, in this case, have been clear that it is just a limited trial that has been conducted as overtly as possible, supported by posters and leaflets to make sure the public is informed.