Technology

Tech Tip – One Handed Keyboard On An iPhone

If you’ve struggled to use the keyboard on an iPhone and found it a little unwieldy, or had difficulty reaching across the entirety of the keyboard when you have only one hand free, here’s how to adjust the size and position of the keyboard in iOS 11 so you can use it with just one hand:

– Hold down the emoji / globe icon on the keyboard.

– Three small keyboard icons will appear.

– Selecting the one with an arrow pointing to the right will shift the keyboard to the right, and selecting the one pointing to the left will shift the keyboard to the left.

– To put the keyboard back to normal, tap the arrow in the blank space that’s created by the keyboard shift, or hold down the emoji icon again and select the ‘centre’ icon.

Facial Recognition In The Classroom

A school in Hangzhou, capital of China's eastern Zhejiang province, is reportedly using facial recognition software to monitor pupils and teachers.

Intelligent Classroom Behaviour Management System

The facial recognition software is part of what has been dubbed the “intelligent classroom behaviour management system”. The system is reportedly intended to supervise both the students’ learning and the teachers’ teaching.

How?

The system uses cameras to scan classrooms every 30 seconds. These cameras are part of a facial recognition system that is reported to be able to record students’ facial expressions, and categorize them into happy, angry, fearful, confused, or upset.

The system, which acts as a kind of ‘virtual teaching assistant’, is also believed to be able to record students’ actions such as writing, reading, raising a hand, and even sleeping at a desk.

The system also measures levels of attendance by using a database of pupils’ faces and names to check who is in the classroom.

As well as providing the school with added-value monitoring of pupils, the system may also motivate pupils to modify their behaviour to suit the rules of the school and the expectations of staff.

Teachers Watched Too

In addition to monitoring pupils, the system has also been designed to monitor the performance of teachers in order to provide pointers on how they could improve their classroom technique.

Safety, Security and Privacy

One other reason why these systems are reported to be increasing in popularity in China is to provide greater safety for pupils by recording and deterring violence and questionable practices at Chinese kindergartens.

In terms of privacy and security, the vice principal of the Hangzhou No.11 High School is reported to have said that the privacy of students is protected because the technology doesn’t save images from the classroom, and stores data on a local server rather than on the cloud. Some critics have, however, said that storing images on a local server does not necessarily make them more secure.

Inaccurate?

If the experience of the facial recognition software used by UK police forces is anything to go by, there may be questions about the accuracy of what the Chinese system records. For example, following an investigation by campaign group Big Brother Watch, the UK’s Information Commissioner, Elizabeth Denham, has recently said that the police could face legal action if concerns over accuracy and privacy with facial recognition systems are not addressed.

What Does This Mean For Your Business?

There are several important aspects to this story. Many UK businesses already use their own internal CCTV systems as a softer way of monitoring and recording staff behaviour, and as a way of modifying that behaviour i.e. staff act differently simply because they know they’re being watched. Employees could argue that this is intrusive to an extent, and that a more positive way of encouraging the right kind of behaviour would (also) be a system that rewards good behaviour and good results.

Using intelligent facial recognition software could clearly have a place in many businesses for monitoring customers / service users e.g. in shops and venues. It could be used to enhance security. It could also, as in the school example, be used to monitor staff in any number of situations, particularly those where concentration is required and where positive signals need to be displayed to customers. These systems could arguably increase productivity, improve behaviour and reduce hostility / violence in the workplace, and provide a whole new level of information to management that could be used to add value.

However, it could be argued that using these kinds of systems in the workplace could make people feel as though ‘big brother’ is watching them, could lead to underlying stress, and could have big implications where privacy and security rights are concerned. It remains to be seen how these systems are justified, regulated and deployed in future, and how concerns over accuracy, cost-effectiveness, and personal privacy and security are dealt with.

TalkTalk Super Router Security Fears Persist

An advisory notice from software and VR company IndigoFuzz has highlighted the continued potential security risk posed by a vulnerability in the WPS feature of TalkTalk’s Super Router.

What Vulnerability?

According to IndigoFuzz, the WPS connection is insecure and the WPS pairing option is always turned on i.e. the WPS feature in the router is always switched on, even if the WPS pairing button is not used.

This could mean that an attacker within range could potentially hack into the router and steal the router’s Wi-Fi password.

Tested

It has been reported that in tests involving consenting parties, IndigoFuzz found that a method of probing the router to extract its Wi-Fi password was successful on multiple TalkTalk Super Routers.

The test involved a Windows-based computer, a wireless network adapter, a TalkTalk router within range of that adapter, and the ‘Dumpper’ software available on SourceForge. Using this method, the Wi-Fi access key to a network could be uncovered in a matter of seconds.

Scale

The ease with which the Wi-Fi access key could be obtained in the IndigoFuzz tests has prompted speculation that the vulnerability could be on a larger scale than was first thought, and a large number of TalkTalk routers could potentially be affected.

No Courtesy Period Before Announcement

When a vulnerability has been discovered and reported to a vendor, it is normal protocol to allow the vendor 30 days to address the problem before the vulnerability is announced publicly by those who discovered / reported it.

In this case, however, the vulnerability was first reported to TalkTalk back in 2014 and has still not been fixed, so IndigoFuzz chose to issue its advisory publicly straight away rather than wait any longer.

Looks Bad After 2015 Breach

News that a vulnerability has remained unpatched 4 years after it was reported looks bad on top of the major cyber-attack and security breach TalkTalk suffered back in October 2015. You may remember that the much-publicised attack on the company resulted in an estimated loss of 101,000 customers (some have suggested that the number of lost customers was twice that figure). The attack saw the personal details of between 155,000 and 157,000 customers (reports vary) accessed, with approximately 10% of those customers having their bank account number and sort code stolen.

The trading impact of the security breach in monetary terms was estimated to be £15M with exceptional costs of £40-45M.

What Does This Mean For Your Business?

It seems inconceivable that a widely reported vulnerability that could potentially affect a large number of users may still not have been addressed after 4 years. Many commentators are calling for a patch to be issued immediately in order to protect TalkTalk customers. This could mean that many home and business customers are still facing an ongoing security risk, and TalkTalk could be leaving itself open to another potentially damaging security problem that could impact its reputation and profits.

Back in August last year, the Fortinet Global Threat Landscape Report highlighted the fact that 9 out of 10 businesses are being hacked through unpatched vulnerabilities, many of which are 3 or more years old and already have patches available. This should remind businesses to stay up to date with their own patching routines as a basic security measure.

Last year, researchers also revealed how the ‘Krack’ method could take advantage of the WPA2 standard used across almost all Wi-Fi devices to read messages and banking information and intercept sensitive files (if a hacker was close to a wireless access point and the website didn’t properly encrypt user data). This prompted fears that hackers could turn their attention to what may be fundamentally insecure public Wi-Fi points in e.g. shopping centres / shops, airports, hotels, public transport and coffee shops, which could in turn generate problems for businesses offering Wi-Fi.

BYODs Linked To Security Incidents

A study by SME card payment services firm Paymentsense has shown a positive correlation between bring your own device (BYOD) schemes and increased cyber security risk in SMEs.

BYOD

Bring your own device (BYOD) schemes / policies have now become commonplace in many businesses, with the BYOD and enterprise mobility market forecast to grow from $35.10 billion in 2016 to $73.30 billion by 2021 (marketsandmarkets.com).

BYOD policies allow employees to bring in their personally owned laptops, tablets and smartphones and use them to access company information and applications and solve work problems. This type of policy has also fuelled a rise in ‘stealth IT’, where employees go outside of IT and set up their own infrastructure without organizational approval or oversight, and can therefore unintentionally put corporate data and service continuity at risk.

Positive Correlation Between BYOD and Security Incidents

The Paymentsense study, which polled more than 500 SMEs in the UK, found a positive correlation between the introduction of a BYOD policy and cyber-security incidents. For example, 61% of the SMEs said that they had experienced a cyber-security incident since introducing a BYOD policy.

According to the study, although only 14% of micro-businesses (up to 10 staff) reported a cyber-security incident since implementing BYOD, the figure rises to 70% for businesses of 11 to 50 people, and to 94% for SMEs with 101 to 250 employees.

Most Common Security Incidents

The study showed that the most common types of security incident in the last 12 months were malware, which affected two-thirds (65%) of SMEs, viruses (42%), distributed denial of service (DDoS) attacks (26%), data theft (24%), and phishing (23%).

Positive Side

The focus of the report was essentially the security risks posed by BYOD. There are, however, some very positive reasons for introducing a BYOD policy in the workplace. These include convenience, cost savings (on company devices and training), harnessing the skills of tech-savvy employees, perhaps finding new, better and faster ways of getting work done, improved morale and employee satisfaction, and productivity gains.

Many of these benefits are, however, inward-focused i.e. on the company and its staff, rather than the wider damage that could be caused to the lives of data breach victims or to the company’s reputation and profits if a serious security incident occurred.

What Does This Mean For Your Business?

This is a reminder that, as well as the benefits of BYOD to the business, if you allow employees or other users to connect their own devices to your network, you will be increasing the range of security risks that you face. This is particularly relevant with the introduction of GDPR on Friday.

For example, devices belonging to employees but containing personal data could be stolen in a break-in or lost while away from the office. This could lead to a costly and public data breach. Also, allowing untrusted personal devices to connect to SME networks or using work devices on untrusted networks outside the office can put personal data at risk.
Ideally, businesses should ensure that personal data is either not on the device in the first place, or has been appropriately secured so that it cannot be accessed in the event of loss or theft e.g. by using good access control systems and encryption.
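As a simple illustration of the ‘encrypt it before it leaves the office’ point (a sketch only, in Python, using the third-party ‘cryptography’ package and an example filename rather than any particular product):

```
# Sketch: symmetric encryption of a file before it is copied to a personal
# device, using the third-party 'cryptography' package (pip install cryptography).
# "customer_list.csv" is just an example filename. Key management (where the
# key lives, who can read it) is the hard part and is not addressed here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this securely, NOT next to the data
fernet = Fernet(key)

with open("customer_list.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("customer_list.csv.enc", "wb") as f:
    f.write(ciphertext)

# Without the key, the .enc file is of little use to whoever finds a lost laptop.
```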

Business owners could reduce the BYOD risk by creating and communicating clear guidelines to staff about best security practices in their daily activities, in and out of the office. Regular communication with staff at all levels about security is also important, and having an incident response plan / disaster recovery plan in place can help to clarify responsibilities and ensure that timely action is taken to deal with situations correctly if mistakes are made.

Slack ‘Actions’

Chat app ‘Slack’ has announced the introduction of a new ‘Actions’ feature that makes it easier for users to create and finish tasks without leaving the app, by giving them access to more third-party tools.

What Is Slack?

Slack, launched way back in 2013, is a Silicon Valley-produced, cloud-based set of proprietary team collaboration tools and services. It provides mobile apps for iOS, Android, Windows Phone, and is available for the Apple Watch, enabling users to send direct messages, see mentions, and send replies.

Slack teams enable users (communities, groups, or teams) to join through a URL or invitation sent by a team admin or owner. It was intended as an organisational communication tool, but it has gradually been morphing into a community platform i.e. it is a business technology that has crossed over into personal use.

In March 2018, Slack and financial and human capital management firm Workday formed a partnership that allows Workday customers to access Workday features directly from within the Slack interface. Slack is believed to have 8 million daily active users.

What Is ‘Actions’ and How Does It Help?

The new tool / feature, dubbed ‘Actions’, will bring enterprise developers deeper into Slack, because it allows for better / deeper integration with enterprise software from third-party providers e.g. Jira, HubSpot, and Asana.

Slack knows that many users now like to choose what software they use to get their job done, and the Actions feature will, therefore, be of extra value to the 90% of Slack’s 3 million paid users who regularly use apps and integrations.

Actions can be accessed with a click or tap on any Slack message, require no slash commands, and are being made available to all developers using the platform to deploy bots and integrations. To begin with, Actions will be displayed based on what individuals use most frequently.
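By way of illustration (not Slack’s official sample code), a third-party integration typically receives an Action as a JSON payload POSTed to the URL the developer has registered with Slack. A minimal Python / Flask sketch, assuming a hypothetical ‘create_task’ callback ID, might look like this:

```
# Minimal sketch of receiving a Slack message Action, assuming a Flask app
# registered as the integration's Request URL and a hypothetical
# 'create_task' callback ID. Not Slack's official example code.
import json
from flask import Flask, request

app = Flask(__name__)

@app.route("/slack/actions", methods=["POST"])
def handle_action():
    # Slack sends interactive payloads as a form field named 'payload'.
    payload = json.loads(request.form["payload"])
    if payload.get("type") == "message_action" and \
            payload.get("callback_id") == "create_task":
        message_text = payload["message"]["text"]
        # Here the integration would create a task in its own system,
        # e.g. a project-management or CRM tool.
        print("Creating task from message:", message_text)
    return "", 200  # acknowledge the action promptly

if __name__ == "__main__":
    app.run(port=3000)
```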

What Does This Mean For Your Business?

If you use / your business uses Slack, the interoperability of these systems resulting from integration between software from third-parties means that you have greater choice in what software you use to complete your tasks without having to leave Slack. This offers time and cost saving benefits, as well as a considerable boost in convenience.

Slack knows that there are open source and other alternatives out there, and the addition of Actions will help Slack to provide more valuable tools to users, thereby helping it to retain loyalty and compete in a rapidly evolving market.

Tech Tip – Enable ‘Do Not Track’ In Microsoft Edge

If you want to ask websites not to track you when you’re browsing, without having to switch to a full InPrivate (incognito) window, here’s how to enable ‘Do Not Track’ in Microsoft Edge:

– For Microsoft Edge, click on the three horizontal dots at the top right.

– Click on ‘Settings’ at the very bottom.

– Click on ‘View advanced settings’ at the bottom.

– Scroll down to the Privacy and Services section, and toggle on the ‘Send Do Not Track requests’ option.

– This should mean that all HTTP and HTTPS requests sent by Edge will include a ‘Do Not Track’ (DNT: 1) header, which websites can then choose to honour, as illustrated in the sketch below.
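Strictly speaking, ‘Do Not Track’ is only a request: Edge adds the header, and it is up to each website to act on it. As a rough illustration (not Microsoft’s implementation), a site built with Python’s Flask framework might check for the header like this:

```
# Minimal sketch: honouring the 'DNT: 1' header in a Flask app.
# Illustrative only - real sites need a full consent/analytics policy.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def home():
    # Browsers with 'Do Not Track' enabled send the header 'DNT: 1'.
    do_not_track = request.headers.get("DNT") == "1"
    if do_not_track:
        return "DNT detected - analytics and tracking cookies disabled."
    return "No DNT header - default tracking behaviour applies."

if __name__ == "__main__":
    app.run()
```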

AI Drones: Smaller and Smarter

Researchers from ETH Zurich in Switzerland and the University of Bologna have built the smallest completely autonomous quadrotor nano-drone, which uses AI to fly itself and doesn’t need human guidance.

Neural Network

The technology at the heart of the Crazyflie 2.0 Nano Quadcopter is the DroNet neural network, which processes incoming images from a camera at 20 frames per second. From this, the nano-drone is able to work out how to steer and calculate the probability of a collision, thereby giving it the ability to know when to stop.
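DroNet’s exact architecture is described in the researchers’ publications; purely as an illustration of the two-output idea (a steering angle plus a collision probability from each camera frame), a toy Python / PyTorch model might look like this:

```
# Toy sketch of a DroNet-style network: one shared CNN trunk, two heads.
# An illustration of the idea only, not the researchers' actual model.
import torch
import torch.nn as nn

class TinyDroNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering_head = nn.Linear(32, 1)   # regression: steering angle
        self.collision_head = nn.Linear(32, 1)  # probability of collision

    def forward(self, frame):
        x = self.features(frame)
        steering = self.steering_head(x)
        collision_prob = torch.sigmoid(self.collision_head(x))
        return steering, collision_prob

# One greyscale camera frame in, steering angle and collision probability out.
frame = torch.rand(1, 1, 200, 200)
steering, collision_prob = TinyDroNet()(frame)
```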

Fully On-Board Computation

Because all computation is carried out on-board via the PULP (Parallel Ultra Low Power) platform, the drone needs no external sensing or computing. This makes it truly autonomous and, therefore, a real first in terms of how a small drone can be controlled.

The new autonomous version is an improvement on the first test version, which involved putting the DroNet neural network in a larger, commercial off-the-shelf Parrot Bebop 2.0 drone and controlling it over a radio link to a laptop.

Trained Using Images

Since AI requires training so that it can learn to become better at a task, the drone’s neural network was trained using thousands of images taken from bicycles and cars driving along different roads.

Only Horizontal Movement

One major drawback at the current time is that, because it was trained using images from a single plane, the drone can only move horizontally and cannot yet fly up or down.

Even Smaller

Technologies involved in making drones have evolved to such a degree that even a robot ‘fly’ has now been built.

The successor to RoboBee, the so-called RoboFly is so small (the size of a fly) that it can’t support the weight of a battery to power it. The power for flight is currently delivered by a laser trained on an attached photovoltaic cell.

The tiny device has wings that are flapped by sending a series of pulses of power in rapid succession and then slowing the pulsing down as it gets near the top of the wave (with the whole process in reverse for the downward flap).

The RoboFly, developed by a team of researchers at the University of Washington, can only just take off and travel a very short distance at present. Future plans for RoboFly reportedly include improving the onboard telemetry so it can control itself, and making a steered laser that can follow the bug’s movements and continuously beam power in its direction.

What Does This Mean For Your Business?

Up until now, the main uses for drones have been specialist applications such as within the military, in construction (viewing and mapping sites), film and TV, leisure use, and even for delivery of parcels (Amazon tests). All of these involve the use of larger drones that are remotely controlled.

The idea that a drone can be made in miniature and / or can control itself using AI could open up many more new areas of opportunity for businesses and other organisations. Such drones could be used in confined spaces or in very specialised situations.

The idea of an AI drone has, however, led to some alarm among commentators. Even though AI autonomy could help drones to e.g. monitor environments, be used in spying, and develop swarm intelligence for military use, some have expressed worries that they could become better at delivering lethal payloads, and could pose other unforeseen security risks.

Police Face Recognition Software Flawed

Following an investigation by campaign group Big Brother Watch, the UK’s Information Commissioner, Elizabeth Denham, has said that the Police could face legal action if concerns over accuracy and privacy with facial recognition systems are not addressed.

What Facial Recognition Systems?

A freedom of information request sent to every police force in the UK by Big Brother Watch shows that the Metropolitan Police used facial recognition at the Notting Hill Carnival in 2016 and 2017 and at a Remembrance Sunday event, and that South Wales Police used facial recognition technology between May 2017 and March 2018. Leicestershire Police also tested facial recognition in 2015.

What’s The Problem?

The two main concerns with the systems (as identified by Big Brother Watch and the ICO) are that they are not accurate in identifying real criminals or suspects, and that images of innocent people are being stored on ‘watch’ lists for up to a month, which could potentially lead to false accusations or arrests.

How Do Facial Recognition Systems Work?

Facial recognition software typically works by taking a scanned image of a person’s face (from the existing stock of police mug shots from previous arrests) and then using algorithms to measure ‘landmarks’ on the face e.g. the position of features and the shape of the eyes, nose and cheekbones. This data is used to make a digital template of the person’s face, which is then converted into a unique code.

High-powered cameras are then used to scan crowds. The cameras link to specialist software that can compare the camera image data to data stored in the police database (the digital template) to find a potential ‘match’. Possible matches are then flagged to officers, and these lists of possible matches are stored in the system for up to 30 days.
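The exact algorithms the police systems use have not been published, but the matching step generally boils down to comparing numeric face templates and flagging anything within a similarity threshold. A purely illustrative Python sketch (with made-up templates) might look like this:

```
# Illustrative sketch of template matching: compare a probe face template
# against a watch list of stored templates using cosine similarity.
# The templates here are random numbers; real systems derive them from images.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

watch_list = {
    "suspect_001": np.random.rand(128),  # hypothetical 128-number template
    "suspect_002": np.random.rand(128),
}

def find_possible_matches(probe_template, threshold=0.9):
    # Flag any stored template whose similarity exceeds the threshold.
    return [name for name, template in watch_list.items()
            if cosine_similarity(probe_template, template) >= threshold]

probe = np.random.rand(128)  # template extracted from a camera frame
print(find_possible_matches(probe))
```

In a set-up like this, the choice of similarity threshold largely determines how many false positives are flagged, which is where the accuracy criticisms below come in.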

A real-time automated facial recognition (AFR) system, like the one the police use at events, incorporates facial recognition and ‘slow time’ static face search.

Inaccuracies

The systems used by the police so far have been criticised for simply not being accurate. For example, of the 2,685 “matches” made by the system used by South Wales Police between May 2017 and March 2018, 2,451 were false alarms.

Keeping Photos of Innocent People On Watch Lists

Big Brother Watch has been critical of the police keeping photos of innocent people that have ended up on lists of (false) possible matches, as selected by the software. Big Brother Watch has expressed concern that this could affect an individual’s right to a private life and freedom of expression, and could result in damaging false accusations and / or arrests.
The police have said that they don’t consider the ‘possible’ face selections as false positive matches because additional checks and balances are applied to them to confirm identification following system alerts.

The police have also stated that all alerts against watch lists are deleted after 30 days, and faces in the video stream that do not generate an alert are deleted immediately.

Criticisms

As well as accusations of inaccuracy and possibly infringing the rights of innocent people, the use of facial recognition systems by the police has also attracted criticism for not appearing to have a clear legal basis, oversight or governmental strategy, and for not delivering value for money in terms of the number of arrests made vs the cost of the systems.

What Does This Mean For Your Business?

It is worrying that there are clearly substantial inaccuracies in facial recognition systems, and that the images of innocent people could be sitting on police watch lists for some time and could potentially result in wrongful arrests. The argument that ‘if you’ve done nothing wrong, you have nothing to fear’ simply doesn’t stand up if police are being given cold, hard computer information to say that a person is a suspect and should be questioned / arrested, no matter what the circumstances. That argument is also an abdication of shared responsibility, which could lead to the green light being given to the erosion of rights without questions being asked. As people in many other countries would testify, rights relating to freedom and privacy should be valued, and when these rights are gone, it’s very difficult to get them back again.

The storing of facial images on computer systems is also a security matter, particularly since such images are regarded as ‘personal data’ under the new GDPR, which comes into force this month.

There is, of course, an upside to the police being able to use these systems if it leads to the faster arrest of genuine criminals, and makes the country safer for all.

Despite the findings of a study from YouGov / GMX (August 2016) showing that people in the UK still have a number of trust concerns about the use of biometrics for security, biometrics represents a good opportunity for businesses to stay one step ahead of cyber-criminals. Biometric authentication / verification systems are thought to be far more secure than password-based systems, which is why banks and credit companies are now using them.

Facial recognition systems have value-adding, real-life business applications too. For example, last year, a ride-hailing service called Careem (similar to Uber but operating in more than fifty cities in the Middle East and North Africa) announced that it was adding facial recognition software to its driver app to help with customer safety.

Efail – Encryption Flaw

A German newspaper has released details of a security vulnerability, discovered by researchers at Munster University of Applied Sciences, in PGP (Pretty Good Privacy) data encryption.

What Is PGP?

PGP (Pretty Good Privacy) is an encryption program that is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and disk partitions, and to increase the security of e-mail communications. As well as being used to encrypt and decrypt email, PGP is also used to sign messages so that the receiver can verify both the identity of the sender and the integrity of the content. PGP works using a private key that is kept secret, and a public key that the sender and receiver share.

The technology is also implemented as GPG (GNU Privacy Guard or GnuPG), a compatible, GPL-licensed alternative.
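For developers, the usual way to work with PGP/GPG programmatically is via a wrapper around a local GnuPG installation. A rough Python sketch (assuming the third-party python-gnupg package, and keys already present in the local keyring, with example email addresses and passphrases):

```
# Rough sketch of PGP encryption, decryption and signing via GnuPG, using
# the third-party python-gnupg wrapper (pip install python-gnupg).
# Assumes the recipient's public key and your private key are in the keyring;
# addresses and passphrases below are placeholders.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory and keyring

message = "Quarterly figures attached."

# Encrypt to the recipient's public key.
encrypted = gpg.encrypt(message, "recipient@example.com")
print(encrypted.ok)

# Decrypt on the receiving side with the matching private key.
decrypted = gpg.decrypt(str(encrypted), passphrase="recipient-passphrase")
print(decrypted.ok, str(decrypted))

# Sign a message so the receiver can verify the sender and the integrity.
signed = gpg.sign(message, passphrase="sender-passphrase")
print(gpg.verify(str(signed)).valid)
```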

What’s The Flaw?

The flaw was first thought by some security experts to affect the core protocol of PGP, which would make all uses of the encryption method, including file encryption, vulnerable. It is now believed to relate to email programs that don’t check for decryption errors properly before following links in emails that include HTML code i.e. email programs that have been designed without appropriate safeguards.

‘Efail’ Attacks

The flaw leaves this system of encryption open to what have been called ‘efail’ attacks. These involve attackers gaining access to encrypted emails (for example by eavesdropping on network traffic, or by compromising email accounts, email servers, backup systems or client computers), with the aim of revealing the plaintext of encrypted emails (in the OpenPGP and S/MIME standards).

This type of attack can be carried out by direct exfiltration, where vulnerabilities in Apple Mail, iOS Mail and Mozilla Thunderbird can be abused to directly exfiltrate the plaintext of encrypted emails, or by a ‘CBC/CFB gadget’ attack, where vulnerabilities in the specification of OpenPGP and S/MIME are abused to exfiltrate the plaintext.

What Could Happen?

The main fear appears to be that the vulnerabilities could be used to decrypt stored, encrypted emails that have been sent in the past (if an attacker can gain access). It is thought that the vulnerabilities could also create a channel for sneaking personal data or commercial data and business secrets off devices as well as for decrypting messages.

What Does This Mean For Your Business?

It is frustrating for businesses to learn that the email programs they may be using, and a method of encryption supposed to make things more secure, could actually be providing a route for criminals to steal data and secrets.

The advice from those familiar with the details of the flaw is that users of PGP email can disable HTML in their mail programs, thereby keeping them safe from attacks based on this particular vulnerability. Also, users can choose to decrypt emails with PGP decryption tools that are separate from email programs.

More detailed information and advice concerning the flaw can be found here: https://efail.de/#i-have

Handy Location Tracker

A peanut-shaped, hand-held, smart, long-range tracking device called LynQ has been launched that can tell you how far away and in what direction your friends are, all without the need for a data connection and without monthly fees.

Why?

As well as being used for outdoor activities to replace traditional maps and location methods, a ‘LynQ’ can be used as a safety device for tracking children or pets, for rescue workers, or for making sure dementia sufferers don’t wander too far. It can also be used as a fun / leisure device e.g. to find each other in festival crowds, or to keep track of each other when hiking or skiing.

How Does It Work?

Powered by a rechargeable power cell that can offer up to three days of battery life between charges, a LynQ can reportedly track other LynQ users from up to 3 miles (5km) away.

Being marketed as a kind of smart compass for the 21st century, the LynQ doesn’t need an app, phone or Wi-Fi network. Instead, it uses what is described as “a new approach to GPS”. This means that LynQ devices send their GPS coordinates directly to each other. The GPS data has a compression algorithm applied to it in order to make it possible to send that data more frequently and reliably.
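LynQ has not published its radio protocol or compression scheme, but the general idea of squeezing a GPS fix into a small payload can be illustrated generically in Python: store latitude and longitude as fixed-point integers rather than as text or full floating-point values.

```
# Generic illustration (not LynQ's actual protocol): pack a GPS fix into
# 8 bytes by storing latitude/longitude as fixed-point integers
# (degrees * 1e7 gives roughly centimetre-level resolution).
import struct

def pack_fix(lat, lon):
    return struct.pack("<ii", int(round(lat * 1e7)), int(round(lon * 1e7)))

def unpack_fix(payload):
    lat_i, lon_i = struct.unpack("<ii", payload)
    return lat_i / 1e7, lon_i / 1e7

payload = pack_fix(51.501364, -0.141890)   # 8 bytes instead of a text string
print(len(payload), unpack_fix(payload))
```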

Usable By 2 To 12 People

LynQ allows 2 or more people (up to 12 can link up) to find each other using a one-button control and a simple digital interface. The screen shows a simple readout of distance and direction that updates accurately as you move towards or away from your target, and the single button allows you to switch between the people you’re tracking.
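The distance-and-direction readout itself is standard geometry once each device knows the other’s GPS fix. As a generic illustration in Python (not LynQ’s code), the great-circle distance and initial bearing between two coordinates can be computed like this:

```
# Generic illustration: distance and bearing between two GPS fixes,
# which is essentially what a 'point towards your friend' display needs.
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)

    # Haversine formula for great-circle distance.
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))

    # Initial bearing from point 1 to point 2 (0 = north, 90 = east).
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return distance, bearing

print(distance_and_bearing(51.5014, -0.1419, 51.5033, -0.1196))
```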

The display turns off automatically when you let it go to hang by its clip, thus saving battery life, but the LynQ is always receiving the data.

Other Features

The device allows you to create a “home” location that linked devices can point toward. It also allows you to set a safe zone (a radius from your device) that will warn you if the other person leaves that safe zone. You can also send basic preset messages like “meet up” or “help.”

The price is $154 / £114.30 per pair (early bird), going up to $200 / £148.40.

What Does This Mean For Your Business?

This is another smart device that shows how a combination of technologies can be used to create something that can meet a real need and has multiple applications e.g. leisure, sport, safety, and even defence. For example, the Thai Ministry of Defence tested LynQ and found that it helped soldiers find each other much faster while radio silent, and helped them quickly get into formation for a search mission.

This could also represent another possible way to keep track of those in the care of others e.g. dementia sufferers being tracked by carers. Back in 2016 for example, a barcode tagging system for tracking elderly dementia sufferers was being tested in Tokyo, but the LynQ could provide an even simpler and more practical system.

Quite simply as a gadget, the LynQ appears to have multiple applications, thereby offering many opportunities to business and personal users. The fact that the LynQ requires no monthly fees, and doesn’t require a data connection will increase its appeal.

The hope is that the LynQ device is secure and that signals can’t be intercepted and used by criminals to track victims e.g. for attack or abduction. There are still widespread fears about the vulnerability of many smart / IoT devices to hacking, but the fact that LynQ doesn’t need a connection could make it safer.