Social Media

Video Labelling Causes Problems

Google has already been criticised by some for not calling out China over disinformation about Hong Kong, and despite disabling 210 YouTube channels with suspected Chinese state links, Google’s new move to label Hong Kong YouTube videos hasn’t gone down well.

Big Social Media Platforms Act

Facebook and Twitter recently announced that they have banned a number of accounts on their platforms due to what the popular social media platforms are calling “coordinated influence operations”: in other words, Chinese state-sponsored communications designed to promote pro-Beijing viewpoints and to spread disinformation.  Twitter and Facebook are both blocked in mainland China by the country’s notorious firewall, but both platforms can be accessed in Hong Kong, and Twitter recently suspended over 900 accounts believed to originate in China. The reasons for the suspensions included spam, fake accounts and ban evasion.

Google Labels Videos

Google’s response, which some critics have seen as late anyway, has been to add information panels to videos on its Hong Kong-facing site indicating whether a video has been uploaded by a media organisation that receives government or public funding.  The panels, which are live in 10 regions, are intended to give viewers an insight into whether the videos are state-funded.

Problem

Unfortunately, Google did not consider the fact that some media organisations receive government funding but are editorially independent, and the labelling has effectively put them in the same category as media that purely spread government information.

Google and China

Many commentators have noted an apparent reluctance by Google to distance itself from the more repressive side of the Chinese state.  For example, Google has been criticised for not publicly calling out China over the state’s disinformation campaign about the Hong Kong protests.  Also, Google was recently reported to have had a secret plan (Project Dragonfly) to develop a censored search engine for the Chinese market, and it has been reported that Google has an AI research division in China.

Disinformation By Bot? Not

There have been fears that just as bots can be a time and cost-saving way of writing and distributing information, they could also be used to write disinformation and could soon even equal human writers in ability.  For example, the GPT-2 text generator, built by the research firm OpenAI, has (until recently) been considered too dangerous to release publicly in its ‘trained’ version because of the potential for abuse in writing disinformation.  In tests by the BBC with AI experts, including a Sheffield University professor, however, it proved relatively ineffective at generating meaningful text from input headlines, although it did appear able to reflect news bias in its writing.

What Does This Mean For Your Business?

The influence exerted via social media in the last US presidential election campaign and the UK referendum (with the help of Cambridge Analytica) brought the whole subject of disinformation into sharp focus, and the Chinese state media’s response to the Hong Kong demonstrations has given more fuel to the narrative from the current US administration (the Huawei accusations and the trade war) that China should be considered a threat.  Google’s apparent lack of public criticism of Chinese state media disinformation efforts contrasts with the response of social media giants Facebook and Twitter, and this, coupled with reports of the company trying to develop a censored search engine for China to allow it to get back into the market over there, means that Google is likely to be scrutinised and criticised by US state voices.

It is difficult for many users of social media channels to spot bias and disinformation, and although Google may have tried to do the right thing by labelling videos, its failure to take account of the media structure in China has meant more criticism for Google.  As an advertising platform for businesses, Google needs to take care of its public image, and this kind of bad publicity is unlikely to help.

Facebook Launches Martin Lewis Anti-Scam Service

Facebook has launched a new anti-scam service using the £3m that it agreed to donate to the development of the programme in return for TV consumer money champion Martin Lewis dropping his legal action over scam ads.

What Legal Action?

Back in September 2018, MoneySavingExpert’s (MSE) founder Martin Lewis (OBE) took Facebook to the UK High Court, suing the tech giant for defamation over a series of fake adverts bearing his name.  Many of the approximately 1,000 fake ads bearing Mr Lewis’ name that appeared on the platform over the space of a year could, and in some cases did, direct consumers to scammer sites containing false information, which Mr Lewis argued may have caused serious damage to his reputation and caused some people to lose money.

In January 2019, Mr Lewis came to an agreement with Facebook whereby he would drop his lawsuit if Facebook donated £3 million to Citizens Advice to create a new UK Scams Action project (launched in May 2019) and agreed to launch a UK-focused scam ad reporting tool supported by a dedicated complaints-handling team.

How The New Anti-Scam Service Works

Facebook users in the UK will be able to access the service by clicking on the three dots (top right) of any advert to see “more options” and “report ad”.  The list of reasons for reporting an ad now includes a “misleading or scam ad” option.

Also, the Citizens Advice charity has set up a phone line to help give advice to victims of online and offline scams.  The “Scams Action Service” advisers can be called on 0300 330 3003 Monday to Friday, and the advisers also offer help via live online chat.  In serious cases, face-to-face consultations can also be offered.

What To Do

If you’ve been scammed, the Citizens Advice charity recommends that you tell your bank immediately, reset your passwords, make sure that your anti-virus software has been updated, report the incident to Action Fraud, and contact the new Citizens Advice Scams Action service: https://www.citizensadvice.org.uk/scamsaction/

What Does This Mean For Your Business?

It is a shame that it has taken the threat of a lawsuit over damaging scam ads spread through its own platform to galvanize Facebook into putting some of its profits into setting up a service that can tackle the huge and growing problem of online fraud.  Facebook and other ad platforms may also need to take more proactive steps with their advertising systems to make it more difficult for scammers to set up adverts in the first place.

Having a Scams Action service now in place using a trusted UK charity will also mean that awareness can be raised, and information given about known scams, and victims will have a place to go where they get clear advice and help.

US Visa Applicants Now Asked For Social Media Details and More

New rules from the US State Department will mean that US visa applicants will have to submit social media names and five years’ worth of email addresses and phone numbers.

Extended To All

Under the new rules, first proposed by the Trump administration back in February 2017, all applicants travelling to the US to work or to study will now be required to give those details to the immigration authorities; previously, the only visa applicants who needed such vetting were those from parts of the world known to be controlled by terrorist groups. The only exemptions will be for some diplomatic and official visa applicants.

Delivering on Election Immigration Message

The new stringent rules follow on from the proposed crackdown on immigration that was an important part of now US President Donald Trump’s message during the 2016 election campaign.

Back in July 2016, the Federal Register of the U.S. government published a proposed change to travel and entry forms which indicated that the studying of social media accounts of those travelling to the U.S. would be added to the vetting process for entry to the country. It was suggested that the proposed change would apply to the I-94 travel form, and to the Electronic System for Travel Authorization (ESTA) visa. The reason given at the time was that the “social identifiers” would be: “used for vetting purposes, as well as applicant contact information. Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional toolset which analysts and investigators may use to better analyse and investigate the case.”

There had already been reports that some U.S. border officials had actually been asking travellers to voluntarily surrender social media information since December 2016.

2017

In February 2017, the Trump administration indicated that it was about to introduce an immigration policy that would require foreign travellers to the U.S. to divulge their social media profiles, contacts and browsing history and that visitors could be denied entry if they refused to comply. At that time, the administration had already barred citizens of seven Muslim-majority countries from entering the US.

Criticism

Critics of the idea that social media details should be obtained from entrants to the US include civil rights group the American Civil Liberties Union, which pointed out that there is no evidence it would be effective and that it could lead to self-censorship online.  Also, back in 2017, Jim Killock, executive director of the Open Rights Group, was quoted in online media as describing the proposals as “excessive and insulting”.

What Does This Mean For Your Business?

Although they may sound a little extreme, these rules have now become a reality and need to be considered by those needing a US visa.  Given the opposition to President Trump and some of his thoughts and policies, and the resulting large volume of Trump-related content that is shared and reacted to by many people, these new rules could be a real source of concern for those needing to work or to study in the US.  It is really unknown what content and what social media activity could cause problems at immigration for travellers, or what the full consequences could be.

People may also be very uncomfortable being asked to give such personal and private details as social media names and a massive five years’ worth of email addresses and phone numbers, and about how those personal details will be stored and safeguarded (and for how long), and by whom they will be scrutinised and even shared.  The measure may, along with other reported policies and announcements from the Trump administration, even discourage some people from travelling to, let alone working or studying in, the US at this time. This could have a knock-on negative effect on the economy of the US, and for those companies wanting to get into the US marketplace with products or services.

GCHQ Eavesdropping Proposal Soundly Rejected

A group of 47 technology companies, rights groups and security policy experts have released an open letter stating their objections to the idea of eavesdropping on encrypted messages on behalf of GCHQ.

“Ghost” User

The objections are being made to the (as yet) hypothetical idea floated by the UK National Cyber Security Centre’s technical director Ian Levy and GCHQ’s chief codebreaker Crispin Robinson of allowing a “ghost” user / third party, i.e. a person at GCHQ, to see the text of an encrypted conversation (call, chat, or group chat) without notifying the participants.

According to Levy and Robinson, they would only seek exceptional access to data where there was a legitimate need, where that kind of access was the least intrusive way of proceeding, and where there was also appropriate legal authorisation.
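To see why the “ghost” idea worries cryptographers, it helps to model it as code. The toy sketch below (all names and structures invented for illustration; real protocols such as Signal’s are far more complex) shows the core issue: in end-to-end encryption the message key is wrapped for every recipient, so silently adding a hidden recipient means the participant list shown to users no longer matches who can actually decrypt.

```python
# Toy model of the "ghost user" concern: the message key is wrapped for
# each recipient's device. A silently added recipient means the visible
# participant list and the set of actual key holders diverge, which is
# exactly the authentication property the open letter says would break.

def wrap_key_for(participants, message_key):
    """Simulate wrapping the message key for each participant's device."""
    return {user: f"{message_key}-wrapped-for-{user}" for user in participants}

def send_message(visible_participants, hidden_participants=()):
    actual_recipients = list(visible_participants) + list(hidden_participants)
    wrapped_keys = wrap_key_for(actual_recipients, "k123")
    return visible_participants, wrapped_keys

shown, wrapped = send_message(["alice", "bob"], hidden_participants=["ghost"])

# An honest client could verify that key recipients match the visible
# participant list; the ghost makes that check fail.
assert set(wrapped) != set(shown)
```

This is why the signatories argue the proposal undermines authentication rather than encryption itself: the maths still works, but users can no longer trust the list of who is in the conversation.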

Challenge

The challenge for government security agencies in recent times has been society’s move away from conventional telecommunications channels, which could lawfully and relatively easily be ‘tapped’, to digital and encrypted communications channels e.g. WhatsApp, which are essentially invisible to government eyes.  For example, back in September last year, this led to the ‘Five Eyes’ governments threatening legislative or other measures to be allowed access to end-to-end encrypted apps such as WhatsApp.  In the UK back in 2017, then Home Secretary Amber Rudd had also been pushing for ‘back doors’ to be built into encrypted services and had attracted criticism from tech companies that, as well as compromising privacy, this would open secure encrypted services to the threat of hacks.

Investigatory Powers Act

The Investigatory Powers Act, which became law in November 2016 in the UK, included the option of ‘hacking’ warrants by the government, but the full force of the powers of the law was curtailed somewhat by legal challenges.  For example, back in December 2018, human rights group Liberty won the right to a judicial review into part 4 of the Investigatory Powers Act.  This is the part that was supposed to give many government agencies powers to collect electronic communications and records of internet use, in bulk, without reason for suspicion.

The Open Letter

The open letter to GCHQ in Cheltenham and Adrian Fulford, the UK’s investigatory powers commissioner, was signed by tech companies including Google, Apple, WhatsApp and Microsoft, 23 civil society organisations, including Big Brother Watch and Human Rights Watch, and 17 security and policy experts.  The letter called for the abandonment of the “ghost” proposal on the grounds that it could threaten cyber security and fundamental human rights, including privacy and free expression.  The coalition of signatories also urged GCHQ to avoid alternative approaches that would likewise threaten digital security and human rights, and said that most Web users “rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are and only those people”. As such, the letter pointed out that the trust relationship and the authentication process would be undermined by the knowledge that a government “ghost” could be allowed to sit in and scrutinise what may be perfectly innocent conversations.

What Does This Mean For Your Business?

With digital communications in the hands of private companies, and often encrypted, governments realise that (legal) surveillance has been made increasingly difficult for them.  This has resulted in legislation (the Investigatory Powers Act) with built-in elements to force tech companies to co-operate in allowing government access to private conversations and user data. This has, however, been met with frustration in the form of legal challenges, and other attempts by the UK government to stop end-to-end encryption have, so far, also been met with resistance, criticism, and counter-arguments by tech companies and rights groups. This latest “ghost” proposal represents the government’s next step in an ongoing dialogue around the same issue. The tech companies would clearly like to avoid more legislation and other measures (which look increasingly likely) that would undermine the trust between them and their customers, which is why the signatories have stated that they would welcome a continuing dialogue on the issues.  The government is clearly going to persist in its efforts to gain some kind of surveillance access to tech companies’ communications services, mostly for national security (counter-terrorism) reasons, but it is also keen to be seen to do so in a way that is not overtly ‘big brother’, and in a way that allows it to navigate successfully through existing rights legislation.

GDPR Says HMRC Must Delete Five Million Voice Records

The Information Commissioner’s Office (ICO) has concluded that HMRC breached GDPR in the way that it collected the biometric voice records of users and that it must now delete five million biometric voice files.

What Voice Files?

Back in January 2017, HMRC introduced a system whereby customers calling the tax credits and Self-Assessment helpline could enrol for voice identification (Voice ID) as a means of speeding up the security steps. The system uses 100 different characteristics to recognise the voice of an individual and can create a voiceprint that is unique to that individual.

When customers call HMRC for the first time, they are asked to repeat the vocal passphrase “my voice is my password” up to five times to register before speaking to a human adviser.  The recorded passphrase is stored in an HMRC database and can be used as a means of verification/authentication in future calls.
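The enrol-then-verify pattern described above can be sketched in a few lines. This is a minimal illustration of the general principle of voiceprint verification (a stored feature vector compared against a fresh sample with a similarity threshold); the feature values and threshold below are invented, and HMRC’s actual system is proprietary.

```python
import math

# Minimal sketch of voiceprint verification: enrolment stores a feature
# vector (the "voiceprint"); later calls are compared against it with a
# similarity threshold. All numbers here are invented for illustration.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(enrolled_print, caller_features, threshold=0.95):
    """Accept the caller if their features closely match the stored print."""
    return cosine_similarity(enrolled_print, caller_features) >= threshold

enrolled = [0.12, 0.80, 0.33, 0.51]       # stored at enrolment
same_caller = [0.13, 0.79, 0.34, 0.50]    # near-identical features
impostor = [0.90, 0.10, 0.70, 0.05]       # very different voice

print(verify(enrolled, same_caller))      # a matching voice passes
print(verify(enrolled, impostor))         # a dissimilar voice fails
```

The GDPR point in this story is that the enrolled vector is itself biometric personal data, regardless of how the matching is implemented.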

It was reported that in the 18 months following the introduction of the system, HMRC acquired five million people’s voiceprints this way.

What’s The Problem?

Privacy campaigners questioned the lawfulness of the system and in June 2018, privacy campaigning group ‘Big Brother Watch’ reported that its own investigation had revealed that HMRC had (allegedly) taken the five million taxpayers’ biometric voiceprints without their consent.

Big Brother Watch alleged that the automated system offered callers no choice but to do as instructed and create a biometric voice ID for a Government database.  The only way to avoid creating the voice ID on calling, as identified by Big Brother Watch, was to say “no” three times to the automated questions, whereupon the system still resolved to offer a voice ID next time.

Big Brother Watch highlighted the fact that GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a person, unless there is a lawful basis under Article 6, and that because voiceprints are sensitive data but are not strictly necessary for dealing with tax issues, HMRC should request the explicit consent of each taxpayer to enrol them in the scheme (Article 9 of GDPR).

This led to Big Brother Watch registering a formal complaint with the ICO.

Decision

The ICO has now concluded that HMRC’s voice system was not adhering to the data protection rules and effectively pushed people into the system without explicit consent.

The decision from the ICO is that HMRC must now delete the five million records taken prior to October 2018, the date when the system was changed to make it GDPR-compliant.  HMRC has until 5th June to delete the five million voice records, and the state’s tax authority says it is confident it can do so long before that deadline.

What Does This Mean For Your Business?

Big Brother Watch believes this to be the biggest ever deletion of biometric IDs from a state database, and privacy campaigners have hailed the ICO’s decision as setting an important precedent that restores data rights for millions of ordinary people.

Many businesses and organisations are now switching/planning to switch to using biometric identification/verification systems instead of password-based systems, and this story is an important reminder that these are subject to GDPR. For example, images and unique voiceprint IDs are personal data that require explicit consent to be given, and people should have the right to opt out as well as opt in.

Slack Builds Email Bridge

Chat app and collaborative working tool Slack appears to have given up the fight to eliminate email, introducing new tools that enable Slack collaboration features inside Gmail and Outlook and thereby building a more inclusive ‘email bridge’.

What Is Slack?

Slack, launched ‘way back’ in 2013, is a cloud-based set of proprietary team collaboration tools and services. It provides mobile apps for iOS, Android and Windows Phone, and is available for the Apple Watch, enabling users to send direct messages, see mentions, and send replies.

Slack teams enable users (communities, groups, or teams) to join through a URL or invitation sent by a team admin or owner. It was intended as an organisational communication tool, but it has gradually been morphing into a community platform, i.e. it is a business technology that has crossed over into personal use.

Email Bridge

After a five-year battle against email, Slack is building an “email bridge” into its platform that will allow those who only have email to communicate with Slack users.

Aim

The change is aimed at getting on board those members of an organisation who have signed up to the Slack app but are not willing to switch entirely from email to Slack. Accepting that not everyone wants to give up email altogether, Slack believes that something at least needs to be built into the app to allow companies and organisations to leverage the strengths of all their workers, and to connect those organisation and team members who are separated by the Slack-versus-email divide to the important conversations within Slack. It also means that companies and organisations can make the transition in working practices at their own pace (or not), i.e. migrate (or not migrate) entirely to Slack.

How?

The change supports Slack’s current Outlook and Gmail functionality, which enables users to forward emails into a channel where members can view and discuss the content and plan responses from inside Slack. It also allows anything set within the Outlook or Gmail Calendar to be automatically synced to Slack.

The new changes will allow team members who have email but have not committed to Slack to receive an email notification when they’re mentioned by their username in channels or are sent a direct message.
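The mention-to-email mechanism described above is straightforward to picture in code. The sketch below is a hypothetical illustration of that logic (all names, data structures and the member registry are invented; Slack’s real implementation is not public): scan a message for @mentions and queue an email for any mentioned member who has not activated Slack.

```python
import re

# Hypothetical sketch of the "email bridge" notification logic: members
# who haven't activated Slack get an email when they are @mentioned.
# The member registry and message format are invented for illustration.

members = {
    "ana": {"email": "ana@example.com", "active_on_slack": True},
    "ben": {"email": "ben@example.com", "active_on_slack": False},
}

def notifications_for(message):
    """Return (email, message) pairs for mentioned email-only members."""
    mentioned = re.findall(r"@(\w+)", message)
    return [
        (members[name]["email"], message)
        for name in mentioned
        if name in members and not members[name]["active_on_slack"]
    ]

queue = notifications_for("@ben can you review the draft? cc @ana")
# Only ben, who hasn't switched to Slack, gets an email notification;
# ana sees the mention inside Slack as normal.
```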

What Does This Mean For Your Business?

Slack appears to have listened to Slack users who’d like a way to keep connected with their email-only colleagues (or those still waiting to receive Slack credentials), and the email bridge is likely to meet with their approval in this respect.  For Slack, it also presents the opportunity to gently nudge those who are more resistant to change into eventually making the move to Slack.

This change is one of several announced by Slack, such as the ‘Actions’ feature last year, and the two new toolkits (announced in February this year) that will allow non-coders to build apps within Slack.

Slack knows that there are open source and other alternatives in the market, and the addition of more features and more alliances will help Slack to provide more valuable tools to users, thereby helping it to gain and retain loyalty and compete in a rapidly evolving market.

‘ManyChat’ Raises $18 million Funding For Facebook Messenger Bot

California-based startup ‘ManyChat’ has raised $18 million Series A funding for its Facebook Messenger marketing bot.

ManyChat

ManyChat Inc. is now the leading messenger marketing product, reportedly powering over 100,000 bots on Facebook Messenger.

ManyChat lets you use a visual drag-and-drop interface to create a free Facebook Messenger bot for marketing, sales and support.  The bot is essentially a Facebook Page that sends out messages and responds to users automatically.

The ManyChat bot allows you to welcome new users, send them content, schedule posts, set up keyword auto-responses (text, pictures, menus), automatically broadcast your RSS feed and more.
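The keyword auto-response feature mentioned above can be thought of as a mapping from keywords to canned replies, with a fallback when nothing matches. The sketch below illustrates only the concept; it is not ManyChat’s actual API or the Messenger platform, and the keywords and replies are invented.

```python
import re

# Conceptual sketch of a keyword auto-responder: incoming messages are
# matched against configured keywords and the mapped reply is returned.
# All keywords and reply text are invented for illustration.

auto_responses = {
    "price": "Our plans start at £9/month - see our pricing page.",
    "hours": "We're open Monday to Friday, 9am-5pm.",
    "human": "Connecting you to live chat with a team member...",
}

def respond(message, fallback="Thanks for your message! We'll be in touch."):
    """Return the reply for the first configured keyword in the message."""
    words = re.findall(r"[a-z]+", message.lower())
    for keyword, reply in auto_responses.items():
        if keyword in words:
            return reply
    return fallback

print(respond("What are your hours?"))   # matches the "hours" keyword
print(respond("Hello there"))            # no keyword: fallback reply
```

The Live Chat hand-off described next is essentially the “human” branch of this idea: a keyword (or an unanswerable message) triggers a notification to a real person instead of an automated reply.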

The bot, which is a blend of automation and personal outreach, also incorporates Live Chat that notifies you when a conversation is needed with a subscriber.

Facebook Messenger

ManyChat says it has focused on Facebook Messenger because it is the #1 app in the US and Canada, with over 1 billion active users, and the most engaging channel, with average open rates of 80% and click-through rates (CTRs) 4 to 10 times higher than email.

The Funding

The $18 million funding for ManyChat was led by Bessemer Venture Partners, with participation from Flint Capital, and means that Bessemer’s Ethan Kurzweil will be joining the board of directors, and Bessemer’s Alex Ferrara becomes a board observer.

1+ Million Accounts Created

ManyChat reports that more than 1 million accounts have been created on the platform already by customers in many different industry sectors.  The platform has also reported that these 1+ million customers have managed to enlist 350 million Messenger subscribers and that there are now a staggering 7 billion messages sent on the platform each month.

What Does This Mean For Your Business?

Bots provide a way for businesses to reduce costs, make better use of resources and communicate with customers and enquirers 24/7.

As ManyChat points out, it’s becoming increasingly difficult for businesses to effectively reach their audience because people open less email and social media is ‘noisy’ to the point where messages become lost in the crowd.  A key advantage of ManyChat, therefore, is that it uses Facebook Messenger as a private channel of communication with each user: it’s instant and interactive, no message is ever lost, and Messenger has huge user numbers. Other advantages that businesses will appreciate are that the bot is free and easy to set up (no coding skills are required), and that it offers the best of both worlds: automated communications, plus the option to jump in with Live Chat when it is needed.

This kind of bot could enable businesses and organisations to make their marketing more effective while maximising efficiency.

ManyChat is also good news for Facebook which owns Messenger as it appears to be boosting user numbers by finding an improved, business-focused use for the app.

For ManyChat, its Facebook Messenger bot appears to be only the beginning (hence the funding), with investors looking at platforms like Instagram, WhatsApp, RCS, and more to further expand bot marketing services in the future.

New UK ‘Duty of Care’ Rules To Apply To Social Media Companies

The new ‘Online Harms’ white paper marks a world first as the UK government plans to introduce regulation to hold social media and other tech companies to account for the nature of the content they display, backed by the policing power of an independent regulator and the threat of fines or a ban.

Duty of Care

The proposed new legal framework from the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office aims to give social media and tech companies a duty of care to protect users from threats, harm, and other damaging content relating to cyberbullying, terrorism, disinformation, child sexual exploitation and encouragement of behaviours that could be damaging.

The need for such regulation has been recognised for some time and was brought into sharper focus recently by the death in the UK of 14-year-old Molly Russell, who was reported to have viewed online material on depression and suicide, and by the live streaming in March this year, on one of Facebook’s platforms, of the mass shooting at a mosque in New Zealand, which led Australia to propose fines for social media and web-hosting companies and imprisonment of executives if violent content is not removed.

The Proposed Measures

The proposed measures by the UK government in its white paper include:

  • Imposing a new statutory “duty of care” that will hold companies accountable for the safety of their users, as well as a commitment to tackle the harm caused by their services.
  • Tougher requirements on tech companies to stop the dissemination of child abuse and terrorist content online.
  • The appointment of an independent regulator with the power to force social media platforms and tech companies to publish transparency reports on the amount of harmful content on their platforms and what they are doing to address the issue.
  • Forcing companies to respond to users’ complaints, and act quickly to address them.
  • The introduction of codes of practice by the regulator which will include requirements to minimise the spread of misleading and harmful disinformation using dedicated fact checkers (at election time).
  • The introduction of a “safety by design” framework that could help companies to incorporate the necessary online safety features in their new apps and platforms at the development stage.

GDPR-Style Fines (Or A Ban)

Culture, Media and Sport Secretary Jeremy Wright has said that tech companies that don’t do everything reasonably practicable to stop harmful content on their platforms could face fines comparable with those imposed for serious GDPR breaches e.g. 4% of a company’s turnover.
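For scale, the GDPR benchmark being referenced works as follows: the maximum fine for the most serious breaches is the higher of EUR 20 million or 4% of worldwide annual turnover. The figures below are illustrative only.

```python
# The GDPR fine benchmark mentioned above: for the most serious breaches
# the cap is the higher of EUR 20m or 4% of worldwide annual turnover.
# Turnover figures below are invented for illustration.

def max_gdpr_fine(annual_turnover_eur):
    """Return the maximum possible fine under the serious-breach tier."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(10_000_000_000))  # EUR 10bn turnover: the 4% cap (400m)
print(max_gdpr_fine(100_000_000))     # EUR 100m turnover: the 20m floor applies
```

For the largest platforms, a comparable "duty of care" fine would therefore run into hundreds of millions, which is what gives the proposal its teeth.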

It has also been suggested that under the new rules, to be policed by an independent regulator, bosses could be held personally accountable for not stopping harmful content on their platforms. It has also been suggested that in the most serious cases, companies could be banned from operating in Britain if they do not do everything reasonably practicable to stop harmful content being spread via their platforms.

Balance

Although there is a general recognition that regulation to protect people, particularly young people, from harmful or damaging content is a good thing, a proportionate and predictable balance needs to be struck between protecting society and supporting innovation and free speech.

Facebook is reported to have said that it is looking forward to working with the government to ensure the new regulations are effective and apply a standard approach across platforms.

Criticism

The government’s proposals will now have a 12-week consultation, but the main criticism to date has been that parts of the government’s approach in the proposals are too vague and that regulations alone can’t solve all the problems.

What Does This Mean For Your Business?

Clearly, the UK government believes that self-regulation among social media and tech companies does not work.  The tech industry has generally given a positive response to the government’s proposals and to an approach that is risk-based and proportionate rather than one size fits all.  The hope is that the vaguer elements of the proposals can be clarified and improved over the next 3 months of consultation.

To ensure the maximum protection for UK citizens, any regulations should be complemented by ongoing education for children, young people and adults to make sure that they have the skills and awareness to navigate the digital world safely and securely.

Facebook Rolls Out ‘Why Am I Seeing This Post?’ Tool

In an attempt to be more transparent and give more control to its users, Facebook is about to roll out a new “Why am I seeing this post?” tool, which will give users insights into their newsfeed algorithm.

Algorithm Explained

The new tool essentially goes some way to explaining how the algorithm that decides what appears where in a user’s Facebook newsfeed works.  The tool will give a view of the inputs used by the social network to rank stories, photos and video, and in doing so will show users the actions that they may want to take if they want to change what they see in their newsfeed.

How?

The new tool, which was developed using research groups in New York, Denver, Paris and Berlin, will show users the data that connects them to a certain type of post, e.g. they may be friends with the poster, they’ve liked a person’s posts more than others, they’ve frequently commented on that type of post before, or the post has proved popular with users who share their interests.

Although the tool will enable users to see how the key aspects of the algorithm work, in the interests of convenience, simplicity, speed and security, users will not be shown all the many thousands of inputs that influence the decision.
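The kind of ranking the tool explains can be pictured as a weighted score over signals like those listed above. The toy sketch below is purely illustrative: the signals and weights are invented, and Facebook’s real ranking uses many thousands of inputs, as the article notes.

```python
# Toy version of newsfeed ranking: each post gets a score from weighted
# signals (friendship, past interactions, popularity with similar users)
# and the feed is sorted by score. Signals and weights are invented.

WEIGHTS = {
    "is_friend": 3.0,
    "liked_author_before": 2.0,
    "commented_on_type": 1.5,
    "popular_with_similar_users": 1.0,
}

def score(post):
    """Sum the weights of every signal that is switched on for the post."""
    return sum(WEIGHTS[signal] for signal, on in post["signals"].items() if on)

posts = [
    {"id": "brand-video",
     "signals": {"is_friend": False, "liked_author_before": False,
                 "commented_on_type": True, "popular_with_similar_users": True}},
    {"id": "holiday-photo",
     "signals": {"is_friend": True, "liked_author_before": True,
                 "commented_on_type": False, "popular_with_similar_users": False}},
]

feed = sorted(posts, key=score, reverse=True)
# The friend's post (score 5.0) outranks the brand video (score 2.5),
# mirroring the friends-and-family prioritisation Facebook introduced.
```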

Additional Details

Facebook is also updating its existing “Why Am I Seeing this Ad?” feature with additional details such as explaining how ads work that target customers using email lists.

Newsfeed Strategy Shift

Early last year, Facebook changed its newsfeed strategy so that posts from family and friends were given greater priority, and non-advertising content from publishers and brands was downgraded.

Bad Times

Facebook’s reputation has reached several low points in recent times in matters relating to the data security and privacy of its users, and in how the company has responded to calls for it to clean up content such as hate speech, certain types of video, and political messaging from foreign states.

Most famously, Facebook was fined £500,000 for data breaches relating to the harvesting of the personal details of 87 million Facebook users without their explicit consent, and the sharing of that personal data with London-based political consulting firm Cambridge Analytica, which is alleged to have used that data to target political messages and advertising in the last US presidential election campaign. Also, harvested Facebook user data was shared with Aggregate IQ, a data company which worked with the ‘Vote Leave’ campaign in the run-up to the Brexit referendum.

In September last year, Facebook engineers discovered that hackers had used a vulnerability in Facebook’s “View As” feature to compromise an estimated 50 million user accounts.

Additionally, last February the governor of New York, Andrew Cuomo, ordered an investigation into reports that Facebook Inc may have been using apps on users’ smartphones to collect personal information about them.

What Does This Mean For Your Business?

After a series of high profile privacy scandals, Facebook has been making efforts to regain the trust of its users, not just out of a sense of responsibility, but to protect its brand and pave the way for the roll-out of a single messaging service combining Facebook Messenger, WhatsApp and Instagram that could make Facebook even more central to users’ communications. Facebook bought Instagram partly as a way to retain users who were moving away from Facebook, but many of those users moved on to WhatsApp instead.  This new service will be a way for Facebook to join all these pieces together, make the best use of what it has, and maximise the value and appeal to users.

The new “Why am I seeing this post?” tool does sound as though it will cover both bases of giving users more control and improving transparency, and it is one of many things that Facebook has been trying to do (and to be seen to do) in order to make the headlines for the right reasons.  Other measures have included announcing the introduction of new rules for political ad transparency in the UK, working with London-based fact-checking charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation, and even developing its own secure blockchain-based cryptocurrency that will enable its users to have a PayPal-like experience when purchasing advertised products, as well as providing authentication and an audit trail.

Facebook boss Mark Zuckerberg has also recently written an opinion piece in the Washington Post offering proposals to address the issues of harmful content, election protection, privacy and data protection, and data portability in his own platform and the wider social media and Internet environment.

New York’s Governor Orders Investigation Into Facebook Over App Concerns

The Governor of New York, Andrew Cuomo, has ordered an investigation into reports that Facebook Inc may be using apps on users’ smartphones to collect personal information about them.

Alerted By Wall Street Journal

The Wall Street Journal prompted the Governor to order New York’s Department of State and Department of Financial Services (DFS) to investigate Facebook when the paper reported that Facebook may have more access than it should to data from certain apps, sometimes even when a person isn’t signed in to Facebook.

Health Data

It has been reported that the kind of data that some apps allegedly share with Facebook includes health-related information such as weight, blood pressure and ovulation status.

The alleged sharing of this kind of sensitive and personal data, whether or not a person is logged in to Facebook, prompted Governor Cuomo to call such practice an “outrageous abuse of privacy.”

Defence

Facebook’s defence against these allegations, which appear to have prompted a short-lived but noticeable fall in Facebook’s share value, was to point out that the WSJ’s report focused on how other apps use people’s data to create ads.

Facebook added that it requires other app developers to be clear with their users about the information they are sharing with Facebook and that it prohibits app developers from sending sensitive data to Facebook.

The social media giant also stressed that it tries to detect and remove any data that should not be shared with it.

Lawsuits Pending

This appears to be just one of several legal fronts where Facebook will need to defend itself.  For example, Facebook is still facing a U.S. Federal Trade Commission investigation into the alleged inappropriate sharing of information belonging to 87 million Facebook users with now-defunct political consulting firm Cambridge Analytica.

Apple Also Accused By Governor Over FaceTime Bug

New York’s Governor Cuomo and New York Attorney General Letitia James have also announced an investigation into Apple Inc’s alleged failure to warn customers about a bug in its FaceTime app that could inadvertently allow eavesdropping, as iPhone users were able to listen to the conversations of others who had not yet accepted a video call.

DFS Involvement

The Department of Financial Services (DFS), which is one of the two agencies that have been ordered to investigate this latest Facebook data-sharing matter, has only recently begun to get more involved in digital matters, particularly by producing the country’s first cybersecurity rules governing state-regulated financial institutions such as banks, insurers and credit monitors.

Some commentators have expressed concern, however, about the DFS saying last month that life insurers could use social media posts in underwriting their policies, on the condition that they did not discriminate based on race, colour, national origin, sexual orientation or other protected classes.

What Does This Mean For Your Business?

You could be forgiven for thinking that after the scandal over Facebook’s unauthorised sharing of the personal details of 87 million users with Cambridge Analytica, Facebook may have learned its lesson about the sharing of personal data and may have tried harder to uncover and plug any loopholes that could allow this to happen. The tech giant still has several lawsuits and regulatory inquiries over privacy issues pending, and this latest revelation about the sharing of very personal health information certainly won’t help its cause. Clearly, as the involvement of the DFS shows, there needs to be more oversight of (and investigation into) apps that share their data with Facebook, and possibly more legislation and regulation of the smart app / smart tech ecosystem.

There are ways to stop Facebook from sharing your data with other apps via your phone settings and by disabling Facebook’s data sharing platform.  You can find instructions here: https://www.techbout.com/stop-facebook-from-sharing-your-personal-data-with-other-apps-37307/