MyCustomer.com

The dos and don'ts of moderating your firm's social network pages

by Tamara Littleton
5th Aug 2010

Tamara Littleton looks at the legal and etiquette risks and issues surrounding the moderation of your organisation's social network pages.

In July 2010, Pepsi launched its 'Do good for the Gulf' campaign, awarding $1.3m in grants to support clean-up initiatives following the BP oil disaster. The initiative is part of Pepsi's wider decision to ditch its Super Bowl advertising and invest more in social media, the result of which is the $20m Pepsi Refresh Project.
Like many brands, Pepsi has invested in a combination of its own community and a Facebook community. Social media campaigns for consumer brands increasingly use a combination of owned and third-party media, and one element of that third-party media is likely to be a social network: profiles, pages, groups, competitions and other digital content created within Facebook or MySpace, or on a YouTube channel.
Of course, the 'big four' social networks – MySpace, Facebook, YouTube and Bebo (sold by AOL in June to Criterion Capital Partners) – were not originally designed as marketing platforms, and so the rules that govern them are not always clear to brands. To be fair, the rules aren't always clear to anyone.
One of the biggest misconceptions among brands is that everything posted by users to their social network pages is moderated by the networks, and that they (and their users) are therefore protected from illegal, libellous, abusive or otherwise inappropriate content. Not so. Social networks do not accept legal liability for everything on their sites – it simply wouldn't be practical to do so, given the volume of content uploaded every day.
This stance was upheld in the US in October 2009, when a defamation suit brought by a teenager against Facebook was dismissed by a New York County judge. The media website Media Post reports how fellow students of the teenager had posted abusive remarks about her on Facebook, but that Facebook was considered 'categorically immune' from defamation lawsuits like this. Facebook's own terms of use now clearly state that content posted is the property of the person posting, not of the site, and Facebook does not accept responsibility for misuse of its 'utility'.
What does this mean for brands?
Under US law, Facebook is not responsible for user-generated defamatory content about brands on users' own pages, nor for illegal or abusive content posted on branded pages. So if you're creating a branded page on Facebook and you don't want illegal, spammy or abusive content associated with it, you'll need to sort it out yourself.
Things get a little murkier in Europe, however. In February 2010, three senior Google executives were convicted of privacy violations for hosting an appalling video, posted in September 2006, that showed an autistic teenager being bullied by a group of students in Turin. The clip was taken down that November, after complaints were made. Although the original defamation case failed, a quirk of Italian law meant that a case could be made on the grounds of privacy violation. Google is appealing the case, and the grounds for the ruling are, at the time of writing, still unclear.
The argument about responsibility seems to rest on whether social networks are content-hosting sites, in which case they have no responsibility for content (in the same way that BT wouldn't be responsible for a nuisance phone call), or media sites that produce content, in which case they do. This is an important difference for brands: the implication is that you own anything produced on your own media-based community (but not necessarily on a social network). But there is no absolute clarity in international laws that pre-date Web 2.0 and the concept of social networks, nor in local laws (which, as the Italian case shows, can be applied to international networks that market themselves in those countries). This is a huge topic – well beyond the scope of this article – but if you're interested, you can find out more in our whitepaper on social networks.
But leaving the law to one side, companies have a moral duty to their users on social networks, and a common-sense need to protect the brand from association with illegal, defamatory or abusive content. Even the social networks are going beyond their legal requirements and starting to tackle the moral duty to keep users safe (Facebook, for example, finally agreed in July to work with CEOP to provide a 'ClickCEOP' button for children between 13 and 18 to report suspected grooming or inappropriate sexual behaviour). But the networks' focus is on reporting inappropriate behaviour or content, not preventing it from happening in the first place.
What are the risks of not moderating content?
This raises the question for brands: what are the risks of allowing unmoderated content onto your social network pages? The most important, of course, is the safety of users, particularly for brands marketing to children. The importance of providing a safe environment for children goes without saying, and brands have a duty to ensure that children are not exposed to abuse, bullying or even illegal content posted by unscrupulous users of their social network pages. There is another responsibility, too: protecting some children from their own naivety, and from sharing personally identifiable information that could be used to target them. That is particularly true if you choose to face the elephant in the room: the fact that many children under the minimum age of 13 do use social networks. (In May 2010, a survey of 1,000 girls aged eight to 15 found that Facebook was one of the most important parts of their lives, clearly indicating that the site was being used by underage children.)
There is also a reputational risk. Like it or not, content posted on a branded page will be associated with that brand, and no responsible company wants to be associated with bullying or inappropriate content on its social network pages. Many users will assume that brands check the content that goes onto their pages, so if, for example, racist comments were to appear on a YouTube channel, users might assume the brand endorses them. (Coca-Cola learnt this lesson the hard way by unknowingly including a reference to a porn film in a Dr Pepper campaign targeting teens on Facebook.) On a practical level, users won't come back to a site that is rendered unusable by people posting comment spam or irrelevant messages.
Should brands moderate content on a social network?
How well do you know your audience? Many brands think they know their audience well enough to trust them – and in some cases, this might be true (we moderated the 'Teens Speech' campaign for Barnardo's, where hardly any moderation was needed; it was really heartening to see how seriously the children involved took the project). But the nature of social networks means that they are open to public content (and therefore public abuse), not just to the trusted fans of a brand.
Can you stop people saying negative things about you?
Moderation is not censorship. We believe absolutely that if you are using social media to engage with consumers, you should listen to those consumers – whether what they have to say is good or bad. Social media is about listening and engaging, and negative feedback should not be censored just because the brand doesn't want to hear negative things about itself. Apart from anything else, consumers do not respond well to censorship, and you could end up creating an even bigger problem. Nestlé found this out to its cost when it removed negative posts from its Facebook page during the Greenpeace campaign against its use of palm oil. While legally defensible, the result was to fan the flames of an already damaging situation.
What should you look for when moderating content?
The obvious issues to avoid are bullying, abuse and illegal content. But there are other, possibly less obvious things that brands should look out for. Some examples that we've come across, and how to tackle them, are listed below; a simple sketch of how these rules might fit together follows the list.
  • Users whose names or profile pictures include abusive or obscene words or imagery. Starbucks faced this problem when a user included a swastika in their profile picture. The options are to block the user outright, or to contact them and ask them to change their avatar. If they refuse, or change it back, block them (sometimes it may be necessary to involve the social network to do this, but it should always be possible).
  • Obviously off-topic posts. If a user posts something that is clearly off-topic, treat it as spam (particularly if it includes a link, which could take the user to a website that is infected with malware or contains offensive images, for example). Spam will disengage users and make your site less relevant and interesting to them. The solution? Don't publish the post, or delete it. If necessary, block the user.
  • Non-fans. By 'non-fans' we mean people who are leaving harassing messages (threatening 'chain-mail'-style messages, for example) or people who are simply trying to sell fans a product. The options are to block the post and, if appropriate, contact the user to explain why; or to block the user if this is possible.
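To make those options concrete, here is a minimal, purely illustrative sketch of how such rules might be expressed. The word lists, field names and functions are all hypothetical – this is not any social network's real API – and in practice decisions like these are usually made by trained human moderators, with automation only assisting.

import re

# Hypothetical rule data: placeholder lists, not real moderation policy.
BLOCKED_TERMS = {"spamword", "slur"}
ON_TOPIC_KEYWORDS = {"brand", "campaign", "competition"}

def moderate(post):
    """Return a suggested action for a post: 'block_user', 'reject' or 'publish'."""
    username = post["username"].lower()
    text = post["text"].lower()

    # Abusive or obscene usernames: block the account outright.
    if any(term in username for term in BLOCKED_TERMS):
        return "block_user"

    # Abusive content in the post itself: don't publish it.
    if any(term in text for term in BLOCKED_TERMS):
        return "reject"

    # Off-topic posts containing links are treated as likely spam.
    has_link = re.search(r"https?://", text) is not None
    off_topic = not any(word in text for word in ON_TOPIC_KEYWORDS)
    if has_link and off_topic:
        return "reject"

    return "publish"

# Example: an off-topic post with a link is flagged as likely spam.
print(moderate({"username": "fan123", "text": "Cheap watches here: http://example.com"}))

Nothing here replaces human judgement – context, sarcasm and imagery (such as the Starbucks swastika avatar) all need a person to assess – but even a crude rule set like this can triage the obvious cases.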
Is it possible to moderate social networks?
It is possible for brands to moderate the content that is uploaded to their pages on social networks. The rules for each are different. We've laid out some of the key points for each of the 'big four' in our Moderation in Social Networks guide, but to summarise:
 
  • YouTube – you can both pre- and post-moderate content that is posted, and all comments on videos and channels. However, you can't moderate avatar images (but you can block people with offensive avatars from your channel). But note: this only applies to content that is uploaded to the brand's channel, not to other users' channels.
  • MySpace – MySpace is a little harder to moderate, but it is still possible. All comments, messages, friend requests, fan requests, videos etc. can be pre- or post-moderated. However, you can't moderate live feeds (although you can remove them completely).
  • Bebo – we'll wait to see what changes are made to Bebo after its recent sale. But at the time of writing, the process is very similar to MySpace. However, Bebo doesn't currently check usernames, so it's worth keeping an eye out for unsuitable ones.
  • Facebook – by far the most problematic of the networks. Facebook doesn't allow pre-moderation, so the only option is to remove offensive content after it has been posted, which can be a large drain on resources. Recently developed third-party moderation tools can greatly aid efficiency, though at a cost. Facebook's Community Pages deserve a special mention. Recently introduced to give 'fans' a place to talk about non-brand topics, they aggregate all mentions of the topic (or brand) onto a single page, and so pose a severe reputational threat about which a brand can do nothing, as it has no editorial control over the page. (The practical difference between pre- and post-moderation is sketched after this list.)
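For readers who prefer to see the mechanics spelled out, here is a minimal, purely illustrative sketch of the pre- versus post-moderation difference described above. The function and variable names are hypothetical and nothing here uses any network's real tools; it simply shows when the review happens relative to publication.

def is_acceptable(text):
    # Stand-in for a human or automated review decision.
    return "offensive" not in text.lower()

def pre_moderated_publish(incoming, page):
    # Pre-moderation: nothing reaches the public page until it passes review
    # (possible on a brand's YouTube channel or MySpace profile).
    for post in incoming:
        if is_acceptable(post):
            page.append(post)

def post_moderated_cleanup(incoming, page):
    # Post-moderation: everything goes live straight away; unacceptable posts
    # are taken down afterwards (the only option on a Facebook page).
    page.extend(incoming)
    page[:] = [post for post in page if is_acceptable(post)]

incoming = ["Great campaign!", "An offensive remark"]

page_a = []
pre_moderated_publish(incoming, page_a)
print(page_a)    # the offensive post never appears

page_b = []
post_moderated_cleanup(incoming, page_b)
print(page_b)    # the offensive post was briefly live, then removed

The end state is the same in both cases; the risk lies in the window during which unacceptable content is publicly visible, and in the staff time needed to find and remove it.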
Tamara Littleton is CEO of eModeration. For more information on working with social networks, see eModeration's guide to Moderation in Social Networks.
