A Discussion On User Generated Content Ratings

The Internet is full of content rated by users. One star for a meal over here. Five stars for this book over there. We have come to accept these ratings, assigned by people we don’t know and probably never will, as a reasonably reliable gauge of whether something is good or bad and, most importantly, whether it is worth our time. This practice actually seems to work quite well in civilian life, but what about applying it within your enterprise? Is this a good idea, and if so, what are some of the best practices being applied?

This topic made an appearance on stage during an Intranet Benchmarking Forum member meeting a while back, and a passionate discussion ensued. The following is a summary of what was discussed and some key findings. Use these learnings in good health and, equally important, let me know what you think.

How to Use User Generated Content Ratings

One consideration in applying user generated content ratings in an enterprise environment is the type of content being rated. For the purpose of this discussion we defined two types: corporate communications and user generated content. Corporate communications are those created, edited and published by a central team. User generated content is exactly what the term implies, the key difference being its unstructured, unregulated and often unpredictable topics, ideas and discussion threads. This delineation raises the question, “Are user generated content ratings appropriate for corporate communications?” Take, for example, the corporate communications team posting an article about a recent restructuring. Do we really want everyone’s ratings, and how will these ratings be used? Is this value-added interaction or, as one attendee suggested, “giving people a voice without giving them a say”?

After much discussion, two schools of thought emerged. On the one hand, these ratings are useful for getting an immediate response from your user community showing how they feel about a particular topic. Using our “restructuring” example, one could easily extrapolate from an extremely high percentage of one-star ratings that many employees are not warmly embracing the recent restructuring and that, as a result, management should consider a more detailed, personal communication to improve morale. On the other hand, using the same example, wasn’t this sentiment already known before everyone fired off his or her one-star salutes, and what exactly can we do about this situation? It’s not like management is going to undo the restructuring based on these ratings, and even if it did, is this the way we want the company run? Besides, what does a low rating actually mean? Does one star mean that people did not agree with the restructuring? Or does it refer to the fact that the article has poor sentence structure and multiple spelling errors?

So which camp is right? They both are, actually. Deciding what is right for your organization is a matter of cultural alignment and desired impact. Allowing your users to rate content sends the message that management wants to know how they feel about a particular topic. These ratings are also useful for uncovering often counterintuitive findings about what your users actually care about. A deluge of ratings assigned to an article announcing your company’s green initiatives, versus a dearth of ratings submitted on the rules of your company’s sales incentive program, sends an interesting message. The challenge is accurately identifying the message being sent and responding in a recognized and appropriate fashion. All in attendance agreed that the following three best practices are paramount to successfully implementing user generated content ratings.
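To make that deluge-versus-dearth comparison concrete, here is a minimal sketch of how rating signals might be summarized per article. The Rating shape, its field names and the summarize function are hypothetical illustrations, not features of any particular intranet product.

```typescript
// Hypothetical shape of a stored rating; substitute whatever your
// intranet platform actually records.
interface Rating {
  articleId: string;
  stars: number; // 1-5
}

interface ArticleSignal {
  articleId: string;
  total: number;        // volume: a deluge versus a dearth
  oneStarShare: number; // e.g. 0.8 means 80% one-star ratings
}

// Count ratings per article and compute the share of one-star ratings,
// then sort by volume so the most-rated articles appear first.
function summarize(ratings: Rating[]): ArticleSignal[] {
  const byArticle = new Map<string, { total: number; oneStar: number }>();
  for (const r of ratings) {
    const s = byArticle.get(r.articleId) ?? { total: 0, oneStar: 0 };
    s.total += 1;
    if (r.stars === 1) s.oneStar += 1;
    byArticle.set(r.articleId, s);
  }
  return [...byArticle.entries()]
    .map(([articleId, s]) => ({
      articleId,
      total: s.total,
      oneStarShare: s.oneStar / s.total,
    }))
    .sort((a, b) => b.total - a.total);
}
```

Volume makes the “what do people actually care about” signal visible at a glance, while the one-star share flags articles worth a closer look. On its own, though, it cannot distinguish an unpopular decision from a poorly written article, which is exactly why the first best practice below matters.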

1. Make comments mandatory.

Requiring users to write comments solves two problems. For one, you now know what the rating means. A less obvious, but equally important, side effect of requiring comments is that it encourages people to think before they rate. Users are less likely to provide haphazard ratings if they are required to explain the number of stars they select.

2. No anonymity allowed.

It was also suggested that a key to receiving useful comments is requiring authors to identify themselves. People write differently when their name is prominently tattooed above their prose. This information also comes in handy when responding, which leads us to our next best practice. (A short sketch of how these first two practices might be enforced at submission time follows the third.)

3. Visibly do something with the information you are gathering.

Keeping this communication channel alive and well requires bi-directional care and feeding. People need to see that their comments are being read and receiving a reasonable response, with reasonable being the operative word. No one expects management to respond to every comment, but continual silence on the other end of the line generally causes users to disengage.
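Taken together, the first two practices amount to validation rules that can be enforced the moment a rating is submitted. The following is a minimal sketch of what that might look like; the RatingSubmission shape and validateRating function are hypothetical, not part of any particular intranet platform.

```typescript
// Hypothetical shape of an incoming rating submission.
interface RatingSubmission {
  articleId: string;
  stars: number;         // 1-5
  comment: string;       // best practice 1: mandatory
  author: string | null; // best practice 2: no anonymity
}

// Returns a list of validation errors; an empty list means the
// submission is acceptable.
function validateRating(r: RatingSubmission): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(r.stars) || r.stars < 1 || r.stars > 5) {
    errors.push("Rating must be a whole number of stars from 1 to 5.");
  }
  // Best practice 1: reject ratings without a comment, so every star
  // count arrives with an explanation of what it actually means.
  if (r.comment.trim().length === 0) {
    errors.push("A comment explaining your rating is required.");
  }
  // Best practice 2: the author's identity must accompany the rating.
  if (!r.author || r.author.trim().length === 0) {
    errors.push("Ratings cannot be submitted anonymously.");
  }
  return errors;
}
```

Rejecting the submission outright, rather than silently discarding the comment or name, is the point: the friction is what prompts people to think before they rate.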

So are there any reasons for not implementing user generated content ratings? Of course there are. Quite simply:

1. You are not interested in the information provided.

2. You are not prepared to invest the time and effort needed to respond.

3. This practice simply does not fit with your company’s culture.

If any of these statements resonates more strongly than the preceding pro-rating arguments, user generated content ratings may not be appropriate for your company. However, be forewarned: the views of the attendees on hand were quite clear. Information is expected to be free and bi-directionally flowing… even in the enterprise.