We all know (or at least suspect) that when you survey a group and let people reply only if they are moved to do so, you will generally get responses from either end of the spectrum and not many from the middle. People will only go to the trouble of filling out a survey if they love or hate the subject.
Over on Salon.com’s Machinist blog, they dealt with the same issue in online rating systems like TripAdvisor, IMDb and Amazon.com. (Read the comments on the entry for a lot of insight into other weaknesses in these rating systems.)
“To see how an Amazon star-rating compares to society’s “true” opinion, Hu, Pavlou and Zhang conducted their own survey of one product, singer-songwriter Jason Mraz’s 2005 album, “Mr. A-Z.” In a survey of 66 college students, about two-thirds gave the album three or four stars. There were also a bunch of twos, some ones, and very few fives. On Amazon the picture is completely different. More than half of reviewers judge “Mr. A-Z” a five-star CD, while there are only a small number of threes, twos and ones.
Pavlou explains the lovefest by citing a specific kind of response bias, what he calls “purchasing bias.” In order to review something, you must have already purchased it. But people buy stuff they think they’re going to like — that’s why they buy stuff…Purchasing bias, Pavlou points out, is related to the price of a product; a higher price reduces the probability that someone who is unlikely to enjoy a product will buy it and review it anyway….If the Jason Mraz album was $200 rather than $11, then only die-hard fans would buy it and rate it, skewing its average review higher…The more expensive a product, Pavlou says, the more you should discount its high reviews.”
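The gap the researchers describe is easy to see in the averages. Here is a minimal sketch comparing two rating distributions; the counts are illustrative stand-ins, not the study's actual data, shaped like what the article describes (a bell-shaped survey vs. a five-star-heavy online "J curve"):

```python
def mean_stars(counts):
    """counts: dict mapping a star value (1-5) to the number of ratings."""
    total = sum(counts.values())
    return sum(star * n for star, n in counts.items()) / total

# Hypothetical counts: most survey responses land in the middle,
# while the online ratings pile up at five stars.
survey = {1: 3, 2: 12, 3: 24, 4: 20, 5: 2}
online = {1: 4, 2: 3, 3: 5, 4: 15, 5: 34}

print(f"survey mean: {mean_stars(survey):.2f}")  # middle of the scale
print(f"online mean: {mean_stars(online):.2f}")  # noticeably higher
```

Same album, wildly different verdicts, purely because of who bothered (or was able) to rate it.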
The article talks about a company called Summize, which translates all the star ratings into a thermometer bar like this one for the aforementioned Mraz album. Clicking the colors representing the good, bad and ugly ratings gives you direct access to the reviews at each level. Seeing all the reviews in each star category together, rather than interspersed with reviews at other ratings, aids decision making a little more. It can also reveal whether a marketing department has tried to seed in good reviews, since comments with similar syntax and spelling errors pop up side by side.
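The grouping behind that kind of view is simple: bucket reviews by star rating so each band can be browsed on its own. A sketch, using hypothetical review text (note how bucketing also surfaces the seeded-review pattern, identical wording side by side):

```python
from collections import defaultdict

# Hypothetical (stars, text) pairs standing in for scraped reviews.
reviews = [
    (5, "Best album of the year!"),
    (2, "A few good tracks, mostly filler."),
    (5, "Best album of the year!"),  # duplicate wording: a seeding red flag
    (3, "Decent, not his strongest work."),
]

# Bucket the reviews by star rating.
by_stars = defaultdict(list)
for stars, text in reviews:
    by_stars[stars].append(text)

# Print each band together, best-rated first.
for stars in sorted(by_stars, reverse=True):
    print(f"{stars} stars ({len(by_stars[stars])} reviews)")
    for text in by_stars[stars]:
        print(f"  - {text}")
```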
As the entry also points out, considering the source is still paramount when dealing with critiques. A reviewing site called Yelp allows you to cross-reference reviewers with other reviewers of similar minds, so you can evaluate whether you share their taste and thus have a higher degree of confidence in their opinions.
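One simple way to put a number on "similar minds" (a sketch, not Yelp's actual method) is to score how closely two reviewers' ratings agree on the items both have reviewed. The names, venues and ratings below are hypothetical:

```python
def similarity(a, b):
    """Mean closeness of ratings on shared items, 0.0-1.0 (1.0 = identical)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    # Two ratings differ by at most 4 stars, so normalize the gap by 4.
    return 1.0 - sum(abs(a[i] - b[i]) for i in shared) / (4 * len(shared))

me    = {"Cafe Luna": 5, "Thai Spice": 2, "Book Nook": 4}
alice = {"Cafe Luna": 5, "Thai Spice": 1, "Book Nook": 4}
bob   = {"Cafe Luna": 1, "Thai Spice": 5}

print(f"alice: {similarity(me, alice):.2f}")  # near 1.0: shared taste
print(f"bob:   {similarity(me, bob):.2f}")    # near 0.0: opposite taste
```

A high score means that reviewer's verdict on a place you haven't tried deserves more weight from you.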
Refining software to compare our taste to others’ for us is what Web 3.0 is projected to be all about. (Web 2.0 is user-generated content like blogging, Wikipedia and YouTube.) It is speculated that the next generation of web applications will search the internet for what we want and, like a TiVo, will gradually learn our preferences in order to make suggestions. Presumably, we will be able to trade these profiles with loved ones to aid them in Christmas shopping for us.
I imagine that, as with TiVo, advertisers will be scrambling to figure out how to position their products so that the next generation of search agents will suggest them to consumers. (I am guessing they will pay software developers to have the agents favor them.)
The potential good news for arts organizations is that even if they don’t try to be manipulative with the meta keywords in their web design, a search agent may inform its master that, based on the years of preference criteria it has indexed, it has a high degree of confidence the master will enjoy a performance, even though that person has never gone to see a show before.
I am sure that large corporations will see to it that software is developed enabling the agent to inform an arts organization’s website that this is the first time its master has purchased tickets to a show, allowing the organization to offer great seats at reduced prices and perhaps flag the purchaser to receive free background information about the show and special attention from the front-of-house staff.
If the companies that develop these agents are smart, they won’t allow users to be so specific in their criteria that they close themselves off from seemingly out-of-left-field recommendations synthesized by the agent from the profile it has compiled.