Why Amazon Didn't Just Have a Glitch

This is a Guest Post by Mary Hodder, founder of Dabble.com, a social video search site, and blogger at Napsterization. Hodder is a veteran Silicon Valley technologist and was most recently VP of Products at Apisiphere, a geolocation mobile company building an enterprise platform for mobile developers. This post is in response to Amazon’s removal of sales rankings from a number of gay- and lesbian-themed books, which Amazon attributed to a glitch.

Webopedia defines an algorithm as:

(al´gə-rith-əm) (n.) A formula or set of steps for solving a particular problem. To be an algorithm, a set of rules must be unambiguous and have a clear stopping point. Algorithms can be expressed in any language, from natural languages like English or French to programming languages like FORTRAN.

We use algorithms every day. For example, a recipe for baking a cake is an algorithm. Most programs, with the exception of some artificial intelligence applications, consist of algorithms. Inventing elegant algorithms — algorithms that are simple and require the fewest steps possible — is one of the principal challenges in programming.
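To make that definition concrete, here is one of the oldest algorithms on record, Euclid’s method for the greatest common divisor, sketched in Python (my example, not Webopedia’s):

```python
# Euclid's algorithm: every step is unambiguous, and the loop has a
# clear stopping point (it ends when the remainder reaches zero).
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```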

The ethical issue with algorithms, and with information systems generally, is that they make choices about what information to use, display, or hide, and this makes them very powerful. These choices are never made in a vacuum; they reflect both the conscious and subconscious assumptions and ideas of their creators.

The ethics bar in creating algorithms and classification systems should be very high. In fact, I would suggest that companies with power in both the commercial marketplace and the marketplace of ideas should seek outside review of these systems’ assumptions and points of view, so that the results are fair.

Algorithms are often invisible, and difficult to detect by design, because the technologies that use them are built not to reveal how they provide information. This is partly because users are focused on the task at hand and, given good information, don’t need to know everything under the system’s hood, and partly because technology makers like to keep the “secret sauce” hidden from competitors, not to mention from people who would game the system for their own ends, such as spammers and other bad actors.

However, the flip side of this is the lack of notice to users about the assumptions a system is making, assumptions that may not even be apparent to those building it. Systems that filter out useless, spammy, or otherwise uninteresting information can also filter out other things without notice. When a page is full of data, it’s often very difficult to recognize what is missing. This is a sort of “Where’s Waldo” situation, except that in the mass of data shown there isn’t a Waldo. Waldo is invisible.

This weekend it was discovered that Amazon had filtered out some books as “adult” even though those books had only minor sexual or erotic themes (fiction) or were non-fiction discussions of classes of people associated with particular sexual orientations. Apparently, these filters began to pop up two months ago, and while Amazon was notified at the time, many — including me — weren’t aware of the problem until Mark Probst, author of “The Filly,” realized his book was missing its “sales rank” on Amazon. He questioned them, and they replied:

In consideration of our entire customer base, we exclude “adult” material from appearing in some searches and best seller lists. Since these lists are generated using sales ranks, adult materials must also be excluded from that feature.

Hence, if you have further questions, kindly write back to us.

Best regards,
Ashlyn D
Member Services
Amazon.com Advantage

Sunday afternoon, after investigating the issue, I noticed that Twitter had about 500 posts every 15 minutes with the tag #AmazonFail. A petition appeared with 400 signatures, quickly jumped to 2,000 within a couple of hours, and reached 10,000 by Monday morning. Tweets hit a high of 6,500+ over a 45-minute period midday Monday.

Using Amazon’s tagging system, users tagged about 1,000 books with generally gay and lesbian themes with AmazonFail (whether non-fiction history or fictional stories, whether soft-erotic — think Jackie Collins-level stuff — or documentary works on things like the military policy “don’t ask, don’t tell”). Books like Lady Chatterley’s Lover, The Mayor of Castro Street, Maurice by E.M. Forster, and Conduct Unbecoming, a history of gays and lesbians in the military, were included.

Here is a comparison example from the details section of Amazon pages, where one book has its Amazon sales rank and the other’s is missing:
[Image: Amazon Sales Rank comparison]

Note that the second book, A Parent’s Guide to Preventing Homosexuality (Paperback), has many reviews (most of which have gone up in the past 48 hours and are negative), I think in part because it is the top result for a search on “Homosexuality” in Amazon’s system.

Also, see Dear Author’s excellent documentation of Amazon’s classification of books, which is placed in Amazon’s database partly by the publishers and partly by Amazon.

Note that Amazon came out late Monday with the explanation, in a statement to Publishers Weekly, that this was all “a glitch.” However, that contradicts Amazon’s earlier emails to authors stating that their books were in the “adult” category simply for including positive gay and lesbian themes, and that this was why they lost the “Sales Rank” statistic that would keep them in search results. It was a very targeted glitch, for sure: targeted, among other things, at books where “positive references to sexual orientation == gay,” placing them into the “adult” category, which is what made the other, minor “glitch” by the programmer possible.

If all this seems like a problem, and it should, it’s because Amazon uses algorithms, which rely on its classification system along with statistics like “Sales Rank,” to order products in search results on the site. These algorithms and classifications have points of view. Their point of view, revealed this week, is that “positive references to sexual orientation == gay” is “adult” in nature, and that classifications will be used in the algorithms to sort out what is shown and what does or does not get a “sales rank,” which in turn orders items in search results. And we all know search result order can lead to big sales, or to invisibility. The SEO industry and Google bank big on that point.
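Nobody outside Amazon can see the real code, but a minimal sketch of this kind of pipeline, with every name (Book, ADULT_CATEGORIES, sales_rank) invented by me rather than taken from Amazon’s actual schema, might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for Amazon's internal schema; all names are invented.
@dataclass
class Book:
    title: str
    categories: set[str]       # assigned partly by publishers, partly by Amazon
    sales_rank: Optional[int]  # None means excluded from ranked search results

# The point of view lives here, in the classification, not in the filter code
# below: "gay & lesbian" has been mapped into "adult" before any filter runs.
ADULT_CATEGORIES = {"erotica", "gay & lesbian"}

def apply_adult_filter(book: Book) -> Book:
    if book.categories & ADULT_CATEGORIES:
        book.sales_rank = None  # losing the rank drops it from search ordering
    return book

def search_results(books: list[Book]) -> list[Book]:
    ranked = [b for b in books if b.sales_rank is not None]
    return sorted(ranked, key=lambda b: b.sales_rank)

book = apply_adult_filter(Book("Maurice", {"fiction", "gay & lesbian"}, 4712))
print(book.sales_rank)  # None: the book silently vanishes from ranked search
```

Notice that the filter and the search function look perfectly neutral; the point of view lives entirely in which categories get mapped into the “adult” set.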

Search for “homosexuality” in Amazon and this is the top result:

[Image: Amazon search results for “homosexuality”]

While Amazon’s “Sales Rank” numbers in part determine the ordering of search results, I’m guessing that the amount of user activity around a listing, such as reviews, may also be a factor. Yesterday, when I did this search, the 5 books ranked after “A Parent’s Guide…” were also anti-homosexuality; today, the four that follow are positive toward homosexuality (at a glance), and from what I can tell there are many new reviews and comments on these books.
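If that guess is right, the ordering might blend sales rank with activity signals. Here is a purely speculative sketch; the weights, field names, and the blending itself are my invention, not anything Amazon has disclosed:

```python
# Purely speculative: a blended score in which a burst of recent review
# activity can reorder results even among books with similar sales ranks.
def relevance_score(sales_rank: int, recent_reviews: int) -> float:
    rank_component = 1.0 / sales_rank       # a lower rank number is better
    activity_component = 0.01 * recent_reviews
    return rank_component + activity_component

# A book ranked 500 with 300 fresh reviews outscores one ranked 200 with none:
print(relevance_score(500, 300))  # 3.002
print(relevance_score(200, 0))    # 0.005
```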

The issue with #AmazonFail isn’t that a French employee pressed the wrong button, or that he could affect the system by changing “false” to “true” in the filtering of certain “adult”-classified items. It’s that Amazon’s system has assumptions built in, such as: sexual orientation is part of “adult,” and “gay” is part of “adult.” In other words, #AmazonFail is about the subconscious assumptions of people being built into algorithms and classifications that contain discriminatory ideas. When other employees use the system, whether or not they agree with the underlying assumptions of the algorithms and classification system, or even realize the system has these points of view built in, they can put those assumptions into force, as the Amazon France employee apparently did, according to Amazon.
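That is what makes a single flipped boolean so consequential. A hypothetical, standalone illustration (the flag name and categories are mine, not Amazon’s):

```python
# Hypothetical configuration flag. Flipping one boolean switches on a
# pre-existing classification, assumptions and all; the person changing
# it never has to see, or agree with, what "adult" was defined to include.
FILTER_ADULT_FROM_SEARCH = True  # was False before the change

ADULT_CATEGORIES = {"erotica", "gay & lesbian"}  # encoded long before the flip

def visible_in_search(book_categories: set[str]) -> bool:
    if FILTER_ADULT_FROM_SEARCH and (book_categories & ADULT_CATEGORIES):
        return False
    return True
```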

This, of course, doesn’t explain how the problem arose two months ago, or why, when Amazon was notified, they didn’t look into it then. I would suggest that the same underlying assumptions that drove their classification and algorithm system to filter “gay” into “adult” also caused their investigations in February and March to come to nothing. It was only public outrage this past weekend that made them look harder, beyond their own assumptions, to find the underlying problem.

The bar for ethics in creating algorithms and classification systems should be very high. #AmazonFail proved it’s not, at least at Amazon. I would venture that Amazon’s classification and algorithm system holds more of these discriminatory assumptions, and while its tagging system does allow users to correct for some of this, as far as I can tell Amazon uses its internal classification system, not user tagging, in its filters.

I would suggest that the company, because of its position in the market and its power over authors and publishers, as well as over users and the marketplace of ideas, ought to conduct a complete and public review of its classification and algorithm assumptions. Publishers and authors should push for it, and so should users.