One of the most important things you can do to improve the odds of a successful outcome is to align incentives at the beginning. We often think of this at a micro level when designing compensation such as commission or bonus plans, but it is equally true at a macro level in organizational design and even in marketing and business strategy. Aligned incentives need to be inherent and structural to the strategy and the organization. When you get it right, it’s like biking downhill – everything takes less effort. When incentives are not aligned, you pay a continual management overhead dealing with the consequences.
My experience managing both advertising-based businesses such as Yahoo Mail and subscription businesses such as SugarSync and Catch taught me that aligned incentives between company and customer make it much easier to manage day-to-day execution against strategy. At Yahoo Mail we were constantly trying to balance competing interests – we needed a large enough volume of advertising impressions, and formats aggressive enough to yield clicks, but not so many or so obstructive that they degraded the user experience to the point of reducing usage. There was no science behind this balance, leading to endless organizational thrash and, arguably, poor decisions and an eventual loss of market share.
A freemium business model approach such as the one SugarSync took while I was CEO from 2009-2013 had the opposite dynamic. The more our customers used the product, the more data they stored, and the more likely they were to run out of storage and upgrade from free to paid, or to a higher paid plan. The marketing tactic in this situation was simple – improve usability and add features so that people want to use the product more. Unlike the ad-supported example, usability and revenue were tightly aligned. Day-to-day decisions were therefore more straightforward and easier to delegate. Other freemium businesses have experienced the same phenomenon; it was certainly true for Catch.com and for Evernote, as explained here by Phil Libin.
If you find yourself as the constant arbiter of small decisions and prioritization questions, ask yourself where incentives or goals might be misaligned.
Just as organizations can be misaligned, I believe the same misalignment can apply to the inherent design of products. We have seen it frequently in apps that focus on anonymous communication. I read an interesting article about the army of labor being employed to fight bullying and other harmful behaviors on the various anonymous apps (Secret, Whisper, etc.). They simply can’t keep up with the volume of problematic behavior that is rife in these apps. While there are of course many well-intentioned posts, anonymous apps are a draw for those who want to do harm. Almost any application will suffer from abuse, and the app provider will need to come up with some method, usually a combination of automated and manual intervention, to manage it. But the situation described by Gigaom shows, in my view, that the incentives built into these apps actually encourage abuse. It is, as underwriters say, a form of adverse selection. Sick people are quicker to buy health insurance, and bullies are quicker to join apps and troll on sites that let them abuse with impunity.
Recent examples have only heightened my concern about the harm caused by anonymous apps and anonymous commenting by trolls. People who are obviously vulnerable and suffering, such as Zelda Williams just after the death of her father, are attacked. In fact, any public figure is likely to suffer at the hands of internet trolls. But you don’t even need to be famous to be a victim. We are witnessing a dramatic chilling effect – misogynistic trolls have silenced many serious, articulate female voices.
Does anonymity encourage bad behavior? Psychologists and sociologists have long observed that we restrain ourselves from self-interested bad behavior through two systems: our internal conscience (the “superego,” as Freud designated it) and societal pressure and feedback. Our relationships, commitments, values, norms, beliefs, and desire to participate fully in society encourage us to meet societal behavioral norms. Take away the societal element through anonymity and we are left with only our individual consciences. For most people, conscience and empathy are enough to keep us following the “golden rule,” but the internet is so vast that a small percentage of the population can make things miserable for many people.
Yes, there are some excellent reasons to allow anonymity (as described by the EFF here and further discussed here by Fred Wilson). But the harm from anonymity-enabled trolling and messaging is a very real, even deadly, problem.
I suppose that any technology that can be used for good can also be used for harm. Twitter is a great example. There has been harm, as in the Zelda Williams case, but there have also been very important and positive social benefits where the cloak of anonymity has protected the vulnerable. But we must not hide behind the shield of anonymity’s legitimate benefits when some technologies and settings seem, at worst, designed for harm or, at best, designed such that the ratio of harm to good tips the wrong way.
It does not have to be this way. I find it amazing that smart application design and community standards can make seemingly scary things – selling valuable goods over the internet, renting your guest room to strangers – surprisingly secure, while poor design can make you the “go to” app for cyberbullies. If your app requires an army of labor in the Philippines to police user behavior, it’s time to question what you are really trying to do. It comes down to aligning incentives.