Apparently, all that's stopping the music industry from returning to its former glory is its failure to punish people who download music without paying for it. But if that's the case, why did music sales in Japan fall when downloaders of unlicensed content were slammed with draconian penalties?
The same reverse effect applies to open source. Why do open source projects with a vendor tightly controlling the code usually fail to grow? Why do open source projects with relaxed licenses still get plenty of code contributions, though the license does not require them?
The systems of human interaction that surround both cases are complex and interconnected. Take music -- while there are undoubtedly free riders who download music instead of paying for it, the example in Japan (and many other similar instances) shows that the people who actually pay for music also download it. When you undermine their confidence in the freedom to be fans of the artists they enjoy, they buy less music. An effort to pursue bigger profits ends up endangering profits instead.
So it is with open source software. A focus on an immediate cause-and-effect relationship -- we'd better add restrictions to protect our ability to grow revenue -- leads to unintended results, and a larger, systemic goal is missed: creating a healthy community around the project.
Dueling philosophies of cause and effect
There are two views of the place of cause and effect in the world. One believes in direct causality, where the things you can see and control are the ones that matter most. The other believes in systemic causality, where outcomes are determined by long, complex, inter-related chains of cause and effect. Both are correct much of the time, so their differences rest beneath the surface of most realities. Both can be useful tools in guiding behaviour and predicting consequences, and they have value as a lens to bring decisions into focus.
In most circumstances, direct causality seems the obvious interpretative lens for the past and predictive lens for the future. We are most comfortable when we can draw clear circles around causes and thick lines between them and their consequences. We admire the "chess players" of society who can draw long chains of clear circles and thick lines; for most of us, the ability to mentally calculate chains of cause and effect is limited to a few steps.
But certain systems involve such long chains of lesser causes and effects that a focus on the individual steps is unhelpful. Evolution, national economics, global warming, and terrorist motivations all need a systemic view if they are to be properly understood. In these cases, a focus on what an individual can directly observe and prove may well lead to bad choices. These systems are especially difficult for people with "just do it" attitudes, who find it hard to take it on faith that they should act in a contrarian way because of a larger system that can't be seen and computed in its entirety.
When our outlook is dominated by direct causality, we seek control over causes. When our outlook is dominated by systemic causality, we seek influence over the network of causes and effects. In many simpler situations, both outlooks lead to the same decision. But as we've moved to a meshed society, the importance of systemic causality has risen. Every cause has an immediate effect, but to believe that effect is the only consequence is increasingly a risk.
If the distance to the effect we seek is short and that effect is the only outcome that matters, control is obviously desirable. But if the distance to the desired effect is large and filled with many connections, it's better to collaborate and cooperate with other participants and prioritize influence over control.