Recently, a personal experience showed me how something widely accepted might be wrong. I didn’t want to lose the opportunity to write about it.
If you haven’t been living under a rock, you know data-driven decision making is a must these days. If you can’t choose between two options, you just run both in an experiment and decide based on the result. If a question is hard to answer, you try different answers and pick the one that makes more money, or whatever metric you care about.
Imagine you don’t know the maximum price at which people will keep buying a book. You can sell it at a higher price and expect fewer sales but more profit per book, or the other way around. Now the data boss comes up with a neat idea: let’s sell the book at two different prices in two markets and see how much profit we make. Sounds good, right? But maybe one market was inevitably going to like the book less. You can imagine a million reasons this might happen; you know how things accidentally go viral.
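To make this concrete, here is a minimal sketch of how a hidden market-level difference can flip the conclusion of such a price test. Every number here is invented for illustration: the shopper count, the `BASE_INTEREST` confounder, the prices, and the linear `demand_at` model are all assumptions, not data from any real experiment.

```python
import random

random.seed(0)

# Hypothetical setup: two markets with the same number of potential
# buyers, but market B happens to like the book less (the hidden
# confounder), regardless of what price it sees.
SHOPPERS = 10_000
BASE_INTEREST = {"A": 0.10, "B": 0.06}  # chance a shopper wants the book at all
PRICE = {"A": 10.0, "B": 15.0}          # the price tested in each market

def demand_at(price, interest):
    # Assume demand falls off linearly with price (a made-up model).
    return interest * max(0.0, 1.0 - price / 40.0)

for market in ("A", "B"):
    buy_prob = demand_at(PRICE[market], BASE_INTEREST[market])
    sales = sum(random.random() < buy_prob for _ in range(SHOPPERS))
    print(f"market {market}: price ${PRICE[market]:.0f}, "
          f"sales {sales}, profit ${sales * PRICE[market]:.0f}")

# The experiment "shows" the $10 price earns more profit. But rerun it
# with equal BASE_INTEREST in both markets and the $15 price wins:
# the confounder, not the price, drove the conclusion.
```

Someone who only sees the profit numbers would happily conclude the lower price is better, and they would be wrong for a reason the data never shows them.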
When the experiment is done, you look for patterns and decide what to do based on the data. Unless something is terribly wrong (10x books sold in one area), you don’t suspect the result. “Oh, maybe it’s too expensive and people didn’t buy it,” you say, forgetting how many other things can affect the outcome. Another problem with these experiments is deciding how long to run them. There is no clear answer, and waiting can slow you down significantly.
A few days ago I was working on a piece of code buried deep in the codebase; everyone had forgotten about it once it launched and the metrics looked good. Only one point was missed: the calculations were wrong. The person who worked on it before me had clearly relied on the metrics to confirm the change was having a positive impact. And indeed the change looked good overall, because the metrics said so compared to the product before the change.
This is why I prefer to reason things through when a decision can be made that way. When one solution is obviously worse (code that does not work), why bother trying it? The moment you see the charts go up, you attribute it to the code you changed, whether that code is correct or not. Could it be that the code is wrong, but the result is better because of something else we don’t know about? You lose the opportunity to think further; maybe the wrong code performs better because something else is broken.
My experience is limited, but there are plenty of other examples of how you can misunderstand a phenomenon, reach the wrong answer, and be fooled by data.
Remember learning multiplication tables? Your teacher didn’t say “just keep guessing numbers until your test scores improve.” That would be absurd. Yet somehow, that’s exactly what we’re doing with our code: throw something at the wall, check if the metrics went up, ship it if they did.
Maybe imposing constraints would get us out of this situation. You don’t have time to try everything, so think about what would actually be useful.
Here’s the thing about understanding versus just observing: if you see something work and think you know why, you should be able to do it again. That’s the difference between actual understanding and just having a good story about what happened. Let me put it this way: if you run an A/B test and conclude “Oh, users love blue buttons!”, then you should be able to predict when blue buttons will work again.
So, data is good, but why not combine it with some thinking?