Beware Overreliance on Theory

By Brian Tomasik

First published: 14 Jun 2017. Last nontrivial update: 21 Jul 2017.

Summary

This post summarizes a transition in my epistemology from a focus on theoretical modeling of situations to a preference for empirical observation of messy details.

Epigraphs

"If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is." --John von Neumann

"Nobody knew health care could be so complicated." --Donald Trump

My past privileging of theory

A clickbaity title for this article could be: "Too much math is bad for your epistemological health". While exaggerated, I think this statement captures some truth about my past self, especially before and during college.

While I always knew that reality was very complicated, in the past, I tended to assume that mathematical modeling was perhaps the best tool we had for grappling with that complexity. This belief was probably reinforced by my academic studies, in which even messy subjects like economics or psychology seemed to focus on developing elegant theories to explain phenomena. Indeed, developing theories is arguably the main point of science. And when data are presented in a clean fashion in textbooks, the idea that theory can in general powerfully answer empirical questions becomes tempting.

With respect to utilitarian calculations about the best course of action, I also formerly took the approach of trying to fit everything into a single expected-value calculation. I understood that there was enormous uncertainty, but I assumed that plugging complexities into a massive expected-value formula was the best (maybe even the only) way to address the problem. Exact Bayesian inference was intractable, but we just had to approximate it as best we could.
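For concreteness, here is a minimal sketch (in Python, with made-up probabilities and values) of the kind of single expected-value calculation described above, where every consideration gets folded into one sum:

```python
# A toy illustration (made-up numbers) of folding everything into one
# expected-value sum over possible scenarios.

scenarios = [
    # (probability, value of outcome) -- illustrative guesses only
    (0.5,  10.0),   # the action mostly works as hoped
    (0.3,   2.0),   # the action has muted effects
    (0.2, -15.0),   # the action backfires
]

expected_value = sum(p * v for p, v in scenarios)
print(f"Expected value: {expected_value:+.2f}")  # +2.60 for these guesses
```

The difficulty, of course, is that the probabilities and values themselves are highly uncertain, which is part of what the rest of this piece is about.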

My move toward empiricism

A few factors shifted my thinking toward a more empirical approach. One was my data-science work at Bing. Instead of spending my time learning elegant and powerful theories in academia, I confronted messy, real-world data that disobeyed our predictions more often than I expected. I began to feel that the hypotheses we concocted to explain our results were mostly just-so stories without much hope of being true. A few large-scale trends were consistent, but beyond that, it seemed as though we were often fumbling around without being able to explain what was going on.

Another big influence on my thinking was the epistemology of GiveWell, such as the post "Maximizing cost-effectiveness via critical inquiry". That post concludes: "As long as the impacts of charities remain relatively poorly understood, we feel that focusing on robustness of evidence holds more promise than focusing on quantification of impact." I also learned about the success of heuristics over precise calculation in artificial intelligence and related domains.

A final factor may have been that I simply learned more total facts about the world over time, including about messy domains like nutrition, neuroscience, biology, and world affairs. It's easier to presume that a cost-effectiveness calculation can be made cleanly when you know about fewer complicating factors that could screw it up. The more you think about the interrelatedness of all sorts of activities across the world, the more messy and (in the worst case) intractable quantification begins to look.

Alternatives to theory

In response to these points, my past self might have said: "Okay, I know, the world is super complicated. But we have to do the best we can to estimate the effects of different possible actions. Trying to estimate those impacts is better than throwing up our hands."

I agree, but this leaves open the question of how to make those assessments. Mathematical modeling is not the only option. It can be one useful tool in the toolkit, but other tools are available as well, such as the historical case studies and outside-view comparisons discussed below.

When thinking about complex topics theoretically, it's often easy to imagine tens, hundreds, or millions of possible ways reality might unfold. But reality only picks out one actual trajectory. I find historical case studies useful as a way to test whether a theoretical possibility is likely to actually be realized.

One example is the idea of a hard-takeoff intelligence explosion. It seems plausible as a theoretical idea that once artificial intelligence reaches a "human level" (assuming that designation makes sense), it might reach significantly superhuman levels of intelligence within hours or days by recursive self-improvement. How can we tell if this is likely? I think the best way is to look at existing examples of self-improving systems (growth of economies, profitability of corporations, etc.) and ask what fraction of them have shown hard-takeoff dynamics. My impression is that for most historical examples, growth has been more analogous to "slow artificial-intelligence takeoff". Growth that contains positive feedback is often exponential, but it's usually a slow exponential without a single, decisive spike point. For example, the world economy has grown at a pretty steady rate for hundreds (thousands?) of years, without a "hard takeoff" at any one point.
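To make the distinction concrete, here is a rough toy simulation (arbitrary parameters, not calibrated to any real system) contrasting growth at a steady rate with growth whose rate itself rises with the level of capability:

```python
# Toy dynamics only: arbitrary parameters, not a model of any real system.

def periods_to_reach(step, target=1000.0, start=1.0, max_steps=100_000):
    """Count how many periods a growth process needs to reach `target`."""
    level, steps = start, 0
    while level < target and steps < max_steps:
        level = step(level)
        steps += 1
    return steps

steady = lambda x: x * 1.03             # positive feedback, constant 3% rate
takeoff = lambda x: x * (1 + 0.05 * x)  # growth rate rises with capability

print("Steady exponential reaches 1000x after",
      periods_to_reach(steady), "periods")   # ~234 periods, no spike
print("Recursive-feedback growth reaches 1000x after",
      periods_to_reach(takeoff), "periods")  # ~25 periods, almost all at the end
```

Both processes involve positive feedback, but only the second produces the sharp, decisive spike that hard-takeoff scenarios envision; the historical examples mentioned above look more like the first.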

I think asking whether any structures or events of a given type have existed historically is a useful strategy in general for futurists. The more future predictions draw on the kinds of trends and organizations that we see in our own world, rather than predicting sharp breaks from the past, the more probable I tend to find them.

As an example, suppose you wanted to predict whether accidental or intentional artificial-intelligence catastrophes would be a bigger problem in the future. One approach is to construct a bunch of scenarios for how the future might unfold and how agents might behave. This kind of thinking is useful. But we can also take an outside view, such as by looking at how much global economic productivity is currently lost to accidental software bugs ($59.5 billion per year for the USA) vs. malicious software ($13.3 billion per year for the whole world, counting only "direct damages"). Of course, there's lots of fine print on these numbers, and estimates can be noisy or misleading.
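As a back-of-envelope illustration of how one might line up those two figures (and of how much guesswork the adjustment involves), consider something like the following, where the USA's share of the world economy is just an assumed round number:

```python
# Cited estimates (see the caveats above: the two figures have
# different scopes and methodologies).
accidental_bugs_usa = 59.5e9        # $/year lost to software bugs, USA only
malicious_software_world = 13.3e9   # $/year "direct damages" from malware, worldwide

# Assumption for illustration: treat the USA as roughly a quarter of the
# world economy in order to put the bug figure on a world scale.
usa_share_of_world_economy = 0.25
accidental_bugs_world_guess = accidental_bugs_usa / usa_share_of_world_economy

ratio = accidental_bugs_world_guess / malicious_software_world
print(f"Accidental losses come out roughly {ratio:.0f}x the malicious ones")  # ~18x
```

The point is not the specific ratio, which the fine print could easily move a lot, but that the world has already run the relevant "experiment" for us.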

The outside view is sort of like cheating on a test. Instead of trying to directly compute the outputs of an insanely complex system (the world), we look at other answers that the world has already computed for us in the past.

When theory is useful

Theory can sometimes be extremely powerful. The classic case is in theoretical physics, and this success gives other disciplines "physics envy". In some other fields, like chemistry or engineering, theoretical calculations are also often quite reliable.

Empiricism has its flaws as well, especially when data are noisy. Good data may be worth a thousand theories, but poor data may sometimes be worse than a priori theory. For example, suppose you hear someone claim: "I made $300/hour by filling out surveys from home." Theoretical reasoning suggests that this noisy datum is almost certainly bullshit, because if this job really did pay that well, more people would be doing it, until it stopped being so remunerative. Of course, we could alternatively score this example as a victory for empiricism, because the more familiar you are with other scams and misleading claims, as well as with typical wages for a variety of types of jobs, the more readily you'll identify this statement as bogus.

Playing with toys vs. seeking to learn

One argument against giving police officers fancy new weapons is that some cops will be looking for opportunities to use their new "toys". As a result, some cops will engage in violence when non-violent methods would have been more appropriate. When all you have is a hammer, you seek out nails to hit on the head.

Similarly, when students learn fancy new tools—such as how to do integrals or how to write computer programs—they may seek out opportunities in which to play with their new toys. Looking back on my writings from high school and college, I notice this pattern a lot. After learning new things in school, I would write ponderous articles with excessive equations or grandiloquent vocabulary words in order to exercise my newfound tool set. This is presumably healthy, much as childhood play is a form of learning. Probably I still do this to some extent whenever I learn new things.

In contrast, "mature" writings tend to make less of a show out of using fancy tools and instead apply tools only when needed. The focus is more on solving a problem at hand without regard to whether its solution uses cool toys. (Of course, some academic journal articles look to me like their authors were seeking a chance to play with their toys.)

Relatedly, in the past I would often think about my career in the following way: I enjoy and have skills in X, so what careers can I pursue that will best utilize X skills? Now I think this is mostly backwards. Today I would reason as follows: career Y seems like a fun and impactful place to spend my time, so what new skills and information do I need to learn to move towards career Y? My experience is that one can learn the basics of a new field relatively quickly, and it makes little sense to anchor one's long-run future plans to somewhat random choices of college major made in one's past. The future is wide open, and there are many new domains with new sets of tools waiting for you.

When I was obsessed with mathematical modeling and programming in the past, I sometimes viewed learning background material about a new domain (say, gene sequencing) as the "vegetables" that one had to eat before the "dessert" of applying mathematical/programming tools to the problem domain. Now I would tend to approach the situation from the opposite direction. Rather than looking for opportunities to play with my existing tools, I would ask: "What is this new domain like in general? How do biologists who aren't in love with math study this topic? What non-mathematical, non-programming skills does this field have for me to learn?" And then I would whip out the math/programming only when it seemed the best tool for the job. This change in perspective is sort of like the difference between being the person talking in a conversation (my old approach) vs. the person listening (my new approach). Instead of learning just enough broad outlines of a topic to be able to apply a mathematical model, I would instead seek to learn lots of messy details that likely won't fit cleanly into models.

The picture I just painted is probably exaggerated for rhetorical effect. Even in the past I loved to learn new details and techniques that new domains had to offer. But I think there is some truth to the general trend I just described.

This section was partly inspired by the last answer in this interview with Robin Hanson.

Acknowledgments

Parts of this piece were inspired by conversations with Pablo Stafforini, Tobias Baumann, and others.