by Brian Tomasik
This page collects some random thoughts from 2017 that are too short or too unimportant to deserve their own essays.
Should goals always be open to revision?
29 Mar. 2017
Evan O'Leary asked me about the idea that a fundamental principle of ethics is that we should keep open our ability to improve our moral views, because our current views may be suboptimal (relative to some meta-level criterion, such as what we would endorse upon learning more).
I would say there are two conflicting imperatives, and we need to trade off between them. One imperative is to update our values using update procedures that we approve of. The other is to prevent corruption of our values in ways that we don't like. Leaving values open to change enables both desirable updates and unwanted corruption, and it's a messy empirical question what the balance of benefits vs. risks is for what kinds of brain updates in a given situation.
As an analogy, one might say it's a fundamental principle of using a personal computer that we should keep open our ability to connect to the Internet. This is often true when we want to browse the web, download applications, etc. But being Internet-connected also allows for computer viruses and other malware. Sometimes not connecting a computer to the Internet is better for security reasons. Moreover, some people may prefer to avoid connecting to the Internet for a period of time to avoid distractions and temptations (which are analogous to unwanted moral goal drift).
When OCD is rational
16 Mar. 2017
In the late 1990s, I watched As Good as It Gets with my family. This was my first introduction to obsessive-compulsive disorder (OCD), and I noticed that I could weakly identify with a few of the main character's tendencies, such as wanting to step over sidewalk cracks. (In my case, that crack-skipping behavior may have been partly playful and partly due to a hard-to-verbalize desire for orderliness or something.)
Today, I no longer notice sidewalk cracks, although I do watch my footsteps in order to avoid painfully killing bugs under my feet. My checking for bugs can appear like mild OCD to onlookers, but I think this behavior is fairly rational given my belief that bug suffering is important and bad.
I'm somewhat OCD when checking important things, such as verifying that the stove is off before I go to bed, or checking that I've recovered all my belongings after going through airport security. I feel these behaviors are fairly rational, which is why I maintain them. I sometimes do rough Fermi estimates of the expected cost of failing to check carefully, and these estimates usually suggest that my checking behavior is worth its cost. If this isn't the case, I tend to stop obsessing so much. (That said, it's tough to get precise answers with Fermi calculations, and I may have a tendency to overestimate risks in order to be on the safe side.)
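As a sketch of what such a Fermi estimate might look like, here's a toy calculation in Python. All of the numbers are made up for illustration, not figures from my actual life:

```python
# Toy Fermi estimate (hypothetical numbers): is a nightly stove
# check worth its time cost?

def expected_loss_without_check(p_left_on: float, cost_if_left_on: float) -> float:
    """Expected cost per night of *not* checking the stove."""
    return p_left_on * cost_if_left_on

# Hypothetical inputs: 1-in-2000 chance of leaving a burner on,
# $5,000 average cost (wasted gas, fire risk, etc.) if it happens.
loss = expected_loss_without_check(1 / 2000, 5000)

# Cost of checking: 30 seconds per night, valuing time at $30/hour.
check_cost = (30 / 3600) * 30

print(loss)        # expected loss avoided per night (~$2.50)
print(check_cost)  # cost of the check (~$0.25)
print(loss > check_cost)
```

Under these assumed numbers, the check is worth about 10 times its cost, though as noted, small changes to the guessed probability can flip the conclusion.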
In my experience, OCD checking behaviors are a response to carelessness and automaticity. If I don't check something carefully, I lose track of it from time to time. For example, I sometimes unthinkingly put down my iPod in a weird location and then can't immediately find it. I've occasionally been similarly careless in more important contexts, such as when I forgot a key and got locked out of my building until I was able to get help. After this experience of getting locked out, I've become more "OCD" about checking that I have a key before I leave a locked building.
The OCD feeling involves a small degree of "anxiety", which I find is necessary in order to jolt my brain out of complacency and actually pay attention to rote tasks. It would be very easy to, say, glance briefly at my wallet and assume I saw my house key, even though I had actually forgotten it. Being OCD helps force me to devote my full attention to whether I actually have my house key.
OCD can also prompt you to go back and check something over again. I do this sometimes with the stove before bed, if I can't remember whether I actually checked it tonight or am only remembering checking it on prior nights. When you do a task routinely, different memories of it can blur together, which makes it hard to be sure whether you've actually done it on this occasion. Memory is unreliable.
Subjectively, when I check something routinely, like that I have my house key before leaving the house, I feel something like semantic satiation. I've already checked for my key a thousand times in the past, so when checking again today, it feels like I can't notice things to the same degree I would if this were my first time checking. I need a "jolt" out of this daze in order to pay attention. A similar phenomenon applies to checking for grammatical errors in an essay that you've already read through many times.
Finally, the OCD practice of counting can be a useful way to check that you have everything without needing to remember a whole list of what you need. For example, at the airport, I might remember that I need my backpack, my passport, and a few other items. Say it's five items total. Then you can just count your items (perhaps twice to be sure) rather than remembering each specific item you need. (One of the many reasons why I dislike travel is the stress required for me to pay attention so as to avoid losing things, missing my flight, etc.)
Note: By discussing my very mild cases of OCD-like behavior, I don't intend to downplay the suffering caused by less trivial instances of OCD.
Using novelty detection when researching
3 Mar. 2017
Recently I was trying to revise an article (call it "A") that I had written years ago by adding important information from a piece by another author (call it "B") that wasn't already in A. There are two ways to do this:
1. Read A first (to refresh my memory of what I had written), then read B and notice what information in B wasn't already in A.
2. Read B first, then read A, and try to remember what information in B wasn't also mentioned in A.
Cognitively, strategy #1 seems easier and more accurate. All you need to do in that case is build a mental "hash table" of the contents of A and then for each fact in B, check whether that fact is in the hash table. This strategy relies on the brain's "novelty detection" abilities to notice information from B that's "new" relative to what the brain remembers from reading A.
Strategy #2 is harder. You need to build a mental model of B, and then while reading A, you need to mentally "cross out" the facts from B that are already in A, and then go back through B and see what facts aren't crossed out. This requires storing a comprehensive representation of article B.
Here's an analogy. Suppose A and B aren't articles with facts but are lists of 50-digit random numbers.
- To do strategy 1, you could read through list A and store just, say, the first 5 digits of each number you see, which should usually be enough to uniquely identify whether you've already seen a given number. Then when reading through B, you check whether the first 5 digits of each number match something in your first-5-digits-from-A list, and if not, add that number from B to A.
- In contrast, to do strategy 2, you would have to store the full B list, and then as you're going through A, cross out the B items that are in A, and then add the non-crossed-out B items to A. Alternatively, you could store just the first 5 digits of the B list and cross out the first-5-digits-from-B items that are already in A. But then when going back to add the non-crossed-out items, you have to use the 5-digit abbreviations to look up the full 50-digit number in the original B list.
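The two strategies can be sketched in code. Here articles are modeled as sets of fact strings (a simplification I'm introducing for illustration), and a Python set plays the role of the mental "hash table":

```python
# Sketch of the two merge strategies, with articles modeled as sets
# of facts (strings). The goal: add B's new facts to A.

def strategy_1(a: set[str], b: set[str]) -> set[str]:
    """Read A first, then scan B and add anything not already seen."""
    seen = set(a)                      # cheap membership structure for A
    return a | {fact for fact in b if fact not in seen}

def strategy_2(a: set[str], b: set[str]) -> set[str]:
    """Read B first, then cross out B-facts found in A, add the rest."""
    remaining = set(b)                 # must hold ALL of B in memory
    for fact in a:
        remaining.discard(fact)        # "cross out" facts already in A
    return a | remaining

a = {"fact1", "fact2"}
b = {"fact2", "fact3"}
print(strategy_1(a, b) == strategy_2(a, b))  # same result either way
```

Both strategies produce the same merged article; the difference is that strategy 2 has to hold all of B in working memory at once, which is the cognitive cost the text describes.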
Tyranny of adults
14 Feb. 2017
From 2000 to 2005, I was a political junkie. For a time, I read the Albany Times Union newspaper every day after school, and I also followed a number of political blogs. Politics was a main subject of conversation among my friends. Yet, because I was only 17 in 2004, I wasn't able to vote in the 2004 US elections. Another political-junkie friend of mine commented on the unfairness of this.
I don't have a great solution to this problem, although plausibly the voting age should be lowered to 16 or 14. In principle, one could create a knowledge test to determine eligibility to vote, but this would be open to corruption and would probably disenfranchise many poor voters. The history of literacy tests for voting in the US is not pleasant. Still, perhaps voting eligibility could be disjunctive: either you're at least age 18, or you pass a knowledge test.
Beyond voting, there are many instances where adults may not actually be more qualified than children but have power over children anyway. Adults can be extremely immature in their own ways but unlike children don't have anyone (other than the government, employers, etc.) enforcing discipline. Adults mock the way children may want sugary foods and TV, but we could similarly mock adults for sometimes not having self-control in cases like sex and alcohol. (Of course, I'd rather not mock either group.) The difference is that adults get to indulge their cravings, while children are prevented from doing so (to varying degrees depending on the whims of their parents).
People sometimes express a desire to go back to childhood. I feel the opposite way. Being an adult is way more fun, since I don't have someone else dictating rules about how I have to live. As an adult, I can do basically anything I want whenever I want as long as I get my work done, whereas children are forced to follow strict and often exhausting daily schedules in school and then at home when completing homework.
Some children are lucky and have parents who don't impose too many rules. Other children are controlled by more stringent autocrats whose commands may be fairly arbitrary and hypocritical. (For example, "You have to go to bed, but I get to stay up late.") And of course, some of those autocrats use physical and emotional violence whenever they want to, without repercussion. Adults may also be ruled by the whims of their bosses, but adults can usually switch employers or report abuse to HR, while children have no similar recourse (except in cases of serious child abuse).
I'm not an expert on the youth-rights movement, but I probably support many of its proposals—at least pending empirical examination of what consequences those policies would actually have.
In an episode of the sitcom Dinosaurs, Earl (the father) reads the following statement: "Teenagers learn to make choices by having choices". I roughly agree with this, except in cases of potentially irreversible exploration, like trying an extremely addictive drug or injuring oneself by doing a stupid stunt.
There is legitimate science behind the idea that teen brains are still developing, and teenagers don't always have as much executive control over their actions. But, as with other forms of discrimination, these generalizations don't apply in every case. I think I was about as mature by age 13 as I am today at age 29 (in terms of good judgment and self-control, though not raw life experience and wisdom). Meanwhile, some 29-year-olds may not be mature enough to make their own choices. (And the example of the strikingly immature 70-year-old US President Donald Trump is well known.)
Why I personally don't like prize contests
22 Jan. 2017
A few years ago, a friend of mine was seeking proofreaders for some chapters of his book. He offered a prize to whoever discovered the most writing errors in his texts. I suggested changing the payout scheme to instead give the prize to a random person in proportion to that person's fractional contribution to the total number of errors that were discovered. For instance, if N total errors are discovered, and you personally found 5, you would win with probability 5/N.

In my opinion, this is superior because then the marginal incentive for finding each additional error is roughly constant. (Only "roughly" constant because finding one more error also increases the value of N, but if N is already large, this increase of the denominator is small.) It's good for the marginal incentive to find errors to be roughly constant because the marginal value to the book author of finding each error is also roughly constant (assuming that, e.g., having 10 undiscovered errors left in your book is about twice as bad as having 5).

Having marginal incentives roughly match marginal value helps avoid problems that might arise from lumpier payout schemes. For example, suppose you only paid for every 100th error discovered. Then, if someone had only discovered 2 errors so far and didn't expect to find many more, there would be no incentive to find a 3rd error, even if doing so were really easy. Meanwhile, if someone has found 99 errors so far, the person may put in extraordinary amounts of work to find just one more error, perhaps wasting lots of time in the process.
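A sketch of the proportional lottery described above (the names and error counts are invented for the example):

```python
import random

# Each proofreader wins the prize with probability
# (errors they found) / (total errors found by everyone).

def pick_winner(errors_found: dict[str, int], rng: random.Random) -> str:
    people = list(errors_found)
    weights = [errors_found[p] for p in people]
    return rng.choices(people, weights=weights, k=1)[0]

contributions = {"alice": 5, "bob": 3, "carol": 2}   # N = 10 errors total

# Over many simulated draws, win frequencies approach 5/10, 3/10, 2/10.
rng = random.Random(0)
wins = {p: 0 for p in contributions}
for _ in range(10000):
    wins[pick_winner(contributions, rng)] += 1
print(wins["alice"] / 10000)  # ~ 0.5
```

Note also why the marginal incentive is only "roughly" constant: finding one more error moves your win probability from k/N to (k+1)/(N+1), a gain of (N-k)/(N(N+1)), which is close to 1/N when N is large.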
In the case of winner-take-all contests, there's almost no incentive for people who know they won't win to try. For instance, if I know that someone else will find at least 10 errors, but I don't expect to have time to find more than 3 errors, there's no point in my finding any errors. Even if I do notice one or two errors, I may not bother to even mention them (assuming I don't care intrinsically about the book's success) because of the hassle of doing so.
A similar point applies to other sorts of contests, such as essay prizes or innovation challenges, in which other highly skilled participants will be competing. I expect that my performance probably won't be the best, and if it's not the best, it doesn't have any payoff, so why bother trying?
My feelings on this point don't seem to be universal, given that many people are highly motivated by contests. People who have a decent shot at winning have reason to try hard. Perhaps some people are also overconfident. Maybe some are energized by competition for primal reasons. (Personally, I'm demotivated by competition. I prefer to work on something that other people aren't looking into.) Finally, if participants can see their relative positions, such as in a race, then if a competitor is neck-and-neck with you, this may lead each side to hit the gas pedal as much as possible because only a little bit of extra effort can allow for victory.
Another downside of contests is that the payoffs are risky, which is bad for risk-averse participants. Rather than randomly awarding the prize for finding writing errors, perhaps my friend should have simply paid people a fixed marginal rate for finding errors. The main reason not to do this is the transaction cost of, e.g., setting up PayPal payments for each person. Also, for small amounts of money and rational participants, randomness may not matter, because people should be roughly risk-neutral with respect to small payoffs.