A Collection of Quick Observations (2018)

By Brian Tomasik

This page collects some random thoughts from 2018 that are too short or too unimportant to deserve their own essays.

Day of the week is a checksum

First written: 2018 Dec 5.

When communicating dates over the phone, such as to schedule a doctor visit, it's common to say both the day of the week and the date, like "Tuesday, July 11". One reason to say the day of the week is just to give extra context on when the day is. For example, if someone is usually busy on Tuesdays, then when the person hears "Tuesday, July 11", she can switch to another date. However, another benefit of saying the day of the week is as a "checksum" of sorts on the date. Day of the week is like a hash function computed from the date: if the day of the week that the listener works out from the date doesn't match the day of the week that was spoken, then one of the two pieces of information was misstated or misheard, and the date can be double-checked.
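
To make the analogy concrete, here's a minimal sketch (the function name is my own) of how a listener could use the spoken weekday as a checksum on the spoken date:

```python
from datetime import date

def weekday_checks_out(year, month, day, stated_weekday):
    """Return True if the stated weekday matches the weekday computed from the date."""
    return date(year, month, day).strftime("%A") == stated_weekday

# "Tuesday, July 11" is consistent in 2017 but not in 2018, so a misheard
# day or a misheard date is likely to be caught.
print(weekday_checks_out(2017, 7, 11, "Tuesday"))  # True
print(weekday_checks_out(2018, 7, 11, "Tuesday"))  # False: 2018 Jul 11 was a Wednesday
```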

Recommending your friend's charity

First written: 2018 Aug 28.

In politics, business, and other domains, conflicts of interest can sometimes be a sign of corruption. For example, if you're friends with someone and then recommend his company for a government contract, you might have recommended the company to help out your friend rather than because it's the best applicant.

However, in other cases, trying to avoid conflicts of interest is wrongheaded. For example, the charities that I'm most involved with are the ones that I think have the highest impact relative to my values. If I then recommend those charities to donors, one could allege: "You're only recommending them because you help out with them and know the people on the teams." But this gets the arrow of causation backwards. In fact, I help out with them and know the people on the teams because I recommend the charities in the first place. If I came to believe that other charities had higher impact, I would go become friends with the people at those charities instead.

This is a common problem for individuals or organizations that aim to recommend charities or interventions. Charity recommender Alice will naturally communicate most with those people—call them Bob and Carol—who share Alice's goals and favored approaches for making a difference. But then when Alice recommends Bob and Carol, she may get called out for having a conflict of interest, due to being friends with them. This might lead to a perverse incentive for Alice to avoid interaction with the people she thinks are doing the best work.

The only solution I can see for this problem is for a critic, call him Dan, to get to know Alice, Bob, and Carol up close for a long time, in order to verify whether the relationship is primarily corruption or primarily camaraderie based around a shared vision. (Of course, there can always be some of both, due to our natural biases to favor our friends even if we don't intend to.)

Editing out "umm"s

First written: 2018 Aug 17

I say "umm" a lot during speeches and interviews. Many people don't notice, but some people who do notice find it very annoying. I "umm" a lot because my speech is pretty irregular: I think of a small chunk of words, say them, and then have to think about the next chunk. Some people are able to speak more continuously, perhaps by planning what they're going to say next while simultaneously speaking their current thought. Maybe I could do this if I practiced a lot, but I haven't prioritized it yet because I don't do that much public speaking. I've noticed that Sam Harris is an example of someone who seems to be able to compose flawless sentences on the fly and speak them without pauses, which seems to me like a superhuman ability.

I think I've partly been trained to say "umm" based on my experiences in conversation. If I try to pause instead of "umm"ing during dialogue, interlocutors tend to interrupt me. However, it seems that most people dislike "umm"s in the context of a long speech by a single person.

Some podcasts I've been on edit out "umm"s, "you know"s, and false starts in speaking. One example was an interview I did with the Future of Life Institute. During the actual discussion, I literally said "umm" about every 10 to 15 seconds. I also often started a sentence, stopped after a few words, and then rephrased how I said the sentence. Most of these speech irregularities were removed in the final podcast by editing. The flow of the audio is obviously improved as a result, though I feel a bit awkward about listeners assuming that I'm much better at speaking than is actually the case. It's the audio equivalent of airbrushing photos.

Regular eating for regular sleeping?

First written: 2018 Jul 11

For many years I've followed the advice: "Eat when you're hungry, and stop when you're full." I try to avoid eating when I'm not hungry. Unfortunately, this can sometimes lead to irregular eating patterns. And I find that a big determinant of my sleeping schedule is my eating schedule. I can't fall asleep unless I've eaten within the last few hours, or if I do fall asleep hungry, the sleep is of poor quality because my body is more stressed than normal. So irregular eating may contribute to irregular sleeping.

An example of this kind of sleep disruption occurred on 2018 Jul 9. For several days before that date, I had been waking up and going to sleep at roughly the same time each day. On 2018 Jul 9, for some reason I wasn't hungry following exercise the way I normally would be. I waited a few hours until I got hungry, and during that time, not having eaten kept my body from getting tired. I ate dinner a few hours later than normal and therefore went to sleep a few hours later than normal. However, since my body was still used to getting up at its previous wake-up time, I awakened early the next day, with only about 5.5 hours of sleep, unable to get back to sleep. This sleep disruption reduced my alertness for parts of the next ~2 days until things became somewhat more regular again.

Probably what I should have done in the above situation was eat at least a bit following exercise, even if I wasn't very hungry, to keep my rhythm more regular. If I eat when I'm not that hungry, I usually eat less than if I eat when I am hungry, so it's not clear to me that eating when not hungry is actually bad from an overeating perspective, and presumably the benefits in terms of regularity of rhythm are worth it.

Should you proofread a friend's application?

14 May 2018

Suppose a friend is applying to college, or for a job, or for a grant. Should you offer to proofread or otherwise improve the writing in your friend's application, ignoring the time that it takes to do so?

If you take a relatively impartial view of the situation, it's not obvious. If you want the admissions committee to make the best possible choices, then you should want the applications they receive to accurately reflect the abilities of the applicants. If most other applicants aren't getting proofreading help, then if you do provide proofreading help to your friend, you may be giving your friend an unfair advantage and (slightly) distorting the judgment of the admissions committee. On the other hand, if everyone else is getting proofreading help too, you should help your friend with proofreading in order to prevent the admissions committee from mistakenly penalizing your friend. It's also possible that, for example, you know your friend would be an excellent choice, but your friend has really bad grammar. In that case, it might improve the accuracy of the committee's decision for you to proofread the application.

You might have reasons to want your friend to be admitted even if he's less qualified than other people. For example, maybe your friend will, unlike most other applicants, use the position to earn lots of money to donate to effective charities. In this case you might want to help your friend with the application even if it distorts the admission committee's judgment.

The idea of letting the admissions committee use their best judgment about whom to admit can be a helpful attitude if you find yourself rejected from a college or job. Maybe the committee was right that you weren't the best fit. You might appreciate the fact that you were rejected so that someone else who was a better fit could get in instead.

Getting a tiny bit of Debrox in my mouth

14 May 2018

Today I used Debrox ear drops in preparation for an upcoming ENT visit that will likely include earwax removal. As I was putting drops in my right ear, a tiny bit of Debrox slid down my right cheek and touched my mouth. Apparently a tiny bit got into my mouth because I tasted a mild amount of the substance. I promptly rinsed out my mouth without swallowing.

I assumed this was almost certainly harmless, but I wanted to check online to make sure. The Debrox box said: "If swallowed, get medical help or contact a Poison Control Center [...] right away." I assumed this message was intended for kids who accidentally ingest large portions of the Debrox bottle, rather than for getting one or two drops in your mouth without even swallowing. So I didn't want to bother calling a Poison Control Center only to clog up their lines with an almost certainly pointless question.

However, I decided to try out the webPOISONCONTROL® tool anyway. If nothing else, I could learn how to use the tool, which might be useful in the future. I was pleasantly surprised to find that the tool gave me an answer without bothering a human. After a series of questions, the tool said: "Based on the information you provided, it is unlikely that significant toxicity will develop. You do NOT need to go to the Emergency Room." I was glad to get a realistic answer—rather than a cover-your-ass answer saying that I should seek professional help anyway.

Making fixed identifying information changeable

7 May 2018

In the aftermath of the 2017 Equifax data breach, Sarah Jamie Lewis quipped: "Don't forget to change your name, date of birth, home address and social security number regularly." While probably intended as humorous, I think this is actually a good idea. Fixed pieces of data should generally never be used for identification. If the US government were like a tech company, you would be able to log in to your account and change your Social Security number when you suspected that someone else might have it. You might also be able to change your username. And the date of birth used for identity verification could be replaced by another arbitrary, changeable identifier, or perhaps a two-factor authentication key.

Home address is a bit different because it's not just used for identification but also for sending mail to you, which means it can't just be an arbitrary string of characters. Still, one improvement idea could be as follows. The United States Postal Service (USPS) would create an ID that refers to your address. When you order a package on Amazon, you only tell Amazon your address's ID. Only USPS or other delivery services would know your actual address. The address ID could also be changeable. This would be an easy way to combat junk mail: if you keep getting junk mail sent to your current address ID, you could just change your ID, and anything still addressed to the old ID would no longer reach you. (Perhaps a similar approach ought to be used for email addresses: you should be able to change your email address without having to close out your entire email account.)
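
Here's a minimal sketch of how such an address-ID scheme might work. This is a hypothetical design of my own, not a description of any real USPS service, and the names in it are made up:

```python
import secrets

class AddressRegistry:
    """Toy model of a carrier that maps opaque, changeable IDs to real addresses."""

    def __init__(self):
        self._id_to_address = {}

    def issue_id(self, real_address):
        """Create a new opaque ID that points at a real address."""
        address_id = secrets.token_urlsafe(8)
        self._id_to_address[address_id] = real_address
        return address_id

    def rotate_id(self, old_id):
        """Replace an ID (e.g., one that junk mailers have); the old ID stops working."""
        real_address = self._id_to_address.pop(old_id)
        return self.issue_id(real_address)

    def resolve(self, address_id):
        """Only the carrier would call this, at delivery time."""
        return self._id_to_address.get(address_id)  # None if the ID was rotated away

# A merchant stores only the ID; after rotation, mail to the old ID can't be delivered.
carrier = AddressRegistry()
my_id = carrier.issue_id("123 Main St, Anytown NY")
new_id = carrier.rotate_id(my_id)
print(carrier.resolve(my_id), carrier.resolve(new_id))  # None 123 Main St, Anytown NY
```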

"Lottery of birth" without souls

2 May 2018

One way to evoke empathy for other organisms experiencing hardship is to think, "If the lottery of birth had turned out differently, I could have been in that person's position."

A naive way to understand this idea is to invoke souls. I can imagine the lottery of birth as the process of deciding which body my soul will be attached to.

However, when we take a non-dualist view of identity and consciousness, we need a different account of the lottery of birth. Strictly speaking, if I identify "me" with my exact current brain state, then it would be very unlikely that "I" could have been in someone else's position, because if "I" (i.e., someone with my exact genes and epigenetic parameters) had grown up in some other situation, my brain would almost certainly have been different from what it is now. Still, there might be enough similarity between "a person with my genes growing up in this environment" and "a person with my genes growing up in that environment" that I could still attribute the identity of "me" to both of them.

Another approach could be to imagine a "body swap" (a common storytelling device), which would help overcome the problem that if I had grown up in a different environment, my brain would be different. In a body swap, we can put my exact current brain into a new situation. Of course, there remain steep technical difficulties about hooking up one person's brain to another person's body in a functional way. And body swapping also has the problem that I wouldn't know what the person I was swapped with would know, so my experience in his/her body wouldn't match his/her own experience. (This problem is precisely what makes body-swap fiction fun.)

A third approach is to swap not just brains but entire bodies, as in the film Trading Places. This is arguably the least hypothetical way to think about lottery of birth, because these kinds of trades of position could theoretically be done in real life without any fancy tech. There still remains the issue that I would lack the memories and skills of the person I'm trading with. Plus, other people would notice the swap and would treat me differently than they would treat the other person.

Ultimately it doesn't matter if we come up with an exact physicalist description of the lottery of birth, because the concept is just an exercise in empathy-building. Even if we simply use a naive picture with souls, maybe that's good enough most of the time, if the point is just to make us care more about each other.

Facebook etiquette

23 Apr. 2018

Some random suggestions for etiquette on Facebook:

Analogies to link trading

21 Apr. 2018

The following idea occurred to me spontaneously after waking up halfway through sleep one night a few weeks ago, apropos of nothing.

"Link trading" is when two or more people link to each other's content in the hopes of boosting their search-engine rankings. (I've never done it because it's dishonest and would be time-consuming; it seems better to just produce content worth linking to.)

The general idea of "you promote me, and I'll promote you" is implicit in some other practices that people do. For example:

I prefer college lectures over seminars

5 Apr. 2018

When I was at Swarthmore College, there were two main types of courses: lectures and seminars. A similar distinction is common at other colleges. In lecture courses, the professor would explain the course material at the blackboard, with some degree of student participation when asking and answering questions. In contrast, seminars involved students doing the presenting. In math and physics seminars, students were assigned one or a few homework problems to work out on their own, and they would then present the solutions in front of the class.

Seminars are presumably considered more advanced, because students have to learn the material on their own rather than hearing it from a professor's lecture. Swarthmore also had "first-year seminars" to give new students a taste of this instructional format. I wonder whether seminars are partly an excuse for professors to do less work, since professors don't have to lecture much and mostly watch student presentations.

The big complaint I had with seminar courses is that most students are terrible at explaining things. :) Some students didn't understand the material and presented it wrong. Other students wanted to show off their smarts by rushing through the answer to a problem with limited explanation of the intermediate steps. And most students have less practice with how best to teach the material than a professor does. (This is especially true at a small, non-research-heavy school like Swarthmore, where teaching ability is an important criterion in the hiring process.)

Another problem with seminars is that, because students are evaluated on their presentations of a limited set of problems, they have less incentive to study all the course material in a balanced way. Instead, it's more effective to focus on mastering just a few homework problems.

Seminars might make more sense in the humanities, where a lot of the goal is to have multivocal discussions rather than univocal lectures. Even in this case, I would imagine that hearing a lecture on what different experts think of a topic would often be more enlightening than hearing the opinions of non-expert classmates (some of whom have only skimmed the day's reading material).

Mere exposure and celebrities

1 Apr. 2018

Why do people prefer to see movies with big-name celebrities rather than unknown but equally talented actors? One possible explanation is that the presence of a celebrity you like is a predictor of other things, such as that the movie's script will be interesting or that the acting will be high-quality. However, I suspect that the main reason people like celebrities is closer to the "mere-exposure effect": we like what's familiar.

I find that as I get to know a celebrity (or author, or anyone) more, that person begins to feel more like a friend or family member. Once that happens, it doesn't matter as much if the person performs well, because you love the person unconditionally. Parents don't go to see their children perform in the school musical because of the quality of the singing but just because they love their kids. Similarly, even if a celebrity is only mediocre at acting, you might enjoy seeing that person in movies because you feel kinship with the person, even if some random stranger could do a better job at acting the part. The same idea probably partly explains why particular bloggers, politicians, and other public figures gain more popularity than the quality of their work by itself would warrant.

In the past I might have felt embarrassed about being subject to the mere-exposure effect, but now I'm happy to accept it and enjoy things based on it. The only case in which it's problematic is when it biases your judgment on important issues. (Which movie to watch next is generally not an important choice that requires unbiased judgment.)

Ordinary morality ignores population size

27 Mar. 2018

I think the most powerful lever for changing the moral value of the world is often to change the number of sentient individuals the world contains. For example, it's much easier to decrease the number of wild animals than it is to modify the lives of those animals on a large scale to make them better or worse. Similarly, to reduce suffering, it would be much easier to refrain from colonizing space than it would be to massively reduce suffering in the lives of the multitudinous beings created as a result of space colonization. Likewise, to increase happiness, it's often much easier to create new beings than to improve the lives of existing beings (especially until the hedonic treadmill is overcome).

My sense is that common-sense morality, and the kinds of moral values that inform political discourse, almost entirely omit population size as a consideration, whether for trying to reduce suffering or increase happiness. In political discourse or personal ethics, it's as though you have to take the population size to be whatever it will be, and the only allowable levers are those that change the quality of life of whatever people exist. (One exception to this point may be veg*ism, which aims to reduce the number of farmed animals that will be born.) For example, if you say you want to "reduce suffering", ordinarily this will be interpreted to mean "reducing the suffering of whatever people exist" rather than "causing fewer people to exist". And likewise if you say your goal is to "increase happiness": the idea of increasing population size to increase happiness is never considered.

Perhaps part of the reason that ordinary morality omits consideration of population size is that past political efforts to influence the human population size have often been morally problematic or worse. (That said, some levers on population size, such as government subsidies for parenthood, are less likely to lead to dark places.)

However, it remains unexplained why an individual's morality (rather than political morality) rarely considers the decision to create a new life as being very morally good or very morally bad, at least not until that life is conceived. Instead, the decision of how many children to create is regarded as a personal choice, and the ethics of parenthood doesn't really begin until after the children already exist. (There may be some exceptions, such as when considering one's environmental impact by creating new people.)

I love mindless chores

26 Mar. 2018

When I was a kid, I hated doing housework, like sweeping my room. I think the main reason was that it was boring. But as an adult, I generally look forward to mindless chores.

One reason for my change in attitudes was the invention of the iPod, which allowed me to be mentally stimulated while doing mechanical tasks. (For noisy tasks, you can wear earmuffs over the iPod earbuds.) Formerly boring tasks were now an opportunity to relax and listen to something interesting.

Sometimes I don't listen to podcasts while doing housework but instead just reflect on various topics. Housework provides an excuse to do this kind of relaxing contemplation that I rarely get a chance to do otherwise. Housework is also a low-end form of exercise, which makes it more rewarding than doing contemplation while sitting still.

I also enjoy mindless computer tasks, such as making some repetitive change to my websites that's too complicated or risky to do in an automated way. These mindless tasks give me an opportunity to listen to music, which I almost never do otherwise, because music distracts me from focused mental work.

Theoretically I could also listen to music while doing physical chores, but I typically avoid that for two reasons:

  1. Most physical chores are even more mindless than computer tasks, which means I can listen to verbal podcasts rather than just music while doing them. Even mindless computer tasks require some carefulness to ensure that you don't mess something up on your computer, especially if you're editing a document. In contrast, physical tasks are typically more forgiving of mistakes; it's ok if you drop your laundry on the floor from time to time. Plus, physical tasks seem to require less of the verbal part of my brain than computer tasks, which means that verbal podcasts are less likely to interfere with my performance of physical tasks than computer tasks.
  2. Computer tasks can sometimes be tedious, so it's nice to save up the pleasure of music as a reward for those chores in particular, rather than "spoiling myself" by listening to music more often.

I imagine that if all I did was manual / mindless labor, I would get tired of it. Plausibly the reason I enjoy it so much is that I don't get very much opportunity to do it.

Distractors revert you to your priors

20 Mar. 2018

When I walk on the sidewalk, I monitor the ground for bugs in order to avoid stepping on them.

At my old apartment, there was a patch of sidewalk near a tree. At certain times of the year, the tree dropped something (berries? seeds? I don't remember) that looked a lot like bugs. When walking through the fallen tree litter, I would intuitively be nervous about stepping on bugs, until I reminded myself that the tree litter was a "distractor" that provided no evidence of bugs being present.

In the presence of tree litter, the probability of observing the visual scene that I observe is roughly the same whether a bug is present or not, since whether a bug is present or not, there will be lots of bug-like things littering the sidewalk. In other words, P(observations | bug) ≈ P(observations | no bug), so that P(bug | observations) / P(no bug | observations) = P(observations | bug) * P(bug) / [P(observations | no bug) * P(no bug)] ≈ P(bug) / P(no bug). In other words, in the presence of distractor cues, we can basically revert back to the priors we would have before looking at the data. The probability of stepping on a bug while walking on the tree-litter-covered sidewalk is about the same as the probability of stepping on a bug while walking without looking over an ordinary sidewalk (ignoring real-world complications like the possibility that bugs are particularly attracted to or repelled by the tree litter).
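
As a quick numeric illustration of the odds calculation above (the numbers here are made up purely for illustration):

```python
# When the two likelihoods are (roughly) equal, the posterior odds reduce to the prior odds.
prior_bug = 0.01                    # P(bug) on a given sidewalk step
p_obs_given_bug = 0.9               # tree litter looks the same either way...
p_obs_given_no_bug = 0.9            # ...so the two likelihoods are about equal

prior_odds = prior_bug / (1 - prior_bug)
posterior_odds = (p_obs_given_bug * prior_bug) / (p_obs_given_no_bug * (1 - prior_bug))
print(prior_odds, posterior_odds)   # identical: the observation adds no new information
```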

On a non-litter-covered sidewalk, observations do provide evidence for the presence or absence of bugs. For example, if you look at a sidewalk step and don't notice any bug-like things on it, then P(observations | bug) is lower than P(observations | no bug), since if a bug were present, it's unlikely you'd see an empty sidewalk step. (Obviously here I'm only talking about visible bugs like ants. The ground may contain tiny bugs like springtails that are too small to see without looking extremely closely.)

Plot anti-twists

12 Feb. 2018

Note: This section contains spoilers for the movies Scream 2 and Scream 3.

When I was in 6th grade, my homeroom classroom had an ancient computer. My teacher found an old collection of computer games stored on large floppy disks. Each day, the student who got to the computer first was allowed to play games for the 10-15 minutes before school started. I often got to the computer first and played several games. One of them was a rock–paper–scissors game. While I don't know for sure what the computer's strategy in this game was, my best inference for how it worked was as follows.

  1. On the first round, the computer picks randomly.
  2. Suppose you play "rock" the first time. Then on the next round, the computer assumes you'll play "rock" again, so it plays "paper" in order to beat "rock".
  3. What about on the third round? Suppose you played "paper" on the second round. The computer doesn't naively assume that you'll play "paper" again. Instead, it assumes that you'll assume it'll follow its second-round strategy. That is, the computer assumes you'll figure that it'll play "scissors" in order to beat you if you play "paper" again. If you think the computer will play "scissors", you should play "rock". Based on this, the computer decides to play "paper" in order to beat the "rock" that it expects you to play.
  4. On the fourth round, the computer assumes that you'll assume that it'll play its third-round strategy.
  5. And so on....
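
Here's a minimal sketch of that inferred strategy in code. This is my reconstruction, not the game's actual source, and the names are mine:

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # BEATS[x] beats x

def computer_move(round_number, opponent_last_move=None):
    """Guess the opponent's next move via nested 'they expect my counter' reasoning."""
    if round_number == 1:
        return random.choice(MOVES)          # round 1: pick randomly
    predicted = opponent_last_move           # round 2: assume they repeat their last move
    for _ in range(round_number - 2):        # each later round adds a level: they expect
        predicted = BEATS[BEATS[predicted]]  # my counter to that, so they counter it
    return BEATS[predicted]                  # play whatever beats the predicted move

print(computer_move(3, "paper"))  # "paper", as in the third-round example above
```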

A similar idea in mystery stories can be the following. Imagine that you're the computer, and the human rock–paper–scissors player is the writer of a murder mystery.

  1. In some stories, it's made rather obvious who the killer is. The strategy of the human player (story writer) is rather simple: give away clear clues about who the killer is. The computer (the audience) can win the game (solve the mystery) merely by assuming a very simple strategy on the part of its opponent (the story writer).
  2. Usually a mystery writer doesn't make it quite so easy to solve the mystery. The writer builds up a seemingly innocent character who turns out to be evil in the end. This is like the human player (the writer) assuming that the computer (the audience) will assume the human player (the writer) will follow a simple strategy of making people who appear innocent turn out innocent. Given this, the human player (the writer) can make an innocent person turn out guilty in an effort to fool the computer (the audience). In turn, the audience can counteract this by expecting a plot twist in which a seemingly innocent player turns out guilty.
  3. A third level is for the writer to assume the audience will expect a plot twist, and then make what seems like an upcoming plot twist turn out not to be a plot twist. In other words, the writer can make it seem like there's an innocent-seeming character who will turn out to be guilty, but in fact that innocent character is innocent after all.

I found myself falling for what seemed like this third-level strategy on the part of the writers in both Scream 2 and Scream 3. In Scream 2, cameraman Joel Jones appears innocent because he's "worried when he finds out about the fate of Gale's former cameraman". Because he's African American, audiences might also expect that it would be politically incorrect for a movie from the 1990s to make him the killer. Based on these points, it would be quite a surprise if he turned out to be the killer, which is what I expected would be the plot twist. However, it turned out that Joel was indeed innocent after all.

In Scream 3, Angelina Tyler is portrayed "as a sweet ingenue actress from the Midwest", which makes her a good candidate for a plot twist. The main killers in the previous two Scream movies were male, so I also expected a plot twist in which a female would be the killer. However, this was not the case, and Angelina turned out to be innocent after all. (That said, the Wikia page reports: "In an early version of the script, Angelina was a second Ghostface, Roman's lover and accomplice[...]. The idea was later scrapped.")

Update (13 Feb. 2018): After writing the above, I began watching Scream 4 and came across the following line about horror movies (49m54s into the film): "The unexpected is the new cliché. [...] Modern audiences get savvy to the rules of the originals, so the reversals become the new standard." We might say that when the unexpected becomes expected, the expected becomes unexpected.

Gender clusters

10 Feb. 2018

Many techniques for data clustering require choosing an arbitrary number of clusters (such as the parameter k in k-means clustering). While there are various techniques for choosing the number of clusters to use, "The correct choice of k is often ambiguous, with interpretations depending on the shape and scale of the distribution of points in a data set and the desired clustering resolution of the user" (Wikipedia "Determining ...").
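
As a small illustration (a sketch assuming scikit-learn is available; the data are synthetic), the same set of points can be partitioned into 2, 3, or 4 clusters, and nothing in k-means itself tells you which k is "correct":

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2-D data drawn from three blobs.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=1.0, size=(50, 2)),
    rng.normal(loc=(4, 0), scale=1.0, size=(50, 2)),
    rng.normal(loc=(8, 0), scale=1.0, size=(50, 2)),
])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    print(k, np.bincount(labels))  # each k yields a perfectly usable-looking partition
```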

Genders are clusters into which we categorize people based on various traits. Like almost all categories above the level of fundamental physics, these clusters have arbitrary boundaries. Likewise, the number of clusters to use is arbitrary. For this reason, I find it interesting when people are adamant that there are only two genders, as though this was an issue that somehow had an objective answer. Of course, presumably people get worked up over the topic because it reflects deeper underlying emotions on either side. It is true that (ultimately arbitrary) matters of language use, such as whether there are two or more than two genders, affect people's self-esteem and feelings of identity, and in this sense, the discussion is far from trivial.

Mindless music?

9 Feb. 2018

In Orwell's novel 1984, the protagonist Winston watches a large woman as she sings:

It was only an 'opeless fancy.
It passed like an Ipril dye,
But a look an a word an the dreams they stirred!
They 'ave stolen my 'eart awye!

The narrator explains: "The tune had been haunting London for weeks past. It was one of countless similar songs published for the benefit of the proles by a sub-section of the Music Department. The words of these songs were composed without any human intervention whatever on an instrument known as a versificator. But the woman sang so tunefully as to turn the dreadful rubbish into an almost pleasant sound."

Personally, I enjoyed these lyrics, and in general, I don't think pop music is an opiate of the masses. However, I've noticed that if I have a song stuck in my head and I'm mentally humming it during the day, this is one of my few waking moments in which I'm not actively generating new verbal thoughts. The rest of my day is filled with an inner monologue about something or other. I guess I also don't generate my own thoughts when I'm reading or listening to words from an external source, but unlike repetitive song lyrics, these external words tend to teach me new things.

For people who meditate, I suppose that meditation is another period of waking life during which (ideally) no inner monologue is present. Perhaps having some such periods is mentally useful.

Mental addition of small numbers

6 Feb. 2018

Add 6 + 7 in your head. How do you go about it? Maybe you have this sum memorized, but if so, try another pair of small numbers that you don't have memorized. I notice that when I add 6 + 7, I do it as follows. I very roughly picture a "ruler" that goes up to 10, and I see that 7 is 7/10 of the way to 10. 6 is 3 + 3, so I can "fit" 3 from the 6 into the ruler to make it 10 out of 10 high, sort of like fitting blocks in a game of Tetris. Then I add the leftover 3 to 10 to make 13.

Here's another example with subtraction. What's 11 - 4? Well, 4 is 1 + 3, and 11 - 1 = 10. Then, I can see that 10 - 3 = 7.
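
Here's a toy sketch of the "fill up to 10" trick in code (purely illustrative; the function names are mine):

```python
def add_via_ten(a, b):
    """Add small numbers by borrowing from b to top a up to 10, e.g., 7 + 6."""
    needed = 10 - a                          # 7 needs 3 more to reach 10
    borrowed = min(needed, b)
    return (a + borrowed) + (b - borrowed)   # 10 plus the leftover 3 -> 13

def subtract_via_ten(a, b):
    """Subtract by first stepping down to 10, then taking off the rest, e.g., 11 - 4."""
    step_down = a - 10                       # 11 - 1 gets us to 10
    return 10 - (b - step_down)              # then 10 - 3 -> 7

print(add_via_ten(7, 6), subtract_via_ten(11, 4))  # 13 7
```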

I don't think I was taught or even explicitly decided to do mental addition and subtraction in this way. Instead, it seems to be a trick I've converged upon unconsciously.

Language evolution and file-format rot

First written: 28 Jan. 2018; last update: 7 Mar. 2018

Pogue (2017) explains "file-format rot" with an example: "I spent years of my life creating musical scores with early sheet-music software such as Professional Composer, Deluxe Music Construction Set and HB Engraver. Each one took hours and hours and hours. And now? I can't look at those scores. Apart from the ones I have as printouts, I'll never see them again. The parent software programs are long gone—and with them, all of the notes and chords locked forever in their documents."

Of course, with great effort, it may be possible to decode what a series of bits in an old file meant, especially if we have some clues to work with.

This process of painstakingly decoding past data is reminiscent of modern scholars carefully trying to understand ancient texts. We can think of a human brain's language-understanding abilities as a "software program" that can read texts written in its language. Over time, as language evolves and people lose touch with the cultural context in which past texts were written, we lose the "software" needed to read past "files". Of course, since language evolution is gradual, modern software programs (i.e., modern human brains) can still often decode some portions of old files (i.e., old texts). For example, modern English speakers can make sense of many of the words in Shakespeare plays, even if some parts of Shakespeare's data files are uninterpretable by untrained modern ears.

In a talk on file-format rot, Vint Cerf discusses (7m2s) historical human writings on vellum: "The fact that you can see the vellum means that you don't need any additional software. You just need wetware [i.e., human brains] in order to correctly interpret what's on the vellum." While it's true that we don't need computer software to decode writings on vellum, we do need linguistic "software" within the human brain capable of deciphering patterns of symbols into meaning.

Pogue (2017) offers a solution to file-format rot for digital data: "Had I opened those Word 1.0 documents and resaved them every few years, with successive versions of Word, I'd still have them." However, the same approach doesn't work perfectly for human texts, because the written word is a "lossy compression" of the speaker's ideas, and a translation is a lossy copy of the original text. Therefore, successive translations of texts are liable to show generation loss, similar to what we see with the game of Chinese whispers.

Magic and software

24 Jan. 2018

I find that typing Unix commands into a command prompt feels like casting spells. You have to find the right "magic words", and then as if by magic, your wish is the computer's command. Similar themes have apparently been discussed by a number of authors.

Arthur C. Clarke's third law is that "Any sufficiently advanced technology is indistinguishable from magic." We might rephrase this as "If it's a sufficiently advanced technology, then it's magic." I find that roughly the converse is also often true: "For a given instance of magic, there often exists a hypothetical technology that could implement it." Of course, this is usually trivially true if we imagine implementing magic within a computer simulation à la The Matrix. However, it's often applicable even to our actual world.

I'm a complete novice regarding The Lord of the Rings, but in the 2001 movie The Fellowship of the Ring, I noticed that some of the magic in the story can be made sense of through the lens of information technology. Examples:

Another example is the Magic Mirror in Snow White. It tells the Evil Queen "who is the fairest in the land". In the 2007 film Sydney White, this reflective surface takes the form of a computer screen showing a campus "Hot or Not" website that ranks people by attractiveness.

Finishing long projects

16 Jan. 2018

Working on a long-term project can be difficult because your initial passion for the work may fade over time. It can also be stressful to feel like you're never going to finish the task.

There's a famous quote about finishing long projects: "When eating an elephant take one bite at a time." I find this advice can sometimes be helpful. In particular, if I get stressed about how far I am from completing something, I stop thinking about completion and just enjoy biting off whatever tiny chunk I can today. (This is similar to "living in the moment" as an approach to life.) I also allow myself to take breaks from a project, as long as the breaks aren't so long that I later forget what I was doing when I pick it back up.

A possible downside with the "one bite at a time" approach is that it may cause the project to take a long time, because you're not taking shortcuts or cutting off new possible todo items in order to make sure you finish soon. Whether this is in fact a downside depends on your personality and how important the project is. I highly value the feeling of not being stressed or pressured to finish something by a fixed deadline, and I would feel less positively about my work if I tried to force myself to finish it by a given date.

Painfulness as a heuristic for unhealthiness

9 Jan. 2018

Particularly when I was a teenager, I sometimes had an attitude that I didn't mind enduring discomfort, such as being cold, because I was mentally strong and could tolerate such inconveniences. Similarly, I thought schoolwork was more important than relaxation, so I stayed up as late as needed to finish my assignments.

Over time, I came to realize that caring about discomfort is not merely "wimping out", but there's often a good reason to avoid discomfort. Evolution usually made things painful on purpose. For example, being too cold can lead to frostbite, putrid-tasting food may be filled with a higher-than-normal density of bacteria, inadequate sleep leads to a variety of health issues, and stress is bad for life expectancy. In the absence of strong scientific evidence to the contrary, following the heuristic of "avoid painful things" seems like reasonable advice for not damaging yourself or shortening your lifespan. (Maybe this is obvious to most people, but it took me a while to learn it.)

There seem to be some exceptions to this heuristic. For example, it seems like some degree of fasting on occasion is plausibly healthy, even though it's uncomfortable. And of course, getting shots or having your teeth cleaned are counterexamples in the modern world. (That said, I personally sort of enjoy getting shots.)

Signals of comfort or discomfort from one's body are often far more available and precise than scientific findings. For this reason, I think you should usually trust what your body is telling you rather than advice on what you're "supposed" to do. For example, standard wisdom says that you shouldn't eat a lot of saturated fats, but I find anecdotally that I feel happier, healthier, and more ready to exercise when I have a nontrivial amount of saturated fats in my diet. In general, I think eating based on what makes you feel good is superior to eating based on "what science says", both because scientific findings in this area are so mercurial and because a generic finding about what the average American should do may not apply in your particular case. Of course, I don't have data to suggest that eating based on bodily signals leads to longer lifespans, but at least it does seem to lead to improved vigor in the short run, which is also important.

It's interesting to remember that all non-human animals, and many humans, have almost no "book learning" about how they're supposed to behave. Many animals don't even have social instruction. Yet these animals act appropriately in most situations. While one can argue that humans' modern environments are different from ancestral environments, so that adaptive instincts and reward signals no longer apply, I think it's still generally safest to err on the side of assuming that inborn reward signals do still apply unless you have very strong evidence to the contrary.