A Collection of Quick Observations (2015)

By Brian Tomasik

This page collects some random thoughts from 2015 that are too short or too unimportant to deserve their own essays.

Concept search engines

6 Dec. 2015

One of my middle-school teachers tried to show his class how to think rather than how to memorize. He found learning lists of facts out of context to be useless. Rather, he challenged his students—in homework and on tests—to apply what they knew to new situations they hadn't encountered before, relying on reasoning skills to figure out the answers. While Jeopardy!-style questions are useful for game shows because the answers are uncontroversial and precise, facts out of context do little to enhance one's understanding of the world.

When I learn new material, I try to get the gist—absorbing the high-level concepts and thinking patterns that the domain has to offer. Then I can pattern-match those ways of thinking to other contexts where they might apply.

In one Q&A on C-SPAN, Steven Pinker mentioned that one topic that he found interesting was how a person could draw analogies between domains. I assume that the brain's connections between clusters of neurons help encode conceptual relations, like "If something like X is true and something like Y is false, this tends to lead to a dynamic process in which Z is the long-run equilibrium outcome." Then the brain can pattern-match this structure against various concrete things it observes in the world and notice when a given thing matches the concept. Presumably this pattern matching works because if a given input thought matches the concept, then a lot of the concept-encoding neurons will be activated simultaneously, allowing the concept to become prominent and broadcast itself to other parts of the brain.

Production systems capture this idea of pattern matching to some extent. (Actually, it was this kind of conceptual pattern matching in my brain that led me to think of production systems as being a relevant analogy here.)
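A minimal sketch of what I mean by a production system, with invented facts standing in for the abstract schema above (this is a toy illustration, not any particular cognitive architecture):

```python
# Toy production system: condition-action rules pattern-match against a
# working memory of facts, and matched rules "broadcast" new facts.

def run_production_system(memory, rules, max_steps=100):
    """memory: a set of facts; rules: (premises, conclusion) pairs.
    Fire every rule whose premises all appear in memory, until nothing
    new can be added (quiescence) or max_steps passes elapse."""
    for _ in range(max_steps):
        fired = False
        for premises, conclusion in rules:
            if premises <= memory and conclusion not in memory:
                memory.add(conclusion)  # matched concept becomes prominent
                fired = True
        if not fired:
            break  # no rule matched: stop
    return memory

# An abstract schema like "if something like X holds and something like Y
# fails, Z tends to be the long-run outcome", chained with a second rule:
rules = [
    ({"X is true", "Y is false"}, "Z is the long-run outcome"),
    ({"Z is the long-run outcome"}, "plan around Z"),
]
facts = run_production_system({"X is true", "Y is false"}, rules)
assert "plan around Z" in facts
```

The match-then-broadcast loop is the analogy: once a rule's premises are simultaneously "activated", its conclusion becomes available to all subsequent rules.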

While knowing facts is less important in the age of Google, knowing concepts remains useful, because we can't—yet—Google a concept. One existing way to identify conceptual relations is via Wikipedia, especially the "See also" section. For example, hylozoism is the view that all matter is alive. This concept has the general structure that "All matter is X". A related idea is panpsychism—the claim that all matter is conscious. And the "See also" section for hylozoism references the article for panpsychism.

Concept search is an existing field of study. Improving it would push slightly closer to artificial general intelligence (which would probably be bad), because drawing analogies is something the brain does well but current computers typically don't.

Insofar as most words already represent concepts to lesser or greater degrees, current word-based search engines are already conceptual search engines. The main advantage of a purer conceptual search engine would be the ability to query for structural relationships that don't yet have names or that are too obscure to ever be named. We could call these "anonymous concepts", in analogy with anonymous functions.
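For readers unfamiliar with the analogy: an anonymous function is one defined purely by its structure, without a name of its own. A trivial sketch:

```python
# A named function vs. an anonymous (lambda) function of the same structure.
def double(x):
    return 2 * x

anonymous_double = lambda x: 2 * x  # same structure, but no name of its own

# Both encode the pattern "twice the input"; only one carries a label.
assert double(21) == anonymous_double(21) == 42
```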

Deflating extraordinary claims

11 Nov. 2015

As I've gotten older, I've become increasingly inclined to deflate astonishing ideas. For example, I'm skeptical that artificial general intelligence will reach vastly superhuman abilities within hours or days after reaching human-level intelligence. I don't think different charities differ millions of times in cost-effectiveness. Building a movement now is not (in general) many times more valuable than building it a decade later. I don't think focusing on the far future is obviously astronomically more valuable in expectation than focusing on short-term suffering.

The trend over time is toward greater humility. This is consistent with the stereotype that "young people think they know everything and believe past generations were doing things all wrong, but as they grow older, they better appreciate why things are as they are." Of course, there are plenty of topics that I still think are sorely neglected. And many extraordinary facts, such as about physics or biology, do indeed stand the test of scrutiny.

In general, the optimizer's curse suggests that those working on a given issue will tend to regard it as more important than it actually is, because those who, for whatever reason, conclude that the issue is most significant will be more likely to end up working on it.

Don't feel bad; be awesome

2 Nov. 2015

The character Barney Stinson from How I Met Your Mother is not a good role model in most respects, but the following sentiment is one I quite like:

Barney: You see, whenever I start feeling sick, I just stop being sick and be awesome instead.

I don't think this is a good idea when one is physically ill, since allowing one's body to recuperate is important. But I think this is good advice with regard to many forms of mental uneasiness.

For example, sometimes I feel a bit tired but not tired enough to go to sleep. In such cases, it helps to think to myself "I should be awesome instead" and thereby restore myself to a more flow-like state of activity. Or I might be a bit upset by a comment someone made to me. In such cases, rather than ruminating on hurt feelings, I

  1. ask whether there's something I can learn from the incident
  2. think of the situation as an interesting movement of particles in the spacetime of the universe
  3. and then mostly forget about it and go on being awesome instead.

Why is music pleasurable?

27 Aug. 2015

Much has been written in academic literature about why the brain enjoys music. I haven't had the time to read it, but here's my first-pass speculation about a possible answer, partly inspired by Jürgen Schmidhuber's writings. Browsing online, I see that some similar hypotheses have already been developed.

The brain tries to predict sequences and feels reward upon correctly doing so. This reward motivates us to learn new things. Music is a relatively clean-cut sequence-prediction task.

When we hear a song for the first time, we develop sequential connections among clusters of neurons to encode the progression of the notes. As we hear the song over and over, those connections strengthen. The memory of the song's sequence is used to predict what we'll hear next, and once we know the song, we can better predict the next notes. This is why songs can become more enjoyable the more they're heard, as well as why we enjoy internal repetition in a song (e.g., a chorus).

Why, then, do we tire of songs after hearing them too often? Presumably the neurons that map from correct sequence prediction to pleasant sensations display habituation and fire less strongly in response to continued stimulation. This habituation wears off over time, which is why you can once again enjoy hearing an old favorite song after not hearing it for a few days/weeks/months. Habituation to pleasant stimuli is a general brain phenomenon and shows up for virtually any type of reward, from sensory-specific satiety to the Coolidge effect.

This discussion also helps explain why many people prefer harmonic music over "modern music": Modern music is harder to predict. (For example, see "Unpredictable Continuity" in this piece.) Likewise, this hypothesis may partly explain cultural differences in music preferences, since people raised in a given culture will have had more exposure to their own kind of music and so will have more refined "n-gram models" for predicting the notes of new songs within that culture and genre. (Of course, preference for the familiar and nostalgia can also explain a significant portion of cultural differences in music taste.)
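The "n-gram model" idea can be made concrete with a toy bigram note predictor. This is a sketch of the hypothesis, not a model of any actual auditory mechanism, and the note names are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(melody):
    """Count which note follows which in a sequence of heard notes."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Predict the most frequently observed successor of `prev`."""
    if not counts[prev]:
        return None  # unfamiliar context: no confident prediction
    return counts[prev].most_common(1)[0][0]

# Repeated exposure to a simple melody sharpens the predictions:
melody = ["C", "E", "G", "C", "E", "G", "C"]
model = train_bigrams(melody)
assert predict_next(model, "E") == "G"  # after E we have only ever heard G
```

A listener steeped in one musical culture would, on this picture, have well-trained counts for that culture's note transitions and mostly empty counts for unfamiliar genres.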

Emotional impact of past vs. present words

9 Jun. 2015

You work hard on an essay for English class and hand it in with pride. A few weeks later, you get it back, and your teacher tells you that it was a "poor piece of writing". You're upset all afternoon, feeling like a failure. Eventually you get over it. Three months later, you think back on the incident and remember how bad you felt, but the memory doesn't make you feel bad for hours afterward ("time heals all wounds"). Why the difference?

For some emotions, there's a relatively straightforward explanation. For instance, suppose you feel grumpy due to not having eaten. If you look back on that situation later (when you're not hungry), you don't feel the same way because your physiology is different. In contrast, being told that your essay was "poor" only hurts because of the words involved. The words alone trigger the physiological response. So why doesn't recollecting the words revive that response?

One possibility is that the brain checks whether a thought is already known in memory or not. If it's already known, then the emotional impact is dampened relative to if it's a new thought. That said, there are times when I read old emails with emotionally charged content, and even though I've since forgotten the contents of those emails, they don't have quite the same force as if the emails were recent. Maybe my brain also modulates the emotional impact of thoughts based on knowledge of whether they correspond to situations in the present or in the past.

Dampening the emotional impact of past memories relative to present experiences seems adaptive: if you relived the full emotions of a memory, you might end up double- or triple-counting its impact for purposes of reinforcement learning. You might also become a lotus-eater who passes the time reliving only your most pleasant memories, rather than doing evolutionarily useful work.

There are some exceptions to the principle against double-counting strong emotional experiences, like in PTSD or feeling sad over the end of a relationship, where even the memory of an event can bring back the emotions of the event with full force.

In Sonnet 30, Shakespeare comments on how past memories are still as strong as when the events first happened:

Then can I grieve at grievances foregone,
And heavily from woe to woe tell o'er
The sad account of fore-bemoaned moan,
Which I new pay, as if not paid before;

The idea of "paying" for sorrows as if they were a debt to be eliminated aligns with the idea that the brain normally habituates to or otherwise downweights past negative events.

Unconscious thought production in dreams

19 May 2015

A few nights ago, I had a dream in which I had written a list of random suggestions on my website. Someone else reblogged it. I was re-reading the list I had written and noticed these entries:

These aren't the best suggestions ever, and I don't necessarily agree with them in real life. But they're not garbage either. What's most amazing is that I have no memory of consciously coming up with them.

One possible explanation, inspired by Dennett's model of language production in Consciousness Explained (Ch. 8), is that the process of consciously deciding what to say involves (1) unconsciously generating lots of candidates and then (2) deciding which of them to say or sending them back to the drawing board for improvements. The process of generating candidates for what to say seems in my experience to always fall below the level of conscious awareness, and what I am conscious of is just my opinions about the quality of each candidate speech act. So maybe what I experienced in my dream represented the raw speech-act candidates coming through, while the conscious editor of those candidates was turned off and so didn't generate any recollection of consciously having come up with those candidates. Because the editor was turned off, the quality of the speech acts wasn't as high as what I would say in waking life.
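The two-stage model might be caricatured like this, where turning the editor off stands in for the dreaming state (the function names and example sentences are invented for illustration):

```python
def speak(candidates, editor=None):
    """Return the first candidate the conscious 'editor' approves.
    With the editor off (None), the first raw candidate passes through,
    as hypothesized for dreams."""
    for candidate in candidates:
        if editor is None or editor(candidate):
            return candidate

# Unconsciously generated candidate speech acts (invented examples):
candidates = ["banana keyboard Tuesday", "I agree with your point"]
looks_coherent = lambda s: s.startswith("I")  # crude stand-in for the editor

assert speak(candidates) == "banana keyboard Tuesday"  # dreaming: editor off
assert speak(candidates, looks_coherent) == "I agree with your point"
```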

Maybe a similar explanation could account for dialogue in dreams, where I'm often genuinely taken by surprise by what other dream characters say to me.

In any case, this is just a general hypothesis, and I don't know whether it's right.

Update, June 2015: Another new thought that I came up with in a dream was the following: "If society tried to build only AGI, people would work on a relatively narrow set of problems. In contrast, when society also has many other software needs, people build a more comprehensive and robust set of tools, and those tools are tested thoroughly by the millions of people using them." As usual, this isn't the most insightful observation in the world, but it's not garbage either.

Dinosaur-footprint tires

19 May 2015

When I was ~10 years old, my dad and I came up with an idea: truck tires should have imprints on them that look like dinosaur footprints, so that when those trucks drove over mud or snow, it would look like dinosaurs had walked there. I'm not sure this could be done well for big footprints, since a deep foot-shaped indentation would presumably reduce the tire's traction, but it might work for little, Compsognathus-like tracks.

At the time I wanted to patent the idea to make money from it. I saw commercials on TV saying that if you have an idea, you should call some number to get patent assistance. I never followed through.

Note: I think it's generally bad to drive on dirt/mud roads, especially in woods, because doing so may crush tons of bugs.

Math debuggers

17 Apr. 2015

The number of mathematicians in the world is far smaller than the number of programmers. Software development is typically seen as easier than advanced math, and to some extent this is true: It requires more genius to be a great mathematician than a great programmer. However, there's much that's shared between math and programming, especially defining abstract structures that can be composed into more complex structures. A mathematician's axioms for an abstract structure are like a programmer's interface.

I think one of the main reasons that programming is easier to learn than abstract math is that programming allows for better feedback. You can type expressions into an interpreter or a dummy program and see what comes out; you don't have to refer back to a definition and imagine an example in your head, uncertain whether you're constructing it properly. A compiler even points out your exact errors for you. Worked examples in math textbooks help make the subject matter concrete, but they're not interactive.

When I was in college, I worked on Unix-based computers and mostly wrote programs in plain text, using Vim or Emacs. My feedback came from compiler errors and debugging "print" statements that I added to my programs. In 2009, when I began working at Microsoft, I started to use Visual Studio, and it transformed my relationship with programming. (Had I used Eclipse or other IDEs, I probably would have had a similar experience.) Now instead of waiting for the compiler to complain about errors, the IDE alerted me to problems right away. I could set variables to watch as the program evolved, and I could try entering expressions in an "Immediate Window". But the biggest improvement in my programming experience was in the realm of debugging. Instead of manually typing commands and printing variables one by one in a command-line debugger, I could just hover over variables and see their values. If I hovered over an object, I could see its data fields and sub-objects. For instance, if an object contained a dictionary, I could hover over the object and then hover over its dictionary, and keep unpacking down the hierarchy.

This ability to hover over variables to see their values and how they're constructed makes understanding code easier. I prefer to read code by running it in a debugger—so that I can see what values variables have at a given point—rather than making assumptions based on my memory of the sequence of steps so far. Learning math would be facilitated if there were similar functionality for mathematical structures. For example, if you hovered over a probability space, you could see that it's made of a sample space, a set of events, and a probability function. The object could also contain a statement of its constraints, such as that the sum of probability values over the sample space is 1. In an Immediate Window, you could try entering P(x) for some event x and see the probability value. You could then zoom in to the probability function and see that it's actually a set of ordered pairs (x,p), with x being an event and p being its probability. If you zoomed into a single real-valued probability number (say, 0.6), you could see that a real number is itself an object whose interface is defined by the real-number axioms, and there are several possible concrete real number "classes" that can implement this interface.
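As a sketch of what such an inspectable probability-space "object" could look like (the class and method names are my own invention, not an existing tool):

```python
from fractions import Fraction

class ProbabilitySpace:
    """An inspectable probability space: sample space, events, and P."""

    def __init__(self, probabilities):
        self.sample_space = set(probabilities)
        self.prob = dict(probabilities)  # the set of ordered pairs (x, p)
        # A constraint the "math debugger" could display on hover:
        assert sum(self.prob.values()) == 1, "probabilities must sum to 1"

    def P(self, event):
        """Probability of an event (a subset of the sample space)."""
        return sum(self.prob[outcome] for outcome in event)

# "Hovering" over a fair-die space would reveal these components:
die = ProbabilitySpace({i: Fraction(1, 6) for i in range(1, 7)})
assert die.P({2, 4, 6}) == Fraction(1, 2)  # P(even roll) = 1/2
```

In an Immediate Window you could keep unpacking: `die.prob` shows the ordered pairs, and each `Fraction(1, 6)` is itself an object implementing the rational-number interface.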

Armed with a tool like this, abstract mathematical structures (random example: K3 surface) would become less intimidating. They could be unpacked in a similar way as complex software objects are unpacked, and the user could experiment with manipulating (finite) examples of the objects. There would of course remain a lot of hard work in understanding the theorems about the objects, but at least grasping definitions would be easier.

Programming/debugging tools like these would also make tax codes, and legal codes in general, vastly simpler to understand.

Watering alligators?

29 Mar. 2015

When I was a child, a friend and I were visiting a new area in upstate New York. We were on a lawn, and across the street we saw another house. A man was in his driveway with a hose, spraying water over the driveway. This would have been ordinary, except that the man also had one or two alligators on his driveway! They were basking in the sun and moving around slightly. My friend and I agreed that this was weird.

Many years later, when I was ~22, I asked my friend whether s/he remembered the incident. S/he said s/he did! That was odd, because this seems like the kind of fantasy memory that I might have just made up in my childhood.

I still don't know how to explain this. Maybe the man was watering an inflatable alligator toy. Maybe my friend and I were pretending at the time, and we somehow managed to convince ourselves that what we had made up was real. Or maybe, with low probability, the man really did own alligators.

Incidents like these drive home the fallibility of memory and help clarify how religious visions and paranormal observations are possible. Some people tell me they believe in the supernatural because of personal experiences with it. I can't recall any meaningful supernatural experiences, only a few random unexplained events. If seeing alligators in a driveway has supernatural import, I'd love to hear what it is. :)

Childhood hospital visits

29 Mar. 2015

I remember only two visits to the hospital when I was young.

The first happened when I was maybe ~3 years old. My mom was about to put a new diaper on me. She left the room temporarily. I saw a metal diaper clip on the bed and played with it in my mouth. Soon enough, I had swallowed it. My parents rushed me to the hospital, where I got an x-ray to verify that the clip wasn't causing problems. It turned out to be fine, and it probably came out in my stool soon thereafter. (Interestingly, my parents had been reading me Curious George Goes to the Hospital a lot around that time. In that story, George swallows a puzzle piece and then goes to the hospital for an x-ray. I don't think there was a causal connection between the book and my own incident, but who knows.) BTW, I wore diapers longer than most children. Eventually, my dad offered me $20 if I would poop in the toilet instead. I accepted the offer and dispensed with diapers thereafter.

The second situation happened when I was maybe ~4 to 6 years old. I was crawling along the counter in my grandmother's kitchen. The counter led up to the stove. The corner burner on the stove was red hot, but I didn't have much experience with stoves, so I went ahead and put my left hand on the burner as I continued crawling forward. My hand sizzled, and I probably cried with pain. I went to the hospital and got a cast on my arm. (That was one of only two times I've gotten a cast; the other was for a slightly fractured bone in 2010. I've never fully broken a bone, so I don't know how painful that feels.)

Forum spammers becoming more sophisticated

29 Mar. 2015

On the Felicifia forum a few years ago, we human participants noticed something interesting: random new users would leave comments containing words somewhat relevant to the post topic but no spam links. We couldn't tell whether these were actually spam posts. What would the spammers gain from them? Soon enough, the answer was revealed: the spammers later added spammy links to their posts, presumably waiting until after forum moderators had seen and approved them. In late 2014, we disabled new-user registration on Felicifia because almost all of the new members were spammers.

For the Facebook groups I manage, one tiresome chore is checking that new people to be added are real people rather than spammers who will make posts to advertise some product. Usually it's enough to check that the person to be added has mutual friends with me or at least several friends in the group. If not, I need to click through on the person's profile and guess manually whether the person is a spammer (and if I still can't decide, I message the person to ask). In 2013 to early 2014, the signals tended to be obvious: Usually spammers had profile pictures of attractive young females, and for some odd reason, they usually had their genders listed as "Male" (maybe because the spamming program forgot to change it to "Female"?). I've seen fewer of these recently. In fact, I'm now seeing some apparently fake profiles whose timeline posts have engagement in the comments. Maybe the spammers have become sophisticated enough to create fake dialogues among their fake profiles to hinder Facebook's anti-spam efforts? One signal that remains helpful is looking at the profile's groups. Sometimes almost all the groups that a person belongs to start with the same letter, which signals that the spam bot has been trying to join groups in alphabetical order starting from some initial letter. It'll only be a matter of time until the spammers get smart about this as well.
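The alphabetical-groups signal is mechanical enough to check automatically. Here's a toy heuristic, with the threshold and function name invented for illustration:

```python
from collections import Counter

def looks_like_alphabetical_bot(group_names, threshold=0.8):
    """Flag a profile if most of its groups share one first letter."""
    if len(group_names) < 5:
        return False  # too few groups to judge
    first_letters = Counter(name[0].lower() for name in group_names)
    _, top_count = first_letters.most_common(1)[0]
    return top_count / len(group_names) >= threshold

assert looks_like_alphabetical_bot(
    ["Antiques", "Art lovers", "Astronomy", "Austin events", "Aviation"])
assert not looks_like_alphabetical_bot(
    ["Hiking", "Cooking", "Chess", "Gardening", "Movies"])
```

Of course, a signal this simple is exactly the kind that spammers would learn to evade, e.g., by joining groups in random order.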

Matching moods with tasks

18 Mar. 2015

I try to do things when I'm most in the mood to do them. This reduces the effort required to do a given task and increases my productivity when doing it. I have a decent sense of what fraction of the time I'll have various moods in the future based on past statistics (which I collect implicitly in my brain rather than explicitly on paper).

I try to save up tasks until I have the best mood for doing them. Here are examples of moods I may have at different times:

Some tasks are intrinsically more fun than others, so I try to reserve them as "junk food" to do when I'm not in the mood for something more substantive. The "junk food" comparison is not just metaphorical: if I do too many "junk food" tasks at once, I physically feel as though I've eaten too much candy and need to do some harder mental work to "burn off" the sensation. I would be surprised if similar brain chemistry were not at play for both junk food and fun non-food tasks.

There are rare occasions when I feel like I can do any task, no matter how unpleasant. At these times, I imagine myself as Super Mario after he's just absorbed an invincibility star: I want to kill as many of the hardest enemies as I can before the star wears off.

There are other times when I'm in no mood to do anything useful and let myself recuperate rather than over-straining my emotions by doing something difficult when I'm in the wrong mood and thereby creating even more negative associations with the harder task.

I like setting my own schedule as much as possible in order to better organize tasks to fit my moods. When I have to do something quickly, I have less ability to match moods with tasks and so incur more stress as a result.

Why I wouldn't eat only Soylent

First written: 17 Mar. 2015; last update: 14 May 2018

It takes me 10-15 minutes to prepare food for a meal. The main reason I don't spend much time on food preparation is that my meals aren't fancy. Most of the foods I eat don't require any heating at all: raw fruits, raw vegetables, peanuts or peanut butter, mushrooms out of the package, tofu out of the package, cheese, bread, pretzels, canned olives, and so on. I occasionally microwave frozen items. The one thing I do cook on the stove regularly is beans/lentils, but I can cook enough to last for the week all at once. I collect my food for a meal into two or three large bowls that I can take to my computer, and I only use a fork to eat them; I don't need a knife because I pre-slice everything in the kitchen. While preparing food, I either listen to my iPod or think about an open question that I need to resolve, so the time is not wasted. I use the same bowls at every meal and only wash them every ~two days, because I store them in the fridge between uses to slow bacterial growth.

Since I don't spend large amounts of time on food preparation, and the time I do spend isn't really wasted, I don't see much motivation behind Soylent, a food replacement. Maybe one selling point of Soylent is that it's nutritionally complete, but I take a multivitamin to help in that regard, and there's no solid evidence that multivitamins improve health anyway.

Even if Soylent has a slightly more optimized balance of nutrients than "real food", I would still bet it's less healthy in expectation on account of unknown unknowns: There's so much we still don't understand about nutrition, and you can't tell that Soylent isn't missing something essential that science hasn't uncovered yet:

"We don't have a thorough understanding of how these nutrients and plants and food items interact with one another," she said.
"There's no particular reason why a compound couldn't include everything essential for human sustenance," said Katz at Yale University. "The only problem is we may not yet know what that inventory is in its entirety."
"This formula contains what we know we need but not what we might need and don't know how to measure or quantify yet," said Ayoob, at Albert Einstein. "There are hundreds of antioxidants and anti-inflammatory compounds, for example, that we're still learning about."

Another nutritionist says that, if Soylent is formulated properly, a person could certainly live on it, but she doubts they would experience optimal health. She fears that in the long term, a food-free diet could open a person up to chronic health issues.

Nutrients from real food are much more useful to the body than fortified nutrients. Maybe the Soylent developers have looked into this issue, but in general, I'd want to be really certain that Soylent was as healthy as real food before switching to it.

Also, I feel mildly sick when I go for more than a day without eating vegetables. The fiber, nutrients, moisture, etc. of vegetables feel wholesome and refreshing, and fiber helps the stomach feel full. On a typical day, I might eat ~2 cucumbers, ~2 carrots, ~2 apples, ~2 tomatoes, and ~1 bell pepper, for example. The USDA recommendations for fruit and vegetable consumption are pretty high, and most people don't meet them.

Soylent is cheaper than fruits and vegetables, but at the cost of (1) possibly lower mental productivity due to worse nutrition and (2) possibly greater long-term health costs.

I prefer to pull content rather than have it pushed

15 Mar. 2015

My favorite way to learn is usually to become excited about a given topic and then explore it when I'm in the right mood. It's like eating the kind of food that I'm most hungry for.

However, many websites encourage a push model of learning: you subscribe to blog posts, Facebook/Twitter feeds, email newsletters, etc., and have the updates presented to you at the publisher's pace. This leads to the unfortunate situation where I sometimes read material I'm not optimally interested in at the moment only because it's current. Fortunately, I can usually delay reading things until I become interested in them, though this doesn't work for emails and other time-sensitive communications.

The pressure for a push approach is that every individual publisher wants readers to sign up so that they'll continue reading. Pushing might also have some benefits if the pushed content is more valuable than what the puller would be pulling by default, or if the pushed content is more diverse than what the puller would seek on her own.

Theory = junk food

15 Mar. 2015

When I was in high school, I loved math problems that I could solve with just pencil and paper. The idea that I could figure things out without needing to know messy real-world details was empowering.

When writing up science labs, my favorite part was doing the calculations, especially if that meant deriving a formula. I thought of math as the "junk food" part of science—like the sweet frosting inside the more dull Oreo shell. I was excited by the field of statistics, because it was like eating frosting all the time while other people did the hard work of setting up the experiment and collecting data. "Big Data" problems have similar appeal, because in many cases, the data comes cheaply, without requiring careful experimental protocols and many hours of tedious lab work. I love science, but I would not want to be a biologist, chemist, or experimental physicist because so much of the day-to-day work requires repetitive manual labor.

Once I began studying computer science in college, I felt a similar sense of gratitude that this subject didn't require manual labor but was purely "mental" work that could be done anywhere. Like math, computer science was a diet of junk food.

After college, I became increasingly appreciative of empirical sciences. Maybe part of the reason was that I had seen how ineffectual pure theory was compared with experimentation. Theory just gets way too complicated too quickly without empirical grounding and so is often wrong when applied more than a few steps away from observational data. Maybe another reason was just that I was now able to explore fields other than those I majored/minored in at college, including psychology, neuroscience, physics, and ecology. Where my love of theory showed me the beauty of simplicity, empirical sciences showed me the beauty of complexity: the awesomeness of seeing the millions of unique variations that emerge from geology, biological evolution, social organization, elaborate software systems, etc. (Unfortunately, beauty does not equate with goodness. Greater complexity brings with it greater potential for suffering, and I would prefer from a moral perspective that the universe had less complexity.)

I like the way computer science offers a language for coping with complexity. (Marvin Minsky sometimes makes this point.) Computational models can make complex processes simple. When I read about neuroscience, I feel that discussions of physiology and correlational data are fine, but I only finally understand what's going on once I see a hypothetical algorithm. I'm somewhat addicted to algorithms.

Why I never got a mobile phone

13 Mar. 2015

When I was in high school, some people had cell phones. My family didn't have them mainly because they were expensive. This continued through college, as smart phones began to emerge.

In 2010, Microsoft offered employees a free Windows Phone 7. I still didn't get one because the main expense of a mobile phone is not the hardware but the cost of the service, which I still find really high (typically >$60/month).

I don't have much use for a mobile phone because I don't go many places, and when I do go somewhere, it's not for more than a few hours. When I go somewhere I take my iPod to listen to podcasts. My life would be much worse without an iPod, but phone service on top of that would be unnecessary.

Moreover, even if I did have a mobile phone, I'd probably turn it off a lot of the time, because I dislike distractions. My brain is bad at context switching, since my flow and concentration get disrupted. I prefer to let messages accumulate for a few hours and then check them rather than checking them constantly, as some people with mobile phones do.

Also, if I see an email, my brain immediately generates a reply, and I want to write back before I forget what my brain came up with. But typing on a mobile phone is slower than typing on a keyboard. The small screen is also harder on the eyes, and the web-browsing experience is inferior. Computing is worse on mobile, so why would I want to spend time using a mobile device when I could use a desktop one? Since I basically live at my treadmill desk most of the day when I'm not sleeping, I can use my desktop most of the day.

I also dislike having to maintain two computers (desktop and phone), keep them both updated, install applications on both, and sync documents between them.

The only case where I would have benefitted from a mobile phone was a few occasions where I had to go to a new place and needed directions. Since I lacked a phone, I looked at Google Maps before the trip, drew a picture of my route on paper, and took the paper with me. This was a little inconvenient, but avoiding doing it ~15 times in the last 10 years would not have been worth $65/month in subscription fees.

Why I never got a driver's license

13 Mar. 2015

Stereotypically, teenagers can't wait to learn to drive at age 16, and this was true for many of my friends in high school. I didn't care about driving at that point, because I had neither the time nor inclination to go to many places besides school.

Once the marathon of high school ended, I had more time to think about learning to drive. I studied for the driving test in the summer of 2005 and got my learner's permit.

My family owned a "shale bank", where I could drive in circles on a rocky area. For an hour per day over the course of a few days, I practiced driving around the shale bank and performing various maneuvers. Eventually I was ready to try driving on a low-traffic road, which I did a few times.

I found driving on a real road to be terrifying. A friend of mine compared driving with video games, noting that because I had been pretty good at video games, I should be pretty good at driving. But with video games, if you make a mistake, it's not a big deal, whereas with driving, if I made one wrong move, I'd crash into the car driving toward me. Whenever I drove past a car, I realized that my life was potentially in my hands.

I drove one last time on a longer route than before—probably ~10 miles. I succeeded, but the experience was very stressful. Soon thereafter, college began, and I stopped my driving practice. I haven't resumed driving since then, because

  1. I was always occupied during the summers of my college years
  2. I found driving to be dangerous
  3. I decided I might be able to get away with not learning to drive by living in a city area.

Some people told me I should learn to drive so that I could take over in emergency situations, but this didn't seem particularly compelling.

In 2007, I saw news about self-driving cars and hoped that eventually I'd be able to use them. However, I knew that even after self-driving cars came to market, there would probably be a requirement that the human using them know how to drive in order to take over in emergencies. Probably it'll be decades before this requirement is weakened.

In the meantime, I get by fine without driving, because I still don't go to many outside events, and when I do, I get a friend or parent to take me. When I worked at Microsoft, I walked to the office, the grocery store, the doctor's office, the post office, the barber, and the dentist, which were all within a mile of my apartment. The bus station was also close by, which allowed me to take a bus easily if I needed to go farther.

Not owning a car avoids

  1. paying for a car
  2. paying for car insurance
  3. paying for parking space
  4. one of the highest risks of death for young people
  5. CO2 emissions
  6. splatting bugs and driving over worms, slugs, etc. on the road.

Not owning a car is awesome!

Ordinary people aren't stupid

9 Mar. 2015

Experts in a given field often laugh at how little those outside the field know about it. Such condescension sometimes takes on a moral tone when the topic at hand is politics, history, religion, or some other topic that people "should" know about.

The fact is that many "normal" people don't have the privilege to spend their time learning about abstract topics. They may work long hours, have many kids to look after, or confront personal/psychological problems. If you're an expert on an erudite subject, that's because you're lucky enough to be able to spend your time on it rather than doing more mundane tasks. People who do mundane tasks are experts in their own right about their own areas of proficiency, such as how to handle customer-support calls well, how to clean bathrooms efficiently, or how to interact with children.

Of course there are some people who could know about politics and society but spend all their free time on more trivial matters, but these aren't the majority of the population. So expectations that people "should know more" about abstract matters are often unreasonable.

Taking fiction seriously

8 Mar. 2015

Some fictional movies keep viewers on the edges of their seats, and cliffhanger novels keep readers turning the pages. It's interesting that we're able to become so engrossed in fiction and want to "know how the story turns out", even though the story is made up. Presumably this is because we forget, at some level, that the story is fiction and take what's told to us somewhat seriously.

If we wanted, we could re-tell the fictional narrative a different way:

How does this novel end?
Any way you want it to.

Sometimes I change fictional stories to be more to my liking. But I still experience some of the highs and lows of the characters in the original story.

One mundane reason why we might care how a story proceeds is that if it goes in direction A rather than B, then the subsequent action that we read or watch will be of one sort rather than another, and we might have a preference over what sorts of action we read or watch. For example, maybe we don't want a certain character to be killed because we find that character's actions enjoyable. But this doesn't seem to be the only reason we have preferences over how fictional stories turn out, since we also care how a story ends, even though this doesn't affect subsequent action (assuming no sequels).

Update, 27 Apr. 2015: I learned that this topic already has a name: the paradox of fiction.

Life satisfaction not intrinsically valuable

First written: 1 Mar. 2015; last update: 14 Sep. 2017

I live an extremely good life. I'm usually excited to wake up and begin the day's work, and I feel very fulfilled by what I accomplish. Yet I don't (any longer) think that happiness is morally valuable or that it's good to create more happy lives like mine. Why not?

There are many types of happiness, but following are two broad categories:

  1. Hedonic pleasure: Food, orgasms, exercise, etc.
  2. Fulfillment: Knowing you did the right thing, helping others, achieving goals, etc.

I feel as though hedonic pleasure doesn't really matter, except when it fills an emotional void and thereby prevents suffering. As many people say, hedonic pleasures are "empty" and meaningless in a broader sense. Of course, my brain craves them, but life with no strong hedonic pleasures and no cravings would be at least as good as it is with both.[2] Addiction is not pleasant, even though satisfying the addiction can sometimes feel good. We could potentially create minds in the future that experience hedonic pleasures without cravings, but these pleasures would still be "empty" and not morally valuable.

Fulfillment is a stronger candidate for having moral value, since it feels more meaningful. But I don't think goal achievement per se is valuable; rather, the state of the goals being achieved is what's desirable. In my case, it's not that I think it's morally good to experience the feeling of being a person working to reduce suffering; what's morally good is having less suffering in the world. Life is not something I would "play back" over and over again to re-experience its joys. Rather, I want to get to a state where my goals have been advanced.

There's an expression: "Life is a journey, not a destination." The idea is that the process of accomplishing goals is valued, not the end goal. This is the opposite of how I experience life. I think the destination is what matters, and the journey has no intrinsic importance. I would not simulate exact copies of my life's history in order to re-live my life for its intrinsic value any more than I would create a work of art, destroy it, create another work of art, destroy it, and so on, in order to experience many "journeys" of creating artwork.

But if goal accomplishment per se is not important, then neither of the types of happiness I outlined above matters intrinsically. Instead what matters is actually achieving my goals, which means reducing suffering.[3] Hedonic suffering doesn't seem empty in the same way that hedonic happiness does. Rather, extreme hedonic suffering appears to be the worst possible thing.

In another piece, I elaborate on the difference between valuing

  1. the act of preference satisfaction vs.
  2. the content of those preferences.

My point above is that #1 isn't intrinsically valuable, only #2. #2 for myself is to reduce suffering. But I could also count the contents of others' preferences, and I do a little bit, just not nearly as strongly as I count the contents of my own preferences.

Two of life's pleasures that have felt very meaningful to me have been learning (including the thrill of gaining a major new insight into reality) and being in love. Still, I feel like there's no moral urgency to create such things, while there is moral urgency to prevent torture. If I imagine being in love, it seems great, and I want it. But if I don't exist, there's no wanting it and hence no moral imperative to create it.

In an episode of The 80,000 Hours Podcast, Robert Wiblin (at around 20 minutes in) gives an argument why we might care about humanity's survival even if we don't intrinsically care about future lives:

The current generation does just care a lot that their actions today have a long-term meaning behind them. [...] Imagine that you found out that everyone was going to die in the year 2040. Then just so much of the significance of life is completely stripped away, because an enormous amount of what we do is about trying to build up future generations and improve the long-term trajectory of civilization. And a lot of the research we're doing, having children, building buildings, trying to produce great works of art. If you know that it's all just going to come to an end in 20 or 30 years' time, then the whole point of life is much reduced.

As a reply, I'll give an analogy. Suppose you expect that rates of illness will continue along a similar trajectory as they have in the past. In light of population growth in a city, you decide you need to build a new hospital to meet expected future capacity. The hospital costs $500 million and requires many years of hard work on the part of many people. Now suppose that someone invents a magic pill that prevents all human illnesses. No one needs the hospital anymore. Is it a tragedy that all the effort was wasted on the hospital? In some sense, yes, because the effort could have been better directed elsewhere. But overall, it's actually wonderful that no one needs to use the hospital.

In a similar way, our efforts to reduce suffering in the future are meaningful because of the suffering they'll eventually reduce. But if future suffering were eliminated in some other way, such that our current suffering-reduction efforts were "wasted", this wouldn't be a tragedy.

What "toy boat" says about cognitive science

1 Mar. 2015

In the past I had assumed that tongue twisters, like saying "toy boat" five times fast, were difficult because of the difficulty of physically moving the tongue quickly. However, today I noticed that if I say "toy boat" in my "mind's ear" without any mouth movements, I still get tripped up more easily than if I speak a normal sentence to myself. HowStuffWorks reports that scientists have made a similar finding, suggesting that tongue twisters may be at least partly in the brain rather than in the tongue.

When I say something silently to myself, it feels as though my mouth muscles are on the verge of moving but stop themselves short of actually moving. This is consistent with Daniel Dennett's hypothesis (Consciousness Explained, pp. 195-97) that talking to oneself began as literally speaking aloud to oneself, and gradually the process became internalized without using the mouth.

There are times when I literally talk aloud to myself. If I suddenly stop allowing my mouth to move, my train of thought also stops, and for a split second, I can't recover the train of thought unless I move my mouth again.

Seek positive friends

9 Feb. 2015

There's a famous quote: "Before you diagnose yourself with depression or low self esteem, first make sure you are not, in fact, just surrounded by assholes." I think this is a really important piece of advice.

I've had friends who seemed really smart, but they rarely gave positive feedback and usually made me feel stupid by comparison with them. People with this personality tended to cluster together, so I wondered if maybe I really was dumb by comparison, since all the people in that group seemed arrogant. As I got older and learned more, I began to understand better the subject matter that those friends had been talking about, and I realized that I wasn't dumb after all; rather, I had just assumed the material was difficult because those who understood it felt superior.

I think this is not an uncommon tendency. I remember one upperclass student telling my freshman math class: "This material is so easy compared with what I'm studying now." It may have been true that the freshman material was easier for that upperclassman because he already knew it. But for the freshmen themselves, I don't think the material was easier than upperclass material was to upperclassmen. The student's remark may have reflected a failure to empathize with the freshmen's situation.

Adults often tell children: "You're naive and don't understand how the world works." I'm ostensibly an adult now, and I can say that actually, there is no great mystery that only adults are privileged to discover during their initiation ceremonies. (Or if there is, I guess I missed it!) Rather, adults have just gradually accumulated wisdom, slowly making their judgments more nuanced. If you don't understand something now, that's not because others are intrinsically so much smarter; it's because you haven't yet had the opportunity to learn enough about the subject matter.

There's no proper place for arrogance. If your friends look down on you, get new friends. There are plenty of nice people in the world. The friends you surround yourself with make a huge difference to your well-being, and there's not really a reason to tolerate unkindness—except in exceptional circumstances, like if you're earning to give on Wall Street and can tolerate the aggressive atmosphere in exchange for being able to donate a lot more. If your friends make you feel bad on more than rare occasions, it's a problem with them, not with you.

Name choices

9 Feb. 2015

I won't be having a child, but if I were to do so, I would presumably name it in a way that would optimize for success later in life, based on society's biases. My impression is that most parents choose names more haphazardly, based on what sounds good to them.

I'm glad to have a first name that can't easily be shortened, since this avoids a lot of confusion. In school, when teachers learned students' names at the beginning of the year, they would always ask questions like: "Matthew, do you prefer to go by 'Matt'?" This would happen for most students in the class, for every teacher, for every grade. Cumulatively it probably consumed several hours of each student's life.

Changing one's last name seems like a bad idea to me, because it once again causes massive confusion. For instance, if you've written academic papers, your h-index and other metrics might become messed up. The name change is also likely to cause confusion for people who read your work. Unless you deliberately want to dissociate yourself from your past writings and other artifacts, changing your name seems like a huge headache.

Stories from elementary school

9 Feb. 2015

Kindergarten

1st grade

2nd grade

3rd grade

4th grade

5th grade

Shyness

9 Feb. 2015

When I was young, I was extremely shy. Apparently I almost never spoke during all of kindergarten. The teachers wondered if I had a speech impairment and so sent me to an in-school speech-therapy counselor along with a few other students. There we played various games that involved speaking words.

By 1st grade, I was speaking more, both to teachers and fellow students, but I was still on the shy end of the spectrum.

On the final day of my 5th-grade year, the teachers distributed pens for signing yearbooks. Students had to go up to a teacher to get a pen, and since I was still somewhat shy, I refrained. Throughout the day, I borrowed pens from other students to sign their yearbooks.

In 7th grade, my English teacher required students to read one of their pieces aloud. I volunteered to go first in order to get the unpleasantness done with rather than waiting nervously through other presentations. After I anxiously read my speech and returned to my seat, my teacher told me: "You looked like you were going to the guillotine."

In 8th grade, I became inspired by Ralph Nader to become politically active. In 9th grade I joined some activist clubs at my high school. I tried to speak more because I knew that speaking was important to the success of the clubs, especially once I became president of one of them in 10th grade. I also spoke more openly with friends and during class debates about political topics; my passion gave me more courage and motivation.

In 11th grade, realizing the importance of public speaking to making a difference, I voluntarily took a half-year course in public speaking, which involved five different presentations before the class. This was five rounds of acute stress, but I felt it was worth it.

In 12th grade, the school held a mock 2004 presidential debate, with maybe ~1/4 of the school in attendance. I asked the first audience question. (It was along the lines of: "If we should take a precautionary approach toward Iraq's WMDs even if they may not exist, why not take a precautionary approach to climate change even if it may not exist?".) Later that school year, I volunteered to be the Green Party presidential candidate in a fake election in my public-policy course, which included making TV commercials and engaging in a debate in front of the class.

My passion for altruism continued to encourage me to be social going forward, because I knew the value of making connections and broadening my world view.

As of 2015, I'm not particularly shy, especially not online. I may continue to be somewhat exhausted by talking with strangers, but this is more because I find that it requires a lot of energy to navigate social interaction in a way that avoids awkwardness rather than because I'm actually nervous. I still get some anxiety before public speeches, but fortunately I don't do them very often. (If I did them a lot, probably I wouldn't be that nervous.) I'm probably less afraid of public speaking than 70% of the population, though.

For most of my life, I've often been fairly introverted in the sense of preferring to spend most time alone. Originally this was motivated by shyness, but now it's motivated by wanting to get more done. I'm generally more productive by myself than when working with others, because (1) there's so much overhead with communication, and (2) I don't think as well when I'm in a social environment. Social interaction feels like a small degree of what I assume intoxication involves (I'm just guessing, since I've never had alcohol): It's fun, but it also makes focusing more difficult.

I find that Facebook and email produce small doses of the same brain effects as in-person communication, and for this reason, online interaction seems to be a sufficient substitute for meeting people in person. Some people tell me that the Internet is not "real" the way doing things in the physical world is. I disagree. We are always inhabiting a virtual reality of sorts—one created by our own brains. Marvin Minsky:

The fact is that the parts of ourselves which we call "self aware" comprise only a small portion of our mind. They work by building simulated worlds of their own––worlds which are greatly simplified, as compared with either the real world outside, or with the immense computer systems inside the brain: systems which no one can pretend, today, to understand. And our worlds of simulated awareness are worlds of simple magic, wherein each and every imagined object is invested with meanings and purposes. [...]

And so––let's face it now, once and for all: each one of us already has experienced what it is like to be simulated by a computer!

Huffman coding of language

9 Feb. 2015

Around 2005, I suggested that we could replace long English words with shorter ones. For instance, why do we need so many characters in the simple word "through"? I suggested that people could collect word usage on the entire Internet, find word frequencies, and then develop new ways of writing those words, in which the most frequently used words would contain the smallest number of characters. For instance, "the" might be shortened to one character.

In 2006, I learned that this roughly tracks the idea of Huffman coding, or of optimal coding more generally.
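The idea can be sketched in a few lines of code. Below is a minimal Huffman coder over a handful of words; the frequency numbers are made up purely for illustration, not drawn from any real corpus:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a binary Huffman code from a {word: frequency} map.

    Returns {word: bitstring}; more frequent words get shorter codes.
    """
    # Each heap entry: (total frequency, unique tiebreaker, {word: code-so-far}).
    # The tiebreaker keeps the tuples comparable when frequencies are equal.
    heap = [(f, i, {word: ""}) for i, (word, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees, prefixing "0" onto one
        # side's codes and "1" onto the other's.
        merged = {w: "0" + c for w, c in codes1.items()}
        merged.update({w: "1" + c for w, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical word frequencies (illustrative only).
freqs = Counter({"the": 50, "of": 30, "through": 5, "serendipity": 1})
codes = huffman_code(freqs)
```

With these frequencies, "the" ends up with a one-bit code while "through" and "serendipity" get three bits each, which is exactly the compression scheme described above: the commonest words become the shortest to write.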

Coincidences

29 Jan. 2015

Some religious believers consider coincidences to be signs from God. Even people who don't subscribe to traditional theism sometimes believe in something supernatural because of unexplained coincidences in their lives. I can think of three friends offhand for whom this is true.

I've noticed that coincidences seem to happen to me a lot as well. But far from being "signs" of anything significant, they're often random—not the kinds of events that would come from a purposeful supernatural force. Two examples that come to mind from within the last 8 months:

The supernatural only helps explain coincidences that a supernatural force would be expected to produce. If the supernatural's coincidences are pretty random, then postulating a supernatural entity adds less explanatory power.

I suppose a supernatural random-coincidence generator might still have some explanatory value to account for why there are more coincidences than we expect, but obviously such a hypothesis would incur a huge Occam's-razor cost. Instead, presumably our brains are just highly attuned to coincidences in order to learn important associations from small data sets. In general, when two events coincide, they do so for a reason. So we sensibly infer that there's a reason on any given occasion. If we aren't careful to apply Occam's-razor regularization to our ontological commitments, it's easy to see how superstitions could accumulate.

Footnotes

  1. I mean "map" in the sense of a function mapping from inputs to outputs. Actually, each neuron in the brain is like a miniature function that collects inputs at its dendrites and produces an output (or not) along its axon. The brain—a collection of neurons—is like a big functional program.  (back)
  2. For example, I would probably turn off my desire to eat and miss out on the pleasure of eating if I could, not just to save time but also to avoid the annoyance of having hunger cravings. Likewise, I would probably turn off my sex drive if I could, and one friend of mine has said the same. We might imagine that life would be cold and barren without these cravings and pleasures, but in my hypothetical, I'm assuming that cravings and pleasures would be replaced by calm contentment, not crushing anhedonia. For example, as a very young child I had no sexual cravings, but my life wasn't worse because of that.  (back)
  3. Actually, it suggests that what's valuable is achieving the goals of all agents who have goals, while not valuing the creation of new agents just so that they can satisfy their goals. However, I'm selfish and prefer to optimize for my own moral values rather than others' values.  (back)