A Single Version of Truth

There is a certain elegance to a belief that a single version of truth exists; that, by following a deterministic, codified process, we can definitively answer questions; that, in the face of a disagreement, you and I can discuss our way to the correct answer, whereby one of us is right and the other is wrong. A single version of truth would certainly provide comfort and confidence to any relationship, any team, any organization.

The organization I worked at held a firm belief that there exists a single version of truth. It took me much longer than it should have, but I've thought and experienced enough to vehemently disagree. There is no single version of truth.

And I'm not talking about the results of Gödel and the like who proved that some statements are unprovable. This is one fact that my mathematician friends, with a detectable grin on their faces, used to always bring up as I was going through my period of agnosticism. The problem with using that as an argument is that while it's mathematically correct, it doesn't really help me in everyday decision-making. Statements whose truthiness I wanted to ascertain were not crazy self-referential constructs built to make a point. They seemed much more "real life".

I'm also not talking about visibly nondeterministic statements such as "It will rain five weeks from now". Of course, if the processes involved are chaotic (nonlinear) and the statement is a prediction or in some other way requires the collection of a large body of data, there is no clear truth.

I'm talking about these nice, "linear" statements that you may encounter at home or at work. The kinds of statements your boss makes and convinces you are true – i.e. that they can be derived from facts using logic.

The problem is that logic is not the mechanism by which truth is created from nowhere; logic is simply the process of transforming one statement into another statement that has the same truthiness. Logic, in short, is the creation of equivalent truths. And in most "real life" conversations we don't start with facts; we start with our individualized set of axioms–assumptions that we believe to be true.

I've witnessed countless occasions where smart men and women performed logical jujitsu, totally blind to the fact that they were starting from very different assumptions, disagreeing at the fundamental level of basic assumptions and philosophies. The problem is that we rarely state these assumptions. Here is a good exercise: list, say, fifteen axioms that guide you in life. Odds are, you will find it very difficult. Axioms are tacit knowledge, something we accumulate over long periods of time. I would argue that our axioms are a manifestation of our wisdom.

Even if we are self-reflective (and honest!) enough to see that, we trick ourselves into thinking that our philosophies are by and large equivalent, and so we're starting from the same point. Wrong! Axioms are derived from our philosophies in a messy, nonlinear, nondeterministic way. These derivations are full of biases. And the philosophies themselves – the most fundamentally held beliefs – are highly individualized, and often impossible to describe in words. So how can I claim that my philosophy, let alone my axioms, is consistent with yours?

"Is it true?" is therefore relevant only given a particular individual's background and the context. In other words, there isn't "the" truth -- there are many truths based on our assumptions and backgrounds. A blind belief in the notion of a single truth is therefore particularly dangerous, because it promotes intellectual bullying – the ability to outlogic someone regardless of the quality of the statement itself. In fact, environments that tend to believe in a single version of truth also tend to specialize in people who are good at debates – erudites who know the tricks of logic and have mastered the art of pseudoproof, the sleight of hand-waiving or the placement of the unsound (and critical!) logical leap on a page break.

No, I'm not advocating abandoning logic altogether. In general, being able to use the laws of logic to make decisions and to communicate is arguably the most effective way to achieve good outcomes. However, a shallow understanding of "logical thinking" can be detrimental. Logical thinking is nothing more than the process of manipulating statements in a way that preserves their truth value.

So what should individuals and organizations do? Invite others to a discourse, but be open to the idea that whenever we converse, we start with two different sets of assumptions, and consequently we can both be logical yet arrive at wildly different results. Spend less energy engineering logical transitions, and more energy digging into the underlying axioms. They are likely different. When you notice a difference, work on resolving it. Perhaps we can avoid the jam and pick a subset of our assumptions that we do agree on, assuming that a relevant decision can still be deduced from it. Or maybe we can both compromise and merge our axioms into one set (so long as it's consistent!). Most likely, whoever is responsible for the decision gets to call the shots – but this time, everyone is clear on how the decision was made and what it was conditional upon. That way, if the outcome backfires, the decision-maker can trace it back to her axioms and refine them, rather than creating an illusion that the failure was not the decision-maker's fault.

Betting on the Timing of an Event

There are times when two or more parties disagree over when a particular event will happen, and the disagreement is so strong that people are willing to bet on it. It is common to place “over-under” bets — if the event happens before time T, Andrew gets the money; otherwise Bob gets it. Usually, the further the event is from T, the more money changes hands.

I don’t like this style of betting because it’s not expressive enough. Instead, I prefer to bet by specifying my probability distribution of the timing of the event, and then using these distributions to determine payouts. It's a fun activity, requiring very little mathematics to execute well.

Essentially, each party graphs a probability distribution of the timing of the event — a histogram with time on the horizontal axis and the probability density function on the vertical axis. The probability density function is simply “the relative probability that the event will happen around the time specified on the horizontal axis”. So if the histogram is twice as tall around 8pm as around 7pm, the event is twice as likely to happen around 8pm as around 7pm.

That’s all each person really needs to do. No need to worry about the area of the histogram summing up to 1, since the vertical axis can be scaled up appropriately. The two people should also agree on how much money they are willing to bet — say k dollars each.

When the event actually occurs at time T, the two people compare the value of the probability density function (the height of the bar) at time T on their graphs and pay up based on the difference in these values. The height of each bar needs to be scaled appropriately so that the area under each curve adds up to 1 – that way, no player can cheat by making their graph "taller".

To do the scaling well, we need to calculate the area under the graph. A few heuristics that work include: 

  • Limiting oneself to "aliased" curves on graph paper, so that the area is simply the number of squares under the curve
  • Limiting oneself to piecewise linear curves and doing relatively simple math to compute the area
  • Scanning the graph and using a graphics editing application to determine the area under the graph (using e.g. the flood fill and histogram tools) 
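
To make the settlement concrete, here is a minimal sketch for piecewise-linear curves (my own illustration, not part of the original scheme; the function names and the exact payout rule are my assumptions): it normalizes each curve so its area is 1, reads off both densities at the event time, and has the loser pay a fraction of the stake k proportional to the relative difference of the two densities.

```python
# Settle a bet given two piecewise-linear density sketches and the actual event time.
import numpy as np

def normalize(times, heights):
    """Scale a piecewise-linear curve so that the area under it is exactly 1."""
    times, heights = np.asarray(times, float), np.asarray(heights, float)
    area = np.trapz(heights, times)           # trapezoid rule is exact for piecewise-linear curves
    return times, heights / area

def density_at(times, heights, t):
    """Linearly interpolate the normalized density at time t."""
    return np.interp(t, times, heights)

def settle(curve_a, curve_b, event_time, k):
    """How much A receives from B (negative means A pays B); assumed payout rule."""
    a = density_at(*normalize(*curve_a), event_time)
    b = density_at(*normalize(*curve_b), event_time)
    return k * (a - b) / max(a, b, 1e-12)     # proportional to the relative difference, capped at k

# Example: Andrew expects the event around 8pm, Bob around 7pm (hours on the horizontal axis).
andrew = ([18, 19, 20, 21, 22], [0, 1, 2, 1, 0])
bob = ([17, 18, 19, 20, 21], [0, 2, 1, 0.5, 0])
print(settle(andrew, bob, event_time=20.25, k=10))  # positive: Bob pays Andrew
```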

The Theories of Time Travel

(Originally published on October 22, 2010) 

Let’s assume that what we all secretly hope for is true: that backwards time travel is possible (with a fast enough rocket you can travel forward in time already, thanks to Einstein). It’s unclear what such time travel would look like — there are many different theories and, consequently, interesting implications for the Universe, causality (why hasn’t anyone visited us from the future?), the existence of paradoxes, and the existence and nature of time loops. Some include interesting design constraints: my favorite is the theory of time travel put forth in Primer.

Note that to help myself think through this, I have a human being travel in time; this may lead to inaccuracies and further questions — in most of the cases below, we can probably replace me with a photon, or even a quark, and get more precise results (“memory” becomes “momentum” or “spin”, etc.). But it’s more fun to think about people traveling in time.

 

Possibility one: there is only one version of the Universe

  • If the links between causes and effects are not maintained, we have consistent (paradox-free) time travel: moving backwards in time rewrites history and the previous version is lost. The I that travels back in time (call it I1) is not the same as the I that I1 meets in the past (I2). Whether I2 enters the time machine or not is irrelevant to I1. If I1 kills I2′s grandfather, I2 will not be born but I1 will not be affected in any way. It’s a very safe theory of time travel.
  • If the links between cause and effect are maintained (but their temporal relationship isn’t, necessarily), the Universe has to decide how to handle duplicates of matter/energy: it may choose to allow them, or not, or have an opinion somewhere in between.
    • If duplicates are allowed, I1 is identical to I2 but they are allowed to co-exist. If I1 prevents I2 from entering the time machine, I1 will cease to exist. What if I1 kills I2′s grandfather (who is also I1′s grandfather)?
    • It’s possible that I1 will simply not be able to do this — this is the theory where the Universe maintains its consistency (by making it prohibitively expensive — either by requiring you to put a lot of energy into your action or by outright generating laws that locally forbid you to perform it), somewhat akin to what the writers of Lost did in the show. This energy-constrained time travel — the Universe not letting me kill my grandfather — is interesting. In order to maintain its consistency, the Universe would need to propagate all actions forward (“play them out”). If there is a sequence of actions that causes an inconsistency, the energy required to continue along this sequence would increase, proportionally to the probability of an inconsistency. It would be like an invisible magnetic field that steers actions in a particular direction. This could be implemented by a biased averaging out of quantum effects: let’s take light for example. We know that according to quantum theory, the movement of photons from A to B is realized through an infinite number of different paths which average out to a straight line. However, if the probabilities of the paths are different (due to the fact that some paths may cause an inconsistency in the future), the paths could actually average to something that’s not a straight line. To us it would seem that light travels in curved paths (without the presence of any “real” field, such as a gravitational one)! Of course, these probabilities change gradually, so no obviously apparent deviations from the norm would occur at first. For example, if I’m intending to kill my grandfather, the Universe will start steering me away from my intention through a small sequence of very likely events. If I persist in my intentions, the events increase in magnitude, but it’s possible (because there are just so many possible events that can influence me) that I will never realize my intention, without ever noticing anything strange about the Universe.
    • Otherwise, we have a phenomenon known as the Grandfather Paradox. I1 may create an unstable point in spacetime: I1 (and thus I2, and the grandfather) will each both exist and not exist at the same time, in a kind of macro-Schrödinger effect. What’s worse, anything that either was caused by I2 or the grandfather or would have been caused by I1 will also both exist and not exist. It’s unclear what effect this will have on the rest of the Universe — as these effects ripple through time, they expand their scope (the events that the grandfather caused themselves caused other events) but decrease in magnitude (think of it as a sound wave propagating through space, maybe bouncing off objects).
    • It’s possible that over time, as soon as they become small enough to be captured by quantum uncertainty, they stabilize so the ripple has a finite size (I can’t visualize what the ripple would actually look like, maybe a really fast-flashing grandfather).
    • Or the Universe could cease to exist.
    • If duplicates of matter/energy are not allowed, I1 would need to replace I2 (for this to work, the Universe would somehow need to have a unique identifier for everything in it). It’s difficult to think about replacing something complex like a human being because he or she is made of many building blocks, each having a different identifier, so let’s simplify and think of something that consists of a single block (say, a photon). The photon would replace its version from the past. Does this photon have “memory”, that is, its future state?
    • If so, the photon will likely change its course (behave differently than I1 did). This may mean that I2 may never end up traveling in time, but that’s fine because there is only one version of it. This is equivalent to the theory of rewritten history.
    • If not, I1 simply merges into I2 — I2 enters a time loop which it will never be able to leave. It’s not aware of that, however, so to I1, the time travel ends its consciousness.
    • If somehow we can maintain this option at a macro scale, it’s possible that an individual may travel back in time and maintain his or her memory, provided that the interval of time travel is small (for example, if I1 travels back to before I2 was born, I1–an individual–would have to replace a bunch of particles which aren’t even part of a human being. That will very likely result either in the destruction of I2–I2 will not be possible given the new state that all of its particles will have assumed before they created it–or in the destruction of I1–the “memory” that each particle has will be insignificant and so I1′s consciousness will end as soon as he travels in time)
    • Another way not to allow duplicates would be for I1 and I2 to “swap” places: as soon as I1 travels backwards in time, it takes I2′s place and I2 takes I1′s place in the future. When I1 gets to the time when it first traveled in time, he ceases to exist. There is no paradox because time travel transfers both I1 and I2. It doesn’t matter whether I1 actually enters the time machine the second time around or not, because his existence ceases past that point anyway.
    • Finally, the Universe may choose some option in between; for example, I1 and I2 will be entangled in a way that doesn’t increase entropy. This may look like a kind of constrained time travel, where paradoxes are not possible because they are prevented by the entanglement of I1 and I2 (in other words, I1′s and I2′s actions will either make both of them survive the interval of their co-existence, or make them both self-destruct). At the event of time travel, I2 goes back and I1 is the only entity remaining. This brings me to an interesting idea: what if time travel and quantum theory are actually one and the same? What if the time interval where I1 exists in the past (and influences outcomes) is equivalent to the cat being both alive and dead: it cannot be inspected, and nothing can be said about what happened or what any of the outcomes that I1 could have influenced were. The instant at which I1 entered the time machine would then correspond to the box being opened — we find out what all those outcomes were.

Possibility two: there are many versions of the Universe

This is similar to the first case (rewriting history) but if the Universe bifurcates with every time travel, an awful lot of energy is needed to do time travel. Alternatively, the Universe may already exist in its virtually infinite forms, each form corresponding to a different possible unfolding of an event. We know from Newton that at a high level the world seems deterministic, but at a quantum level it’s not — this randomness I see as a basis for the different unfolding of the events (hence, once and for all answering the problem of free will: there is no free will, but there is also no determinism — what we perceive as “choosing” is just a particular folding up of all the quantum uncertainties). So every time we put a cat in a box, there are Universes in which the cat is dead and Universes in which it’s alive. We know which path we’re on as soon as we open the box. Time travel would then simply be an opportunity to follow a different path.

There is one problem with many of the sub-theories above, and that is the problem of the sudden injection of matter/energy: if I appeared from the future, the mass/energy of the universe wouldn’t be conserved (I could build a kind of automated time travel machine that continuously adds energy or mass to the universe, or–even better–reduces the entropy of the entire universe). So my arrival needs to be coupled with the disappearance of energy or mass. Or maybe some other matter/energy is transferred into the future (where the travel originated). Possibly an arrangement such as the one in Primer is needed, where travel is only possible to a limited point in time, where all the prep work has been done — for example, enough energy has been set aside to be “displaced” by the newly arriving energy. It may also be that the time travel portal has a standby energy consumption — it consumes energy at some rate, like a leaking pipe, all the time — this would allow energy of at most that rate to be transferred from the future.

Another way to solve the sudden injection problem is to borrow me for the duration of the time travel episode from the time chronologically after the event of time travel. That is, if in the year 2010 I go back to the year 2005, my extra existence for five years between 2005 and 2010 will be borrowed from what would have been my existence between 2010 and 2015. In other words, as soon as I reach the year 2010 the second time around, I jump to the year 2015. This is a kind of quantum entanglement, but not of I1 and I2, but rather of I1 and the future version of I1.

The Pencil Curve

(Originally published on September 26, 2010)

What is the shape of the curve traced by one tip of a pencil as you roll it up a cylinder (the pencil being tangent to the cylinder at all times)? The pencil starts just touching the cylinder. As the pencil moves closer to the cylinder, the tip first moves away, then quickly moves back, eventually to stop as the pencil becomes vertical.

Below is the diagram of the pencil’s start point, the end point, and an arbitrary point.

The pencil rolling on a cylinder. The tip tracing a curve is marked.

It’ll be easier to parameterize the curve: determine the coordinates of each point as a function of the distance of the bottom tip of the pencil to the bottom of the cylinder (where the cylinder is tangent to the table). Initially the pencil’s tip is just touching the cylinder and the distance \(t\) is equal to \(l\), the length of the pencil. At the end, when the pencil is vertical, \(t=r\), the radius of the cylinder.

We have, from the arbitrary point, \[\begin{aligned} \frac{y}{x+t}&=\tan 2\alpha = \frac{2\tan \alpha}{1-\tan^2\alpha} = \frac{2r/t}{1-r^2/t^2} = \frac{2rt}{t^2-r^2}\\ \frac{x+t}{l}&=\cos 2\alpha = 2\cos^2\alpha-1 = \frac{2t^2}{r^2+t^2}-1 = \frac{t^2-r^2}{t^2+r^2} \end{aligned}\] Therefore, \[\begin{aligned} x &= \frac{l(t^2-r^2)}{t^2+r^2}-t\\ y &= \frac{2rt}{t^2-r^2}\cdot (x+t) = \frac{2rt}{t^2-r^2}\cdot\frac{l(t^2-r^2)}{t^2+r^2} = \frac{2rtl}{t^2+r^2} \end{aligned}\] We can plot the curve as a function of \(t\):
Plotted pencil curve.
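
As a check, here is a short sketch (mine, not part of the original post) that plots the curve directly from the formulas above; the pencil length and cylinder radius are arbitrary values picked for illustration.

```python
# Plot the pencil-tip curve x(t), y(t) for t running from l (pencil flat,
# tip just touching the cylinder) down to r (pencil vertical).
import numpy as np
import matplotlib.pyplot as plt

l, r = 10.0, 2.0                  # assumed pencil length and cylinder radius
t = np.linspace(l, r, 500)        # distance of the bottom tip to the cylinder's contact point

x = l * (t**2 - r**2) / (t**2 + r**2) - t
y = 2 * r * t * l / (t**2 + r**2)

plt.plot(x, y)
plt.axis("equal")                 # keep the geometry undistorted
plt.xlabel("x")
plt.ylabel("y")
plt.title("Pencil curve")
plt.show()
```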

An Interchange

 (Originally published on September 22, 2010) 

Here is my version of a stack interchange — a system of two highways intersecting such that cars coming from any direction can either go straight, turn right onto the intersecting highway, or turn left onto the intersecting highway in the opposite direction (I didn’t allow U-turns so as not to complicate things too much).

My highway interchange: the red arrows show the paths that drivers going in a particular direction could take. Note that at most two lanes intersect at a point, which makes it conceivable to build the interchange with two levels only; in the diagram the broken lane is below the other.

And here is a slightly different version that doesn’t suffer from the problem of the center being too dense — if you look very carefully, you’ll see that in the version above, the centers of each of the arcs would meet unless the drop in elevation is non-uniform.

Addressing the problem of a dense center

It seems to me that it can be built with two levels only — although something makes me think that it’s not that viable to build (since existing stack interchanges require four or more levels); plus, to ensure a practical speed, it would either have to have a rather large surface area, or only allow passenger cars driving at reduced speed. Here it is in three dimensions:

My stack interchange in three dimensions

My stack interchange, zoomed in. The lane splits into three lanes and each separate lane takes you into one of the three directions

 

You can download my Google SketchUp file here.

A New Color Picker

(Originally published in 2010)

I bet you've seen color pickers before. They are neat UI elements that allow you to select a particular color that you may have in mind. They do that by organizing the entire color space in a way that's easily browsed. Usually, pickers show you a 2D panel that displays all colors along two of the dimensions, and a slider for the third dimension; or they only show you a small-ish subset of all the colors.

I’m fascinated with color, especially when there’s math or technology involved. And so I set out to build a picker that displays all the colors, yet requires only a single two-dimensional surface.

To learn all the details of how I generated this new color picker, see this post. In short, however, the idea is this: we want to map a 3D space (0..255, 0..255, 0..255) into a 2D space (0..4095, 0..4095) in a smooth way, so we'll use space-filling curves. "Walking" the R, G and B dimensions, however, gives a pretty unsmooth picker, so instead I "walked" the color intensity, and for each intensity, "walked" over all possible colors of that intensity. I then picked the order in which the colors would appear by sorting by R, G and then B.

The resulting color picker has the interesting property that it displays all possible colors (up to the image's resolution) in a single image:

A New Color Picker
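
Here is a rough sketch of that idea (my reconstruction, not the original code). The details are assumptions of mine: each channel is quantized to 6 bits so that the full palette fits exactly into a 512 by 512 image, "intensity" is taken as R+G+B, and the 1D ordering is laid out row by row rather than along a true space-filling curve.

```python
# A reduced-depth reconstruction of the intensity-ordered color picker described above.
import numpy as np
import matplotlib.pyplot as plt

BITS = 6
LEVELS = 1 << BITS                 # 64 levels per channel
SIDE = 512                         # 64**3 == 512 * 512

# Enumerate every (R, G, B) triple at this bit depth.
r, g, b = np.meshgrid(np.arange(LEVELS), np.arange(LEVELS), np.arange(LEVELS),
                      indexing="ij")
colors = np.stack([r.ravel(), g.ravel(), b.ravel()], axis=1)

# Order by intensity first, then by R, G and B within each intensity.
intensity = colors.sum(axis=1)
order = np.lexsort((colors[:, 2], colors[:, 1], colors[:, 0], intensity))

# Lay the 1D ordering out on a 2D surface and rescale the channels to 0..255 for display.
image = colors[order].reshape(SIDE, SIDE, 3) * (255.0 / (LEVELS - 1))

plt.imshow(image.astype(np.uint8))
plt.axis("off")
plt.show()
```

At full 8-bit depth the same ordering fills the 4096 by 4096 image described above; only the sort gets larger.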

Blinker Frequency

 (Originally published on October 9, 2009) 

How your blinkers never blink with the same frequency as those of the car in front of you

If you’re a driver, you no doubt spend a nontrivial amount of time waiting at an intersection, another car in front of you, both of you wanting to turn left. You have probably noticed that the turn signal of the car in front of you doesn’t blink with quite the same frequency as the one in your car.

In fact, I have a strong suspicion that no two cars have the same frequency–at least that’s how it seems to me, since I can never find two turn signals to be in phase.

And so here I am, waiting for the light to turn green, with two blinkers flashing with different frequencies. I don’t like to do nothing, so I often figure out what this difference in frequencies is. It’s not as hard as it may seem–and it involves no measuring devices! It’s a pretty cool trick that takes advantage of the fact that while it’s hard to measure or compare quantities (such as speed, frequency), it’s relatively easy to detect synchronicity. First, figure out which blinker is faster. Then wait for both blinkers to be momentarily synchronized (i.e. for both to flash at the same time). Count how many times the faster blinker flashed before both are synchronized again (make sure you don’t “skip” a cycle). If the faster one blinked n times, and you captured the cycle correctly, the slower one blinked n-1 times so the faster one is n/(n-1) times faster. I like to go a step further and memorize what fractions of the form n/(n-1) come out to be as percentages to impress people with some percentage estimates while sitting in the car, with no calculator. For example, if the slow one blinks 9 times and the fast one blinks 10 times, the fast one is 11% faster.
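
For reference, here is a tiny sketch (mine) that prints what these n/(n-1) fractions come out to as percentages:

```python
# How much faster the fast blinker is if it flashes n times while the slow one flashes n-1 times.
for n in range(2, 13):
    faster_by = (n / (n - 1) - 1) * 100
    print(f"{n} vs {n - 1} flashes: faster by about {faster_by:.0f}%")
```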

This trick works for car blinkers in part because most cars’ blinkers flash with similar but not the same frequency (if the ratio of frequencies is fairly large, you will find it hard to not skip steps–that is, the blinkers will not synchronize fast enough). I also like to go to the gym, get on the treadmill and figure out how fast the person next to me is running by observing the synchronicity of the markings on the treadmill (treadmills have their brand names displayed on the belts)–the trick works because most people run at similar speeds (between 6.5 and 8.5 mph) and, moreover, because people tend to run at quantized speeds, I am often able to figure out the speed precisely.

 

A large group trying to go somewhere

(Originally published on September 18th, 2009)

Very frequently I’ve been in a fairly large group of people (5 or more) and we were all trying to go somewhere, say, a movie theater. I noticed that the amount of time it took us to actually get going increased pretty rapidly with the size of the group. This fact by itself should be no surprise to anybody; I have a feeling, though, that the amount of time is super-linear with the number of people, and perhaps it’s even super-polynomial. Let’s see if we can derive this relationship. We’ll make some simplifying assumptions but the gist of the problem should be captured.

Suppose you have \(n\) people in a group. Every person is mostly ready to leave, with the exception of a small number of tasks the person has to do (or can do, given enough time) — you know, the “If we’re not going to leave in the next five minutes, I’m just going to quickly go to the restroom” sort. Assume that each person experiences some distribution of such “events” which derail the effort of leaving. The duration of such events is also a random variable. The group can’t leave if at least one of its members is currently occupied with an event. Let’s say that the probability at any given time that a person is not occupied with an event is \(p\) (\(p\), therefore, is the measure of “readiness”; of course, if \(p=0\), the group will never leave; if \(p=1\), the group will leave at time \(t=0\)).

Assume it takes \(n\) people time \(t\) to leave. Now add a new person to the group. The group will leave at the expected time \(t\) only if the new person happens to be free at that time (with probability \(p\)). If not, the group will have to wait; assuming the events are independent, this will take another time \(t\) (at the end of which the new person may or may not be free). The expected time is therefore

\[tp + 2t(1-p)\cdot p + 3t(1-p)^2\cdot p + 4t(1-p)^3\cdot p + \cdots\\= tp\left(1 + 2(1-p) + 3(1-p)^2 + 4(1-p)^3 + \cdots\right)\]

Let

\[S=1+2a+3a^2+4a^3+\cdots\]

Then

\[S = (1+a+a^2+a^3+\cdots) + (a + 2a^2 + 3a^3+\cdots) = \frac{1}{1-a} + aS\]

Hence

\[S = \frac{1}{(1-a)^2}\]

So the above becomes

\[\frac{tp}{(1-(1-p))^2} = \frac{tp}{p^2} = \frac{t}{p}\]

Suppose \(p=0.5\). The expected time is \(2t\): an additional person doubles the amount of time it takes for the group to leave. The amount of time is therefore not only super-linear in the number of people, it’s actually exponential!
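
As a sanity check on the \(t/p\) step, here is a small simulation (mine, under the same simplifying assumptions as the derivation: the original group is ready at time \(t\), and the additional person is free with probability \(p\) at each subsequent check, independently of everything else).

```python
# Monte Carlo check that adding one person turns an expected departure time of t into t/p.
import random

def departure_time(t: float, p: float) -> float:
    """Time until the group plus one new person actually leaves."""
    elapsed = t                  # the original group is ready at time t
    while random.random() > p:   # the new person is busy at this check...
        elapsed += t             # ...so everyone waits another interval of length t
    return elapsed

t, p, trials = 1.0, 0.5, 100_000
average = sum(departure_time(t, p) for _ in range(trials)) / trials
print(f"simulated expected time: {average:.3f}   predicted t/p: {t / p:.3f}")
```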