What is Luck?

At one point in life I got interested in the concept of luck. It was the one thing I wished more speakers at business school had attributed at least part of their success to. It was also something everyone seemed to have a strong opinion about, usually with very little justification (let alone a good framework for understanding it, or compelling, objective data).

In my reflections I’ve arrived at the following tenets:

  1. Luck happens to everyone – it’s the naturally occurring variance of outcomes

  2. Luck is an important factor that affects outcomes – though usually not in the way people give it credit for. Very likely (because of the Fundamental Attribution Error), people overestimate the impact of bad luck and underestimate the impact of good luck

  3. Luck is subjective and nuanced – it can be good or bad depending on the circumstance, but also on the perspective (one person’s good luck may be another’s bad luck); and even controlling for circumstance and perspective, it may look different depending on the subject’s mindset and the time horizon being considered (the bad luck of being rejected from a university may lead to the good luck of coming up with a brilliant invention)

  4. We cannot control luck

  5. However, we can control how we react to luck, and by reacting well, we can significantly improve our outcomes

I tried to bring that last point home by building a “luck simulator”. The simulator uses randomness to chart some outcome. For simplicity, let’s assume the outcome can take a range of values from very positive to very negative, and time proceeds in quanta.

“Pure luck” is just a random walk, with no bias either way. You may end up with something like this:

What “pure luck” could look like.
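
A minimal sketch of such a walk in Python (the simulator itself is interactive and browser-based; the unit step size here is an assumption made for illustration):

    import random

    def pure_luck(steps=200):
        # At each time quantum the outcome moves up or down
        # by one unit with equal probability -- no bias either way.
        outcome, path = 0, []
        for _ in range(steps):
            outcome += random.choice([-1, 1])
            path.append(outcome)
        return path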

However, we can react to luck well by exposing ourselves to more events, each of which has a component of luck, and choosing the path that gives us a better outcome. Let’s imagine a scenario where for 1/3 of the time we let “pure luck” take its course, but for the remainder of the time we explore two paths in parallel and, at the end of each period, pick the better outcome. This creates a biased random walk that might look like this:

Explore two alternate paths, two-thirds of the time.

Finally, imagine an even more drastic version of such selection, where we always explore four paths in parallel and pick the best one every, say, six steps. As you can imagine, this is biased significantly upwards:

Explore four paths at a time, pick one every six steps. We run out of space on the Y axis…
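
Both selection strategies can be sketched with one parameterized function, reusing the unit-step walk from the sketch above (again, an illustration rather than the simulator’s actual code). The second chart corresponds roughly to n_paths=2 applied two-thirds of the time; the third to n_paths=4, period=6:

    import random

    def selective_luck(steps=600, n_paths=4, period=6):
        # Every `period` steps, explore `n_paths` random walks in
        # parallel and continue from the best endpoint -- a biased walk.
        outcome, path = 0, []
        for _ in range(steps // period):
            candidates = []
            for _ in range(n_paths):
                pos, candidate = outcome, []
                for _ in range(period):
                    pos += random.choice([-1, 1])
                    candidate.append(pos)
                candidates.append(candidate)
            best = max(candidates, key=lambda c: c[-1])  # pick the better outcome
            path.extend(best)
            outcome = best[-1]
        return path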

(The expected value of each strategy is left as an exercise for the reader. Hint: EV(A)=0)
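
To make the hint concrete, consider a single fair step (my own one-step worked example, not part of the simulator): a ±1 step X has

    \mathbb{E}[X] = \tfrac{1}{2}(+1) + \tfrac{1}{2}(-1) = 0,
    \qquad
    \mathbb{E}[\max(X_1, X_2)] = \tfrac{3}{4}(+1) + \tfrac{1}{4}(-1) = \tfrac{1}{2},

since the better of two independent steps is -1 only when both come up -1 (probability 1/4). Compounding that per-step edge is what bends the selective walks upward.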

The takeaway, at least for me, is to harness luck – explore multiple paths at strategic moments, and pursue the ones that look promising. With more advanced modeling we could extend this to the idea of time horizons as well. In the meantime, you can play with the simulator here:

Luck Simulator

Five Stages of Technology Sophistication

Much has been said about the transformative effect that technology (to be more precise, computer software) has had on various industries. But the effect is not new, nor is the transformation anywhere close to complete. Looking across industries, we can notice themes. Specifically, while the degree to which industries utilize technology at any particular moment varies (today, for example, marketing has at its disposal a much more sophisticated set of tools and technology practices than mortgages do), as they evolve they seem to embrace technology along the same predefined path. I've isolated five distinct stages, described below.

Stage One: Pre-technology

Industries usually begin as human endeavors. There are, of course, exceptions, most notably industries made possible in large part by technology (such as bioengineering), but even today, surrounded by an abundance of tools and services that make it easy (so to speak) for the masses to technologize their efforts, we are fundamentally, as a species, not technology-literate the way we are capable of speaking or writing. Walk into any household or company. Technology is relegated to technology-builders, and everyone else does everything but. True, there are pockets where these proportions are out of whack, but even technically sophisticated companies like Google boast an order of magnitude more non-technology than technology solutions.

On one hand, this is not surprising. Technology–and especially software–is very different from other human-adopted systems, and one of its biggest strengths is, in my view, also the reason it's so hard for everyone to adopt. It's just so damn precise. There is very little redundancy built in, if any. Computer programs are recipes as rigid as rail lines–they very explicitly move state from A to B under very specific conditions. Computer languages are compact (only a handful of keywords) and highly grammatical, but expressiveness suffers. Replace one character in your Ruby or Java code and your program will likely not compile. This means that to be truly technology-literate, one has to have a rather seriously mathematical and systematic mind, capable not just of creating abstract representations of objects (which every brain does all the time, quietly) but also of describing these representations very explicitly. This is why attempts to make programming "user-friendly" by using graphical diagramming-type metaphors or WYSIWYG controls always fall short of expectations.
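
A trivial illustration of that rigidity, in Python rather than the Ruby or Java mentioned above (a toy example of my own):

    total = 0
    for n in range(10):
        total += n
    print(total)  # delete the closing parenthesis and the whole
                  # program fails with a SyntaxError -- nothing runs at all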

On the other hand, I can't imagine this state of things persisting much longer (on a grander scale of things... say, a century or so). But that's a topic of conversation for another time.

Stage Two: Local technology

As industries focus on systematization and cost-cutting, technology is usually brought in. This takes a very specific shape: I call it local technology, to denote solutions that help individuals or, at most, solve very small, specific problems. Part of why local technology is the earliest stage in this evolution is historical: computers only became connected in the early '80s, while software existed long before then. But local technology is also easier to get one's head around, and, most importantly, it doesn't violate interfaces: as a worker in company X, with local technology, your inputs and outputs didn't change; you still interacted with the same people and had the same responsibilities. Technology simply made you more efficient.

My recent favorite industry, mortgages and mortgage origination, illustrates this stage perfectly well. The complexity of the use cases (the completion of a mortgage application file) quickly outgrew human processes, and systematization took advantage of the technology of the '80s, which was local technology. Each of the "stations" in the backend – loan officers, processors, underwriters – optimized its own environment, but the interfaces remained, because technology couldn't strongly connect the stations. The industry became deeply entrenched in this form of technology, so much so that only relatively recently have we begun to see a transformation from heavily optimized individual stations to optimized systems that include all stations (and perhaps redefine the stations altogether).

Stage Three: Connected technology

Easily the most exciting and enabling stage in the evolution of any industry. Of course, we are now living in the Renaissance of Connectedness: not a day goes by without my learning about a new problem space being "disrupted" by technology, which really just means a company successfully innovating on a business model to take advantage of the omnipresent and all-so-connecting Internet. That's exactly what Uber, Airbnb, Facebook, Twitter, Dropbox, Spotify, and dozens of other large private tech companies (can we really call them startups anymore?) are doing. The late 1990s saw a flurry of activity, but the environment wasn't quite ripe for true connectedness (made possible by near-universal broadband and then mobile data). Truly connected technology had to wait for the mid-to-late 2000s, and there's still a lot to connect.

Stage Four: Analysis and insight

The connecting power of technology simply cannot be overstated. It will continue its run until the majority of industries are shaken up and redefined by the notion of connectedness. But the value of technology doesn't end there. Most industries, once connected, then turn to data and to the understanding of what's behind the data. Except in limited cases, analytics and insight only become tremendously more useful once the data from all connected endpoints are pooled and analyzed. That's the big idea behind the somewhat hype-y term Big Data.

Analysis and insight are a frontier beyond mere connectedness. In search of further efficiency, industries must begin to understand the processes they put in place (via local technology) and connected (via connected technology). And while we hear a lot about companies whose competitive edge lies in analytics – the aforementioned Big Data slogan just won't die, and Data Scientists are now among the most sought-after professionals – we are only at the very beginning of useful analytics.

Stage Five: Artificial Intelligence

No matter how sophisticated, analytics is only as good as the heuristics humans can think of or the trends that naturally emerge. In a quest for insight beyond that, we turn to technology's ultimate promise: that of emulating human intelligence. While there are a handful of companies that can boast truly artificially intelligent systems (some that come to mind are IBM, Google's computers hidden deep in the company's basements somewhere, processing facts under Peter Norvig's careful watch, and perhaps Tesla), the solution space is nowhere near as "toolified": local technology has applications going back to early computers, connected technology has the Web, analysis and insight have machine learning and data visualization, but Artificial Intelligence remains mostly in the realm of research papers and some companies' secret projects.

***

What is the significance of these five stages, and what can we learn from this taxonomy? First of all, understanding a particular industry's evolution, overlaid on the evolution of software, helps us understand the challenges and opportunities inherent in that industry. If an industry grew tremendously before connected technology became reality, the existing players are likely plagued by legacy local technology with a modicum of connective tissue. If an industry seems to have only just become connected (the transformation from on-premise to SaaS solutions in the enterprise space, for example), an opportunity to look out for will likely come in the insights now possible with the proliferation of data. Finally, it's useful to distinguish between analytics and intelligence, even though we still have a ways to go before we see a wave of new "disruptions" – really, business model innovations stemming from the fact that we will be able to near-infinitely scale human intelligence.

Judging by Outcomes, versus Judging by Process

In the past, I've encountered organizations that believed in two very different methods of evaluating performance. There are Outcome-driven evaluations: by and large, it doesn't matter how you arrived at an answer (provided it was legal – presumably! – and ethical – to some extent), so long as the end result was good. And there are Process-driven evaluations: by and large, the outcome itself is not as important in evaluating performance as the process one used to arrive at it. I've spent some time thinking about which camp I fall into, and why.

There is a logic to both approaches. For the proponents of the Outcome-driven approach, the logic is simple: outcomes are all that matter. The process does not put bread on the table. Evaluating the process directly is not that important because, over time, those who do have a good process that generates good outcomes will continue being rewarded. It's possible that one gets lucky once or twice, but it's unlikely that someone will keep getting lucky, so those who rely on luck (or in some other way have a substandard process) will drop out sooner or later.

The proponents of the Process-driven approach, on the other hand, argue that outcomes are just too noisy to extract any sort of signal from. It's much better to evaluate someone on the process they used to arrive at an answer, because with a good process they may get unlucky once or twice, but by and large they will produce good outcomes. Rewarding people on process also encourages them to think about their process early on, which is more efficient than letting them "figure out" what works by observing the rewards from the outcomes.

The Process-driven approach used to appeal to me intellectually. It eliminates the element of luck, provides a clear structure for others to operate in, and lends itself better to systematization (since the fundamental "building block"–the thing being reviewed, fine-tuned, and shared with others–is the process). It's less arbitrary and more elegant.

But as I saw more organizations, and thought about it more, I ultimately switched camps. I believe in Outcome-driven evaluations.

Why? The more I looked at companies advocating the Process-driven approach, the more systemic failure I noticed. A massive risk in evaluating based on the approach one took is that such an evaluation is inherently subjective. What makes a good process? Presumably those whose own process is good can judge others' processes to be good. But that creates a vicious cycle: it's not long before everyone praises and congratulates one another on an outstanding process while the outcomes deteriorate for lack of any kind of feedback loop. It also creates a structural excuse for poor performance: "Well, my approach was fine; I just got unlucky" becomes the commonplace response. Finally, it leads to the misattribution of errors: the focus on process causes managers to want to alter the process when bad outcomes stream in, rather than to address the root cause of the problem, which is usually an underperforming individual.

Of course, many criticisms of the Outcome-driven approach are valid. But there are easy fixes. It's true that outcomes are noisy, but if we perform rich diagnoses that collect many data points, essentially turning an outcome from a binary one into a multifaceted one, we can extract a lot of information from any one outcome. It's unfair to evaluate someone based on bad luck – but there are ways to take that into account (for example, a three-strike policy). And a focus on making decisions systematically can be achieved through a culture that encourages debate around process, emphasizes success stories (i.e., good outcomes) that incorporate systematic decision making, and focuses on process in employee training and education (education, rather than training, is good for aspects of performance that involve long time frames).

On the other hand, I view the Process-driven approach as structurally and fundamentally flawed, and thus unfixable. It's nearly impossible to objectively evaluate a process in the absence of an outcome (as the Process-driven approach stipulates), since that involves counterfactuals. Discussions around process often end up theoretical in nature – good logicians often win. Finally, a focus on process prevents creative, albeit unorthodox, approaches from prevailing, even when they are desirable and superior.

Venturing a generalization, I believe that any approach that relies on intelligent designers rather than feedback systems has fundamental flaws which limit its effectiveness.

A Single Version of Truth

There is a certain elegance to a belief that a single version of truth exists; that, by following a deterministic, codified process, we can definitively answer questions; that, in the face of a disagreement, you and I can discuss our way to the correct answer, whereby one of us is right and the other is wrong. A single version of truth would certainly provide comfort and confidence to any relationship, any team, any organization.

The organization I worked at held a firm belief that a single version of truth exists. It took much longer than it should have, but I have thought about it and experienced enough to vehemently disagree. There is no single version of truth.

And I'm not talking about the results of Gödel and the like, who proved that some statements are unprovable. This is the one fact that my mathematician friends, with a detectable grin on their faces, always used to bring up as I was going through my period of agnosticism. The problem with using it as an argument is that, while mathematically correct, it doesn't really help me in everyday decision-making. The statements whose truthiness I wanted to ascertain were not crazy self-referential constructs built to make a point. They seemed much more "real life".

I'm also not talking about visibly nondeterministic statements such as "It will rain five weeks from now". Of course, if the processes involved are chaotic (nonlinear) and the statement is a prediction, or in some other way requires the collection of a large body of data, there is no clear truth.

I'm talking about those nice, "linear" statements you may encounter at home or at work. The kinds of statements your boss makes and convinces you are true – i.e., that they can be derived from facts using logic.

The problem is that logic is not a mechanism by which truth is created from nowhere; logic is simply the process of transforming one statement into another statement that has the same truthiness. Logic, in short, is the creation of equivalent truths. And in most "real life" conversations we don't start with facts; we start with our individualized sets of axioms–assumptions that we believe to be true.

I've witnessed countless occasions where smart men and women performed logical jujitsu, totally blind to the fact that they were starting from very different assumptions, disagreeing at the fundamental level of basic assumptions and philosophies. The problem is that we rarely state these assumptions. Here is a good exercise: list, say, fifteen axioms that guide you in life. Odds are, you will find it very difficult. Axioms are tacit knowledge, something we accumulate over long periods of time. I would argue that our axioms are a manifestation of our wisdom.

Even if we are self-reflective (and honest!) enough to see that, we trick ourselves into thinking that our philosophies are by and large equivalent, and so we're starting from the same point. Wrong! Axioms are derived from our philosophies in a messy, nonlinear, nondeterministic way. These derivations are full of biases. And the philosophies themselves – the most fundamentally held beliefs – are highly individualized, and often impossible to describe in words. So how can I claim that my philosophy, let alone my axioms, is consistent with yours?

"Is it true?" is therefore relevant only given a particular individual's background and the context. In other words, there isn't "the" truth -- there are many truths based on our assumptions and backgrounds. A blind belief in the notion of a single truth is therefore particularly dangerous, because it promotes intellectual bullying – the ability to outlogic someone regardless of the quality of the statement itself. In fact, environments that tend to believe in a single version of truth also tend to specialize in people who are good at debates – erudites who know the tricks of logic and have mastered the art of pseudoproof, the sleight of hand-waiving or the placement of the unsound (and critical!) logical leap on a page break.

No, I'm not advocating that we abandon logic altogether. In general, being able to use the laws of logic to make decisions and to communicate is arguably the most effective way to achieve good outcomes. However, a shallow understanding of "logical thinking" can be detrimental. Logical thinking is nothing more than the process of manipulating statements in a way that preserves their truth value. So what should individuals and organizations do? Invite others to a discourse, but be open to the idea that whenever we converse, we start with two different sets of assumptions, and consequently we can both be logical yet arrive at wildly different results. Spend less energy engineering logical transitions, and more energy digging into the underlying axioms. They are likely different. When you notice a difference, work on resolving it. Perhaps we can avoid the jam and pick a subset of assumptions we do agree on, assuming a relevant decision can be deduced from it. Or maybe we can both compromise and merge our axioms into one set (so long as it's consistent!). Most likely, whoever is responsible for the decision gets to call the shots – but this time, everyone is clear on how the decision was made and what it was conditional upon. That way, if the outcome backfires, the decision-maker can trace it back to her axioms and refine them, rather than maintaining the illusion that it was not her fault.

On the Blankian Imperative

Business plans never survive first contact with customers, [so] get out of the building, talk to your customers, and iterate [until] you see their pupils dilate!

– Steve Blank


One of the best things about Stanford is that you get to interact with some great people. I had the pleasure of taking a class taught by Steve Blank. The class was called Lean Launchpad, and it was an inverted-classroom lab where teams of four worked on their startup ideas. The only requirement was that teams follow the lean methodology: form hypotheses about the various elements of their business (such as customer segments, the company's value proposition, the revenue model, etc.) and run tests to either confirm or reject these hypotheses. (If you haven't heard of the lean startup, you should really read up on it.) You also needed to test hypotheses primarily by talking to customers, not by doing market research or sitting and thinking about them. The main guiding principle is that evidence for your hypotheses cannot have originated in your head.

And so in Lean Launchpad, 90% of the work happened "outside the building" – not in the classroom or at home, but in the field, interviewing potential customers. With 10 meaningful customer interviews per week for 10 weeks, the class was exhausting, but – as Steve was quick to point out several times – it's only a tiny preview of the workload, the intensity, and the emotional roller coaster that an actual startup will subject you to.

I say "the pleasure of taking the class" for a couple of reasons. First, Steve Blank does not filter. He says it as he sees it, he does not wrap criticism in niceties. He will interrupt you mid-sentence. Some find it too abrasive, but increasingly having to deal with people who rarely say what they think and often fluff things out, I found it refreshing. I felt like I learned a lot more once I was out there, vulnerable, facing the Wrath of Blank after I failed to talk to enough customers.

Secondly, I have come to cherish experiences that, over a relatively short time period, made me truly internalize a concept, and Lean Launchpad, with its guiding principles of lean startup and customer-driven development, was one of them. Prior to taking the class, I would have told you, "Sure, it's important to talk to customers." After all, everything Steve Blank talks about makes sense, and because it seemed like common sense, I would dismiss the teachings as something I was already doing. But all I had done was intellectualize his point. When it came to actually working on my venture idea, I would forget the point and go back to my old ways.

What were these old ways? I would short-circuit talking to customers, believing that I knew ex ante what the customer would say. I would avoid thinking about the hypotheses implicit in my business ideas, and even if I did write them down, I would avoid testing them out of fear of being wrong. I was also lazy, and avoided talking to customers because talking to customers is uncomfortable (at least at first), hard, and time-consuming.

Even when I did eventually talk to customers, I realized that I wasn't doing a good job of it. I wasn't listening. You know how sometimes you talk to someone, and while they speak you politely nod, but all you're thinking about is what you're going to say next? That's what I was doing. I would also often skillfully steer the conversation in a direction that might confirm my assumptions, leading the customer where I wanted him or her to go, rather than learning new information.

Finally, to avoid talking to customers, I used an excuse – a sneaky, dangerous excuse that I often hear people use: "Some ideas just don't lend themselves well to customer development." I would pontificate about how the iPhone would never have been invented if Steve Jobs had talked to customers.

Steve Blank helped me realize what it means to talk to customers well. Afterwards, I wrote up my experiences:

The problem is that there are good ways of talking to customers, and bad ways of talking to customers. There is an art to interviewing people in a way that lets you extract meaningful information while avoiding many of the common biases that, as a founder, you will likely be prone to. Such biases are generally a manifestation of a very simple problem: we are by nature egotistical. We care about our idea, our product, more than about someone else's pain. We fall in love with our product, and subconsciously we lead with questions that avoid the possibility of uncovering uncomfortable truths about our product or the need we believe in. We also constrain our customers to giving us meaningless feedback about something specific, rather than valuable feedback about something broader.

To avoid this mistake, don't start with the product. And definitely don't start by showing the product to the customer. When I did that, instead of getting creative, insightful comments that correspond to actual need, I "Wow"ed the customers and got very superficial feedback. You see, when you start with a product (especially a visualization of the product) rather than establishing a need first, you're forcing the customer to adopt the role of your product's user. This means that your customer will make lots of assumptions about what the product does, what it's for, and how it should be used. You're, in effect, increasing the distance between your customer and her need. You're also turning your customer into a refiner who'll try to respond at a granular level to what's in front of her. You're numbing her creative and discriminating faculties because you've given her something very tangible to respond to. Finally, you're preventing yourself from learning about your customer. Maybe if you had understood more about your customer's needs (and the pains of her everyday job), you would be able to come up with a much better painkiller.

Instead, I like to start very generally. What does the customer do every day? How does she do it? What is working and what isn't? What are the biggest pains? In an ideal world, what would she not need to be doing? How would she want to be doing her job? Once I establish this broad landscape, I zoom in, focusing on the big needs and pains (which should hopefully be aligned with what your product does – if not, explore them more before turning your customer's attention away from her biggest pain and towards your solution).

Finally, be present in the interview. It's tough to do, because we want the conversation to go somewhere. We want to manufacture insight. And insight can't be manufactured. If a customer wants to tell you a story, let him do it. If a customer seems more interested in what's in your opinion a minor aspect of your product, listen.


Only when I had internalized – and I mean truly internalized – the above (which meant erring on the side of customer development and the hypothesis-driven approach even when it didn't feel efficient or useful) did I allow myself to step back and think about the limitations of the Blankian Imperative. For it's not a panacea; it suffers from its own biases, and it should not be indiscriminately applied to everything.

First, I encourage you to separate hypothesis-driven development from the lean startup model. In lean startup, you want to fail fast. Failing fast gives you momentum and is a good forcing mechanism to avoid going down rabbit holes (it also makes for good stories, and makes it possible to have a class like Lean Launchpad). But failing fast also means you run the risk of never committing to something you believe in. This may manifest itself as setting a low bar for rejecting a hypothesis. Many of my friends in the class would drop a hypothesis after talking to just one person. In theory, failing fast is not a big problem, because all you're looking for is one true positive, so it's better to tolerate a false positive than to suffer a false negative. But if you fail fast, you may miss the really big problems waiting to be solved, but disguised as noise.

In fact, you can run a full hypothesis-driven process with customer discovery but not fail fast. This means that your hypotheses take a little longer to prove or disprove. If you don't have a problem motivating yourself (and don't need constant momentum to keep you going), these "slower" tests won't pose a threat to your startup.

Secondly, I would choose a well-run intuitive process over a poorly-run hypothesis-driven process any day. I've seen a number of teams who think they are going down a good road only because they followed the process. Usually this manifests itself as vaguely-phrased hypotheses (or statements that don't really sound like hypotheses: a hypothesis is a belief, an assumption that can be rejected in the presence of appropriate evidence), tests that act like checklists of hypotheses ("Oh, I've heard the customer mention this, so check!"), and rejections/acceptances that simply add the checks up.

Once you decide you want to be hypothesis-driven, think about your startup like a scientist would. Are your hypotheses falsifiable? Would two reasonable people agree on whether particular evidence rejects a particular hypothesis? Do the tests provide strong evidence? It may help to have part of the team try to defend the hypothesis and another part try to kill it.

Finally, yes, you might be thinking of a product that unearths new demand, as was the case with the iPhone. Customer development is much more difficult in a case like that, because the pains are likely horizontal (i.e., lots of little, seemingly unrelated pains that are all solved by your product) and muted (your customers are so used to dealing with the world without your product that they have come to terms with its pains). But all this means is that you have to be smarter and more creative about your customer interviews. Don't hand an iPhone (or whatever equivalent you've envisioned) to your customer and ask what pains it would solve. Your customer would likely not know, even though he might be wowed by the gadget. Instead, observe your customers as they go through their daily lives and the workflows you think your product might address, and take notes. I'm a big fan of shadowing people – following a customer for a day may seem like the most inefficient way to get information, but sometimes it's really the only reliable way to get information that didn't originate in your head.

From the Archives: Ephemeral Dice and Fire Effects

I was going over my archives and stumbled on a couple of video games I was working on around 2002. There were two visual effects that I particularly liked, and the genesis of each had an interesting element of chance, so I thought I would bring them back. The original code was written in a mixture of C and assembly on Windows 95, so it took a little porting, but the concepts in computer graphics haven't changed all that much in the past decade, so the port wasn't complicated.


Dice

I used this effect in an implementation of the game Risk. I wanted to illustrate a roll of dice. The whole game had a kind of ephemeral feel to it, and I wanted the dice to feel that way too. At first, I wanted to cycle through random faces, which would fade out before a new face appeared, but I accidentally committed an off-by-one error, which made the effect look much better.

The way I faded the dice was to cycle through each pixel of the display, read its color value, fade the color a little (by moving it a little closer to black – in the original version of the game – or to white in this port), and then draw the pixel back on the display with the new color. By mistake, I drew the color one pixel higher than I should have. As a result, the dice not only faded but also shifted up, which made them look like they were evaporating:
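
A minimal sketch of that fade-and-shift loop in Python with NumPy (the original was C and assembly; the grayscale buffer and the fade amount here are assumptions made for illustration):

    import numpy as np

    def fade_step(frame, fade=8):
        # frame: 2D array of pixel intensities, row 0 at the top.
        # Fade every pixel a little toward black...
        faded = np.maximum(frame.astype(int) - fade, 0).astype(frame.dtype)
        # ...then draw each pixel back one row higher than where it was
        # read -- the accidental off-by-one that makes the dice evaporate.
        out = np.zeros_like(frame)
        out[:-1, :] = faded[1:, :]
        return out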

Fire

Another time I wanted to simulate fire in a platform game I was designing. The fire would come from a torch attached to a cave wall. The torch was pretty small, but I wanted the fire to look elaborate. And so I wanted to model three aspects of it:

  • The flame, which has a generally upward shape, broad at the bottom and narrowing towards the top, and which follows a color spectrum consistent with fire (from red to yellow)

  • Sparks, which are visible separately from the flame, also changing in color as they move away from the fire source

  • The fact that the fire makes the surrounding portion of the cave brighter

I decided to simulate the motion of 80 light particles, each with a position and a temperature (which defined its color and presence – if a particle got too cool, it would disappear and a new one would appear at the source of the light). In addition, I would lighten the cave within a particular radius of the center of the light source. It took me a while to create the flame, but then, quite by chance, I decided to blur each light particle, and realized that doing so pretty much created the flame for me. Come to think of it, it makes sense – the complete flame is just an averaging of the individual light particles.
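
Here's a sketch of that particle loop in Python (again, the original was C and assembly; the velocities and cooling rates below are made-up numbers):

    import random

    class Spark:
        def __init__(self, x, y):
            self.x, self.y = x, y  # position, with y growing downward as on screen
            self.temp = 255        # temperature drives both color and presence

    def step(sparks, src_x, src_y):
        # One tick: sparks flicker sideways, drift upward, and cool down.
        for s in sparks:
            s.x += random.uniform(-1.0, 1.0)
            s.y -= random.uniform(0.5, 2.0)
            s.temp -= random.randint(5, 15)
            if s.temp <= 0:  # too cool: respawn at the light source
                s.x, s.y, s.temp = src_x, src_y, 255

    sparks = [Spark(160, 100) for _ in range(80)]  # 80 light particles, as in the game

Blurring each spark before compositing is what merges the individual particles into a continuous flame.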


References

  • To see the dice effect on a new page, click here.

  • To see the fire effect on a new page, click here.

  • The source code for the dice and fire effect can be found in this repository.

Is it the horse or the jockey?

"Is it the horse or the jockey? And the answer is yes."

– Andy Rachleff


Is entrepreneurial success dependent mostly on the quality of the team, or of the idea? Over the past year or so I've heard this question asked many times. I've also been asked my opinion on it, which I found funny, since I don't have much experience forming entrepreneurial teams. Curiously, this question divides most VCs, professors, and entrepreneurs I've talked to into highly polarized camps. There are those who swear it's all about the team. After all, given smart people who work well together, enough time, and determination, they will come up with a great idea; conversely, a bad team won't be able to execute even the best of ideas – they will just screw it up and forfeit their chance. And then there are those who equally confidently claim that it's all about the idea: with a crappy idea, any team, no matter how good, will fail. More – they claim – if a crappy team is paired with a good idea, you can always replace the team.

I am of the opinion that your perspective depends on where you stand. More precisely, the answer will depend on what you are invested in, what you control, and what you're optimizing for. Based on these metrics, "success" may mean very different things. For example, if you are a VC, you are tasked with creating value for your LPs. You know the entrepreneurial teams well, and you have access to a wide network of people. Assuming you can evaluate ideas accurately, you would consider them more valuable than the teams, which you can modify more or less at will. So it's natural that you would prefer a horse to a jockey. And, from where you stand, you would be right – you don't have time to let teams experiment with ideas; you have access to a lot of ideas and can change teams more easily than you can change ideas (the latter of which, as a VC, you really shouldn't be doing).

On the other hand, if you are an entrepreneur in the early stage of a venture, you are invested in a particular team, and you believe that you can change the world. Then, naturally, you believe in jockeys. And, from where you stand, you are right – assuming the team is good, and you have time, and you don't give up, you are bound to produce results.

However, obscured in the above assumption is what makes a team good. Clearly, it's not just a bunch of smart people who like each other. In fact, intelligence and good team dynamics are sometimes not even required. I would argue that to beat a bad idea, a team must have a good idea-development process: knowing when to give an idea up, knowing how to evaluate ideas, assessing what works and what doesn't work in the team. In fact, I would say that the process is actually more important than the team itself, because a good process will help all systemic issues surface, and, assuming the team acts on these issues, with enough time the team should stumble on a good idea.

Of course, the problem with the entrepreneur's perspective is that this may take a long time. If you have good people available, and if you know an idea is good, you're better off changing the team. For VCs, for whom time is scarce and people are plentiful, this is a rational preference.

But a problem with the VC's perspective is that it's usually not clear whether an idea is good or terrible. So, the entrepreneur retorts, that's where the team comes in. Leave it to the team to pursue an idea even if it initially seems ridiculous.

As usual, the answer to the question at the top is "it's both". We should take the more holistic view and agree with Andy Rachleff, who gets this question a lot and has a witty answer ready. It's both the team and the idea. One may be more valuable to you than the other, but that depends on where you stand, what you can control, and what you're more invested in.

Truth Table Generator

I originally created this tool in 2002 – yes, 2002. I was recently going through my old work and thought it might be fun to put it up. I refreshed it a little (to use Bootstrap and jQuery), made the code a little cleaner, and put it up online again.


Truth Table Generator is a very simple tool. It allows you to enter a logic expression (using shorthand notation, i.e. + for OR, juxtaposition for AND, a quote or apostrophe for NOT), for which it will show you a truth table, displaying the result of the expression for each possible set of values of the variables. It works for up to 26 variables, though it (obviously) gets very slow beyond 12 variables – and, anyway, I would say that the value of a truth table is seriously diminished if more than 6 variables are used...
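
For flavor, here's the same idea as a minimal Python sketch (the actual tool is client-side JavaScript and parses the shorthand notation; this toy version takes a Python boolean expression instead):

    from itertools import product

    def truth_table(expr, variables):
        # Print one row per combination of variable values.
        print(" ".join(variables) + " | " + expr)
        for values in product([0, 1], repeat=len(variables)):
            env = dict(zip(variables, values))
            result = eval(expr, {"__builtins__": {}}, env)  # toy evaluator; don't feed it untrusted input
            print(" ".join(str(v) for v in values) + " | " + str(int(bool(result))))

    truth_table("(a and b) or not c", ["a", "b", "c"])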


It's all client-side (which makes it fast and allows it to work offline) – a common thing these days, but in 2002 it was sort of a new thing.

Click through to use the tool.



Faster Horse? Nonsense!

I am involved in a lot of classes, panels, workshops, etc., about entrepreneurship, and every three days or so I hear the all-too-familiar statement:

Before the automobile, if you asked people what they wanted, they would tell you that they wanted a faster horse.

Usually, the person bringing it up is attempting to make the point that customers don't know what they want, and that sometimes one just ought to have a strong vision for a product rather than talk to customers. Often, it's a response to an attack on the "build it and they will come" philosophy, which many smart people (myself included) have at some point subscribed to, or still do.

The problem with the "faster horse" analogy is that it is wrong. It's based on the false premise that what you should be doing around customers is asking them what they want. Given that premise, yes, you're better off not talking to customers. But the right conclusion is not to stop talking to customers.

To illustrate, let me build a little more context (updated with today's lingo) around the above quote.

It's before 1886 and you're interested in the personal transportation space. Maybe you've worked at a carriage company, or maybe you've worked for an owner of a few horses. You believe that the space has a lot of untapped potential – it's a large market, but the users aren't served as well as they could be. Maintenance is relatively expensive, and it's very difficult to scale the business.

You decide to do customer discovery. You find people who travel by horse (probably via stagecoach) and sit down with them. You ask them about their experience. Maybe you ask them to tell you a story about their last trip. You tell them that you want to make personal transportation better. You ask them to tell you what they want.

Of course they will tell you that they want a faster horse. Speed is likely their biggest pain. It takes three days (given an average of 60 to 70 miles traveled a day) to get from Boston to New York.

It's unsurprising that customers will say they want a faster horse: by the time you ask them what they want, they have already pictured a horse, recalled the pain of traveling at five miles per hour, connected the two, and can't wait to share their need with you.

Now, if you were a good interviewer, you would never, ever conduct an interview this way. A good interviewer would extract precisely the kind of information he or she would need to build a car. The key is to get the customer out of their familiar frame of reference – to understand what motivates them, what they value, and why they do what they do. The best customer interview rarely invokes the image of a horse.

After all, we're not talking to the customer in the hope that he or she will come up with the automobile. That's our job. Our hope should be to understand the various dimensions of personal travel, and our customers' valuation of each. By doing this, we can hopefully understand how much of a pain it is to spend three days en route between Boston and New York – and that it has become more and more of a pain over time. Knowing this makes the prospect of a more expensive but significantly faster method of transportation very appealing.

In my experience building products, I have found talking to people to be the single most important element of my work. Talking to people has two primary purposes, and they are both invaluable.

  1. It helps you solidify your vision. By forcing yourself to talk to someone who is not you, you need to phrase your area of interest and your ideas and thoughts in a way that's understandable to others. It's a great way to find the weak spots in your thinking. (Incidentally, that's the same reason for having a blog.)
  2. It also allows you to get original data, rather than permuting the existing data that you already have in your head. Too often we stay in our heads, tricking ourselves into an illusion of progress, while in reality bathing in the soup of old ideas. The issue with surrounding ourselves with the same thoughts over and over again is that in the absence of "thought competition" our brains begin to confuse longevity with truth, and we begin to really hold on to these unvalidated ideas. Making a habit out of talking to people ensures a fresh flow of ideas, which creates a ripe environment for the selection of the fittest ones.

Whenever I hear the "faster horse" analogy, I can't help but think that it's a poor excuse. Maybe whoever used it got burnt by a bad user interview before. More likely, they are not a good "needs finder". Or, worst of all, they just want an excuse not to talk to customers. So whether you are trying to come up with a faster horse or an automobile, start by talking to customers. And do it well.

Software Defects and non-Software Defects

Some years ago I had the pleasure of working with somebody who didn't know anything about software engineering but was smart and ambitious. He had majored in civil engineering in college, and his beginner's (fresh, unspoiled) view of some basic software engineering concepts taught me a lot about the nature of software engineering.

One question in particular gave me pause. He couldn't understand why developers didn't produce bug-free software. In civil engineering, after all, the results are by and large bug-free, very much unlike software, he argued. Cars don't spontaneously explode and buildings don't collapse, but software crashes all the time.

On one level, it is easy to dismiss his point. Physical stuff is defective too: sometimes obviously, sometimes subtly, sometimes dubiously. The defects just look very different.

But come to think of it, there is a difference. The defect rate is certainly higher in software than in the physical world (true for hardware, and definitely true for large physical structures). There are lots of tiny bugs, and some that are incredibly frustrating (for example, bugs that crash your computer and lose the paper you've been working on). We tolerate this higher defect rate because the bugs affect us less than, say, a defective bridge would [1]. Moreover, software isn't bound by the laws of physics, chemistry, and materials science. It's bound by information and complexity theory, but compared to the former, it's a much more lenient power. Those factors help us build faster and more cheaply than we otherwise would, which keeps the software evolving, keeps new products coming, and keeps innovation going. You can't build a good bridge in a weekend, but you could build a good website.

The flip side of software allowing us this freedom is that as we build on top of old software, which is itself built on top of even older software, the complexity of our solutions increases. A typical product sits on a "stack" that may be 20 or 30 layers of abstraction deep. That's a lot that can go wrong. To make things worse, software interacts with a rich, nonlinear environment. There are relatively few variables to consider as "inputs" to a bridge (for instance, the weight of the objects that cross it), while there are hundreds, if not thousands, of "inputs" to an operating system.

But there are also factors that can be mitigated. Maybe we tolerate buggy software to a fault, giving engineering teams the latitude to cut more corners than would be optimal for a frustration-free user experience. Moreover, software engineering is still rather immature – we've been building bridges for thousands of years, but writing software for only seventy or so. As we standardize our practices, we will get better at managing input and environment complexity. Our code will become shorter, smarter, and more expressive. Software engineering will continue to borrow from other fields (as it has done, for example, with the lean manufacturing model) [2]. As new paradigms, frameworks, and best practices emerge, we should expect software to be less crappy.

While it's easy to think of software engineering as just another process that generates defects, it's helpful to look at it from a broader point of view. Let's not get complacent about software engineering only because it's more complex. Let's use other disciplines to show us where we are deficient, and let's address those deficiencies.


[1] This is usually true, but not always. There have been some very expensive software mistakes in the past. 

[2] Conversely, precisely due to its complexity, software engineering has had to work out a bag of tricks that I think other engineering sciences should adopt. Unit testing or continuous integration come to mind.