# Product Development Misconceptions

I am blessed to have started as a hard-core developer (with all the advantages and all the biases accompanying such a role) and then, partly through necessity and partly through curiosity, to have started dabbling in product, only for this dabbling to escalate into true product ownership. With the power came responsibility, and I am lucky to have failed spectacularly in my first real product assignment, so much so that the many learnings of good product development have been etched in my memory forever. Some errors I made several times (each time the problem looked just different enough to fool me), which helped me internalize even further what makes for good product development.

The hardest part about good product development is that, unlike the resulting product, it is unintuitive. It's particularly treacherous for those who have good (and smart) ideas, or those who lack experience getting stuff built but feel that they can conceptualize what it takes to get stuff built ("I haven't built anything, but I could probably do it if I put my mind to it; it doesn't look that hard"). Because that's where the biggest impedance mismatch occurs: between our confidence in our intuitions and reality.

I encounter one or more of these misconceptions on a weekly basis: talking to nontechnical managers, talking to technical non-managers, talking to venture capitalists (who are the worst violators!), CEOs, users, stakeholders, you name it. I think the biggest factor contributing to these myths persisting is the very fact that technology is so empowering. Unlike building, say, a real bridge or automobile, with technology you can wish so much into being with a bunch of keystrokes and some electricity. And this magic (it's hard to find a better word for it) makes us forget reality.

The other reason why these misconceptions are alive and well is the continuing reinforcement of poor analogies and superficial descriptions of products. Go to many Enterprise products' websites and I bet you there is a page that lists or describes or compares features (and implies that more features is better). So we end up basically equating products with the features they offer. Or look at Intel's microprocessor roadmap and you'll be sure that roadmaps basically list releases into the future. As developers, we think about how long it took us to implement a feature, and so this (I'll argue) first-order consequence is often basically all that remains.

In all these, "basically" is the killer word. These things are not basically equivalent. They are equivalent only at a rather simplistic level, and that superficial equality contributes to many of the challenges of doing product development.

I'm not advocating for us to stop dreaming. But let's do a better job separating the dream from reality (such as a comfortable bed and enough sleep, to stick with the metaphor) that is required to allow us to dream in the first place.

## Reality #1: Product ≠ a series of features

It's tempting to describe a product by listing its features. But a product is not just the sum of its features. In fact, I'd argue that a feature is a manifestation of something much more fundamental (and important):

A feature is a vehicle delivering value to a user.

Focusing on features when developing a product is like describing all the different entrees when developing a restaurant. Sure, the entrees are important, but focusing on just the entrees risks missing the fact that behind the entrees are diners who want to have a great experience and get some nutrition. And you don't develop a restaurant by listing all the entrees that will be served. You think about the needs you want to satisfy, and from that you derive the ambiance, the decor, the overall cuisine, and ultimately the food.

Feature-centric product development carries a deadly risk: that you equate product success with the richness, or sophistication, or multitude, or cleverness of its features. At the end of the day the product needs to deliver value to its users. If you get buried in feature lists and forget who the user is, what she needs and wants, how she thinks and behaves and what she expects, your features will be nothing but a useless laundry list. But you'll be patting yourself on the back for a job well done.

It's easy to think of features. We all do it when we use products, when we critique them, when we imagine what products could do. Just like ideas, features are cheap. They didn't use to be so abundant (and cheap), but I believe that's only because we used products less in the past, and products used to be physical (with physical constraints that make it harder to think of features), so you didn't have everyone and their mother coming up with features for all sorts of products. And so maybe we just haven't caught up with reality yet, and just as we all think we are above-average drivers, we all think we are above-average feature creators. This is particularly hazardous if you're technical, because then you're likely smart and you can also build the feature, which makes you think you are a particularly good feature creator.

If a product is not a series of features, what is it? Here is a better (though not perfect) definition: a product is a creation that offers a certain (usually small) set of value propositions to a certain (usually well-defined) set of users. The value propositions satisfy the users' needs or wants, address their pains, or make their jobs easier. To develop a product, rather than listing its features:

• Start with the user (and the customer, if you want to be able to capture the value you're creating – the two are not always the same)
• Understand the user incredibly well
• Define the value propositions that align with what the user needs, and that are accessible to the user
• Define how you're going to provide this value (only now is this beginning to look like a "feature", even though it's more like "implementation hint")

More to come... in the next installment, I cover another critical misconception. It turns out that a roadmap is not a list of releases. Stay tuned.

# Five Stages of Technology Sophistication

Much has been said about the transformative effect that technology (to be more precise, computer software) has had on various industries. But the effect is not new; nor is the transformation anywhere close to complete. Looking across industries, we can notice themes. Specifically, as industries evolve, while the degree to which they utilize technology at a particular moment in time varies (for example, today marketing has at its disposal a much more sophisticated set of tools and technology practices than mortgages do), they seem to embrace technology along the same predefined path. I've isolated five distinct stages, described below.

### Stage One: Pre-technology

Industries usually begin as human endeavors. There are, of course, exceptions, most notably industries that were made possible in large part thanks to technology (such as bioengineering), but even today, surrounded by an abundance of tools and services that make it easy (so to speak) for the masses to technologize their efforts, we are fundamentally, as a species, not technology-literate the way we are capable of speaking or writing. Walk into any household or company: technology is relegated to technology-builders, and everyone else does everything but. True, there are pockets where these proportions are out of whack, but even a technically sophisticated company like Google boasts an order of magnitude more non-technology than technology solutions.

On one hand, this is not surprising. Technology – and especially software – is very different from other human-adopted systems, and one of its biggest strengths is, in my view, also the reason why it's so hard for everyone to adopt. It's just so damn precise. There is very little redundancy, if any, built in. Computer programs are recipes as rigid as rail lines – they very explicitly move state from A to B under very specific conditions. Computer languages are compact (only a handful of keywords) and highly grammatical, but expressiveness suffers. Try replacing one character in your Ruby or Java code and your program will likely not compile. This means that to be truly technology-literate, one has to have a rather seriously mathematical and systematic mind, capable of not just creating abstract representations of objects (which every brain does all the time, quietly) but also of describing these representations very explicitly. This is why attempts to make programming "user-friendly" by using graphical diagramming-type metaphors or WYSIWYG controls always fall short of expectations.
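The one-character experiment is easy to run for yourself. Here is a minimal sketch in Python (standing in for the Ruby or Java of the example), using the interpreter's own `compile` function to check whether a piece of source text even parses; the `parses` helper and the sample program are mine, purely for illustration:

```python
def parses(source):
    """Return True if `source` is syntactically valid Python."""
    try:
        compile(source, "<string>", "exec")
        return True
    except SyntaxError:
        return False

program = "greeting = 'Hello ' + 'World'"
print(parses(program))       # the full one-line program is valid: True
print(parses(program[:-1]))  # drop the final quote character: False
```

A rail line indeed: deleting a single closing quote makes the whole program unintelligible to the machine, which is exactly the rigidity described above.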

On the other hand, I can't imagine this state of things persisting much longer (on a grander scale of things... say, a century or so). But that's a topic of conversation for another time.

## Stage Two: Local technology

As industries focus on systematization and cost-cutting, technology is usually brought in. This takes a very specific shape: I call it local technology to denote solutions that usually help individuals, or at most solve very small, specific problems. Part of why local technology is the earliest stage in such an evolution is historical: computers only became connected in the early 80s, while software existed well before then. But local technology is also easier to get one's head around, and, most importantly, it doesn't violate interfaces: as a worker in company X, with local technology, your inputs and outputs didn't change; you still interacted with the same people and had the same responsibilities. Technology simply made you more efficient.

My recent favorite industry, mortgages and mortgage origination, illustrates this stage perfectly well. The complexity of the use cases (the completion of a mortgage application file) quickly outgrew human processes, and the systematization took advantage of the technology of the 80s, which was local technology. Each of the "stations" in the backend – loan officers, processors, underwriters – optimized its own environment, but the interfaces remained, because technology couldn't strongly connect the stations. The industry became deeply entrenched in this form of technology, so much so that it's only relatively recently that we're beginning to see a transformation from heavily optimized individual stations to optimized systems that include all stations (and perhaps redefine the stations altogether).

## Stage Three: Connected technology

Easily the most exciting and enabling stage in the evolution of any industry. Of course, we are now living in the Renaissance of Connectedness: not a day goes by without me learning about a new problem space being "disrupted" by technology, which really just means a company successfully innovating on a business model to take advantage of the existence of the omnipresent and all-so-connecting Internet. That's exactly what Uber, Airbnb, Facebook, Twitter, Dropbox, Spotify, and dozens of other large private tech companies (can we really call them startups anymore?) are doing. The late 1990s saw a flurry of activity, but the environment wasn't quite ripe for true connectedness (made possible with near-universal broadband and then mobile data). Truly connected technology had to wait until the mid-to-late 2000s, and there's still a lot to connect.

## Stage Four: Analysis and insight

The connecting power of technology simply cannot be overestimated. It will continue its run until the majority of industries are shaken up and redefined by the notion of connectedness. But the value of technology doesn't end there. Most industries, once connected, then turn to data and to understanding what's behind the data. Except in limited cases, analytics and insight only become tremendously more useful once the data from all connected endpoints are pooled and analyzed. That's the big idea behind the somewhat hype'y term Big Data.

Analysis and insight are a frontier beyond mere connectedness. In search of further efficiency, industries must begin to understand the processes they put in place (via local technology) and connected (via connected technology). And while we hear a lot about companies whose competitive edge lies in analytics, while the aforementioned Big Data slogan just won't die, and while Data Scientists are now among the most sought-after professionals, we are only at the very beginning of useful analytics.

## Stage Five: Artificial Intelligence

No matter how sophisticated, analytics is only as good as the heuristics that humans can think of or the trends that naturally emerge. In a quest for insight beyond that, we turn to technology's ultimate promise: that of emulating human intelligence. While there are a handful of companies that can boast truly artificially intelligent systems (some that come to mind are IBM, Google's computers hidden deep in the company's basements somewhere, processing facts under Peter Norvig's careful watch, and perhaps Tesla), the solution space is nowhere near as "toolified": local technology has applications going back to the earliest computers, connected technology has the Web, analysis and insight have machine learning and data visualization, but Artificial Intelligence remains mostly in the realm of research papers and some companies' secret projects.

***

What is the significance of these five stages, and what can we learn from this taxonomy? First of all, understanding a particular industry's evolution, overlaid on top of the evolution of software, helps us understand the challenges and opportunities inherent in that industry. If an industry grew tremendously before connected technology became reality, the existing players are likely plagued by legacy local technology with a modicum of connective tissue. If an industry seems to have just become connected (the transformation from on-premise to SaaS solutions in the enterprise space, for example), an opportunity to look out for will likely come in the insights that are now possible with the proliferation of data. Finally, it's useful to distinguish between analytics and intelligence, even though we still have a ways to go before we see a wave of new "disruptions" which will really be business model innovations stemming from the fact that we will be able to near-infinitely scale human intelligence.

# Truth Table Generator

I originally created this tool in 2002 – yes, 2002. I was recently going through my old work and thought it might be fun to put it up. I refreshed it a little (to use Bootstrap and jQuery), made the code a little cleaner, and put it up online again.

Truth Table Generator is a very simple tool. It allows you to enter a logic expression (using shorthand notation, i.e. + for OR, nothing for AND, a quote or apostrophe for NOT), for which it will show you a truth table, displaying the result of the expression for each possible combination of values of the variables. It works for up to 26 variables, though it (obviously) gets very slow for 12+ variables – and, anyway, I would say that the value of a truth table is seriously diminished if more than 6 variables are used...
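The tool's expression parser is beyond the scope of this post, but its core loop, enumerating every combination of variable values and evaluating the expression on each one, fits in a few lines. Here's an illustrative sketch in Python (the tool itself is client-side JavaScript); `truth_table` and the lambda standing in for a parsed expression are my own names, not the tool's code:

```python
from itertools import product

def truth_table(expr, names):
    """Enumerate all 2^n combinations of values for the n named
    variables and evaluate `expr` (a callable) on each one."""
    rows = []
    print(" ".join(names) + " | out")
    for values in product([0, 1], repeat=len(names)):
        out = int(expr(*map(bool, values)))
        print(" ".join(map(str, values)) + " |  " + str(out))
        rows.append(values + (out,))
    return rows

# The shorthand expression AB + C', i.e. (A AND B) OR (NOT C):
rows = truth_table(lambda a, b, c: (a and b) or (not c), ["A", "B", "C"])
```

The enumeration is what makes the tool slow for many variables: the number of rows doubles with each variable added, so 12 variables already mean 4,096 rows.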

It's all client-side (which makes it fast, and allows it to work offline), which is a common thing these days but in 2002 it was sort of a new thing.

Click through to use the tool.


# Software Defects and non-Software Defects

Some years ago I had the pleasure of working with somebody who didn't know anything about software engineering but was smart and ambitious. He had majored in civil engineering in college, and his beginner's (fresh, unspoiled) view on some basic software engineering concepts taught me a lot about the nature of software engineering.

One question in particular gave me some pause. He couldn't understand why developers didn't produce bug-free software. In civil engineering, after all, the results are by and large bug-free, very much unlike software – he argued. Cars don't spontaneously explode; buildings don't collapse, but software crashes all the time.

On one level, it is easy to dismiss his point. Physical stuff is defective, sometimes obviously, sometimes subtly, sometimes dubiously. The defects just look very different.

But come to think of it, there is a difference. The defect rate is certainly higher in software than in the physical world (true for hardware, and definitely true for large physical structures). There are lots of tiny bugs, and some that are incredibly frustrating (for example, bugs that cause your computer to crash and lose that paper you've been working on). We tolerate this higher defect rate because the bugs affect us less than, say, a defective bridge would [1]. Moreover, software isn't bound by the laws of physics, chemistry, and material sciences. It's bound by information and complexity theory, but compared to the former, that's a much more lenient power. Those factors help us build faster and more cheaply than we otherwise would, which keeps the software evolving, keeps new products coming, and keeps innovation going. You can't build a good bridge in a weekend, but you could build a good website.

The flip side of software allowing us this freedom is that as we build on top of old software, which is built on top of even older software, the complexity of our solutions increases. A typical product consists of a "stack" that may be 20 or 30 layers of abstraction deep. That's a lot that can go wrong. To make things worse, software interacts with a rich, nonlinear environment. There are relatively few variables to consider as "inputs" to a bridge (for instance, the weight of the objects that cross it) while there are hundreds, if not thousands of "inputs" to an operating system.

But there are also factors that can be mitigated. Maybe we tolerate buggy software to a fault, giving engineering teams the latitude to cut more corners than would be optimal for a frustration-free user experience. Moreover, software engineering is still rather immature – we've been building bridges for thousands of years, but writing software for only seventy or so. As we standardize our practices, we will get better at managing input and environment complexity. Our code will become shorter, smarter, and more expressive. Software engineering will continue to borrow from other fields (as it has done, for example, with the lean manufacturing model) [2]. As new paradigms, frameworks, and best practices emerge, we should expect software to be less crappy.

While it's easy to think of software engineering as just another process that generates defects, it's helpful to look at it from a broader point of view. Let's not get complacent about software engineering just because it's more complex. Let's use other disciplines to show us where we are deficient, and let's address those deficiencies.

[1] This is usually true, but not always. There have been some very expensive software mistakes in the past.

[2] Conversely, precisely due to its complexity, software engineering has had to work out a bag of tricks that I think other engineering sciences should adopt. Unit testing or continuous integration come to mind.

# Good Design and Empathy

Let me pick on the DMV a little, but for a good reason – not just to complain about this much-disliked organization.

I registered my car with the California DMV more than a year ago. Since then I've familiarized myself with HOV lanes and their rules, and, consequently, the Clean Air Vehicle decals. It was learning through osmosis: I kept seeing these diamond shapes on the highway and the signs that informed me of the HOV lane rules. Later, by observing other cars on the road, I noticed enough of the decals around me to spot a theme and wonder what they were. A quick google search confirmed what I had suspected: if your car satisfies certain requirements, you can purchase a decal, which lets you ride in the HOV lane when normally you wouldn't be allowed to.

I thought it might be good to get one, especially since I knew that my car satisfied the conditions. So I scrolled halfway down the DMV webpage that Google found for me and found the link to the fillable PDF, which I downloaded and filled out. The form was mostly intuitive. In a few places I had to refer to my documents. There was a section at the top of Page 2 that I almost missed. And once I printed out the form, I noticed that the PDF software (Preview) had messed up a few fields, which I had to correct in black ink. Then I put the form in an envelope, enclosed a check for $8, and mailed it out. Two or three weeks later, I received my decals in the mail. I haven't put them on my car – the decals are sort of ugly looking, surprisingly large, and you need to literally plaster your car with them (three of them, one on each side). So I'm waiting until I need to use one – for example, when I'm in a rush and need to use the highway during rush hour. So far, fortunately, that hasn't happened.

Nothing about the above should be shocking to you. I'm sure you go through a similar workflow multiple times a week (though – hopefully! – not all of them involve the DMV). It's actually one of the least painful of the DMV workflows. But from the point of view of good design, it's an awful one. It highlights a problem that I've noted hundreds of times, a kind of death by a thousand little daggers. All these suboptimal experiences set a kind of expectation in us; they numb us to the fact that poor user experience surrounds us.

How could this experience be better? I like to think of user experience as layers of an onion, and peel one layer after another, at each level asking a simple question or two: what causes the most pain? What creates the most friction?

In the case of my DMV decal experience, I think the most surface-level pain was filling out the PDF. It took me a long time, but, more painfully, when I finished filling out the PDF, I wasn't confident that the printed-out version would contain all my edits. Why not? Try to fill this form out yourself on a Mac. Some fields (YEAR) replace my value with a 0 on blur. Some fields (UNIT NO) are misaligned. Some fields (the top fields on page 2) seem like they are linked to the respective fields on page 1, but the values only show up on focus. Reliability is key, especially when paperwork is required (since the cost of an error is relatively high). So, in this first layer, I'd say that the PDF-filling experience could be more reliable and consistent.

If that were solved, my next complaint would be the efficiency and complexity of the workflow. Why should I have to print out the PDF, then put it in an envelope, and mail it out? Ironically, the technology for fillable PDFs is significantly more advanced than the technology that makes online forms possible. An online form would save some trees, and save me and the DMV lots of time (let alone simplify my workflow of having to have an envelope, a stamp, and a nearby mailbox). Even in the most basic form (no error checking), it saves me the 5 minutes of printing and mailing, and it saves the DMV somewhere between 5 and 20 minutes of bureaucracy. In this second layer, the filing/submission experience could be simpler and more efficient.

With that out of the way, I'd focus on redundancy. Why do I need to enter all my personal info? The DMV has my address on file; if my address hasn't changed (the 98% use case!), I should not have to specify it. Similarly, the DMV has the VIN, the make, and the model of my car. In fact, all I should have to do is enter my unique identifier (I'd love a username that I keep with the DMV, but a physical ID, or name plus license number, will suffice) and the engine type or whatever the DMV requires to ensure that I qualify for the decal. Hell, even the latter isn't necessary – the DMV has the VIN, the vehicle make, model, and year. Again, in 95% of all use cases, the car automatically satisfies the requirements. The input experience could be more minimal.

From here it's not hard to realize that, as a user, I should not even have to fill out a form. The DMV knows my registration info and manages the eligibility rules. So the DMV should automatically – proactively – do the matching and simply send me the decals. They could add the $8 fee to the annual vehicle registration fee. In fact, I should get the decals as soon as I register my car – I shouldn't have to wonder why some cars have decals and what they mean.

Of course, a good user experience designer will automatically see through all these layers. In fact, it's relatively easy to envision the ideal workflow that I described above. But the point is not that the experience could be better. It should be better, and the fact that it's not points to fundamental gaps in how product and service designers (both the fillable-PDF makers and the DMV) perceive users, their context, needs, and experience.

What causes this? At a proxy level, there are incentive issues (the DMV doesn't feel acute pressure to make the user experience better), use-case complexity (I'm sure the makers of Preview tested their software, but I'm guessing fillable PDFs are very difficult to test exhaustively), and responsibility issues (there is probably no single person responsible for the entire decal user workflow; and in many cases, the left hand of the organization is not talking to the right hand). But at the fundamental level, this is an issue of empathy.

Many experienced Product Managers will tell you that empathy differentiates a good PM from a great one. But empathy is a quality that needs to be shared by everyone in an organization, not just select employees. It's like common sense or the ability to communicate. And yet, it's far from prevalent. It's that way because companies don't emphasize it when hiring, they don't value it internally, and they rarely reward it. Worse, empathy is often misunderstood. Empathy is the capacity to recognize emotions felt by other people. It doesn't mean the ability to be compassionate (though it's a prerequisite for the latter). It does not mean being "nice" or "fair". To have empathy means to observe the other (easy), to actually listen to the other (hard), and to suspend one's ego (very, very hard).

There were probably dozens, maybe hundreds, of people involved in the design and implementation of the decal application experience. If just one in ten had put themselves in the shoes of the end user (a little bit of empathy), they would invariably have seen the pain that the end user feels. If just one in fifty had stopped to think about what would make the experience as painless as possible for the end user (a lot of empathy), I wouldn't have needed to write this post.

What could we, mere users, do to make a difference? The simplest thing each of us can do is expect more. Expect to be delighted. Expect to have a pleasurable experience. Train yourself to identify the pain, no matter how small, and talk about it. Complain, be vocal, fill out reviews, call Customer Service. Make it known that user experience is something you value a lot. Be critical of products and services which are unreliable, inefficient, redundant. Don't tolerate even a single little dagger. Organizations – even the DMV – respond to what their customers value. One voice is not enough, but one voice, multiplied a thousand times, is no longer a voice. It's a roar.

# Technology, Venture, and Design

Before embarking on this second incarnation of my blog, I reflected on the past four years of my experiences in an effort to extract a theme closest to my heart. What stood out was an intersection of three areas. I think it's a particularly powerful intersection, one worth taking a deeper look into. These three areas are Technology, Venture, and Design.

But let's step back a little.

We live in a bimodal age of both specialization and "skilllessness". Some of us specialize – they earn their PhDs, they deepen their understanding of a particular subject matter, they become comfortable in their very narrow and specific job roles. Others go in the opposite direction – they remain "generalists," a term which, more often than not, means having no particular skill but boasting the general ability to deal with people, with maybe an uncommon amount of common sense sprinkled on top. If successful, the former become fantastic individual contributors, and the latter become great managers.

But if we want to aim higher – if we want to change the world in small ways or in big ways – we need to change the way we look at it, and we need to revisit our role in it. We need to know enough about everything to tackle increasingly complex problems straddling an increasingly large number of distinct fields. But we also need to know a lot about enough things to be able to actually make a difference. You may have heard of "T-shaped professionals," who have some breadth of knowledge and one area of depth. I want to go one step further, advocating for the need to have deep expertise and experience in more than one area, in addition to the breadth of knowledge elsewhere. At the most basic level, it's because the best insight comes out of understanding several things so well that you can spot the subtlest of connections between them.

There are lots of combinations of areas which can offer such synergy, but I want to focus on three which I think are the most powerful.

- - -

### Technology

Technology truly is the driver of mankind's progress. Technology – literally the act of applying our knowledge – has a transformative ability. An insanely powerful flavor of technology that emerged in the twentieth century – software – is the best testament of that. Software frees us from most physical constraints. You can stack software on top of other software, which lets us create remarkable leverage. Fluency in technology – and software, specifically – is indispensable in the twenty-first century.

Many of the people I interact with hope that a superficial understanding of technology will do. After all, they can outsource the technical work. Well, as many firsthand experiences have taught me, there is absolutely nothing worse than having someone who does not understand technology attempt to manage, or in some meaningful way contribute to, a problem that requires technology. Those people are like that broken wheel on the grocery cart – yes, it supports the cart, but the cart would roll almost as well on just three wheels, and boy is that broken wheel frustrating! You have to stop all the time, adjust the wheel, and hope that it will continue moving in the right direction.

Here in business school, some of my classmates hope that they can just take an intro CS class and check the technical box. While they will no doubt do very well in that class, I'm afraid it's not enough. The flip side of that ability of technology to provide massive leverage is that to understand (let alone to harness) technology means to have to dig very deep, layer beneath layer, to achieve proficiency. You not only have to be able to write a Hello World! application; you have to understand what makes the computer print Hello World. It's a deep stack to understand, and for that, you also need to have a good command of mathematics. One CS class just won't cut it. It's about a mindset, a way of seeing the world.

I was fortunate to be exposed to leverage-providing technology very early in my life, thanks to my father, who foresaw the rise of software and smuggled a computer into the country for me to play with. For those who believe in the importance of technology but don't have the background, the best advice I can give is to unconditionally immerse yourself in it. Set a goal – to write a photo-sharing app, say – and be relentless in getting to that goal. At first, you won't even know what questions to ask. Struggle! Get help, google incessantly, and learn, by failing a hundred times, which Stack Overflow comment is useful and which one is useless.

- - -

### Venture

Venture-mindedness – the desire to build a business, backed by a deep understanding of what makes businesses succeed or fail – is something I only learned to appreciate relatively late in life.

Business is the ultimate applied science. The best way to test an idea is to build some life support around it and open it up to the world, to see if it can survive. You may have arrived at the best theoretical result, but to change the world, you should turn your theory into a sustainable business. Being venture-minded is also a great way to ensure that you don't solve problems that nobody has, and that you don't just create a science fair project. Subject your ideas to the harshness of reality. If they blossom, you have come up with something of great value.

The best way to acquire a business intuition is to be in business. You have to have enough exposure to what makes a business tick. I spent six years at a company, but – while the management was wonderfully transparent, allowing me to learn what a company should and should not do – I barely saw the tip of the iceberg. That's why I recommend joining a company small enough so that you can understand fully what it does and why it does what it does.

You can also start a venture. The learning curve is steepest, and the things you learn you will never forget; but you will be subjected to the ebbs and flows of luck.

If you haven't had much experience with business, you can try business school. That's what I chose – and I'm glad I did. By immersing myself in a rich ecosystem focused on business issues, I've acquired an intuition I didn't have before. I think about the world differently now. But all the theory, the cases, and the conversations are just one part of the equation. You have to go out and apply what you learned.

- - -

### Design

Design, or rather, the art of human experience, is another area that I consider essential. You may have the best technology, and the best business model, but to be truly successful, you have to understand how your product or service integrates with humans, their workflows, their pains, needs and desires.

You can't change the world if you don't interact with humans, be it a product that you design that people want to use (Tesla is changing the world – it's doing so by creating products that people love), or an institution you establish (which consists of a number of human beings), or even a book you write (which is read by humans).

Many of my friends think of design – the experience that humans have with their creations – as secondary. I think it's what differentiates good solutions from great solutions. You can't outsource design: if you think you can, you betray the critical fact that you don't understand your product and your customers well enough.

- - -

Understanding technology, being venture-minded, and caring about design and the human experience are incredibly powerful. In the next few decades, as software continues to eat the world, as technology roles remain the most lucrative, and as techniques such as analytics and hypothesis-driven experimentation push their way into most job descriptions; as consumers and businesses alike continue to demand human-centric products and services that understand user needs and reduce friction; and as more ideas start seeing the light of day in the form of viable businesses, these three areas will seem just as indispensable as the ability to communicate or to use a computer is now.

Better to be at the forefront than to try to catch up when it's too late.

# Crowdsourced Art

I had this idea to get my friends together to create art. I created a simple tool which allowed them to draw on a small canvas. I gave them a small number of pixels each, but they could cooperate (and earn bonus pixels) or overwrite others' work.

I ran a small iteration of this project back in 2009 with my friends. People had fun, so I decided to add a few new features. Currently, the tool can "mine" tokens directly on the page – with enough patience, you can earn free pixels just by staying on the page. I'm also trying to make it a little easier to draw and to explore what others have done.
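The mechanics might look something like the minimal sketch below. The class and rules here are my illustration of the idea (a per-user pixel budget that placing a pixel spends and mining replenishes), not the actual implementation:

```python
# Hypothetical sketch of the canvas mechanics -- names and rules are
# illustrative; the real tool may differ.

class Canvas:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = {}   # (x, y) -> (color, owner)
        self.budgets = {}  # user -> pixels remaining

    def grant(self, user, amount):
        """Give a user pixels to place (initial grant, mined tokens, or bonuses)."""
        self.budgets[user] = self.budgets.get(user, 0) + amount

    def place(self, user, x, y, color):
        """Spend one pixel; overwriting someone else's work is allowed."""
        if self.budgets.get(user, 0) <= 0:
            raise ValueError("no pixels left -- mine tokens to earn more")
        if not (0 <= x < self.width and 0 <= y < self.height):
            raise ValueError("out of bounds")
        self.budgets[user] -= 1
        self.pixels[(x, y)] = (color, user)

canvas = Canvas(64, 64)
canvas.grant("alice", 10)
canvas.place("alice", 3, 4, "#ff0000")
```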

I hope to be done in a couple of weeks, at which point I'll give all my classmates a chance to contribute, with a relatively large number of pixels each. In the meantime, you can play with the test run by mining pixels (you can also email me to get a token). The canvas so far is shown below. Click through to contribute!

This is the canvas of the current crowdsourcing art experiment.

By the way, in the 2009 iteration, my friends came up with this:

This is the result of the 2009 iteration of the project.

# Disappearing Messages

(Idea originally conceived in 2009)

I’m fascinated by color. When I was younger I dreamt of discovering a color that nobody had ever seen before. And since most computer languages give you easy ways to manipulate graphics, I played with color on a computer.

One day I noticed an interesting effect on a certificate I received. When I photocopied it in black and white, a word ("COPY") appeared in the photocopy that wasn’t visible in the original. This "security measure" took advantage of the photocopier’s inability to faithfully replicate the document (the word COPY was actually present in the original, but it was difficult to see because it consisted of a myriad of tiny dots). I wondered whether it would be possible to hide messages in documents even if the copier produced faithful facsimiles but converted color to black and white.

Introducing Disappearing Messages. If you're interested in the theory behind this effect, see this post and this post.

In this first experiment, text is visible in a color image, but when you render the image in black & white, the text will disappear.
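The first experiment hinges on picking a text color and a background color that map to the same gray value. A minimal sketch, assuming the common ITU-R BT.601 luma formula for grayscale conversion (the linked posts may derive the effect with a different formula):

```python
# Two colors that look clearly different in color but become identical
# under a standard grayscale conversion. The BT.601 weights below are an
# assumption about how the converter computes gray.

def luma(r, g, b):
    """Approximate perceived brightness (ITU-R BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

background = (200, 100, 50)  # orange-ish background
target = luma(*background)

# Scan for a blue-ish text color whose luma is (nearly) the same:
text = min(
    ((r, 50, 220) for r in range(256)),
    key=lambda c: abs(luma(*c) - target),
)
# Text drawn in `text` on `background` vanishes when both collapse
# to the same gray.
```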

Text to make disappear:

In the second experiment, text is only visible once you render the image in black & white. I've added a "distraction" text that you will see in the color rendering. (On your computer screen, you may still see both strings because of your browser's image color correction or color variability of your LCD screen. But if you print the image out in color, you will only see the "distraction" text.)

Text to make disappear:
Distraction text:

# A New Color Picker

(Originally published in 2010)

I bet you've seen color pickers before. They are neat UI elements that allow you to select a particular color you may have in mind. They do that by organizing the entire color space in a way that's easily browsed. Usually, pickers show you a 2D panel that displays all colors along two of the dimensions, plus a slider for the third dimension; or they show you only a small-ish subset of all the colors.

I’m fascinated with color, especially when there’s math or technology involved. And so I set out to build a picker that displays all the colors, yet requires only a single two-dimensional surface.

To learn all the details of how I generated this new color picker, see this post. In short, however, the idea is this: we want to map a 3D space (0..255, 0..255, 0..255) into a 2D space (0..4095, 0..4095) in a smooth way, so we'll use space-filling curves. "Walking" the R, G and B dimensions, however, gives a pretty unsmooth picker, so instead I "walked" the color intensity, and for each intensity, "walked" over all possible colors of that intensity. I then picked the order in which the colors would appear by sorting by R, G and then B.
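The ordering can be sketched on a reduced 4-bit color cube (16 levels per channel, so 16³ = 4096 colors exactly fill a 64×64 image; the real picker uses 256 levels and a 4096×4096 image). Two assumptions in this sketch: "intensity" is taken to be R+G+B, and the sequence is laid out row by row for simplicity, whereas the actual picker may place it along a space-filling curve:

```python
# Reduced sketch of the intensity-walk ordering described above.
# Assumptions: intensity = R + G + B; row-major 2D layout.

DEPTH = 16  # levels per channel (the real picker uses 256)
SIDE = 64   # 64 * 64 == 16 ** 3

# Walk intensities from darkest to brightest; within each intensity,
# enumerate colors sorted by R, then G, then B.
ordered = sorted(
    ((r, g, b) for r in range(DEPTH)
               for g in range(DEPTH)
               for b in range(DEPTH)),
    key=lambda c: (sum(c), c),
)

# Lay the 1D sequence out on the 2D canvas, row by row.
picker = [[ordered[y * SIDE + x] for x in range(SIDE)] for y in range(SIDE)]

print(picker[0][0], picker[-1][-1])  # (0, 0, 0) (15, 15, 15)
```

Black ends up in the top-left corner and white in the bottom-right, with each row slightly brighter than the last – which is what makes the result look smooth.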

The resulting color picker has the interesting property that it displays all possible colors (up to the image's resolution) in a single image:

A New Color Picker

# A Homebrew Computer Alarm

(Originally published on January 7th, 2010)

I wanted to wake up to NPR. There’s a good alarm application for the Mac called Alarm Clock which allowed me to play an arbitrary iTunes playlist on schedule (with bells and whistles such as gradually increasing the volume), but the free version couldn’t deal with playing audio streams (such as, in my case, wnyc.org).

No problem — I used cron as well as OS X’s built-in wake-on-schedule functionality.

First, in System Preferences > Energy Saver, I set a schedule to wake up the computer on weekdays at 6am. Then I edited the crontab: in Terminal I typed

crontab -e

and typed in the editor

1 6 * * 1-5 osascript /Users/strozek/wnyc.applescript

The above tells OS X to run osascript at 6:01am, Monday through Friday. The file I pass to osascript contains the following AppleScript:

set volume 2
tell application "Safari"
	activate
	open location "http://wnyc.org/flashplayer/player.html#/playStream/fm939"
end tell
delay 20
set volume 2
delay 20
set volume 2.75
delay 20
set volume 3.25
delay 20
set volume 3.75
delay 20
set volume 4.25
delay 20
set volume 5

And voilà! I just need to remember to keep the computer plugged in at night and not close the lid.